Jun 3, 2020

My baby steps into "critical states" in a decumulation model

I have only the most superficial, paper-thin, and relatively naive understanding of statistics. I know even less about chaos theory and critical states. So, I am uniquely qualified to not write this post. How's that for sandbagging?  But I just finished "Ubiquity: Why Catastrophes Happen" by Mark Buchanan, which gave me an idea for how to model hits to a retirement plan that occur like avalanches in a sand pile -- or earthquakes or forest fires -- where there are few if any normal distributions or much of any predictability around damage magnitude.  Also, I just finished an actuarial paper on "Extreme Value Theory," so my interest was engaged.


The Basic Idea

The basic idea is that if history matters -- meaning small, inconsequential accidents or stressors get frozen into a system over time so that an event of any magnitude can happen at any time, as in sand-pile avalanches -- then a simple "historical game/model" can sometimes provide insights. I had an idea for a game coming out of reading the book so I winged it just for fun. But now that means I have to double down on my sandbagging with some additional disclaimers.

Disclaimers 

This is a naive first pass. The model is not a real thing and does not represent a real-world process -- or if it does, it is not a process here on earth, yet. In addition, I may have misunderstood critical states. Mostly I wanted to see if I could do it, just to see the effect, or maybe to have some code as a stub or placeholder for something in the future. It also dawned on me that, the way I set up the model, I am probably underplaying risk by quite a bit (e.g., too-long intervals between avalanches; way too long between really big ones).  In the end, I just wanted to put a "complexity shim" [2] into my existing consumption utility model, goof around with it, see what happens, and then maybe add some reality later, if at all.

Past Work with Fat Tails and Their Discontents

We've been here a little bit before on this blog, though. In a past post I acknowledged that using normal distributions (for, say, a return-generation process) is a convenient fiction. Convenient because the math and coding are easy and it is often "close enough." Fiction because real processes in finance often have fat tails rather than "standard normal" ones. After acknowledging that, I played around for a while with a fat-tailed (left, in this case) distribution model for investment returns. Fat tails are often how complexity and chaotic processes manifest in a statistical distribution. Mayhem -- the supposedly unlikely events -- tends to happen way more often in reality than a normal distribution would imply. Hence the fat tail.

Since I don't have a real command of statistics, the only way I knew how to do this then was by using a Gaussian mix, where two distributions -- a higher-return, narrow-vol distribution (maybe call it the treble clef melody) and a lower (or negative) return, wide-vol distribution (maybe call it the bass note) -- are combined to replicate something like the distribution of monthly or annual returns of the S&P.  Like this in figure 1 (a minimal code sketch of the idea follows the figure notes)...

Figure 1.  Gaussian mix to re-create a fat tail

  - black: the original SPY density for monthly returns
  - blue: the normally distributed "EM dist 1" (high) random return generation
  - red: the normally distributed "EM dist 2" (low) random return generation
  - black dotted: the artificially/mathematically reconstructed non-normal Gaussian mix
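
For the curious, here is a minimal sketch of the two-normal mix idea. The weights and parameters below are made-up placeholders, not the values I actually fit to SPY, and `mixed_returns` is just my name for the toy function:

```python
import numpy as np

rng = np.random.default_rng()

# Made-up placeholder parameters -- NOT the fitted SPY values
W_HI = 0.85                  # weight on the "treble clef" distribution
MU_HI, SD_HI = 0.010, 0.03   # higher-return, narrow-vol component (monthly)
MU_LO, SD_LO = -0.015, 0.08  # lower-return, wide-vol "bass note" component

def mixed_returns(n):
    """Draw n monthly returns from a two-component Gaussian mix.
    Each draw picks a regime, then samples that regime's normal."""
    hi = rng.random(n) < W_HI
    return np.where(hi,
                    rng.normal(MU_HI, SD_HI, n),
                    rng.normal(MU_LO, SD_LO, n))
```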


This mix approach, too, has the convenience of being easy to do. But it's still a fiction (all modeling is fiction, but, well, you know...) because the presumption implicit in the "mix" is that the second distribution also comes from a recurring, statistically describable, probabilistic process... a presumption that is highly doubtful in retirement finance (Dirk Cotton at theretirementcafe.com did a great series on chaos theory in personal finance). More likely, the thing that creates the fat tail comes from a complex or chaotic place that does not have repeating patterns or a statistically describable probabilistic process.

In addition, the mix I used in the past was applied only to a return-generation process, when in truth avalanches of any magnitude can come from anywhere.  Quietly accruing pressures -- driving any subsequently released magnitude of mayhem -- can come from directions other than the market alone. Here I'm thinking of things like divorce, massive unplanned spending blows (my roof, for example), real estate bankruptcy spirals, capital haircuts (think Cyprus), pandemics, riots, 1987s, etc. It's not always about the known elements of the market or about return distributions (keeping in mind that not all catastrophes can or should be modeled or planned for or hedged -- comet strikes might be an example).

The Base Model

The base model we've seen before. It's the life consumption utility simulator I have described here. The notable thing is that it is not a wealth model or fail-rate model; it evaluates, rather, life consumption in utility terms.  Consumption, by the way, snaps to available income for remaining life when wealth depletes.  Implicitly there is no classical, mathematical "fail" or ruin here. Some minimum hardscrabble consumption-existence is assumed if wealth depletes and no income is available.
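
To make the snap rule concrete, here is a minimal sketch assuming CRRA utility; the gamma and floor values are illustrative guesses, not the model's actual settings:

```python
def consumption(wealth, income, planned_spend, floor=0.3):
    """The snap rule: consume the plan while wealth lasts, then snap to
    available income, with a hardscrabble floor if there is no income."""
    if wealth >= planned_spend:
        return planned_spend
    return max(income, floor)

def crra(c, gamma=2.0):
    """CRRA period utility; the model scores lifetime consumption, not wealth."""
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
```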

The input assumptions I won't list; they are exactly the same as in this post, with the exception that inflation is deterministic here and we are varying the chaos shim and wealth equivalents across scenarios rather than stochastic inflation.

The Critical-State Model "Shim"

This was a little bit naive, but remember that this is just goofing around and a placeholder for later. The general idea was to track over time -- within the sim-lives within the iterations -- some kind of random stressor that in its occurrence does nothing at the time but is nonetheless "remembered" (historical physics of a sort; the metaphor in the book is grains of sand dropped on a pile up to the point of avalanche, i.e., the sand-pile game). Then, above some entirely arbitrary threshold level, the possibility, but not the necessity, of mayhem of any magnitude exists, and the release happens only with some residual probability. I had to coerce this a bit by creating a soft link between years of quiescence and magnitude[1]. Not sure that was legit, but it was OK enough for a first pass if we at least get a power-law-ish shape in return.  The "mayhem" here was a simple percentage whack at wealth of anywhere between 10 and 90%.  This could be imagined as divorce or a large-scale spending event, of course, but here it looks more like an unexpected wealth tax... which probably feels like (or is) the same thing.
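
Here is a rough sketch of the shim logic in that spirit. Everything in it -- the threshold, the release probability, the quiescence-to-magnitude link -- is an illustrative guess, not my actual parameterization:

```python
import numpy as np

rng = np.random.default_rng()

THRESHOLD = 8.0    # entirely arbitrary stress level above which release is possible
P_RELEASE = 0.15   # residual probability of release once over the threshold

def chaos_tax_series(years):
    """Yearly "chaos tax" events: small random stressors accumulate and do
    nothing at the time, but are remembered. Above the threshold a release
    becomes possible (not certain), with magnitude soft-linked to the years
    of quiescence and capped at the 10-90% range."""
    stress, quiet, taxes = 0.0, 0, []
    for _ in range(years):
        stress += rng.uniform(0.0, 2.0)      # a stressor lands; nothing visible yet
        quiet += 1
        if stress > THRESHOLD and rng.random() < P_RELEASE:
            mag = min(0.90, 0.10 + 0.02 * quiet * rng.random())
            taxes.append(mag)
            stress, quiet = 0.0, 0           # the avalanche relaxes the pile
        else:
            taxes.append(0.0)
    return taxes
```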

According to Ubiquity, the magnitudes of sand avalanches, earthquakes, and forest fires are not predictable or normally distributed but do tend to have what Buchanan calls a power-law shape: the bigger the event, the more vanishingly rare it is.  Also, the events over time exhibit what the book describes as wild rhythms of temporal unpredictability.  It was these two features I was trying to coax out of my naive complexity-code insertion. Mostly I think it worked, because it took on the shape I was looking for. In retrospect, the critical states I cooked up were probably too weak, but idk. That's for later.

Here is the cooked-up frequency of the different magnitudes of what I'll now call a "chaos tax" in this post. This is power-law-ish enough to work with for now (doubling the magnitude gets 3-5x rarer).  The X axis is the magnitude of the "tax."

Figure 2. Frequency by magnitude
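
For anyone who wants to reproduce the general shape (not my actual data), a tally like this, built on the hypothetical `chaos_tax_series` sketch above, is all a chart like Figure 2 amounts to:

```python
from collections import Counter

events = [t for t in chaos_tax_series(100_000) if t > 0]
freq = Counter(round(t, 1) for t in events)   # bin the tax magnitudes to 0.1
for mag in sorted(freq):
    print(f"{mag:.1f}  {freq[mag]:>6}")       # counts should fall off steeply with size
```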


Here, now, is the temporal "rhythm" over a slice of maybe 500 or 1000 years (can't remember what data I pulled) from a 100,000-year test run, showing both the calm and the jumps. This is similar to a "rice pile game" illustrated in Ubiquity. Again, close enough to work with in this post...

Figure 3. Chaos tax "avalanches" over time, t = n x 100


The mean time between any two events was about 12 years, but the no-mayhem interval duration was actually a random variable distributed in a lognormal-ish shape (not shown). The interval between any two .10 events was ~21 years. The interval between any two .20 events was ~55 years. The mean interval between the really rare events was tens of thousands of years, but in a simulation with parallel/alternate lives, the .90 will get hit in at least some cases. Plus, the mean-interval calc was compromised by a small-numbers problem. I just don't know if the model is very meaningful when I do this. Certainly the impact of the .90 events washes out a bit when doing tens of thousands of iterations. The modeling problem is that in any ONE real life that .90 would likely be a death blow. The implications below seem mild because we are simulating across so many alternative universes.
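
The interval arithmetic is simple if one has the event series in hand. A sketch, again leaning on the hypothetical `chaos_tax_series` above, and subject to the same small-numbers caveat for the big magnitudes:

```python
import numpy as np

def mean_interval(taxes, mag):
    """Mean years between events of at least the given magnitude; the rare,
    big magnitudes suffer the small-numbers problem mentioned above."""
    hits = np.flatnonzero(np.asarray(taxes) >= mag)
    return float(np.diff(hits).mean()) if len(hits) > 1 else float("inf")

taxes = chaos_tax_series(100_000)
for m in (0.01, 0.10, 0.20, 0.50):
    print(f">= {m:.2f}: ~{mean_interval(taxes, m):.0f} years apart on average")
```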


The Setup and the Question

Did I mention yet that this is a fake model not found on earth? Yes? Good. Here is how I set it up:

1. I ran 10,000 iterations of a lifetime (random end) in a consumption utility simulator, no chaos shim
2. I ran #1 for each spend rate between 3.5% and 6% in 0.5% increments
3. I ran #2 for each allocation to risk between 0 and 100% in 10% steps
4. I charted the output to visually and manually pick out the optimal spend rate and allocation
5. I ran #1 through #4 again, but now with the critical-state-model shim
6. I ran #5 again iteratively, trial and error, to find the level of starting wealth necessary to get back to #4 levels
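
For concreteness, here is a toy, stand-in version of the steps 1-4 sweep. `run_sim` below is a crude inline fake of my simulator (the per-year whack is a blunt stand-in for the chaos tax, not the critical-state mechanism), but it shows the shape of the loop:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_sim(spend, alloc, iters=1_000, shim=False, w_mult=1.0):
    """Crude stand-in for the real simulator: mean lifetime utility over
    iterations. Starting wealth is w_mult / spend, so the spend rate buys
    one unit of consumption per year at the outset."""
    total = 0.0
    for _ in range(iters):
        w, u = w_mult / spend, 0.0
        for _ in range(30):                        # fixed 30-year life, for brevity
            r = (alloc * rng.normal(0.05, 0.18)    # risky sleeve
                 + (1 - alloc) * rng.normal(0.01, 0.04))
            if shim and rng.random() < 0.08:       # blunt stand-in for the chaos tax
                w *= 1.0 - rng.uniform(0.10, 0.90)
            c = 1.0 if w >= 1.0 else 0.3           # snap to a hardscrabble floor
            w = max(w - c, 0.0) * (1.0 + r)
            u += 1.0 - 1.0 / c                     # CRRA period utility, gamma = 2
        total += u
    return total / iters

spend_rates = np.arange(0.035, 0.0601, 0.005)      # 3.5% to 6% in 0.5% steps
allocations = np.arange(0.0, 1.01, 0.10)           # 0% to 100% risk in 10% steps
grid = {(s, a): run_sim(s, a) for s in spend_rates for a in allocations}
best_spend, best_alloc = max(grid, key=grid.get)   # step 4, minus the eyeballing
```

The real version charts `grid` rather than taking a max, since I picked the optimum visually.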

Step #6 is, I think, a "certainty equivalent" wealth metric. I'll have to ask. Either way, this is the question I wanted to get at:
"If I insert this fake complexity shim into the base model, how much extra wealth would I have to start with in order to stay at parity with the base-case consumption utility?"  And: "Does it say anything about spend rates or asset allocation?"

The Output

If we limit ourselves to steps 1-5, this is the output for each spend rate (the lines) across all allocations (in 10% steps, the X axis). The grey is the base case. The red is with the complexity shim.

Figure 4. Steps 1-5

There is a lot going on in that chart, so let's pare it down and just pick the spend-rate line that had the highest peak for the base case (grey) and for the complexity shim (red). In addition, we will do the same with a visual trial-and-error process (step 6) intended to see what level of initial wealth gets us back to the base case, or close enough (blue).

Figure 5. Certainty Equivalent Wealth trial

Any Conclusions?

The addition of the shim looks like it sucks lifetime consumption utility out of this model. So the conclusions, if we are aware that there is no real earthly conclusion to be drawn from a highly abstracted/fake model, might look like this:

1. Given this exact model and setup, I'd need maybe 25% more wealth at the beginning, over all runs
2. If, as a human, my ONE path in life took a 90% hit, I'm likely screwed even with 25% more wealth
3. The insertion of complexity and critical states looks like it calls for more equity risk
4. The insertion of complexity looks like it demands a little conservatism in spend rates
5. Stochastic inflation, as in real life (not shown), would make all of this harder

The real conclusion was more likely in an epigraph to chapter 4:

"The purpose of models is not to fit the data but to sharpen the questions." - Samuel Karlin


Notes
-----------------------------------------------------
[1] I also did a slight work-around to avoid having to build my simulator from scratch. That would have been a lot of labor and no one reads this blog anymore (ok, two) so I figured the impact is low. Also I think the outcome is fine and not affected. Happy to describe the work-around to the interested.

[2] because I don't know the space well I am playing fast and loose with terminology. If systems are simple, complicated, complex and chaotic, I am mixing up the latter two indiscriminately. But maybe that is as it should be.

Reference
-----------------------------------------------------
Buchanan, M. (2000). Ubiquity: Why Catastrophes Happen. Three Rivers Press.

Embrechts, P., Resnick, S., & Samorodnitsky, G. (1999). "Extreme Value Theory as a Risk Management Tool." North American Actuarial Journal, Vol. 3, No. 2.

Gleick, J. (1987/2008). Chaos: Making a New Science. Penguin.




2 comments:

  1. You seem to be spending a lot of effort to avoid buying insurance.

    Reply: Meh. Disagree. I'll call it a misapprehended premise. I just play with this stuff to get a feel for the "shapes" of whatever I am learning at the time. None of this is planning or definitive stuff. No one knows whether or how I have "insured" anything of my personal stuff, or whether I have an age or rate-regime at which insurance sounds like a good choice. If one were referencing the annuity paradox, then one should be explicit, although in that case we'd prob have a legit dialogue, either that or total agreement. idk. hard to tell here.
