Jan 12, 2017

Five games one can play with a longevity-varying simulator

This post has less to do with any real conclusions about retirement finance or the practical application of analytical tools and more to do with how I squander my time in an early retirement.  You are on your own when it comes to what any of this really means or whether there is any practical use.

I have an amateur-created (that's me) simulator designed for an audience of one (me again) that, among other things (including some idiosyncratic design and assumptions), lets longevity vary by either a Gompertz distribution or the SS 2013 life table, the latter of which is my default. Most simulators make you pick an end age or a fixed planning duration.  Since longevity is one of the big uncertainties, I always thought that a fixed-duration analysis was a little weird.
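My sim is in R, but the Gompertz draw itself is language-agnostic. Here's a minimal Python sketch of one common way to do it: inverse-transform sampling from a Gompertz survival curve, conditioned on being alive at the start age. The parameters m (modal age at death) and b (dispersion) below are illustrative placeholders, not my actual calibration.

```python
import numpy as np

def sample_gompertz_ages(n, start_age=58, m=87.0, b=9.5, cap=105, seed=0):
    """Sample ages at death by inverse-transform sampling from a Gompertz
    survival curve conditioned on survival to start_age.
    m = modal age at death, b = dispersion -- illustrative values only."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)  # U ~ Uniform(0,1); set conditional survival S(t) = u
    # Solving S(t | start_age) = u for the years-remaining t gives:
    t = b * np.log(1.0 - np.log(u) * np.exp((m - start_age) / b))
    return np.minimum(start_age + t, cap)  # cap longevity at 105, as in the sim

ages = sample_gompertz_ages(15000)
```

Each simulated life then runs from the start age to its own sampled terminal age rather than to a fixed horizon.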

Once longevity varies the output gets a little harder to interpret, of course, but on the other hand it also gets a little more interesting.  Below are some of the ways I am starting to look at sim output (or rather, call these the software games I play when I am not cooking or cleaning or picking up after three kids) now that I have a little more flexibility in my simulator.   For the moment let's forget some of the very deep weaknesses in leaning on simulators for any kind of hard conclusions about retirement (see, for example, Milevsky and Blanchett on simulation shortcomings). This is just for fun.



ASSUMPTIONS

The charts below all use these assumptions:

15,000 sim runs done in a sampling-style approach
Start age = 58 (early retirees are more fun, right?)
$1M endowment
4% inflation-adjusted spend rate
Spending varies with a 0.05 standard deviation
Spending variance has a slight upward skew
Spending has a slight 1% downtrend (see Blanchett on spending trends)
No spending shocks in these runs
No spending inflection points (three are available but unused)
Portfolio is 60/28/12 stock/long-bond/short-bill
Bonds are modeled as total return using Stern data from Damodaran
Tax-advantaged accounts not modeled
I threw in 1k in Soc Sec at age 70
No annuity or pension
Taxes are modeled, but only with a rudimentary factor
Fees = 60 bps
There is a 2% portfolio-return suppression for the first 10 years
Longevity is capped at 105
Where needed, the PV discount rate is 0.03


GAME 1 - The Scatter Plot Game

This is a more or less standard scatter plot that looks vaguely similar to other simulator output.  Most sims will give you a spaghetti chart of all the paths from current age to the fixed end age.  In this case, though, keep in mind that each dot is a data pair of terminal wealth and terminal age for one of the 15,000 runs.  A standard regression line and a lowess line are added to keep one's eye off the crazy outlier dots.   I put a histogram of terminal age over the top so there is a visual cue of the likelihood of surviving to a particular age.  If, for example, the median age according to life tables is ~81 for me, then look at the bar for 80-85 and contemplate the dispersion of failed end-states.  Not so bad.  Survive to 105 and it's more of a Vegas kind of thing.



GAME 2 - The Cumulative Fail Rate Game

In this game I scroll terminal-age by terminal-age through the 15,000 points, and for all the points at or below a given terminal age x I calculate the % of outcomes that fail.  That way I can see how the simulator assesses fail rates by longevity expectation.  While my SS table says "my" mean is 81 and the mode is maybe later, like 85 or 87, I can take a quick look at this kind of chart and, based on a range of planning assumptions, gauge a fail rate for some particular longevity expectation.  Maybe the way to use this kind of thing is to go up from the mean age of 81 to see what the fail rate is, then go up from 95 (roughly the third quartile in longevity terms) and see what that fail rate is.  I'm not so sure I need to look at the age-105 fail rate. Again, I overlay the chart with a histogram to give a visual cue to likelihood.
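The bookkeeping for this game is simple enough to sketch. Here's a Python stand-in (my actual code is R, and the arrays below are made-up placeholders for sim output): sort the runs by terminal age, then take a running fraction of failures.

```python
import numpy as np

def cumulative_fail_rate(terminal_age, terminal_wealth):
    """For each terminal age x (ascending), the fraction of runs ending at
    or below x whose terminal wealth is <= 0 -- i.e., the fail rate for a
    longevity expectation of x."""
    order = np.argsort(terminal_age)
    failed = (terminal_wealth[order] <= 0).astype(float)
    # Running count of failures divided by running count of runs
    cum_fail = np.cumsum(failed) / np.arange(1, len(failed) + 1)
    return terminal_age[order], cum_fail

# Tiny made-up example: four runs
age = np.array([60.0, 70.0, 80.0, 90.0])
wealth = np.array([100.0, -5.0, 50.0, -1.0])   # terminal wealth at death
ages_sorted, rates = cumulative_fail_rate(age, wealth)
# rates -> [0.0, 0.5, 0.333..., 0.5]
```

Reading the result left to right gives the fail rate you'd quote for any given planning-age assumption.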


GAME 3 - The Heat Map Game

I've always disliked the output of simulators because in the later years the results get crazy: not only are fail rates hard to interpret, but those outlier results of ending up with a billion dollars are not helpful.  We all know that outcome is unlikely and it adds no credible meaning to a retirement analysis.  In this game I use a heat-map approach that uses the frequency distribution of both terminal wealth and terminal age to color-map the more likely, more frequent dots.  There are other ways to do this; I just happened to find this particular R function and used it.  I doubt there are any specific quantitative take-aways in this game.  It's more of a visual cue that the late ages and the weird portfolio results are not particularly germane to the core analysis.  In practice, maybe just look at the yellow and red areas and make sure the shape is not pointing down, bending down, or below zero, especially near one's planning horizon.  In that case there would be no real analysis; it would only be: spend less and/or make more.
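Under the hood this is just a 2D frequency count. A Python analogue (not the R function I actually used, and with fake stand-in data) looks like this:

```python
import numpy as np

# Fake stand-ins for the simulator's 15,000 (terminal age, terminal wealth) pairs
rng = np.random.default_rng(1)
term_age = rng.uniform(58, 105, size=15000)
term_wealth = rng.normal(1.0, 0.8, size=15000)   # $M; can go negative

# Each cell of `counts` is how many runs landed in that (age bin, wealth bin);
# color-mapping `counts` gives the heat map, with hot cells = frequent outcomes.
counts, age_edges, wealth_edges = np.histogram2d(term_age, term_wealth,
                                                 bins=[25, 40])
```

The rare billion-dollar outliers each land alone in a cold cell, so they visually fade out of the picture, which is the whole point of the game.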


GAME 4 - The Certainty Equivalent Game

OK, in this game I am probably a little too far over my skis.  I've only just acquired some rudimentary math and a basic understanding of utility functions.  In the end I do not (yet) really have a good way of understanding and interpreting this, nor do I know whether I have applied it correctly. But let's press ahead anyway.

For the chart I used a CRRA (constant relative risk aversion) utility function and calculated it against the present value (at age 58, in other words) of terminal wealth outcomes.  I ditched negative wealth at this point and only looked at terminal wealth > 100k just to keep the math sane.  (This is just for the CRRA analysis; I'd have to go back and look, but I think I left negative wealth in the mean PV calcs of terminal wealth...TBD.)  Like the charts above, I scrolled through the data terminal-age by terminal-age and averaged the CRRA utility of the ">100k terminal wealth in PV terms" outcomes for that cohort (i.e., age 58 to "x" at any given point). I then backed into the certainty equivalent by inverting the utility function.  That way we get to see the certainty equivalent of terminal wealth in PV terms for each ascending age cohort (the cohort can be interpreted as a longevity expectation on the x axis).

The black line is the mean PV of end wealth for each cohort.  And here maybe we have to be careful.  Many sim spaghetti charts will give you a mean of wealth for each planning year across all the many sims as it progresses toward the fixed end date.  Here in my CE game, on the other hand, the mean wealth is not only in PV terms (i.e., not FV like the other sims) but also represents a set of specific terminal wealth "events" rather than a way-station along the path to a simulator's fixed end age. A subtle difference, but a difference.  Why PV? Why not? Plus it's a way of normalizing the data to something I can understand a little better.  Why, really, use certainty equivalents of terminal wealth? Hmmm, I'm still working on that.
I did say this is a game, right?

Black - Mean of the present value of terminal wealth for a given longevity cohort
Dotted - 1 standard deviation +/- the black line
Purple - certainty equivalent, via utility, of the black line using risk aversion = 1
Blue - certainty equivalent, via utility, of the black line using risk aversion = 2
Red - certainty equivalent, via utility, of the black line using risk aversion = 3
Green - certainty equivalent, via utility, of the black line using risk aversion = 4

U(W; g) = W^(1-g) / (1-g)   for g ≠ 1
U(W; g) = ln(W)             for g = 1
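The averaging-then-inverting step looks roughly like this in Python (a sketch of the approach described above, not a lift from my R code; wealth is assumed already filtered to positive PV values, per the >100k screen):

```python
import numpy as np

def crra_utility(w, g):
    """CRRA utility of wealth w with risk aversion g (log utility at g = 1)."""
    return np.log(w) if g == 1 else w ** (1 - g) / (1 - g)

def certainty_equivalent(w, g):
    """Average the utilities across a cohort's terminal-wealth outcomes,
    then invert U to back into a guaranteed-dollar equivalent."""
    u_bar = np.mean(crra_utility(np.asarray(w, dtype=float), g))
    return np.exp(u_bar) if g == 1 else ((1 - g) * u_bar) ** (1 / (1 - g))

# With g = 1 the CE is the geometric mean of outcomes; higher g pulls the
# CE toward the worst outcomes, which is why the green line sits lowest.
ce1 = certainty_equivalent([1.0, 4.0], 1)   # -> 2.0 (geometric mean)
ce2 = certainty_equivalent([1.0, 4.0], 2)   # -> 1.6 (harmonic mean)
```

Running this cohort by cohort, ascending in terminal age, produces one CE per longevity expectation per risk-aversion level.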


GAME 5 - The Sampling Game

In this last game, which is not really a longevity game, I get 15,000 sims not by running the simulator 15,000 times but by running 30 batches of 500 runs each. Why the heck would I do that? First, there is no real justification for doing it. Second, the only plausible reasons I could come up with to justify it to myself were: a) it's a little bit faster[1], and b) it reminds me in a tidy and visual way that each of the 15,000 simulator runs is just a fake and different version of the world.  By running 30x500 I get 30 parallel universes in addition to the 15,000 worlds -- or, to carry the metaphor correctly, I get 30 universes with 500 worlds each for a total of 15,000 worlds.  The central limit theorem tells me that the mean of the sampling distribution (of the 30 parallel universes) will still be about right (and I still have the 15k-run data to get the real number anyway). Then, each of the dispersed 30 results reminds me that there is no such thing as a single "fail rate"; there are many.  This is a psych trick on myself, but I like it.  For example, in the run I did for this post the fail rate was 12.6%. Skip, for the moment, how one would really interpret that number and focus on the concept that your fail rate in your one run at life could be anything, but maybe it's closer to 12.6% on average...maybe.  This is thin gruel, of course, but it is, after all, just a game.
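You can see the central-limit-theorem point without my simulator at all. Here's a Python toy where each "run" is just a Bernoulli fail/no-fail draw with a true fail probability of 12.6% (the number from this post); everything about it is a stand-in, not my actual sim.

```python
import numpy as np

def batched_fail_rates(n_batches=30, runs_per_batch=500, p_fail=0.126, seed=2):
    """Stand-in for the 30x500 game: each batch's fail rate is estimated
    from its own 500 Bernoulli draws, so each batch (universe) reports a
    slightly different fail rate."""
    rng = np.random.default_rng(seed)
    fails = rng.binomial(runs_per_batch, p_fail, size=n_batches)
    return fails / runs_per_batch

rates = batched_fail_rates()
# The 30 batch rates scatter around 12.6%, but their mean stays close to it.
```

The spread of the 30 values is the visual reminder that "the" fail rate is really a distribution of fail rates.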

Notes
--------------------------------------------------
[1] The first time I built a simulator it was in Excel (2002 version; you probably didn't think anyone still uses that) and Visual Basic. 10,000 runs took about 45 minutes.  In the straight-up-sim version in R, 10k runs took about 7-9 minutes (though that isn't a fair comparison to the next point because I had some debugger tables active that slowed it down a bit).  Now, with sampling (and no debug table), it's under 40 seconds.  That, btw, is an invitation to play games I probably should not have accepted.  I have serious work to do, like all those dirty clothes on my kids' floors.


