Jan 30, 2017

Putting Optimized Dynamic Allocations Back Into a Simulator

Maybe this has been done before, maybe not.  In the end, the analysis in this post doesn't look like it moves the needle a whole lot as far as I can tell, so it probably doesn't matter much anyway, especially since I think that spending control is a much stronger lever than asset allocation when responding to changing levels of risk.  The question I am exploring here is this: if one were to take the dynamic asset allocation recommendations that "backward induction optimization" might imply, which I tried to generate in a prior post [1], and plug them into a forward-looking simulator, would they do anything interesting to fail rates or the duration of simulated fails in a very simple, generic, and artificial retirement plan?


The output of an asset-allocation "map" based on backward induction might look like this, if one were to believe I got it right [2].  The x-axis is plan year and the y-axis is the wealth level in any given plan year. The optimal asset allocations, based on my creaky amateur backward induction method, are the color zones: yellow is a 100% allocation to the risk asset and dark blue is a 100% allocation to the low-risk asset (10-year Treasury total return in this case).  This is based on a 30-year fixed duration and 30 years of inflation-adjusted spending at 4% (not a big fan of planning based on fixed duration or spending, btw, but that's today's game).
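For the curious, a stripped-down sketch of what a backward-induction map builder can look like is below.  To be clear, the grid sizes, return parameters, and success definition here are placeholder assumptions for illustration, not the exact setup from the prior post:

    import numpy as np

    # Sketch of a backward-induction allocation map. Parameters are
    # illustrative assumptions, not the prior post's actual setup.
    YEARS, SPEND = 30, 40_000                        # 30-year horizon, 4% of $1M
    W_GRID = np.linspace(0, 3_000_000, 301)          # discretized wealth levels (y-axis)
    ALLOCS = np.linspace(0, 1, 11)                   # candidate risk-asset weights
    MU_R, SD_R = 0.07, 0.18                          # hypothetical risk-asset return
    MU_S, SD_S = 0.03, 0.07                          # hypothetical low-risk return
    DRAWS = np.random.default_rng(1).normal(size=(2, 2_000))  # shared return shocks

    def build_alloc_map():
        v_next = np.where(W_GRID > 0, 1.0, 0.0)      # P(success) at horizon: solvent -> 1
        alloc_map = np.zeros((YEARS, len(W_GRID)))
        for t in range(YEARS - 1, -1, -1):           # walk backward from the last year
            v_now = np.zeros_like(W_GRID)
            for i, w in enumerate(W_GRID):
                best_p, best_a = -1.0, 0.0
                for a in ALLOCS:                     # try each allocation, keep the argmax
                    r = a * (MU_R + SD_R * DRAWS[0]) + (1 - a) * (MU_S + SD_S * DRAWS[1])
                    w_next = np.clip((w - SPEND) * (1 + r), W_GRID[0], W_GRID[-1])
                    p = np.interp(w_next, W_GRID, v_next).mean()
                    if p > best_p:
                        best_p, best_a = p, a
                alloc_map[t, i], v_now[i] = best_a, best_p
            v_next = v_now
        return alloc_map          # alloc_map[t, i]: weight for year t at wealth W_GRID[i]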

Now, what I did at this point was, rather than have a static asset allocation (e.g., 50/50 or 60/40) plugged into my simulator for all eternity, re-code the simulator so that it could use the data the chart was based on, and then I took that data at face value to create a dynamic asset allocation for each year of the sim.  In other words, in any given sim-year I used the sim-wealth and the sim-year to find the allocation that the BI chart suggested for that particular wealth level and planning year...just to see what would happen. Note that since I let longevity vary in the sim and it can go beyond 30 years, to keep things simple I made years past 30 default to the year-30 allocation recommendations.
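That lookup is simple enough to show in a few lines.  This is a sketch assuming the map data sits in an array indexed by year and wealth-grid point; the names alloc_map and w_grid are mine (e.g., from the sketch above), not the simulator's actual internals:

    import numpy as np

    # Sketch of the per-sim-year lookup described above. Names are assumed,
    # not the simulator's internals.
    def dynamic_alloc(alloc_map, w_grid, sim_year, sim_wealth):
        t = min(sim_year, alloc_map.shape[0] - 1)        # years past 30 reuse year 30's row
        i = int(np.abs(w_grid - sim_wealth).argmin())    # nearest wealth grid point
        return alloc_map[t, i]                           # suggested risk-asset weight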

First, here is the base case (before using dynamic allocation): a trial-and-error version of the simulator using standard $1M/4% assumptions [3] but trying different allocations one at a time.  The basic idea is that there might be an optimal point where one particular "fixed" allocation generates lower fail rates (and a lower "median duration of fail years") than the others. Table 1 is the output of the trial-and-error.  The results look a little harsh because I am using aggressive assumptions. For example, the fail rates are from all terminal-age cohorts, including the very longest.  Also, the return suppression doesn't help.  As always, it's possible there are errors, but let's skip over that.  The table shows fail rates are lowest around an 80% allocation to the risk asset.  I also have other optimal minima when looking at median fail duration (the median number of years a "fail" lasted across all failed sims) and % successful total sim-years (the count of all successful individual sim-years divided by the total of individual sim-years), so maybe we can call the optimal fixed-approach allocation a broad-ish range of 60-80% allocated to the risk asset (for this fake sim set-up only, that is).
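For anyone who wants the definitions pinned down, here is roughly how those three metrics could be computed from a batch of simulated wealth paths (one 1-D array per sim, with a length equal to that sim's random lifespan).  An illustrative sketch only, not the simulator's actual code:

    import numpy as np

    # Rough computation of the three Table 1 metrics from simulated paths.
    def metrics(paths):
        n_fail, fail_lengths, good_years, total_years = 0, [], 0, 0
        for w in paths:
            insolvent = int((w <= 0).sum())      # sim-years spent at or below zero wealth
            total_years += len(w)
            good_years += len(w) - insolvent
            if insolvent:                        # a "fail" is any sim that ever goes broke
                n_fail += 1
                fail_lengths.append(insolvent)
        fail_rate = n_fail / len(paths)
        median_fail_dur = float(np.median(fail_lengths)) if fail_lengths else 0.0
        pct_good_years = good_years / total_years
        return fail_rate, median_fail_dur, pct_good_years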




Then I wanted to see whether using "dynamic" allocation techniques alone, via the asset-allocation optimizer, would do anything interesting (keep in mind that in real life spending cuts would probably win the day in a declining portfolio). This is what it looked like:

The allocation was dynamic, so it doesn't map to the x-axis; I threw the results near the other values from the base case so there was some context.


The fail rate, using a dynamic approach, was estimated at 26.4%, or a little higher than the hand-picked fixed allocation of 80% risk asset (if minimizing the fail %), and the median fail duration was 6 years vs. 8 for an 80% allocation or 7 at a 60% allocation (if minimizing the duration minimum).  The % of successful sim-years was 92.3%.  So, I guess: lose on the fail rate, win on the fail duration. But that, in the end, is the interesting trade-off, because the retirement game is not only about minimizing the risk of fails but also about minimizing the risk of time spent insolvent. In that sense there looks like there might be a little gain with respect to time spent insolvent, but I don't know whether that's statistically significant.  Whether this exercise has proved anything useful to me one way or the other is still up in the air.
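On the significance question, a quick back-of-the-envelope check on the fail-rate side would be a two-proportion z-test (a bootstrap would be the analogue for the median durations).  The 25.0% fixed-allocation rate and the 10,000-run sample sizes below are placeholders, not the actual simulation counts:

    from math import sqrt

    # Two-proportion z-test on the fail-rate difference. Inputs below are
    # hypothetical placeholders, not the actual run counts.
    def two_prop_z(p1, n1, p2, n2):
        p = (p1 * n1 + p2 * n2) / (n1 + n2)         # pooled proportion
        se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
        return (p1 - p2) / se                       # |z| > ~1.96 -> significant at 5%

    print(two_prop_z(0.264, 10_000, 0.250, 10_000))  # dynamic vs. hypothetical fixed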



Notes------------------------------------------------

[1] The writeup on BI was here.  There are more than likely errors in modeling and math so take all of this with a big fat grain of salt.  I'm a piker on BI so defer to the experts.

[2] and here you really have to suspend disbelief and assume that I got it right…and for this post we will do that just for the heck of it.

[3] Simulator assumptions include, among other things (a config sketch follows the list):
$1M endowment
4% constant inflation-adjusted spend
2% return suppression in the first 10 years
Age 58 start
Random longevity conforming to the 2013 SS life table for a 58-year-old, capped at 105
$1000 in Social Security income at age 70
No spending trends or shocks, but some random variability and skew
Some effects for fees and taxes included
Allocation is either fixed or dynamic per the discussion above
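
Restated as a config object, the list might look like the sketch below; field names and units are my own guesses for illustration, not the simulator's actual parameters:

    from dataclasses import dataclass

    # The assumption list restated as a config object. Field names/units are
    # guesses for illustration, not the simulator's actual parameters.
    @dataclass
    class SimConfig:
        endowment: float = 1_000_000        # $1M start
        spend_rate: float = 0.04            # constant inflation-adjusted spend
        return_suppression: float = 0.02    # haircut applied to returns...
        suppression_years: int = 10         # ...for the first 10 years
        start_age: int = 58
        max_age: int = 105                  # longevity from 2013 SS life table, capped
        ss_income: float = 1_000            # Social Security income starting at...
        ss_start_age: int = 70
        dynamic_allocation: bool = False    # fixed vs. dynamic per the discussion above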
