Mar 19, 2023

Another, and Likely Last, Look at Backward Induction SDP for Spend Rates

My fin-bro David chastises me for sandbagging in my blog, but in this post you must take everything after this sentence with a big fat grain of salt. I have no idea if what I am doing here is either correct or legit, and the fact that it looks like it works in the end is pretty much dispositive of zip. Plus my notation is sketchy.

Two years ago I tried to do this thing where I inferred optimal spend rates by working backwards from the end of a 30-year interval, using the results of a value function in the last period to help figure out what to spend in the period just prior, and so on back to the beginning. This is called backward induction, a form of stochastic dynamic programming, and is often expressed as Bellman equations, if I have it right.

Sorta worked then. I went back a week ago to look at my code and I noticed two things: 1) I had no idea what the code was doing, and 2) there were some coding errors. Those two things, combined with incipient dementia (kidding), caused me to decide to exercise my brain and try this again. Silly me. There is no real purpose other than to do it, and what I noticed in the re-try is that it's a lot of work for results one can get from easier and simpler methods: the juice is probably not worth the squeeze.

The Baselines... 

...for this effort, i.e., the "easier" methods I mentioned, are these:

1. 1/longevity. This is a method mentioned in a paper by Irlam and Tomlinson (2014) and some articles by Irlam on aacalc.com. Simply stated: one can get very close to the optimal outcomes of other, more sophisticated methods by dividing 1 by a longevity estimate. The trick, maybe, is to think about which table, which cohort, which percentile, and whether we are looking at conditional longevity. In this post I'll use a healthy cohort [1] and either the mean conditional expectation or the 95th percentile for ages 65 to 95...trusting that Irlam and Tomlinson are correct on this proxy. This provides a pretty good zone with boundaries that might make sense to a reasonable person who has read the literature. Imo. I sketch this rule in code just after this list.

2. A proprietary spending heuristic. I call it RH40, and I made it up a few years ago. That sounds pretty stupid, which it might be, but it works and it was made in good faith. The formula is

Spend % = age / (40 - age/3)  +/-  a little spiff for risk (or not)

This originally came from a "divide by 20" rule concocted by Evan Inglis, a pension finance actuary. The point of "20" is that it is extremely easy to remember and apply and is roughly based on age and capital market assumptions, per Evan. He also said that maybe at 90 or 95 it becomes divide by 10, so I complicated things with a formula that basically blends the 20 and the 10 across ages 65-95, and it became RH40. I later tested it and it turns out to be roughly equivalent to solving the Kolmogorov PDE at different ages for a constant 5% lifetime probability of ruin. My point is that in these, the last of my blog posts, I'm not going to do any work with differential equations. I'll use a proxy and I've gotten used to mine. I also showed in a past post that Merton with risk aversion of 2 and a longevity tuned to an SOA table 90th percentile is very close to RH40. So, not a terrible rule. Both baselines are sketched in code just below.
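
To make the 1/longevity baseline concrete, here is a minimal sketch in Python. The remaining-years numbers are made-up placeholders for illustration, not values from the SOA table in the footnote:

    # 1/longevity in sketch form. The remaining-years figures here are
    # hypothetical placeholders, NOT the SOA table values used in the post.
    remaining_years = {65: 24.0, 75: 15.5, 85: 8.5, 95: 3.9}
    for age, years in remaining_years.items():
        print(f"age {age}: spend ~ {1.0 / years:.1%}")
    # age 65: ~4.2%, age 75: ~6.5%, age 85: ~11.8%, age 95: ~25.6%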
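
And RH40 itself, with the spiff left as a manual knob (the function name is mine):

    # RH40 in sketch form; "spiff" is the optional risk adjustment.
    def rh40(age, spiff=0.0):
        """Annual spend rate (%) = age / (40 - age/3), +/- a spiff."""
        return age / (40.0 - age / 3.0) + spiff

    for age in (65, 75, 85, 95):
        print(age, round(rh40(age), 2))
    # 65 -> 3.55, 75 -> 5.0, 85 -> 7.29, 95 -> 11.4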

Note here that there may be a number of other ways to baseline the post. I picked these based on past use and ease. 

The Model

I work backwards over 30 (arbitrary) periods. In the last (30th) period I run a mini-sim of a net wealth process. Like this (and note that henceforth I might get the notation wrong and it may not be perfectly tuned to the code, but I am just trying to convey generally what is going on):

W(t+1) = [ W(t) - C(t) ] * (1 + r(t))

where returns are random normal (don't complain) with a distribution based on 4% r and .12 std dev (don't ask, this is just a simplifying thing; my focus is not on portfolios or allocation here, and yes, 4% is a real return). If consumption C would exceed wealth W, then C is constrained to W. If W is <= 0 it is constrained up to a tiny floor. This ensures that there is always at least some consumption > 0, so if wealth runs out we can make at least a little money begging at intersections.
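
A minimal sketch of that one-period step as I read it; the names and exact constraint mechanics are my rendering, not a transcript of the actual code:

    import numpy as np

    rng = np.random.default_rng(0)

    def wealth_step(W, spend_rate, n=10_000, mu=0.04, sd=0.12, floor=1e-6):
        """One period of the net wealth process: consume, then grow at a
        random normal real return. C is capped at W and next-period W is
        floored just above zero so some consumption is always possible."""
        C = min(spend_rate * W, W)            # can't consume more than wealth
        r = rng.normal(mu, sd, n)             # random normal real returns
        W_next = np.maximum((W - C) * (1.0 + r), floor)
        return C, W_next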

To evaluate the situation along the  backwards path I will use a value function based on CRRA power utility. And here we are potentially stepping into a shit-zone. There are some hints that what I am doing might be ok but nothing anywhere really says that it is. Literally no idea on this. But I'm running with it and reminding you again about the grain of salt at the same time. 

At t=30 the value function I made up is like this: 

V(30) = k * U[c] + (1 - k) * U[b]

where c is consumption and b is bequest, or residual wealth per the recursive net W process above. Consumption has to be something rather than nothing, but it also shouldn't be so high that it destroys the utiles in V. k is a weighting between consumption and bequest; I have made the c and b weights add to 1. Hope I can do all that. I've seen variations on this in some papers but zero exactly like this, so I am understandably worried I am way off road. In my previous post, k was selected 100% arbitrarily. In this post I decided to spend some time to see if there is something other than merely arbitrary that we can do. Which we do. Later.

The value function is evaluated for levels of wealth from 100k to 5M in 100k increments, and for spend rates from .02 to .70 in either .001 or .005 increments depending on the run, sometimes constrained to even narrower intervals. This program takes forever, so some of these assumptions are accommodations to the reality of my desktop PC. For each year the wealth process is run either 10k times (t=30) or 1000 times (t=0-29) to keep the run times down. Imperfect? Yes.
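
In sketch form, my reading of that terminal-period evaluation, building on the wealth_step sketch above (shrink the grids if you value your afternoon):

    # Terminal (t=30) value: V = k*U[c] + (1-k)*E[U[b]] on a wealth x
    # spend-rate grid. Continues from the wealth_step sketch above.
    gamma = 2.0
    def crra(x):
        return x**(1.0 - gamma) / (1.0 - gamma)   # power CRRA utility

    def terminal_value(W, spend_rate, k, n=10_000):
        C, b = wealth_step(W, spend_rate, n=n)    # b = bequest/residual wealth
        return k * crra(C) + (1.0 - k) * crra(b).mean()

    wealth_grid = np.arange(100_000, 5_000_001, 100_000)   # 100k to 5M
    spend_grid = np.arange(0.02, 0.701, 0.005)             # 2% to 70%

    # e.g., the best spend rate at $1M with k = .05:
    best = max(spend_grid, key=lambda s: terminal_value(1_000_000, s, k=0.05))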

Once we step backwards to the previous period, say t=29, we use the outcome of the next period (t=30), pulling its results into the current period at each given level of wealth, since I am working with many wealth levels and the results of the mini-sim are unknown before it is run. There is probably a better way to articulate this. Speaking of wealth, in the later charts note that I will generally chart at some "constant wealth" level. At different ages I always assume $1M, which does not represent the level of wealth remaining at, say, 85 after starting at 65. It's more like "what is the spend rate if I had 1M at 85 and were starting from that point." Hopefully that makes sense.

So, the Value function in periods where t<30 looks more like this, and I am not 100% sure I have rendered it correctly:

V(t) = k * U[c] + (1 - k) * E[ V(t+1) ]

The utility function U[] is a power CRRA function. That function is the only one I know and can use, and even then I know little about it other than that it is a "curved" function whose absolute values mean nothing; the curve punishes low consumption/wealth and rewards additional C/W monotonically but at a diminishing rate. The form I use here is like this:

U[x] = x^(1 - gamma) / (1 - gamma)

where gamma is a coefficient of risk aversion derived from nuthin[2]. Note that I exclusively use "2" in this post and do not test other risk aversions. I don't have a defense for this right now except in the footnote. This is relatively low risk aversion, btw. 
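
One thing that helps my intuition: with gamma = 2 the formula collapses to U[x] = -1/x, so the curvature is visible in two lines:

    # with gamma = 2, CRRA collapses to U(x) = -1/x
    U = lambda x: -1.0 / x
    print(U(40_000), U(80_000))   # -2.5e-05 vs -1.25e-05: doubling C halves
                                  # the "pain" -- diminishing, monotonic reward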

So, to finish this out: at each time step backwards I am pulling the spend rate associated with max V at each level of wealth, if I remember the code correctly, and storing it. That means after waiting for this thing to run for an hour or so, I have a table covering 30 years and different wealth levels, with the associated max V and its related spending rate.
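
The whole backward pass, again as a sketch of how I understand my own method rather than the code itself; the interpolation of next-period V across the wealth grid is the part I am least sure matches what I actually did:

    # Backward pass: at each t and wealth level, keep the spend rate with
    # max V, where V leans on the (interpolated) V of the period after.
    # Continues from the sketches above; shrink the grids for sane run times.
    def backward_pass(T=30, k=0.05, n=1_000):
        V, policy = {}, {}
        for t in range(T, -1, -1):
            V_t, s_t = [], []
            for W in wealth_grid:
                def value(s):
                    C, W_next = wealth_step(W, s, n=n)
                    if t == T:                 # terminal: utility of bequest
                        future = crra(W_next).mean()
                    else:                      # else: next period's V, interpolated
                        future = np.interp(W_next, wealth_grid, V[t + 1]).mean()
                    return k * crra(C) + (1.0 - k) * future
                s_best = max(spend_grid, key=value)
                s_t.append(s_best)
                V_t.append(value(s_best))
            V[t], policy[t] = np.array(V_t), np.array(s_t)
        return policy   # policy[t]: argmax spend rate per wealth level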

Tuning the Year 30 Weightings 

Two years ago I used k = .20. I don't remember why, but it now strikes me as a fairly arbitrary choice. This time I took a moment, a long moment since the run times suck, to see what happens -- again assuming my value function approach is valid -- when I vary k. I looked at two things when doing this:

1) the output of the value function for different values of k, and 

2) the spend rate associated with each V for each value of k. 


1. t=30 Vs for different k

Figure 1

The X axis is the k weight, where the far left is 0 (no weight to spend utility) and the far right is 100%. Y is the max V at each weight. The shape is a U, and on the far right is a local max corresponding to a heavy weight to bequest. On the left are some values that sit above that local max. In other words, one needs to be above the right-hand local max, otherwise one is engaging in relative disutility or value destruction. The range on the left that clears that bar runs from about a 10% weight on spend utility down toward 0%. Note that the weight here is on utiles, not on spending proper. And...consumption can't be zero cuz that is dumb. So k has to be above 0 but at or below that roughly 10% mark. In this post I used two values of k, .02 and .05, which are marked in figure 1.

There is probably some super advanced point here on marginal utility of consumption vs marginal U of bequest, but I am not equipped to make it. Rather, the amateur game here is to "max V" but keep real, absolute consumption both above zero and relatively high, though not so high as to destroy utility. Those are pretty loose terms, but whatever. Maybe if I do this again... So, real C needs to be where V is above that line. That's how I came up with the 2% and 5% parameters for this post.

2. t=30 Spend for each V

Now, for each blue dot in figure 1, what is the associated spend rate in the max(V) game we are playing? It looks like this in figure 2:

Figure 2

I noticed a couple of things when I did this. First, for super tiny weightings of k the spend rate runs around 1 to 3%. Proves nothing, but it reminds me of the Ed Thorp comment about a 2% spend being an effective perpetuity. This has always been a kind of lower bound for me, too. My guess is that spend here would go to zero as k goes to zero, but I am not sure in this backwards recursion thingy. Probably close either way, but it doesn't matter.

The second thing is that spending tops out around 65% on the right with a weighting to k of .999. This was odd at first, but it reminded me of the previous post I did on this, where it dawned on me that in a two-period abstracted model like this, the bequest is not really to kids 50 years from now on a deathbed. It is strictly to the next period. The legatee therefore is me (or might be my legatees), and death may or may not literally occur. The other way to see this is to view the bequest as a form of consumption: I consume now and I consume the next period. I always must donate something, anything, to the future or the whole house of cards might collapse and I will starve.
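
For what it's worth, the sweep behind figures 1 and 2 is simple in sketch form, leaning on the terminal_value sketch from earlier:

    # Sweep k at t=30 and $1M: record max V (figure 1) and the spend rate
    # that achieves it (figure 2).
    ks = np.arange(0.01, 1.0, 0.02)
    max_v, argmax_s = [], []
    for k in ks:
        vals = [terminal_value(1_000_000, s, k) for s in spend_grid]
        i = int(np.argmax(vals))
        max_v.append(vals[i])
        argmax_s.append(spend_grid[i])
    # ks vs max_v -> figure 1; ks vs argmax_s -> figure 2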

In this model the spending in absolute real rate terms is always going to be somewhere between 3% and 20% depending on age/period, wealth, and weighting to k of either 2 or 5%. And remember we have not tested asset allocation or risk aversion anywhere here. 

Model Output

Figure 3

Observations on Figure 3

  • I wonder what it would have looked like if I had used 40 or 50 periods vs 30
  • The SDP model looks conservative in the later years (red dots anyway) and less conservative in the early years. 
  • The SDP model appears to be playing the same game as the baseline models
  • The mini-sim is really tiny, which is what drives the variability, I think
  • The 1/longevity for different percentiles looks like a pretty good set of boundaries here
  • RH40 didn't suck as a predictor, again
  • For the more conservative k, it looks a little like the "4% rule" gets some corroboration?? But then we only tested one portfolio
  • I could have had more baseline comparisons but why? I will trust Irlam and Tomlinson on 1/L
  • The variability could be due to the scale of the mini-sim but in the past I have noticed that using CRRA math creates a "zone" of relative similarity in spend rates all of which are more or less pretty good for a given set of assumptions. I do not prove that here, just a thought. It's a little like when I say asset allocation doesn't matter in a broad range from maybe 40-80% to equities but you'd have to understand how exactly I came to that conclusion. It's only in Utility world I can say that. 
  • Note again that the "years" here are not running down some "initial pile" that might or might not run out (which it might, if we were looking at it that way). We are instead looking at the spend at 1M of wealth for that period/year.

Observations on this post

  • Why the hell would I do this level of complexity for something where I can draw inferences from other simpler methods? No idea. Intellectual curiosity? Humble bragging on tech virtuosity? Learning something new? Norm the other methods I've used? Stave off dementia? Maybe all of that, idk. Not too sure I'll do this one again.
  • Do I have any new take-aways after this effort? Not really. I already know that in the absence of lifetime income (pensions, SS, annuities) one has to be a little conservative with spending early in the lifecycle. What that is exactly matters in models but in real life most grannies instinctively know to spend carefully, adjust when conditions warrant, be wary of salesmen knocking on the door with super-optimizing systems, and that sometimes suboptimal is ok (not proved here, right?) 
  • This method seemed to work out in general terms. There was no necessary reason to expect that. I don't recommend it, though, and I probably won't use it again. The main "utility," if you will, was just shaking out some mental cobwebs and making sure that my past coding errors did not sink any conclusions I made last time, which they didn't.





-------------------------------------------------

[1] From aacalc.com: Society of Actuaries 2012 Individual Annuity Mortality Basic Table with Projection Scale G2, and 2005-13 Individual Annuity Experience Report contract years actual/expected rate adjustment.

[2] I gather that risk aversion can be derived from questionnaires or observation. Every single time I have run a model with CRRA and tuned it to my behavior, the RA seems to be "2," hence the use of 2 here, but remember that that is so arbitrary it is absurd.




3 comments:

  1. OK.
    If I understand your point that begins "Note again that years ..." correctly, then your spend is loaded towards the end of life; is there an easy way to reverse this so that spend is front-loaded?

    Replies
    1. I don't quite get the question, so I'll answer one that you didn't ask. This infuriates my kids sometimes, or at least it used to. My oldest is 25 now and gets why I do this. So: if one is using an evaluative framework of Utility, which is fraught I think, it changes things. In that setting the availability of life-income stuff (pensions, annuities) means that spend can 100% be more front loaded, and in fact it makes sense to deplete wealth early to max out consumption utility. Probably cuz the out-years are so down-weighted by longevity and time preference, idk. All those papers by Milevsky, Leung, LaChance etc have that character. Without that income one needs to reserve for oneself the ability to mitigate life risk (early), but then as life comes in, one is freer. ie, at 95 I'll throw a party and blow it out. If I'm still alive and can blow it out, that is. Me now, at 64? I'm a little more circumspect. If I had a giant pension I could spend like wild now and know that I have a backstop when W runs out at 87 or 92. Am I answering the same question you are asking?

    2. Yes you are, and thanks for the reply.
      My key takeaway from your answer is that in such a framework the presence of sufficient life income is critical because once it starts paying out it effectively removes the risk of early pot depletion.
      Thanks again.
