A WDT Model

This content is set up as a separate page because I put a ton of work into this WDT idea and I kept losing sight of past posts as the blog ages and grows. So, this is a placeholder in order to have some continuity and something easy to find. Also I am putting some extra emphasis on this concept because I consider it to be quite integrative of many of the major themes and processes I have studied in retirement finance so far.

My Concept of "Wealth Depletion Time." 

In a number of papers I've read since the beginning of the blog, I kept running into the phrase "wealth depletion time" (WDT) or "wealth depletion age." My guess at the time of reading was that it was more or less the same idea as, or similar to, portfolio longevity or maybe ruin risk. But then I started to see that it was something quite a bit more subtle than that. The basic idea of WDT, as far as I can tell, is a hyper-focus on consumption strategies and the evaluation of their lifetime "utility" (rather than wealth) in a world where, instead of the traditional simulation assumption that one is "ruined" (at some point zero wealth, zero spending), one is forced to change spending to available income when wealth is depleted. Then, in an honest model where lifetime is treated as random, the utility of the possibly-binary consumption path over that random lifetime, for all years of the pre- and post-depletion (if any) states, is discounted, summed, and then averaged to evaluate the "expected discounted utility of lifetime consumption" or EDULC. This is a type of simulation, of course, but it is not calculating a percent of simulated lives ruined; it is evaluating the lifetime utility of consumption in a world where there is no ruin, just a severely adjusted lifestyle that depends on pensions, social security, and purchased annuities when no un-pensionized wealth remains. Note that the focus in this model is on decumulation only.

A schematic of this concept could be rendered like this:

            [Schematic of the WDT concept]

A Compendium of Links and Posts

This is a summary of what I have conjured so far. I'm surprised by how my current understanding is not so different from what I hypothesized at the beginning. I'll attribute the whole effort to a reading of some work by Prof. Milevsky who emphasized both the importance and subtleties related to understanding the concept. I'm more or less glad I went down the path. The effort has helped me "see" retirement finance in a way that I consider to be either comprehensive or integrative or both. That was one of my goals for years.

The first link below has a small bibliography of references to the literature on the subject, which is reproduced below. The second link, from when I was working on an early deterministic version of the model, has, in addition to the schematic above, a pretty decent visualization of the idea. 
Some Background on the Stochastic Version of the WDT Model

The basic idea:

(a) The simulation -- over x,000 iterations, each of which has a random lifetime -- sums discounted utiles of real consumption over each individual life during a particular iteration and then averages over all of the iterations to calculate the "expected discounted utility of lifetime consumption" (EDULC). There is a lot going on in that statement but the major things to note, before I get into the detail, include:

(b) Lifetime is random -- the modeled lifetime for each iteration is random but not "normally distributed" random. The distribution is shaped to match probabilities inherent in the SOA IAM annuitant life table 2012 with G2 extension to 2018 reflecting changes in longevity expectations. I might add a Gompertz-Makeham module [did] to play around with different longevity assumptions at some point. (Note that I recently modified my approach a bit; see the value function below and note[3]),
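As an aside, a bare-bones Gompertz version of that kind of module (no Makeham term) is easy to sketch in Python by inverse-transform sampling of the survival function. The modal age and dispersion parameters below are made up for illustration; the sim itself keys off the SOA table:

import math
import random

def gompertz_lifetime(x, m=88.0, b=10.0):
    """Draw a random remaining lifetime (in years) for someone aged x from a
    Gompertz mortality law with modal age m and dispersion b (illustrative
    parameters only). Uses inverse-transform sampling of the conditional
    survival function tPx = exp(exp((x - m)/b) * (1 - exp(t/b)))."""
    u = random.random()
    return b * math.log(1.0 - math.log(u) * math.exp(-(x - m) / b))

# e.g., one simulated age at death for a 62-year-old
print(62 + gompertz_lifetime(62))
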

(c) The primary focus is on consumption not wealth -- The model and software do not make wealth or, in a strict sense, "income" the center of their focus. The essential focus is, rather, on spending and its "utility." Things like wealth, returns, volatility and income are necessary components, of course, since that is what (1) enables and limits the voluntary spending strategy in a pre wealth-depletion state and (2) determines much of the length of the "involuntary" wealth depletion state, if any, prior to death,

(d) There IS a consumption floor -- If there is one thing to pay attention to in the model it is that spending snaps not to zero, as in a typical ruin model, but rather to available income (e.g., SS, purchased annuities, and/or pensions) when wealth runs out. That snap is important and has a hard, blunt force effect on the expected value of discounted lifetime consumption utility calc, and

(e) There is no "risk of ruin" as such -- while this model is based on a simulation framework, note that there is no "failure" or ruin probability per se since assumption (d) generally precludes a hard fail. Yaari, in his 1965 paper, comments that given modern social structures (let's say something like government programs, associative institutions, or family...not to mention behavioral and adaptive considerations where one would anticipate and intervene before a ruin state would occur) it is more or less difficult or impossible for someone to actually "fail" (in wealth terms in his paper) in today's society.[6] The homeless might disagree with that comment but this model will assume, in cases where wealth goes to or near zero and no income is available (another form of ruin), some de minimis social floor (income) that exists independently of (and prior to) pensionized income like social security, annuities, or pensions. So the risk here is less of "ruin" than it is of the potential for an interval (before SS kicks in) of hardscrabble life -- forced on oneself by some combination of consumption behavior (and externalities?), returns, volatility, pooling choices and longevity -- that is mitigated by whatever income is available through family or social programs or what has been purchased during the preceding lifecycle.

The core sim math

The heart of the math framework for the "expected utility" or EDULC part of the sim, the way I've been doing it, is this: 
            
            E[V(c)] \;=\; \frac{1}{S}\sum_{s=1}^{S}\;\sum_{t=0}^{T^{*}_{s}} \frac{g[c_{s}(t)]}{(1+k)^{t}}

                    Eq. 1

or, alternatively [3] this: 
             
            E[V(c)] \;=\; \frac{1}{S}\sum_{s=1}^{S}\;\sum_{t=0}^{\infty} {}_{t}p_{x}\,\frac{g[c_{s}(t)]}{(1+k)^{t}}

                    Eq. 2
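To make Eq. 1 and Eq. 2 a little more concrete, here is a minimal Python sketch of the averaging step. The function names and inputs (per-iteration consumption paths, simulated lifetimes, a utility function g, a discount k) are mine for illustration, not the actual software:

def edulc_random_lifetime(consumption_paths, lifetimes, g, k=0.005):
    """Eq. 1: for each iteration s, sum the discounted utiles of consumption
    over that iteration's random lifetime, then average over iterations.
    consumption_paths[s][t] = real consumption in year t of iteration s;
    lifetimes[s] = T*_s, the last year of life in iteration s (consumption
    is counted for t = 0 .. T*_s, so each path needs at least T*_s + 1 years)."""
    total = 0.0
    for path, T in zip(consumption_paths, lifetimes):
        total += sum(g(path[t]) / (1.0 + k) ** t for t in range(T + 1))
    return total / len(lifetimes)

def edulc_survival_weighted(consumption_paths, tpx, g, k=0.005):
    """Eq. 2: run each iteration out to the end of the life table and weight
    the discounted utiles by the conditional survival probabilities tpx[t]
    (tpx[0] = 1) instead of cutting off at a random death age."""
    total = 0.0
    for path in consumption_paths:
        total += sum(p * g(c) / (1.0 + k) ** t
                     for t, (c, p) in enumerate(zip(path, tpx)))
    return total / len(consumption_paths)
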

These at least explain the expected value function but there is more going on in the model in terms of wealth and returns, income, annuitization and so forth. The following items describe both the formula above and also generally what I am trying to do in the sim as a whole:

Elements of the WDT model

1. E[V(c)] -- is the expected discounted utility of lifetime consumption and is the main output of the simulation which depends on all of the other items described below. V(c) is a random variable because lifetime (T*) is a random variable. 

2. "c" -- is a custom consumption plan of some design; c(t) is consumption in period t. The following is attempted in the sim (a rough sketch in code follows this list). For: 
  • Wealth(t) > 0: consumption c(t) is the "custom" plan (could be constant or rules based)
  • Wealth(t) = 0: consumption snaps to available income = SS + pension + annuity [1]
  • Wealth(t) < 0: borrowing is not allowed so this is the same as W=0
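A rough Python sketch of that rule, with my own names and with the "custom" plan passed in as this year's planned spend:

def consumption(wealth, planned_spend, available_income):
    """Spending rule: follow the custom plan while there is wealth to fund it;
    once wealth is gone, spending snaps to available income (SS + pension +
    annuity). Borrowing is not allowed."""
    if wealth > 0:
        # one possible handling of the nearly-empty case (see note [1]):
        # cap the planned spend at what wealth plus this year's income can fund
        return min(planned_spend, wealth + available_income)
    return available_income
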
3. S -- is the number of iterations. I typically do 10-20k. Run time is not long on a standard PC. (actually more now...see note 3) 

4. k -- is a subjective discount on utiles[4]. Since income and spending are modeled in or discounted to real terms this is not large in my current model. In some correspondence with Gordon Irlam, someone I trust, he suggested I keep it small or zero, say .005 or so. I have adopted this assumption for now.

5. g[c(t)] -- is the utility function where g[c(t)] is in the following forms below. I personally often use a gamma of 1 (log utility, which has some interesting attributes) but maybe gamma is better set at 3 or 4 based on what I read about real life experiments in risk aversion parameterization. The constant "-1" (more or less common in the lit) was used for the CRRA instantiation because, as an amateur, I totally could not get my head around discounting negative utiles:

for gamma ≠ 1

            g[c(t)] \;=\; \frac{c(t)^{1-\gamma} - 1}{1-\gamma}

            Eq. 3 (see note [7])

for gamma = 1

            g[c(t)] \;=\; \ln[c(t)]

            Eq. 4
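In code, Eq. 3 and Eq. 4 together might look like the small sketch below, where gamma is the coefficient of risk aversion and the "-1" shift is the one mentioned above:

import math

def g(c, gamma=1.0):
    """Utility of one year's real consumption c.
    gamma == 1: log utility (Eq. 4); otherwise shifted CRRA (Eq. 3)."""
    if gamma == 1.0:
        return math.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
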

6. Wealth -- at time t "W(t)" (not shown in formula but it is in sim) is a process roughly rendered like this:

          W(t) = W(t-1) + r*W(t-1) - c(t) + income(t) - annuity.purchase(t). 

          Eq. 5

where c(t) is as described elsewhere. I'll have to check the order I did this when I get my hard drive back (my software is lost as of right now) but I think this is how I did it. The annuity purchase is as described below. Income is as described below. But note "r." This is something more interesting to pay attention to (next item).
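Before getting to returns, Eq. 5 as a single-year step might be sketched like this; the exact ordering of the terms within the year is an assumption on my part, consistent with the equation as written:

def wealth_step(w_prev, r, c, income, annuity_purchase=0.0):
    """Eq. 5: grow last year's wealth at return r, add income, subtract
    consumption and any annuity purchase. Floored at zero since borrowing
    is not allowed."""
    w = w_prev * (1.0 + r) - c + income - annuity_purchase
    return max(w, 0.0)
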

7. Returns -- return modeling is one of the following two options (a rough sketch in code follows the list):
  • Normal Distribution -- a normally and randomly distributed return based on arithmetic input r and a standard deviation sd. The time-averaging effects of the simulation will force geometric return outcomes over time. There should be enough sims reflecting bad sequences of returns that we can see at least some sequence risk, though it gets dominated by center outcomes. Since I think that sequence risk can be overwhelmed by the number of sims and the shape of the distributions, there is, if I recall, a feature to force a more direct binary regime of good and bad returns where r is -x in the first half of expected lifetime and +x in the last half, or vice versa. This might have been only in my deterministic spreadsheet sim; I have to wait for my destroyed hard drive to come back. Easy enough to program, though.

  • Fat Tailed Distribution -- I have a module for a fat-tailed, randomized, gaussian mix distribution based, roughly, on X% of return1/sd1 and Y% of return2/sd2. This can force a fat tail that can be matched to what is expected in, say, a 60/40 or 100/0 real-life portfolio based on historical data. The module can also go into extreme terra incognita if one wishes, but this has some predictable results. My guess is that in modeling practice this is not as important as the absolute assumptions about the central tendencies of returns, even though the idea of central tendency gets corrupted a bit in the presence of the large consequences coming from processes that can be described by fat-tailed distributions. TBD... [2]
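Both options are easy to sketch; the parameters below are placeholders for illustration, not the ones used in actual runs:

import random

def draw_return_normal(mu=0.05, sd=0.12):
    """Option 1: a single normally distributed real return."""
    return random.gauss(mu, sd)

def draw_return_fat_tailed(p1=0.9, mu1=0.06, sd1=0.10, mu2=-0.05, sd2=0.25):
    """Option 2: a two-regime gaussian mix -- with probability p1 draw from
    regime 1, otherwise from a worse, more volatile regime 2. Tuned to data,
    this fattens the tails relative to a single normal."""
    if random.random() < p1:
        return random.gauss(mu1, sd1)
    return random.gauss(mu2, sd2)
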

8. Income -- Income in the sim is only SS + pension + annuity (purchased from W which decrements at time of purchase). These can start at the same or different times but continue for random lifetime T*. I have not modeled any employment or continuing pre-retirement income (or taxation for that matter). SS is inflation adjusted...as is the annuity in the current implementation of the software. Pension income is, right now, a stub that is not implemented yet. 

9. Annuity -- if one opts for an annuity purchase in the consumption plan, the annuity that is calculated/purchased at time t is a hypothetical, mythical beast based on the concept of an idealized real annuity priced using the purchase age at time t, conditional survival probability assumptions based on that purchase age and the SOA life table, and an annuity discount rate that is based on, well, not much. Right now I'm using around 3% [5] but that should move in the future or maybe be more flexible in the model. There is a baked-in load, too, of, I think, 10%; I'll have to check when and if I get my hard drive back. The math is more or less like the equation below where a(x) is the annuity price at age x (at time t), iPx is a vector of probabilities in future times "i" for start age x, R is the annuity discount, and L is the load. Totally ignore for now whether any of this is realistically purchasable in a real, complete, and non-defaulting market. That's not yet the point:   

          
            a(x) \;=\; (1 + L)\sum_{i=1}^{\infty}\frac{{}_{i}p_{x}}{(1+R)^{i}}

             Eq. 6
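A minimal Python rendering of Eq. 6, taking the conditional survival probabilities, discount R, and load L as inputs (names are mine):

def annuity_price(ipx, R=0.03, load=0.10):
    """Eq. 6: price a(x) of 1 unit of real lifetime income purchased at age x.
    ipx is the list [1Px, 2Px, ...] of conditional survival probabilities for
    the purchase age, R is the annuity discount rate, load is the insurer load."""
    pv = sum(p / (1.0 + R) ** i for i, p in enumerate(ipx, start=1))
    return (1.0 + load) * pv
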


10. T* -- is the random lifetime and is the main, and very important, reason we came to play this game. I used the SOA IAM table for a male if I recall. I modified it using the SOA adjustments for longevity changes over time. The maleness is a proxy because I made the sim for me and the shape of the curve can be more important than the gender. At some point I will work on this a bit more.

11. tPx -- In the 2nd formulation of the expected value function (Eq2 above), this is a vector of conditional survival probabilities along time t for a person aged x (the subject of the simulation). This is calculated off the same SOA IAM table as above. This achieves the same result as random lifetime but now via a probability weighting of the discounted utiles. This helps stabilize the simulation output where random lifetime sometimes requires large numbers of sims. Within an iteration of the sim, rather than running the life from age x to a randomly determined end age, the sim is instead run from age x to infinity (actually age x to 121 where the conditional probabilities are nearing or at zero).
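Building that vector from the table's one-year mortality rates is just a running product of one-year survival probabilities. A minimal sketch, assuming qx is indexed by age:

def survival_vector(qx, start_age, max_age=121):
    """Conditional survival probabilities tPx for someone at start_age.
    qx[age] = probability of dying within the year at that age.
    Returns [1.0, 1Px, 2Px, ...] so that tpx[t] is the probability of still
    being alive t years out (tpx[0] = 1)."""
    tpx = [1.0]
    alive = 1.0
    for age in range(start_age, max_age):
        alive *= (1.0 - qx[age])
        tpx.append(alive)
    return tpx
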

12. Other. Note that this is a relatively complex implementation but also a very simplified model at a meta level. I have no explicit modeling of things like auto-correlation or taxes and fees and so forth. Those last two, in particular, could be baked into the return assumptions as a proxy before one were to take on excessively complex coding. I have this theory that the general shape of the model and the underlying processes is way more important than sweating all those details. This is, of course, debatable and self-interested, but then again this model is mostly for me and not a commercial or academic proposition, so who knows? Another missing piece that should be really obvious if you've been paying attention is the concept of a bequest utility. I am ignoring this for several reasons. First, the bequest, at least in theory, is separable at time zero and so is at least potentially ignorable for those who have no bequest motive. Second, I'm not really sure how to model the bequest utility; Yaari '65 suggests a funky way to do this that I can't really fathom yet. Third, Prof. Milevsky, for both theoretical and common sense reasons, discouraged me once in a chain of correspondence from spending too much time on modeling bequest utility. So, for my own purposes I will assume that bequest is either zero or separable at T0. Convenient, eh?
 
Selected References

The more comprehensive list of references is in the links above.

Lachance, M. (2012), Optimal onset and exhaustion of retirement savings in a life-cycle model, Journal of Pension Economics and Finance, Vol. 11(1), pp. 21-52.

Leung, S. F. (2002), The dynamic effects of social security on individual consumption, wealth and welfare, Journal of Public Economic Theory, Vol. 4(4), pg. 581-612.

Leung, S. F (2007), The existence, uniqueness and optimality of the terminal wealth depletion time in life-cycle models of saving under certain lifetime and borrowing constraint, Journal of Economic Theory, Vol. 134, pp. 470-493.

Milevsky, M. A. and Huang, H. (2018), The Utility Value of Longevity Risk Pooling: Analytic Insights (and the Technical Appendix).

Leung, S. F. (1994), Uncertain Lifetime, the Theory of the Consumer, and the Life Cycle Hypothesis (Notes and Comments).

Yaari, M. E. (1965), Uncertain Lifetime, Life Insurance, and the Theory of the Consumer, Review of Economic Studies, Vol. 32(2), pp. 137-150.

Habib, F., Huang, H., and Milevsky, M. A. (2017), Approximate Solutions to Retirement Spending Problems and the Optimality of Ruin.

NOTES
------------------------------
[1] I have some unrealistic simplifying assumptions for when wealth is less than the current period's consumption but slightly above zero. This may change but I don't think it matters much at this point when we are looking at the broad effects of consumption on utility.

[2] I have seen criticism before on overly simple return modelling. That is why I added the non-normal option with its fat tails. But I don't really think this matters much and the effects, I believe, will be dwarfed quite a bit by other considerations like spending and vol and overall level of returns. I have seen this corroborated by academic researchers way smarter than me. This is why I don't sweat things like autocorrelation and mean reversion overmuch like others do (plus I have no one to please but myself).

Also, note that I am not really modelling multiple asset classes and their various correlations. I have only one point of connection in the sim with a portfolio return and std dev (whether normal or fat). However, this means that if I am generally aware of a plausible efficient frontier (stable or otherwise), I can model the portfolio stats (allocations) just fine along the EF though it is a bit more manual than one might want. It also means that I have a free hand to model points off of the efficient frontier if I know how adding alt-risk assets really works in real life in terms of portfolio-level effects over some unknown horizon with some unknown correlation. I can also, btw and for the same reason, "burn down" efficiency by modeling under the EF.

In the end there are no portfolio combinations that are not model-able in the 2D EF space with a simple proxy. You just have to know what you are doing and also, fwiw, know something about portfolio covar math, linear vs compound returns, the unstable inputs to MVO, and the effects on long term outcomes of time and multiple periods and consumption over a horizon. It's not really rocket science but does take a little thought and effort. I'm not sure I have it down cold yet but I also think the criticism I've received is sometimes well informed and sometimes not so much. The latter can go (politely) hang. The former can write to me in great depth and the correspondence will be more than welcome.

[3] After doing some work with the simulator I realized that the random lifetime thing (along with the very small differences in the utility values calculated for gamma >1), in addition to being difficult to work with in a programming context, also seems to make the output not stabilize very well without a large number of iterations. In retrospect I could have achieved the same (or similar) thing with a vector of conditional survival probabilities and T being set to infinity...or at least 120 or so. I'm fairly certain that it would produce the same results and be more efficient in simulation.

I'll also mention here that V(c) is still a random variable here because of the randomizing effects of returns and volatility on the start time of WDT. Random T* is expressed via the conditional survival probability vector tPx . So rather than WDT being random both in beginning and "hard" end it is now random at beginning and then trails to infinity but is then also weighted by a probability of longevity given being age x at the time of simulation. Hence V(c) is still a random variable and hence the need for E[V(c)]. If I have that right...

[4] Prof Milevsky pointed out to me that one needs to calculate utility before discounting rather than vice versa due to Jensen's inequality. He also said that when he is being very careful he calls it "expected discounted utility of lifetime consumption" which is why I use that phrase.

[5] I was using 3% at the bottom of the fed cycle. Looks like it is now closer to 4 for my purposes. AAcalc.com uses a weighted avg of the Treasury curve. I'm not quite that sophisticated yet so I norm my discount to immediateannuities.com and aacalc.com and deal with the consequences if I am wrong.

[6] "Now a violation of the wealth constraint S(T) > 0 is clearly a physical possibility, but some people think that the institutional framework makes it virtually impossible for a man in our society to die with a negative net worth. For this reason it is of interest to see what the consumer's optimal plan looks like given that the constraint S(T) > 0 must hold with probability one." Yaari 1965 p 139.

[7] Added 11/20/19: One thing that I fudged on was a wee bit of a hedge on the utility calc. I kinked the function and did not really make that explicit in my coverage of the topic. If one were to be explicit it would look like this, although I'm not sure the notation is exactly correct:
        
             g[c(t)] \;=\; \max\!\left(\frac{c(t)^{1-\gamma} - 1}{1-\gamma},\; k\right)

              My attempt at a personal "kinked function"

where c(t) is consumption in time t and gamma is the coefficient of risk aversion. k is a floor I put under the utility function. There is an equivalent kink for gamma = 1. 
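In code the kink is nothing more than a floor under the CRRA value. A small sketch (k_floor is my name for it here, to keep it distinct from the discount k used earlier):

import math

def g_kinked(c, gamma=3.0, k_floor=-10.0):
    """Shifted CRRA utility with an arbitrary floor so that near-zero (or zero)
    consumption no longer produces unboundedly negative or undefined utiles."""
    if c <= 0:
        return k_floor
    u = math.log(c) if gamma == 1.0 else (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
    return max(u, k_floor)
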

Here are a few thoughts on this: 
  • I did this mostly because the effects of infinite disutility are pretty pernicious and demand a coding response.
  • In theory people are not really ruined. Jobs are sought, family steps in, institutions of governmental or associative social services come to bear, banks are robbed, etc. No one consumes at zero. Yaari touched on this in '65, though he spoke in wealth terms: "some people think that the institutional framework makes it virtually impossible for a man in our society to die with a negative net worth. For this reason it is of interest to see what the consumer's optimal plan looks like given that the constraint S(T) >= 0 must hold with probability one." Also, Dirk Cotton has covered an idea similar to this if I recall.
  • It comports with my amateur common sense which counts for little.
  • Patrick Collins in another context called stuff like this "not mathematically necessary" or arbitrary, which it is. i.e., we are "off road."
  • Call it what it is: a kluge or maybe a policy choice. This makes it an amateur distortion and a good reason to be skeptical of the analysis especially as it may or may not compare to other, especially utility-based, models.
  • If I had the courage of my convictions on being arbitrary, and I might in the future, I would maybe additionally modify the function to do something with the uncaptured negative features of very high consumption. E.g., buying 2 lear jets adds very little to consumption utility on the margin but is unnecessary in some way and maybe has an impact on environment or culture that should take a hit in the individual model. That means there'd be a critical point where the function no longer rises monotonically. Like I said, arbitrary and not mathematically necessary. TBD
The effect is similar to the presence of lifetime income or social security except that the utility "floor" is set here to a near-but-not-quite-ruin level. The basic effect, in very rough illustration, might look like this, where the solid line is the kinked function and the dashed line is what a normal CRRA will buy you:


                                          [Figure: illustrative kinked function (solid) vs. standard CRRA (dashed)]

You be the judge on whether this is naive, amateurish, and/or corrupt. I'm ok with it for now.

