Sep 15, 2020

Sense-making in Retirement via Triangulation

I had a chat with a worthy man on Twitter the other day. The gist was that early retirees, facing up to 50 years of life and a suppressed 10-year prospective-return expectation (he was using Research Affiliates [RA] capital market assumptions, in this case for large cap stocks of some kind: ~2.4% nominal return with a 2% inflation expectation), are in kind of a bind. In order to attain a very high chance of success (we talked about the pros and cons of using 99%), one might have to spend as little as 0.25% to succeed, according to the conversation. Since that is effectively a zero spend rate, I thought I'd take a look at this question by triangulating my way to an understanding of how I might look at it in different ways, using the various tools I have worked with over the past seven years or so.  

1. The Setup

First, let's accept that an investor-retiree with the following:

  • A 50 year fixed horizon
  • No superannuation income assumptions like pensions, SS or annuities
  • No weighting/discounting of the future for mortality assumptions
  • A very, very, very high expectation for success
  • Confidence that MC sim fail rates are a reliable and reasonably interpretable statistic, and 
  • Uses one, pretty conservative data forecasting source for one of several time intervals

is going to face some pretty hard realities on what they can spend. But right there, I already want to start taking a different perspective on this. 

2. The Reservations

For example:

  • I don't know Research Affiliates except very indirectly, but they are one voice among many. 2020 10-year capital market assumptions (large cap US), when I polled other sources, ranged from -4% to > 6%, all with different inflation assumptions. If I look at longer horizons, equities range within (and beyond) 8-12% nominal. I'll stipulate my interlocutor's 16% vol here for fun, but that's another topic altogether.  
  • For my own analysis below, since I am not swallowing RA data assumptions whole, let's call it 10% nominal return for the very long term and 3.5% for the next 10 years, with a 3% (deterministic) inflation assumption. This is a single-switch regime-switching proposition, by the way. Also, I'll be adding in a less risky asset of 3.5% return and 4% std dev, since I really want to look at a blended portfolio. Only the risk asset is regime switched; I should have done something with the other but didn't. The "less risky" asset is maybe an aggressive G&C bond type portfolio with benign assumptions about rates, total return, and mean reversion in the forthcoming years.
  • There was no discussion in our chat of income like pensions or annuities or social security. This is important so I'll add 15k of SS at 70 for the tools that use it.
  • I'll question the use of fail rates at two levels: 1) there are no objective thresholds for fail rates and the use of 99% seems very very high. I touched on this before in this post.[1] That means a fail rate threshold is, in the end, a policy choice, one where I myself used to be "high" but now it's kind of "whatever" ...up to a point, 2) there is some debate about whether fail rates, even if there were objective thresholds, are meaningful in the sense that: a) their use is often divorced from magnitude (denominated in years in the fail state), and b) maybe people don't really fail in a mathematical gambler's ruin sense. People adapt, jobs are taken, family and the institutions of associations and government step in.  Idk.
  • Common sense, my own simulations, the research I read -- as well as in one case the voice of Ed Thorp in some book I read -- all tell me that somewhere around 2% spend is close to, but not necessarily guaranteed to be, a perpetuity. So, when someone proffers a sub-2% spend rate as a way to deal with fail rate math, that is worthy of examination.  
  • In my opinion as a ret-fin blogger, there are many alternative ways to evaluate investment and consumption plans other than the fail rates in Monte Carlo simulation. For example, one might evaluate lifetime consumption utility. Another way is through lifetime probability of ruin, which is a lot like a fail rate but has some differences (see the last link). Another method is something James Garland offered in "The Fecundity of Endowments and Long-Duration Trusts," where what one can safely spend, if one is a long-duration trust (kinda like an early retiree), sits somewhere between the dividend yield and the earnings yield (of large caps, in his paper). Depending on where that arbitrary point might be, that right there might put me above a 2% spend in 2020. Another evaluative tool is the "perfect withdrawal rate." More on that one later. This bullet point is a bid for a "triangulation methodology." 

3. Triangulation 1 - Lifetime Probability of Ruin

First let's jump the fence on MC sim and start with "lifetime probability of ruin." It's close to the same thing, it just uses the full constellation of asset exhaustion possibilities weighted by the full term structure of mortality. It has a little more depth and rigor, I think. The form I use in simulation is like this 

where g is a function of portfolio longevity and tPx is a survival probability conditioned on age. The Kolmogorov partial differential equation for the same as presented in M Milevsky's 7 Equations book looks like this, without explanation. The above satisfies the below:
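Roughly, in symbols (my hedged reconstruction from the definitions just given; see Milevsky for the canonical statement):

```latex
% simulation form: depletion density weighted by the term structure of mortality
\Pr[\text{ruin}] \;=\; \int_0^{\infty} g(t)\, {}_tp_x \, dt

% Kolmogorov PDE for phi(w,t): wealth w normalized to a unit spend, drift mu,
% vol sigma, hazard lambda(x+t); boundaries phi(0,t)=1, phi(w,t)->0 as w->infinity
\frac{\partial \phi}{\partial t}
  + (\mu w - 1)\frac{\partial \phi}{\partial w}
  + \frac{\sigma^2 w^2}{2}\,\frac{\partial^2 \phi}{\partial w^2}
  \;=\; \lambda(x+t)\,\phi
```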

The assumptions I'll throw into this one are: 1% real return forever (that right there should give pause, but I don't have a way to regime switch in this tool...yet), 16% vol, 50 years old, Gompertz mortality with a mode of 88 and dispersion of 8.5 (which is kind of arbitrary but not too far from a blend of SS and SOA mortality tables), and 20,000 iterations. Looks like this. I knew it'd be bad so I only show spend rates of 2% and below (x axis, left to right):


What is the right spend? No idea, but we are at 30% fail even at a 2% spend, a fail rate which seems high even though I said there are no real thresholds. I guess we are not too far from the estimate from my Twitter friend, maybe a teeny bit better off. But... the assumptions here bother me: 1% real forever (3-5% might be better, or at least a regime change at year 10), 16% vol (a 60/40 fund is lower vol, with ~8% very-long-horizon nominal return expectation and maybe 11-12% vol), age 50 (I don't recommend that even though I did it), and there is no income. Actually I thought this method would show better, but at 1% return and 16% vol we are in a bad deal here and should probably fix the portfolio assumptions or go back to work. This section is an unfair starting point because in the next example or two I change my assumptions in non-comparable ways. So, not apples to apples. Might go back later, though. 

Fwiw, if I were to trade up to a 4% real return, 12% vol, and age 50, the lifetime-ruin estimate is 11%. I'm mostly OK with that, especially since I am not 50 anymore. It is clearly not 1%.

[ Fwiw, I call this tool my FRET tool (flexible ruin estimation tool) because fret is what I do when I think about ruin rates. The math satisfies the Kolmogorov partial differential equation for lifetime probability of ruin, if that is of interest. See page 141 of my 5-process paper here for more]
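For the curious, a stripped-down sketch of the FRET-style calculation (not the production tool: single regime, real returns, Gompertz lifetimes sampled by inverse CDF, spend as a constant fraction of starting wealth):

```python
import numpy as np

rng = np.random.default_rng(42)

def gompertz_lifetimes(x=50, m=88.0, b=8.5, size=10_000):
    """Sample remaining-lifetime years at age x from a Gompertz law
    with mode m and dispersion b, via the inverse CDF."""
    u = rng.uniform(size=size)
    return b * np.log(1.0 - np.exp(-(x - m) / b) * np.log(u))

def lifetime_ruin(spend=0.02, mu=0.01, sigma=0.16, x=50, sims=20_000):
    """Fraction of sims where real wealth (start = 1) is depleted while
    still alive: a lifetime-probability-of-ruin estimate."""
    T = np.ceil(gompertz_lifetimes(x=x, size=sims)).astype(int)
    horizon = int(T.max())
    w = np.ones(sims)
    solvent = np.ones(sims, dtype=bool)
    ruin_year = np.full(sims, horizon + 1)
    for t in range(1, horizon + 1):
        r = rng.normal(mu, sigma, sims)               # one year of real returns
        w = np.where(solvent, w * (1.0 + r) - spend, w)
        newly_ruined = solvent & (w <= 0.0)
        ruin_year[newly_ruined] = t
        solvent &= ~newly_ruined
    return float(np.mean(ruin_year <= T))             # ruined before death
```

At a 2% spend with the 1% real / 16% vol assumptions, this lands in the neighborhood of the fail rates discussed above; exact numbers will wiggle with the seed.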

4. Triangulation 2 - Consumption Utility

Now let's view the question through a different lens: "expected discounted utility of lifetime consumption" (EDULC). I won't regurgitate the model; it has been described in more detail here. In lieu of reading the whole link, maybe we can stipulate that this is the schematic:


Output is "EDULC" denominated in utiles. We'll create a different set of assumptions here (vs triangulation 1) while trying to maintain the spirit of a "hard return regime for 10 years." Some critical assumptions (again, not apples to apples):

  • Two-asset portfolio: N1(.10, .18) and N2(.035, .04). The key point is that N1 for years 1-10 is actually N1a(.04, .18) as a separate regime, and the difference at the portfolio level of returns is reflected through the MPT efficient frontier for each allocation step via a correlation coeff of -.10, which is arbitrary. I do not do a soft transition at year 10 but should have; that is maybe important for this test. No idea.
  • Inflation is set to 3.2% (1.032) using historical averages. That means that the real return for all-equity in years 1-10 is ~0.8% at the arithmetic-input level. I have an auto-regressive stochastic inflation feature but didn't use it here.
  • Age 50 start. This is really young. Even though I did it myself I do not recommend it.
  • Mortality expectation comes from a SOA Annuitant weighting from a 2012 table
  • 1M start wealth
  • Spending and asset allocation are the independent variables tested separately in ranges
    0-100% risk in 11 steps and spending from a-->b in .005 steps (in this case "a" was 1%, b 4.5%)
  • 10000 iterations each run for combinations of spend rate and asset allocation. So, 880,000 iterations and 62M sim life years.
  • Social Security is 15k starting at 70, inflation adjusted, no annuities or pensions
  • Coefficient of risk aversion is set to 2. This is meaningless except to economists but is approximately my number based on some thin evidence I ginned up on my own. I've heard 3-4 is middle of the road.
  • I have a feature to throw a chaotic, destructive process at wealth using power laws...like earthquakes or forest fires or sand pile avalanches. Not used here. 
  • Technically the price of running out of wealth and consuming zero is infinite disutility. This doesn't happen in the model because: a) we have social security life income, and b) if wealth does deplete before SS kicks in, I stub in some consumption at something like 1k to represent the ability to live at some under-the-bridge kind of level rather than zero. We always spend something. 
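The engine underneath, minus the regime switch and most of the bells and whistles, can be sketched like this. This is a simplified stand-in for the real model: CRRA utility and the 15k SS at 70 come from the bullets above, while the Gompertz survival weights (in place of the SOA table) and the single-regime return parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
GAMMA = 2.0  # coefficient of relative risk aversion, per the bullets above

def crra(c, gamma=GAMMA):
    """CRRA utility; gamma = 1 falls back to log utility."""
    return np.log(c) if gamma == 1.0 else c ** (1.0 - gamma) / (1.0 - gamma)

def survival_weights(n, x=50, m=88.0, b=8.5):
    """Gompertz tPx from age x for t = 1..n (stand-in for the SOA table)."""
    t = np.arange(1, n + 1)
    return np.exp(np.exp((x - m) / b) * (1.0 - np.exp(t / b)))

def edulc(spend_rate, w0=1_000_000, mu=0.04, sigma=0.12, n=50,
          sims=500, ss=15_000, ss_age=70, floor=1_000, x=50):
    """Expected discounted utility of lifetime consumption: survival-weighted
    CRRA utility of real consumption, averaged over simulated wealth paths."""
    tpx = survival_weights(n, x=x)
    total = 0.0
    for _ in range(sims):
        w, u = float(w0), 0.0
        for i in range(n):
            income = ss if x + i + 1 >= ss_age else 0.0
            draw = spend_rate * w0                      # constant real withdrawal
            c = (min(draw, w) if w > 0 else floor) + income
            w = max(w - draw, 0.0) * (1.0 + rng.normal(mu, sigma))
            u += tpx[i] * crra(c)
        total += u
    return total / sims
```

Running this over a grid of spend rates and allocations is what produces the surface discussed below; utiles are negative for gamma = 2, so "higher" means closer to zero.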
When I do this and display the summed, averaged utiles for each spend rate across the allocations, it looks like this:



The lines are for convenience since this was run for discrete points, but technically we can think of this whole chart as a continuous surface in R^3, if I have it right, with a top/max around a 3% spend and a 30% allocation to risk. My guess is that the low spend rate and low allocation are due to the long horizon, but 3% is still better than 0.25%. It is "better" due in no small part to the availability of lifetime income, which can be kind of a big deal. 3%, fwiw, happened to be my comfort zone when I was 50...before I'd ever touched a blog. It's higher now for other reasons but not by a lot. 

5. Triangulation 3 - The Garland Method

I won't belabor this method much since I already described it here, but this is the idea of tuning spending for long horizons (e.g., endowments or long-duration trusts) to some portfolio "fecundity rate" somewhere in the middle of the range between the dividend yield and the earnings yield (of whatever asset or portfolio is funding the spend). That middle happens at some level I don't care about too much, but he stipulated in his paper something like 1.3x the dividend yield (still well below the earnings yield), for some probably arbitrary reason I can't remember. Me? I happen, for reasons all my own, to use the average of the two: (DY+EY)/2. If I chart this idea over history it'd look like this: 


If I average DY and EY (arbitrary) and then take the average of the last 3 years (arbitrary, and I'm not sure I can do this) I get to about a 3% spend rate. I mean, the trend looks like it sucks but that's what we have today; no idea about tomorrow. It's better than 0.25% anyway and is comparable to the last estimate above. Interestingly, in 1994, the year that Wm Bengen did his 4% rule paper, this method would have produced a ~4% spend estimate. That's lucky because the methods used across the two are quite unrelated. It's just interesting. 3% is worse than 4%, of course, and it is trending poorly, but then we are 50 years old here in this post and 3% beats 0.25%.
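The arithmetic is trivial, but here it is anyway, with hypothetical 2020-ish yields rather than a real data pull:

```python
def fecundity_rate(dividend_yield, earnings_yield):
    """My Garland-style spend estimate: the midpoint of dividend and
    earnings yield. Garland himself anchored nearer 1.3x the dividend yield."""
    return (dividend_yield + earnings_yield) / 2.0

# hypothetical inputs: ~1.8% dividend yield, P/E ~24 so earnings yield ~4.2%
spend_estimate = fecundity_rate(0.018, 1 / 24)   # comes out near a 3% spend
```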

6. Triangulation 4 - The Perfect Withdrawal Rate (PWR) Method

This was from a paper by Suarez, Suarez & Walz (2015), clarified by Andrew Clare et al. (2017), and it shows up elsewhere in Estrada, J. and at EarlyRetirementNow.com (2017). I won't go into detail but you can read my paper here, page 62, for all the gory detail, or check the references in that paper or the ones above. The basic idea is this, quoting myself:

"This PWR distribution, then, if you believe the stable and independent draw on a random “r,” is the entire universe of what we could theoretically spend in an ongoing spend process" ...if we imagine spending every one of [50,000] parallel universe portfolios to zero at end time T. PWR is the distribution of those [50,000] spend rates given the assumptions 

The form is like this
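In symbols (my rendering of the Suarez/Clare formula; K_0 is starting wealth, the ending endowment is set to zero, and w reads as a rate when K_0 = 1):

```latex
w \;=\; \frac{K_0 \prod_{i=1}^{n} (1 + r_i)}
             {\sum_{j=1}^{n} \prod_{i=j}^{n} (1 + r_i)}
```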

w is the withdrawal rate. The distribution of w comes from simulation. The form is simplified for an ending endowment of 0. This took me a while to figure out.  

For this run/post, let's say the following, and this'll be a little aggressive again, but we are trying to triangulate here on assumptions that are at least vaguely similar:

  • 50 periods - deterministic longevity 
  • r = N(.01, .16) 
  • 50,000 iterations

The density looks like this, x = spend rate, y is density via R function:


with some stats like this that come out of the distribution in spend rate terms:

  • mean: .0204
  • median: .0175
  • min: .00044
  • max: .1129
  • 20th percentile: about 1%
  • 3% spend: about the 80th percentile

So, pick your spend rate at some level that makes sense. I could have layered on stochastic longevity here, by the way, but didn't. Maybe later. This result is more conservative than the last method above, but we have some pretty hard assumptions in here (and no regime switch) and no weighting for longevity. Still, 1-3%, if we have a policy that says that is cool, is better than 0.25%. Maybe a little risky, though. 

-----

Wait...I changed my mind. Let's add stochastic longevity to PWR. So far as I know I am the only man on earth to have done this with PWR (uh, where's my PhD, dude? ;-) ). Here I'll use a Gompertz model with mode = 90 and dispersion ~8.5. This mode/dispersion choice is about the same as the SOA annuitant IAM table, so maybe the longevity is a little like mine, except we use age 50 as the start age. The density looks like this, using almost the same x axis interval as above:



with some stats like this in spend rate terms:

  • mean: .034
  • median: .026
  • min: .00051
  • max: ~100%, since in some scenarios one dies in year 1 and the whole portfolio could have been spent
  • 20th percentile: about a 1.46% spend
  • 3% spend: about the 57th percentile

Ok, that's better. Not perfect, but still not 0.25%. The only thing missing is a policy on spending and risk, but at least we are into the zone of spend rates that comport better with the other methods above, depending on your "spending distribution fear." I guess longevity expectations matter a little bit, though.
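For reference, the stochastic-longevity variant just swaps the fixed 50 periods for a Gompertz death-year draw per trial (mode 90, dispersion 8.5, start age 50, per above; again a sketch, not my production run):

```python
import numpy as np

rng = np.random.default_rng(3)

def gompertz_horizon(x=50, m=90.0, b=8.5, size=10_000):
    """Remaining-lifetime draws (whole years, at least 1) via inverse CDF."""
    u = rng.uniform(size=size)
    t = b * np.log(1.0 - np.exp(-(x - m) / b) * np.log(u))
    return np.maximum(np.ceil(t), 1).astype(int)

def pwr_stochastic_longevity(mu=0.01, sigma=0.16, x=50, sims=10_000):
    """Perfect withdrawal rate where each trial's horizon is its own
    Gompertz death-year draw rather than a fixed 50 periods."""
    horizons = gompertz_horizon(x=x, size=sims)
    out = np.empty(sims)
    for k, n in enumerate(horizons):
        g = 1.0 + np.maximum(rng.normal(mu, sigma, n), -0.99)
        suffix = np.cumprod(g[::-1])[::-1]     # prod_{i=j}^{n} g_i
        out[k] = suffix[0] / suffix.sum()      # ending endowment = 0
    return out
```

The short-horizon draws are what fatten the right tail: die in year 1 and the "perfect" withdrawal was the whole portfolio.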

7. Triangulation 5 - Stochastic Present Value

In all of this, the only thing fairly certain is spending over some time frame. How much is a choice, but it won't be zero. The other certain thing is that we know our current account and consumable asset values. That sets us up for evaluating an actuarial balance sheet. That sheet comprises the assets and liabilities of a household, including flow items like annuities, social security, and spending. Assets are only those that can be monetized in service of spending.  

Typically the hardest part of this exercise is evaluating the current "price" or value of the spending liability. This equation, borrowed from Robinson and Tahani (2007), is the continuous form of how I do it discretely; others use slightly different notation:
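My rough transcription (hedged; Robinson and Tahani's exact notation differs):

```latex
\mathrm{SPV} \;=\; \int_0^{\infty} {}_tp_x \,\frac{C_t}{R_t}\, dt ,
\qquad R_t = \exp\!\Big(\int_0^t r_s \, ds\Big)
```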


where the tPx term is a survival probability weighting conditioned on age, Ct is the consumption at t, and R is a cumulative return discount. I can't do a full SPV for this post because we don't really know a consumption plan. But if I were to do it, it'd go like this: 
  • Make return assumptions. These will reflect the discounting; lower returns mean a higher valuation. In my conversation we were talking about nominal 2.4% with 2% inflation, either forever or in a step function that changes after year 10.
  • Determine the shape and maybe the duration of the spend plan: drift up, drift down, convex, concave, steps...whatever. If one doesn't use survival probabilities one needs to pick a stopping time. The conversation used "50" so maybe start there; one will get a very high valuation, though. This is why I weight by the probability I'll still be around. The discounting does its part too. Note that we are operating here with absolute nominal spending dollars, not rates.
  • Run the algo for SPV; output is a distribution of spending present values. lower values to the left and higher to the right.
  • Value assets
  • Inspect the SPV distribution and select (somehow) where you think is a good policy point that represents your risk taking and the present value of your spending. This is pretty subjective: mean? median? 80th percentile? Those are risk choices and no book or paper has a clear threshold. 
  • Compare assets (A) to SPV via the ratio A/SPV, where >1 is feasible and <1 is infeasible. 
Given what I have from the conversation, I have to imagine using a median SPV would maybe be infeasible. That doesn't mean no spending; it just means the plan is "technically" infeasible. One then might take a different risk posture in terms of valuing spending, or one actually changes the spend plan (what I did in 2012). The other alternative is to get more sophisticated with the discount, especially since it is a step function of a suppressed regime T=(1:10) and an un-suppressed T=(11:inf). That'll bump up what I presume is a very low return assumption to a higher, more complex one, which will shift the distribution to the left. This might or might not take us into feasibility. No idea. The other idea, btw, is to go back to work part time while contemplating why you retired in the first place. 
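I said I won't run SPV here for real, but the mechanics look like this with made-up plan numbers (1M of assets, a flat 40k real spend, Gompertz survival from 50; every input below is illustrative, not the conversation's plan):

```python
import numpy as np

rng = np.random.default_rng(11)

def gompertz_tpx(t, x=50, m=88.0, b=8.5):
    """Survival probability from age x to x+t under a Gompertz law."""
    return np.exp(np.exp((x - m) / b) * (1.0 - np.exp(t / b)))

def spv_distribution(spend=40_000, x=50, n=60, mu=0.01, sigma=0.04, sims=10_000):
    """Stochastic present value of a constant real spend: survival-weighted
    and discounted by a simulated cumulative real return path."""
    t = np.arange(1, n + 1)
    tpx = gompertz_tpx(t, x=x)
    r = rng.normal(mu, sigma, size=(sims, n))
    R = np.cumprod(1.0 + r, axis=1)            # cumulative return factor R_t
    return (tpx * spend / R).sum(axis=1)       # one SPV per simulated path

vals = spv_distribution()
feasibility = 1_000_000 / np.median(vals)      # A/SPV; > 1 is feasible
```

Picking the median (vs, say, the 80th percentile) of the SPV distribution is exactly the subjective risk-posture choice described in the bullets.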

For more on SPV evaluation, see page 53 and 89 of Five Retirement Processes.

8. Final Thoughts

Mostly the methods speak for themselves but I will add the following:

- Don't always trust or use single sources of market data. Use judgement to integrate the several you have sought out.

- Don't trust single methods of inquiry. Use multiple methods and "triangulate" into sense making. I spend more time on this in section 5 of my Five Processes Paper, page 113+.

- Prepare to adapt. Retirement is more a continuous statistical-process-control problem with boundaries than it is a one-time plan built around a single optimized number. It is a process. Optimal plans can go stale pretty quickly.

- At 50 in this post (and 62 for me), there are more methods than these that'll validate reasonable-to-optimal spend rates, and sometimes quite a few that might make me uncomfortable. Even if one had perfect foresight of markets that might look like the post-1989 Nikkei, the spend rate would still not be zero. You still have to eat. Maybe one has to go back to work, lean on family and hustle a bit, but you are still spending something. 

- I would be a bit nervous, given the assumptions, but spending about 3% would be acceptable as a policy choice, this year only, and I would probably check in again at the half year. 

---------------------------

[1] In "the cost of certainty" I viewed it this way, where the x axis is increasing degrees of certainty in lifetime-probability-of-ruin terms and the y axis is the "wealth units" required to deliver that certainty given the model assumptions. In other words, it gets more expensive more quickly to get more certain if one hates fail rate math. 





 




