In the absence of annuitization, they say, one needs to plan for a "max lifetime" rather than an "average lifetime." I've seen some papers on this, of course, but I was curious to see for myself what I could come up with for the following proposition:
IF:
- An average lifetime expectation is 85 (base case; but maybe closer to 82 for me)
- Max lifetime is, say, 95 (arbitrary, had to pick something)
- Endowment is 1,000,000 for the base case
- Start age is 60
- Spending is a constant, inflation-adjusted 3% (not 4%!) for the base case
- 50/50 two asset portfolio with some fee and tax assumptions thrown in
- No Social Security (SS) assumption
- No return suppression
- No spend trends, non-inflation spend variance, or spend shocks
- No stochastic longevity, and
- The base-case risk turns out to be a .05 fail rate
THEN:
1. What is the incremental difference in the endowment required to maintain the same risk when the terminal age moves from 85 to 95, or alternatively
2. What is the incremental difference, for the same endowment, in the spending required to maintain the same risk over the same age shift? I.e., what is the (maybe not "the" but what I want to call "a") "cost" of self-insuring under the assumptions above?
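Before the answers, here's a minimal sketch of the kind of sim I'm describing. The return, volatility, and fee/tax haircut defaults below are placeholders I made up for illustration, not my actual model inputs; everything runs in real (inflation-adjusted) dollars, so the constant-spend rule is just a fixed annual draw.

```python
import numpy as np

def fail_rate(endowment, spend, start_age, end_age,
              real_return=0.035, vol=0.10, haircut=0.006,
              n_sims=20_000, seed=1):
    """Share of simulated paths that deplete before end_age.

    Everything is in real dollars: `spend` is a constant
    inflation-adjusted annual draw, and returns are real returns on a
    blended 50/50 two-asset portfolio minus a fee/tax haircut. The
    parameter defaults are placeholders, not the post's actual inputs.
    """
    rng = np.random.default_rng(seed)
    wealth = np.full(n_sims, float(endowment))
    failed = np.zeros(n_sims, dtype=bool)
    for _ in range(end_age - start_age):
        r = rng.normal(real_return - haircut, vol, n_sims)
        wealth = (wealth - spend) * (1.0 + r)   # draw, then grow
        failed |= wealth <= 0.0
        wealth = np.maximum(wealth, 0.0)        # ruined paths stay ruined
    return failed.mean()

# Base case: 1,000,000 endowment, 3% constant real spend, ages 60 -> 85
print(fail_rate(1_000_000, 30_000, 60, 85))
```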
The answer, based on a couple of quick thumbnail sim runs where fail rates were rounded to whole-number percents, is:
1. I would need ~35% more (1,350,000) in the initial endowment to keep the fail risk constant.[1]
2. I would need an initial spend rate (constant, inflation-adjusted) that is ~25% lower.
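For what it's worth, the solve behind numbers like these is just a bisection on the sim above: hold the base-case fail rate fixed and search for the endowment (or spend) that hits it at the longer horizon. A sketch, reusing the hypothetical fail_rate from above:

```python
def solve_endowment(target, spend, start_age, end_age,
                    lo=500_000, hi=3_000_000):
    """Bisect for the endowment that brings the fail rate down to
    `target`, holding the real spend fixed. With a fixed seed,
    fail_rate is a deterministic, decreasing step function of the
    endowment, so plain bisection converges.
    """
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if fail_rate(mid, spend, start_age, end_age) > target:
            lo = mid            # still too risky: need more money
        else:
            hi = mid
    return hi

target = fail_rate(1_000_000, 30_000, 60, 85)     # base-case fail rate
needed = solve_endowment(target, 30_000, 60, 95)  # terminal age 85 -> 95
print(needed / 1_000_000 - 1)  # fractional increase in required endowment

# Question 2 is the mirror image: fix the endowment at 1,000,000 and
# bisect on the spend until the age-95 fail rate matches the target.
```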
-----------------
[1] I have not read Waring and Siegel (2007), but according to Sexauer, Peskin, and Cassidy (2015, "Making Retirement Income Last a Lifetime"): W&S "estimated that the loss from not pooling is 34.5 percent of total capital saved." While it sounds like a slightly different question is being asked, it also seems of a piece with my post.