May 16, 2022

Some thoughts on force of mortality and hazard rates

I mis-titled. This post is really more about spending but hazard is not un-implicated... 

When endowments -- or long-dated trusts or, dare I say, early retirees -- spend, the idea is that whether one spends in constant dollars or within a rule-set, one can't spend too much too soon because the money has to last a long time and it has to anticipate a lot of problems: from adverse spending to adverse markets to sequence-of-returns risk, etc. (we can also talk about intergenerational fairness here). This is true for both constant spend and other rules. Constant spend, btw, incurs a penalty in the sense that, over enough time, there is a time distribution of unavoidable depletion cliffs. Rules, all the way up to the %-of-portfolio rule, incur either the former in a slightly deferred way or a distribution of lifestyles (consumption) at time x that might disappoint expectations if one were to land in the left tail of the consumption distribution at that time. Or we can say: "perpetuities are hard."

Generally speaking I've heard of two solutions to the perpetuity problem: a) Ed Thorp said shoot for a 2% spend, and b) papers from Coiner to Dybvig and others suggest basically spending the expected geometric mean return or less. The former (2%) will sometimes fall short of lasting to eternity in simulation so it isn't a 100% bullet-proof panacea, even though it might be a pretty good bet because a modeled fail at year 200 doesn't really matter in some essential way. Rules that sometimes slide above and below 2% might work, too, but those seem like silly self-denial in a way; I mean, the safest spend rate is zero but what exactly does that accomplish for anyone? In addition, the latter (spend the expected geometric return), when conceived as a constant spend, suffers from the sometimes fatal flaw of assuming iid returns and stationarity, so, in the real life we humans actually live, one would have to examine one's assumptions and their realization at least periodically. Either way we now, sorta maybe, have in this post a "range" of reasonable eternal spending: from 2% to around the expected geometric mean return*.

But that is only for immortals. For us mortals we can spend more than the "2% to geo mean expectation" range above in at least two situations, maybe more:

1. We know we are mortal, and therefore a looming death date, random though it may be, allows us to up the pace of consumption a bit...and here we might spend more later than earlier due to longevity coming in a bit, and

2. We may have at least partially allocated ourselves to life income like annuities or life-income pensions -- basically a type of institutional immortality, if one trusts the institution -- and thus we have access to a shared risk pool. This positioning, counterintuitively, allows one to spend more early rather than late, especially if a consumption-utility evaluative framework is used.

By the way: I will henceforth casually assume that spending is constant and that we do not have life income. This assumption buys us some pedagogical simplicity and analytic leeway. For example, in a very restrictive framework -- Milevsky's reciprocal Gamma method (2005, 2006) with exponential mortality, using the mean expected stochastic present value (a 50/50 chance of fail, which is typically unacceptable in a retirement setting), and using median longevity -- Milevsky can assert that the mean expected stochastic present value of future consumption, or the level of wealth at the beginning of retirement that should ensure success "on average" (dangerous assumption, that), can be represented like this:


eq 1.  W(0) = 1 / (μ - σ² + λ): mean expected wealth in one particular SPV context

where the denominator is the spending that achieves the mean (50/50 chance of fail, 1/spend = wealth at t=0) of the lifestyle distribution in PV terms. This is not perfect and it is limited to the particular setup, but it is transparent and useful for teaching. mu is the expected (continuous) return, sigma squared is the variance, and lambda is the instantaneous force of mortality, which under the restrictions here would be, according to Prof M, ln(2)/median expected remaining life. That's a hacked heuristic but I'll try to flesh that out later.
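As a toy illustration of eq 1 (a sketch under the same restrictive assumptions; the numeric inputs below are mine for illustration, not from Milevsky or this post):

```python
import math

def sustainable_spend(mu, sigma, median_life):
    """eq 1's denominator: spend = mu - sigma^2 + lambda, where
    lambda = ln(2) / median remaining lifetime (exponential-mortality heuristic)."""
    lam = math.log(2) / median_life
    return mu - sigma**2 + lam

# Illustrative inputs: 5% continuous return, 18% vol, 30-year median remaining life
rate = sustainable_spend(mu=0.05, sigma=0.18, median_life=30)
w0_per_dollar = 1 / rate  # wealth needed at t=0 per $1/yr of spending, "on average"
```

An immortal endowment is the median_life-goes-to-infinity limit, where lambda vanishes and spending falls back to the vol-adjusted return alone.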

So, one can spend, within this narrow framework, a vol-adjusted expected return plus a little spiff for being mortal rather than immortal. Makes sense to me. It's not precisely Coiner or Dybvig (maybe it's the reciprocal gamma framing that drops the /2 from the vol term, idk, haven't figured that out yet) but darn close, and now we have a human mortality factor: higher vol? spend less. Unhealthy or really old? spend more. This is pedagogically useful and brings the force of mortality into the discussion of spending, which the literature, generally speaking, does not often address...from Bengen to Merton along with a bazillion other academic papers on spending. Risk aversion? Sure, always room for that. Mortality? Not so much. Even Kolmogorov, a communist within a state that did not really allow for ideas of personal failure in their perfect world (ex the gulag, I guess), understood the force of mortality inside the calculation of the lifetime probability of ruin. Yaari was the king on lifetime mortality math and is a proper touchstone; dude needs a Nobel.

This kind of framing is useful because endowments are not mortal, so λ = 0 and spending in that world devolves to just volatility-adjusted-expected-return-based spending. Otoh, this admission also forces me to make sure I understand λ for us older spending mortals, as much as it will, if I try really hard and squint my eyes, allow me to equate early mortal retirements to endowments at the same time.

λ comes from the study of life and death, usually by way of mortality tables. I'm not an actuary so I won't take any of this too far. I will also close my eyes to the various differences between cohort and period tables, joint vs single lives, gender differences, etc. Maybe some day, idk. On the other hand, in the study of retirement, it is very very hard to ignore all of this.

Probability of death can be extracted from actuarial tables, where one has to ask the questions I just implied we need to ignore: "What table?" Life? Cohort? Average healthy? Annuitant healthy? Male/female? Joint life? Birth year? Longevity-improvement-adjusted numbers? etc... OR it can be analytically derived (exponential with a flat hazard, or other models with a more curved hazard). In general, I will adopt the latter approach since it is easy to code and saves me from choosing and then digging through the tables.

For example, for the conditional survival probability I will almost always go with this generic setup:


eq 2.  p(x, t) = exp( e^((x - m)/b) * (1 - e^(t/b)) ): Gompertz survival probability (p) at age x for t = 1:___

which is from Milevsky 2012. x is the age at evaluation (50 is arbitrary for this post only), m is the mode and b is a measure of dispersion. This is known as the Gompertz (or Makeham-Gompertz, if there were a hazard element for accidental death, which is assumed to be zero here) model. I use this often on the blog in various models to weight the likelihood of surviving t years. I am ignoring gender (I think we can do that now in 2022) and joint lives. The parameters can be used, btw, to tune the output to fit various tables. Generally speaking a higher mode and narrower dispersion fits annuitant/healthy tables and a lower mode and wider dispersion can fit average or less healthy life tables. In addition, one can push things to weirdly unhealthy or healthy parameters, too, which is another advantage of going "off table" with the continuous math.
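A minimal sketch of eq 2 in code (the m and b defaults here are generic Gompertz fits I'm using for illustration, not necessarily tuned to any particular table):

```python
import math

def gompertz_tpx(x, t, m=87.25, b=9.5):
    """eq 2: probability that a person aged x survives t more years,
    with modal age at death m and dispersion b (illustrative defaults)."""
    return math.exp(math.exp((x - m) / b) * (1.0 - math.exp(t / b)))

# e.g., survival weights for a 50-year-old over the next 45 years
weights = [gompertz_tpx(50, t) for t in range(46)]
```

Raising m and shrinking b pushes the curve toward an annuitant/healthy fit; the reverse approximates average-or-worse tables.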

Now that I bring up this longevity stuff, I realize that one thing I never really understood on my blog was the various relationships in the math of survival. I still don't, but here are the definitions and relationships that seem important now that I read it in, say, Milevsky 2006. Might get this wrong, TBD:

Fx(t) - this would be the cumulative probability (CDF) of dying, for our unisex un-cohorted person aged x, by time t. Can be derived as 1 - tPx via eq2.

fx(t) - the PDF, or probability density (probability mass if discrete), of dying in year t for a person aged x. fx is also F'x which, if the derivative's denominator increments are 1, is also just the "diff" of Fx, assuming we start with t=0, I think. fx sums to 1, of course.

tPx - this is the conditional survival probability, for a unisex un-cohorted person aged x, of still being alive in year t. tPx = 1 - Fx(t). This is also the output of eq2 above and is from whence the others in this list can be derived. This is a very useful function, btw. For example, in attempting to satisfy the Kolmogorov LPR PDE one can multiply a portfolio longevity PDF by tPx "weighting" to get a lifetime probability of ruin. Good stuff. It's been a while so I hope I have that right. TBD. The sum of tPx, by the way, for t = 1:120 is Tx, which is the expected number of remaining years of survival. There are other ways to derive Tx but that one is pretty slick.

λ or λ(x + t) = fx(t)/(1 - Fx(t)) = fx(t)/tPx, and is the hazard rate or instantaneous force of mortality...if I have it right. If you are an actuary: correct me on this, please.

I'll assume you know what a CDF, 1-CDF and a PDF might look like. They have distinctive shapes, but more on that later...

While under Milevsky's (2005, 2006) restrictive assumptions the hazard rate or lambda can be estimated via ln(2)/Mx, if we want a more general understanding of hazard -- to the wee extent I get this stuff so far at all -- there are at least three ways to access it that I, as an amateur, know:

1. The formula above, λ(x + t) = fx(t)/(1 - Fx(t)), which is easy to get to if you can code eq2, and where 1 - Fx(t) = tPx. (Milevsky 2006)

2. A less direct and more cumbersome way (lacking a table, btw) is to calculate the first-year (t=1) survival tPx = 1Px for each x from age __ to __; the one-year death probability then approximates the hazard at each age. eq2 was my hacked method for x = 50:95.

3. The more formal way is probably to go direct to the hazard math, like this from Milevsky 2006: λ(x + t) = (1/b) e^((x + t - m)/b)

I can get #3 really really close to #1 and #2 in my coding attempts but it's not exact, so maybe I am still missing something, idk. That it is a very near overlay is good though. It means I am close enough (if I am not pricing life insurance and just doing amateur hack ret-fin modeling like I usually do). Here is an Excel-ified version of #1, #2 and #3 above. #3 is the grey; the other two are exact overlaps.
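Here is roughly what that comparison looks like in code form (a sketch; the Gompertz parameters are my illustrative picks, and #1 and #2 are one-year discretizations of a continuous hazard, which is probably why #3 overlays closely but not exactly):

```python
import math

m, b = 87.25, 9.5  # illustrative Gompertz mode and dispersion

def tpx(x, t):
    """eq 2: conditional survival probability."""
    return math.exp(math.exp((x - m) / b) * (1 - math.exp(t / b)))

ages = list(range(50, 96))
x = 50

# 1. discrete fx(t)/tPx(t) off a 50-year-old's survival curve
h1 = [(tpx(x, a - x) - tpx(x, a - x + 1)) / tpx(x, a - x) for a in ages]
# 2. one-year survival at each age: hazard ~ -ln(1Px)
h2 = [-math.log(tpx(a, 1)) for a in ages]
# 3. the analytic Gompertz force of mortality
h3 = [math.exp((a - m) / b) / b for a in ages]
```

All three are near-overlays over the 50-95 range; the small gaps come from squeezing an instantaneous rate into one-year steps.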





And here comes my point on spending and early retirements or "two" retirements -- but only so far, really, in the context of eq1, which is a bit of a flawed representation of spending...so watch out here. I mean, we have many, many other evaluative frameworks for spending. I am just trying to make a single simple non-academic maybe-flawed point about the insertion of the idea of hazard or mortality force into spending. Like this:

IF 

- (if and only if) we view spending as some undefined function of expected return (i.e., increment spend up with higher r), volatility (decrement with higher vol), and mortality (increment with hazard), sorta like eq1 or its near-proxies, and

- we are looking at very early/long retirements, and

- the parameters cooperate, and 

- we don't second guess stationarity and iid, and

- the one-year probability of death --> 1 as age --> infinity (the hazard rate itself just keeps growing), and

- something else I forgot while jotting this down... Really, I am getting a little fuzzy the longer I do this stuff. What the hell was I going to add here???

THEN 

- for early retirements, the hazard rate, say in the interval between age 50 and 70, and only under the parameters I used, which were for long-live-ers, is still under 1% [1]. That 1% thing, btw, is an entirely arbitrary threshold but I shall now, just for fun, call anything below that "low" or "very low" or "close to zero." If you are into calculus, note that the hazard rate accelerates with age and picks up most of its not-close-to-zero steam after 70 (in this run and imo, anyway).

- Low or zero hazard rates in early retirement mean (to me, anyway) that early retirement (50-65 or 50-70) is more like an endowment than it is like a more traditional "geezer**" retirement at 65 or 70-90, where the hazard rates seem to be a little bit more palpable. This line of thinking, if we buy it as something other than me self-affirming my past work, corroborates a post I did something like 5 years ago positing that there are "two retirements" that are entirely distinct: early and traditional. And if we were to use the eq1 framework, we do in fact have something closer to Coiner or Dybvig than to Bengen, with his fixed 30 years of mortality and sly implied assumption about death.

- It is only with spending and mortality (and vol and time) that things get interesting. Remember that the next time an advisor perseverates on asset allocation and generic single-period-finance-theory accumulation portfolios rather than stuff like spending, disruptive events, and the multi-period, medium-horizon lifetime.
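For what it's worth, the under-1%-before-70 claim above can be eyeballed with the analytic Gompertz hazard from #3. The parameters here (m=92, b=9) are my own long-liver-ish stand-ins, not necessarily the ones behind the post's run:

```python
import math

def gompertz_hazard(age, m=92.0, b=9.0):
    """Analytic Gompertz force of mortality; m and b are hypothetical
    'long-liver' parameters, not fitted to any particular table."""
    return math.exp((age - m) / b) / b

early = [gompertz_hazard(a) for a in range(50, 71)]  # early retirement, 50-70
late  = [gompertz_hazard(a) for a in range(70, 91)]  # "geezer" retirement, 70-90
```

Under these parameters the whole 50-70 interval sits below the (arbitrary) 1% line while the 70-90 interval climbs well above it; shift m down or widen b and the crossover moves earlier.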

-----------------------------------------------
 

* I haven't specified if the geometric mean return is estimated at infinity or N-periods. To say this is to bias to the latter but that is another post...

** I can say "geezer" cuz I am getting closer every day...

[1] Ok this is a huge head fake and I hope you realize what I am doing here. In eq1 lambda is based on median longevity, so it anticipates a mortal life, while I go on to look at the force of mortality at each age. So there really are not "two retirements," there is one, and we always know that we have death as something big in our future and we will always be able to spend more than an endowment. A lot of this post is just: a) play, and b) me consolidating some actuarial math. What does that mean? Idk, just that in a very early retirement we have a long way to go and in a late retirement we don't. I guess that is about as binary as it gets. Whether we can generalize that into some rule or something? Probably not. TBD, or DM me for a dialogue rather than an answer. Or maybe you have an answer that you want to assert. That's fine too.

----- References -------------------------

Coiner, Michael (1990). The Lognormality of University Endowment in the Far Future and its Implications. Economics of Education Review, Vol. 9, No. 2, pp. 157-161

Dybvig, Philip H. and Qin, Zhenjiang, How to Squander Your Endowment: Pitfalls and Remedies (October 11, 2021). Available at SSRN: https://ssrn.com/abstract=3939984

Milevsky, Moshe. The Calculus of Retirement Income: Financial Models for Pension Annuities and Life Insurance. 2006 Cambridge University Press 

Milevsky, Moshe. The 7 Most Important Equations for Your Retirement: The Fascinating People and Ideas Behind Planning Your Retirement. 2012 John Wiley and Sons

Milevsky, Moshe Arye and Robinson, Chris A., A Sustainable Spending Rate Without Simulation. Financial Analysts Journal, Vol. 61, No. 6, pp. 89-100, November/December 2005, Available at SSRN: https://ssrn.com/abstract=872871

2 comments:

  1. This is an interesting perspective, especially so the "then" section. I'll only add a comment for the "if" section based on my rudimentary knowledge of hazard/duration models. Stationarity is really important in long-duration models. We have to assume the data generating process is stable over time or we get regime-switching type problems. This is an easier assumption in the case of say modeling life expectancy after cancer treatment than making assumptions about the stability of macroeconomic regimes. While we would like to assume stability it's a lot harder to support that assumption than most of us admit (e.g., the politics of taxation). But it's fun seeing how you reason about all this so thanks!

    1. Agreed. I don't have formal background in stats or modeling or math but I always doubt the stationarity thing. It's like trying to imagine a pensionless Japanese retiree in 1989 modeling an 8% forward expected return on a Nikkei-heavy portfolio and assuming that is a good, and stable, assumption. But maybe we are not looking at a long enough timeframe. Otoh, 30 years is not time that I really have myself for things to come around which they still haven't in that example.
