Oct 17, 2025

A Note on Portfolio Longevity Heat Maps

A friend of mine once asked me "what is even the point? How would you use this?" with respect to a heat map I created for portfolio longevity across a whole slew of different spend rates (see below). First of all, when I created it years back it wasn't really about functional utility for (my or others') retirement finance; it was more just plain curiosity and, as is often the case for me, I wanted to see what it looked like. The looking was its own utility. Second, I kinda liked the eventual outcomes that became more evident the closer I looked: 

- Spending is almost certainly sustainable at a 2% spend rate, which effectively creates a perpetuity. This is a point about endowments that Ed Thorp made in one of his books. 

- Spending somewhere near or below what one expects to earn -- where the long-run "earn" is going to be the geometric mean at N periods or maybe infinity -- has pretty good odds of lasting a very long time (maybe too long given our horizons, but then again this is not a human-scaled fail study). This is common sense, right? Higher spend works too, but to an increasingly lesser and potentially painful degree as the "portfolio longevity years" come back into the meat of a human life scale. 

----------

The original impulse came from taking a snippet of R code from Moshe Milevsky's book Retirement Income Recipes in R: From Ruin Probabilities to Intelligent Drawdowns (Springer Nature Switzerland AG, 2020) out for a spin to see how it worked. (I got acknowledged in the front of that book, btw):

PLSM <- function(F, c, nu, sigma, N) { ...< a sim loop in R code >... } 

where F is initial wealth, c is the spend, nu and sigma are the return and standard deviation, and N is the number of iterations.
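
Milevsky's actual loop is elided above, but for flavor, a minimal stand-in under my own assumptions (normal annual returns, spend withdrawn at year end; plsm_sketch and its internals are mine, not his) might look like:

plsm_sketch <- function(F0, c, nu, sigma, N, max_yrs = 100) {
  # one longevity draw per path: years until wealth hits zero, capped at max_yrs
  sapply(seq_len(N), function(i) {
    w <- F0
    for (t in seq_len(max_yrs)) {
      w <- w * (1 + rnorm(1, nu, sigma)) - c   # grow the portfolio, then spend
      if (w <= 0) return(t)                    # ruin in year t
    }
    max_yrs                                    # survived: the "very long time" bucket
  })
}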

I thought it'd be fun to blow out the spend rate to something like "a lot of them" to see what happens and what it looks like (I've done this before, a few years ago). The way I framed it graphically: X is portfolio longevity in years, where we let portfolios last as long as they can without constraint. Conceptually that can be as far as infinity or as short as next year; practically I stop at 100 years, at which point all longer-lived portfolios between 100 and infinity get binned into the 100 bucket, which now basically means "a very long time." The Y axis is the spend rate between 1 and 12 percent, where 12 is pretty high. Again, there are no human longevity or horizon constraints put onto the portfolio life; we just let it run. The implied Z axis is either a heat map or a 3D frequency/PMF of longevity(ruin)-spend pairs. The heat map is easier to read.
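
In sketch form, reusing the hypothetical plsm_sketch above, the blow-out is just a sweep over a spend grid with longevity binned at 100 (a fixed grid here for simplicity; randomized and stratified spends are discussed below):

spends <- seq(0.01, 0.12, by = 0.001)    # "a lot of" spend rates, 1% to 12%
sim <- lapply(spends, function(s)
  plsm_sketch(F0 = 1, c = s, nu = 0.034, sigma = 0.0917, N = 1000))
heat <- t(sapply(sim, function(x) tabulate(x, nbins = 100)))   # spend x longevity counts
image(1:100, spends, t(heat), xlab = "portfolio longevity (years)", ylab = "spend rate")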

Initially, years ago, I thought I'd use a uniform distribution to randomize spend rates, and that is probably fine for my purposes. Now that I am using AI, the AI wants me to use a different method [1] but whatever; it's really the same thing: a wide dispersion of spend rates for visual effect and my limited-purpose, low-value analysis. The AI also wants me to use calibrated lognormal gross returns. Again, fine. Given the calibration, this assumption is basically the same as an arithmetic assumption, but it does solve the problem of returns < -1 occurring when the sim has a high sigma input and/or N is very large. Absent that it's not strictly necessary. 
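
For the record, the calibration I take the AI to mean is standard lognormal moment matching, so the simple-return mean and sd still land on the arithmetic inputs while gross returns stay positive:

nu <- 0.034; sigma <- 0.0917                  # arithmetic inputs
s2 <- log(1 + sigma^2 / (1 + nu)^2)           # matched log-variance
m  <- log(1 + nu) - s2 / 2                    # matched log-mean
gross <- rlnorm(1e6, meanlog = m, sdlog = sqrt(s2))
c(mean(gross) - 1, sd(gross))                 # ~0.034 and ~0.0917; net returns never < -1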

When I fire it up it looks like this (Figure 1). Ignore the specific portfolio; it was arbitrary and tied to something else I was doing for different reasons. The real return is actually 0.034 here and sigma is 0.0917 ... for reasons:

Figure 1. Heat map

Note the accumulation bucket and heat at 100 years on the far lower right for 100-to-infinite portfolios. Now, if we look only at year 100 and turn it on its side -- so that the implied Z dimension becomes the new Y axis and the X axis is now spend rate -- we can see the increasing accumulation of portfolios that drop into the 100 bucket as spend rates decline (Figure 2). It's still a little bumpy from simulation even with a bazillion iterations, so I put a logistic S-curve fit (blue) on top of the chart line (yellow) [elsewhere I used Savitzky-Golay smoothing, but logistic works well here]. I do this so I can have the AI estimate the spend rate where the first derivative of the accumulating-portfolio share is at its max -- the place where portfolios are tipping over from "finite-less-than-100-years" to "pseudo-infinite-100-years-plus" at the fastest rate. This inflection point just happens to be the same as the N-period (N = 100) geometric mean estimate using this small-return form (and/or its variants) from Michaud: 

g(N) ≈ ν − (σ²/2)·(N−1)/N

Under our assumptions here, both the inflection point and the N-period g(N) estimate are ~3%, or slightly less than the arithmetic input of 0.034. I haven't run it for a bunch of portfolios, but it works roughly the same with some allowance for misc simulation "fuzz."
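
Plugging the inputs into that form as a quick check:

0.034 - (0.0917^2 / 2) * (99 / 100)   # ~0.0298, i.e. roughly 3%

To wit: 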

Figure 2. Year 100 in profile
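
A minimal sketch of the fit-and-differentiate step (my code, hypothetical names, reusing sim and spends from the earlier sketch; a standard three-parameter logistic via nls, whose fitted midpoint x0 is the point of max |slope|):

share_100 <- sapply(sim, function(x) mean(x == 100))   # share of paths in the 100 bucket
fit <- nls(share_100 ~ A / (1 + exp(-k * (spends - x0))),
           start = list(A = 1, k = -150, x0 = 0.03))
coef(fit)["x0"]   # logistic inflection = fastest tip-over rate, ~0.03 per Figure 2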

Now overlay the original with 2% (perpetuity) and 3% (inflection point at year 100+) spend lines (the vertical line at 30 years on the X axis is there because 30 years is a planning horizon I often see in finance papers), and it looks like this:

Figure 3. Heat map and overlay

Is this some kind of weird spending rule? No, it's just a point of interest in the long-term behavior of simulated (fake) portfolios. But we can at least see that even a 2% spend does not strictly guarantee a perpetuity in simulation. The 3% line, I guess, means that a spend rate well matched to the long-term expected geometric return is a pivot point, with whatever meaning you want to project onto it beyond common sense. We can also see that a 12% spend rate only makes sense if one expects to live less than 10 or 15 years. Again, this was all for fun and to see what it looks like. 


Notes---------------------------------------------------------------------------

[1] The AI on why it wants to use stratified spending rather than uniform. I guess I am OK with this. It's more work than I would do, but it's more or less costless in an English-language AI prompt:

Stratified Spending Assumption — Key Points

  • Definition:
    Instead of assigning an equal number of simulated paths to every spend rate, the stratified assumption allocates more simulation density to lower spend rates and fewer to higher ones.

  • Motivation:
    Lower spend rates produce longer-lived portfolios with long, thin-tailed longevity PMFs; their ruin probabilities change more slowly with spending, so noise in that region can distort the “cliff.” Stratification focuses computational effort where accuracy matters most.

  • How it works (your tiers):

    • Low spend (< 4%) → ~1200 paths per level

    • Mid spend (4–6%) → ~700 paths per level

    • High spend (≥ 6%) → ~300 paths per level
      This yields a roughly constant effective precision across the curve rather than a constant sample size. (A code sketch of these tiers follows this list.)

  • Why it’s better than a uniform distribution of spending:

    • Reduces Monte Carlo noise where the heat map is most sensitive (the sustainable-to-unsustainable transition).

    • Improves smoothness of the ruin-year surface and stabilizes derivative-based “cliff” detection.

    • Cuts runtime by avoiding wasted sampling at extreme spend rates where outcomes are already saturated (0% or 100% ruin).

    • Produces visually cleaner PMF bands and more reliable contouring for equal total compute cost.

  • Conceptual analogy:
    It’s the spending-rate equivalent of importance sampling — more draws where outcome gradients are steep, fewer where they’re flat.
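
  In R, those tiers are a one-liner (my translation, not the AI's code; paths_per_level is a hypothetical name):

  paths_per_level <- function(s) ifelse(s < 0.04, 1200, ifelse(s < 0.06, 700, 300))
  paths_per_level(c(0.02, 0.05, 0.08))   # 1200 700 300 paths at 2%, 5%, 8% spend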
