Wait, isn't Lifetime Probability of Ruin (LPR) just another Monte Carlo retirement fail-rate thing? No, sort of but not really. LPR is rooted in (or is) the Kolmogorov PDE for the lifetime probability of ruin that I first saw in Milevsky's Seven Equations book. Took me a while to figure it out, though. I once even taped the PDE on my fridge for an entire year to see if anything would sink in. Finally, at about the year mark, I had a dream where the coefficient of the 2nd term was spinning in a circle. I woke up and thought: "damn, I know what that is!" and I walked to my desk, coded it as a loop, and holy crap it worked. Here is the 7 Equations version:
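(Rendered here in my own LaTeX notation, which may differ slightly from the book's:)

$$
\lambda(x+t)\,\phi(w,t) \;=\; \frac{\partial \phi}{\partial t} \;+\; (\nu w - 1)\,\frac{\partial \phi}{\partial w} \;+\; \frac{\sigma^2 w^2}{2}\,\frac{\partial^2 \phi}{\partial w^2}
$$

with boundary conditions $\phi(0,t)=1$ (broke while still alive is ruin) and $\phi(w,t)\to 0$ as $w\to\infty$. Here $w$ is wealth in units of annual consumption, $\nu$ and $\sigma$ are the portfolio drift and volatility, $\lambda(x+t)$ is the mortality hazard rate at age $x+t$, and $\phi$ is the lifetime probability of ruin.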
The epiphany on waking was that the "1" was a unit of consumption subtracted from a wealth process and that we were dealing with a mortality-weighted infinite MC sim. After that it was a piece of cake. Plus I had the finite-difference solution for the PDE above that Prof Milevsky had once sent me. Except for the wobble of simulation, the two solutions, mine and his, looked perfectly aligned.
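For the curious, here is a minimal sketch of that loop. To be clear, this is illustrative and not the original FRET code or Prof Milevsky's finite-difference solution; the Gompertz mortality parameters (modal age 87.25, dispersion 9.5), the lognormal return model, and the 200-year cutoff are assumptions I'm making for the example:

```python
# Minimal sketch of survival-weighted lifetime probability of ruin (LPR).
# Assumptions: Gompertz mortality, lognormal annual returns, wealth measured
# in units of annual consumption so spending is the literal "1".
import numpy as np

rng = np.random.default_rng(42)

def gompertz_survival(t, x, m=87.25, b=9.5):
    """tpx: probability that a life aged x is still alive t years later."""
    return np.exp(np.exp((x - m) / b) * (1.0 - np.exp(t / b)))

def lpr_sim(w0=25.0, age=65, mu=0.04, sigma=0.10, years=200, n_paths=20_000):
    """Survival-weighted lifetime probability of ruin by simulation.

    w0        : starting wealth in units of annual consumption (25 ~ a 4% rule)
    mu, sigma : arithmetic drift and volatility (E[growth factor] = e**mu)
    years     : effective horizon; paths that survive it are "forever" portfolios
    """
    total = 0.0
    for _ in range(n_paths):
        w = w0
        z = rng.standard_normal(years)  # one year of return noise per step
        for t in range(1, years + 1):
            # grow the wealth process, then subtract the "1" of consumption
            w = w * np.exp(mu - 0.5 * sigma**2 + sigma * z[t - 1]) - 1.0
            if w <= 0.0:
                # ruin at year t counts only to the extent we're still alive
                total += gompertz_survival(t, age)
                break
    return total / n_paths

if __name__ == "__main__":
    print(f"LPR ~ {lpr_sim():.4f}")
```

A path that never ruins contributes zero; a path that ruins at year t contributes the probability of still being alive at t. The average over paths is the survival-weighted ruin estimate.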
Now, the reason I like LPR, and why it is different from a standard MC fail-thingy, is that the MC fail-thingy, which delivers a "probability of retirement ruin" number, under-imagines the real problem and disrespects the full scope of the idea. By that I mean, in Prof Milevsky's words, LPR considers the "full constellation of asset exhaustion possibilities" to infinity (not just an arbitrary 30 years) as well as the full term structure of mortality to infinity (not just 30 years or to age 95). In practice we might cap mortality at ~120 years of age, and the whole sim doesn't really need to go past about 100 or 200 years; any portfolios that survive to 100 or 200 are to be considered "forever" portfolios. I think I ended up using 100 back in the day, but I see 200 below. Can't remember. MC sims usually elide themselves right past all this full-composition stuff. Plus, note that compared to finite-difference and PDE solutions we can also play around with non-normal return distributions, tho it is easier to not do that.
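For what it's worth, making the return draw non-normal is close to a one-line change in the sketch above. Here, purely as an illustration, a Student-t with 5 degrees of freedom, rescaled to unit variance, replaces the standard normal draw:

```python
# Inside lpr_sim, swap the standard normal draw for a fat-tailed one:
df = 5                                                         # illustrative
z = rng.standard_t(df, size=years) / np.sqrt(df / (df - 2.0))  # unit variance
```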
Here, btw, is where we are in this mini-series of Prompts:
- GPT Prompt 01 - Spending Strategy Comparisons
- GPT Prompt 02 - Lifetime Probability of Ruin (LPR) <-- [this post]
- GPT Prompt 03 - Dynamic Programming / HJB
- GPT Prompt 04 - Portfolio Longevity Heat Map
- GPT Prompt 05 - Perfect Withdrawal Rate (PWR)
- GPT Prompt 06 - Stochastic Present Value (SPV)
Here is the chatbot description of LPR:
"This analysis evaluates retirement risk using a life-contingent notion of failure rather than a fixed-horizon one. A retirement strategy is said to “fail” if portfolio wealth is exhausted while the retiree is still alive, with both investment returns and lifespan treated as stochastic. Formally, the key quantity of interest is the lifetime probability of ruin, , where is the first time wealth hits zero and is a random time of death drawn from a mortality model. This framework distinguishes sharply between running out of money late in life versus never being alive in that state at all, and it naturally incorporates the effects of spending rules, portfolio risk, guaranteed income, and consumption floors. By focusing on the probability of being alive and financially exhausted—rather than on fixed-age success rates or historical backtests—it provides a clear, actuarially grounded measure of tail risk in retirement."
In continuous time the LPR math might look like this tho the sim is done in discrete steps:
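Something like this (my notation, consistent with the PDE above):

$$
\text{LPR} \;=\; \Pr[\tau_0 < T_x] \;=\; \int_0^{\infty} {}_{t}p_x \, f_{\tau_0}(t)\, dt
$$

where $\tau_0$ is the first time wealth hits zero, $f_{\tau_0}$ is its density under the return process, and ${}_{t}p_x$ is the probability that a retiree aged $x$ at evaluation is still alive $t$ years later.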
Or... in English: the likelihood that, over an infinite interval, a portfolio will deplete at some time, BUT that likelihood weighted by the probability that one is still alive at that time (conditional on the age at which we start the evaluation). That latter weighting is over an infinite interval as well, but we often stop this nonsense at about 120 years of age or so. That formula is the simulation construct of LPR: easy to code, it satisfies the PDE above and gives us a solid, mathematically and actuarially sound estimate for retirement ruin over a human lifetime, ignoring of course all of the reasons why ruin estimation can suck. But THAT's for another time, although since these GPT prompts are likely to be my blog end game here, there probably will not be another time. My late friend Dirk Cotton wrote well on the problematics of ruin once. Look for his work if it is still out there.

If one were to be a close reader of the blog, one might remember that I called my LPR software, of which this is an example, FRET, for "Flexible Ruin Estimation Tool," where the joke was that FRET was what I did by looking at retirement finance problems for too many years. I do not FRET now, except maybe when it comes to prostate surgery and its aftermath(s).
Some caveats first, and these are almost exactly the same as in the last post:
- The prompting process seems unstable: each re-prompt yields varying results, and output is also conditional on the platform. I used ChatGPT's $20/month platform "Plus." Fwiw, I also save a physical copy so that I can re-force AI to more or less start from scratch each time rather than trusting it to remember stuff. Note that I actually had to re-prompt some of these after not looking at them for 6 months because they literally did not work, after having worked 50 times the last time I worked on this. Beware.
- AIs hallucinate and you have to take output carefully and/or with a grain of salt,
- Chat may evolve and obsolete the prompts,
- It helps to know the underlying theory and methods to second-guess what it is doing. Otoh, one can have a dialogue now to figure it out,
- There are certain tasks it will NOT do. For example, it said once that it could do HJB on 2-asset allocation optimization and then, when I said "ok, now run it," it said "sorry, bro, can't; it's too hard and will time out." It later created an acceptable workaround, but one needs to know what it can and can't do. It probably just wants me to pay more,
- I did this more or less "seat of the pants," so there might be assumptions Chat used for me during the many iterations to develop the prompts that won't be resident in memory for you if you test-drive these. No idea... See my first bullet above,
- These are fairly trimmed down and focus more on simple versions of the theory. No fine-tuning for fees or taxes or investor idiosyncrasies, etc., but those are probably easy enough to retrofit,
- Rigorous testing against a previously coded model has NOT been done. I guarantee I have probably mis-prompted somewhere in here and I don't even know it.
Sample prompt. Given more energy I might tweak this a bit, but it seems to work for now. Note that my qualms about a 100- or 200-year effective programming horizon are unresolved. Input params are in blue highlights, and I guess I could probably consolidate them more neatly at the top, but I didn't:
Output Example
Lifetime probability of ruin (survival-weighted): 0.0182989