You may have noticed that I have not been out here on the blog much for the last three years. The gap comes from some combination of:
- I saw what I needed to see and got a sense for the shapes of miscellaneous retirement models and math. It felt like I was more or less "done."
- Counterintuitively, I do not use, as part of my regular monthly retirement management, the analytics I was building and posting here for a decade. It's too much work, there's no impending personal-finance crisis (yet), and how many times can I run a model before I "get it" and it's all internalized anyway? I am a client of one!
- I was using my move to Montana as a big ol' inflection point: left-brain stuff to right, sedentary to active, etc. Basically, now that I am here in the west it is only: read, hike, lift, and feed the cat. I was not in the mood to code or do quant posts. I still respect that choice, but this post describes some new things I was playing with last week.
------
Today in 2025? I do not use all the quant dreck I tried to learn and write about here (learn, not teach), partly for the reasons in bullet 2 above, but also:
A. I now stick to very simple tools: a family balance sheet (an actuarial balance sheet, with a hat tip to Ken Steiner at howmuchcaniaffordtospendinretirement.blogspot.com), an income statement (basically tracking spending, though there is still some income), and a spending-control sheet (think process-control stuff like Six Sigma or ISO 9000... and that is not a recommendation, btw... to each their own methods). To the extent that I use any of the analytics from this blog's past, it is embedded in the upper and lower spending-control boundaries, boundaries (a policy choice) that are set with a little of the old math but more judgment. But no, I am not running all the old stuff all the time. And,
B. In order to actually run my models I would have to go back and look at all my code. Most of it is really hard to read and harder to fix or maintain when something doesn't run, which is common due to miscellaneous interdependencies (i.e., R packages and libraries, code that runs code, etc.). Yuck. I don't need that kind of thing. At 67 my coding can stay (mostly) in the past.
---------
But the problem with the present state of A and B, it dawned on me the other day, is this: one of my main themes here was the importance of continuous management and monitoring of the retirement situation by way of triangulation with ensembles of models ... and now I can't even "triangulate" myself anymore -- beyond the balance sheet when I need it -- without great effort. For that reason I spent a few hours over a couple of afternoons last week taking six of the higher-utility (to me) models of the past and having AI (ChatGPT Plus) come up with something so that:
- I don't have to code anymore
- I have a usable, repeatable prompt
- I can answer the questions I used to ask without any tech packages or library references going stale
- Charting and adjustments and add-on analysis/sensitivity are way easier
- The instructions are in almost-natural language most of the time, so they are easier to read and understand
- It would be easy, if I were so inclined, to share the ideas here, but now in something that is not R code
---------
These are the six models that I converted from memory into executable AI prompts, listed here as "domains" (see Note [1]):
Domain #1 — Spending Strategy Comparison Domain
Purpose:
Evaluate retirement spending strategies under uncertainty using economic consumption-utility principles.
Core Model:
Expected discounted (subjective time-preference) utility of lifetime consumption, typically with CRRA utility (Constant Relative Risk Aversion) and survival weighting from a Gompertz life table.
Framework:
$$\mathbb{E}\left[\int_0^{\infty} e^{-\rho t}\, {}_tp_x\, \frac{c(t)^{1-\gamma}}{1-\gamma}\, dt\right]$$
where ρ = subjective discount rate, γ = risk-aversion coefficient, c(t) = real consumption path, and ${}_tp_x$ = Gompertz survival probability to time t.
Outputs:
- Certainty-equivalent (CE) consumption or spending rate.
- Comparative charts of CE spending vs. withdrawal rates.
- Sensitivity to γ and ρ across different portfolio models.
Use Case:
Determines which fixed or variable spending strategies yield the highest expected lifetime utility.
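To make the mechanics concrete, here is a minimal Python sketch of the idea (not my actual prompt, and all parameter values are assumptions for illustration): simulate a fixed real spending rate, accumulate survival-weighted discounted CRRA utility, and invert to a certainty-equivalent consumption level.

```python
import numpy as np

# Illustrative Monte Carlo: score one fixed real spending rate by
# survival-weighted, discounted CRRA utility, then back out the
# certainty-equivalent (CE) constant consumption. Parameters assumed.
rng = np.random.default_rng(42)
n_paths, horizon = 10_000, 60              # years simulated past age 65
age, mode, disp = 65, 88.0, 9.0            # Gompertz mode m and dispersion b
mu, sigma = 0.04, 0.12                     # real portfolio return params
rho, gamma = 0.005, 2.0                    # time preference, risk aversion
spend = 0.04                               # constant real spend per initial $1

t = np.arange(1, horizon + 1)
# Gompertz conditional survival: tpx = exp(e^((x-m)/b) * (1 - e^(t/b)))
tpx = np.exp(np.exp((age - mode) / disp) * (1.0 - np.exp(t / disp)))

gross = rng.lognormal(mu - 0.5 * sigma**2, sigma, (n_paths, horizon))
wealth = np.ones(n_paths)
eu = np.zeros(n_paths)
for i in range(horizon):
    c = np.maximum(np.minimum(spend, wealth), 1e-9)  # tiny floor avoids -inf at ruin
    eu += np.exp(-rho * t[i]) * tpx[i] * c**(1 - gamma) / (1 - gamma)
    wealth = np.maximum((wealth - c) * gross[:, i], 0.0)

# Invert CRRA: the constant consumption with the same expected utility
annuity = np.sum(np.exp(-rho * t) * tpx)
ce = ((1 - gamma) * eu.mean() / annuity) ** (1.0 / (1 - gamma))
print(f"CE consumption ≈ {ce:.4f} per initial dollar of wealth")
```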
Domain #2 — Lifetime Probability of Ruin (LPR) Domain
Purpose:
Estimate the probability that a retiree’s portfolio wealth hits zero before death, given stochastic returns and random lifetime.
Methodology:
Monte Carlo simulation with survival weighting based on Gompertz mortality.
Implements the prefix-sum annuity-factor or first-passage approach to detect ruin events efficiently.
Framework:
Approximates, via discrete stochastic simulation, the lifetime probability of ruin characterized by Kolmogorov's backward PDE.
Outputs:
- Ruin-year probability mass function (PMF) and cumulative probability.
- Sensitivity analysis by volatility, spend rate, and portfolio mix.
- Overlay with conditional survival curve.
Interpretation:
Quantifies “risk of running out of money before death,” integrating longevity risk with market risk.
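A minimal sketch of the LPR idea, under assumed Gompertz and return parameters (this is not the prompt itself): draw a random lifetime per path by inverting the Gompertz survival function, then count the paths where the portfolio hits zero while the retiree is still alive.

```python
import numpy as np

# Illustrative lifetime-probability-of-ruin (LPR) simulation. Both the
# portfolio and the lifetime are random; ruin "counts" only if it
# happens before death. All parameter values are assumptions.
rng = np.random.default_rng(7)
n_paths, horizon = 100_000, 60
age, mode, disp = 65, 88.0, 9.0            # Gompertz parameters (assumed)
mu, sigma = 0.04, 0.12
spend = 0.04                               # real spend per initial dollar

# Random lifetimes by inverting Gompertz survival:
# tpx = exp(e^((x-m)/b) * (1 - e^(t/b)))  =>  t = b*ln(1 - ln(u)/e^((x-m)/b))
u = rng.uniform(size=n_paths)
life_years = disp * np.log(1.0 - np.log(u) / np.exp((age - mode) / disp))

wealth = np.ones(n_paths)
ruin_year = np.full(n_paths, np.inf)
for year in range(1, horizon + 1):
    r = rng.lognormal(mu - 0.5 * sigma**2, sigma, n_paths)
    wealth = np.maximum(wealth * r - spend, 0.0)
    newly_ruined = (wealth <= 0.0) & np.isinf(ruin_year)
    ruin_year[newly_ruined] = year

lpr = np.mean(ruin_year <= life_years)     # ruined while still alive
print(f"Lifetime probability of ruin ≈ {lpr:.3%}")
```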
Domain #3 — Dynamic Programming / HJB Domain (see note [1])
Purpose:
Solve the intertemporal optimization of consumption and portfolio risk jointly, using the Hamilton–Jacobi–Bellman (HJB) equation.
Structure:
Two subdomains:
- Spend Optimization -- determines the optimal consumption/withdrawal path by age and wealth, maximizing CRRA utility under stochastic returns and mortality.
- Risk Allocation by Wealth and Age -- solves for the optimal risky-asset share as a function of both age and wealth.
Mathematical Form:
$$\rho V = \max_{c,\pi}\left[u(c) + V_t + \big(rW + \pi W(\mu - r) - c\big)V_W + \tfrac{1}{2}\pi^2\sigma^2 W^2 V_{WW}\right]$$
subject to mortality-adjusted discounting.
Outputs:
Optimal consumption and allocation paths; policy surfaces by age and wealth; sensitivity to γ and ρ.
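Since, per Note [1], the AI often falls back to a Merton-style proxy anyway, here is a sketch of the Merton closed form that the HJB above reduces to under CRRA utility, infinite horizon, and no mortality. To be clear: this is the proxy, not a real HJB solver, and the parameter values are assumptions.

```python
# Merton closed-form proxy for the HJB problem above (infinite horizon,
# CRRA utility, no mortality) -- a sanity-check benchmark, not a solver.
# Parameter values are illustrative assumptions.
mu, r, sigma = 0.07, 0.02, 0.16   # risky drift, risk-free rate, volatility
gamma, rho = 2.0, 0.02            # risk aversion, time preference

# Optimal risky share is constant in wealth and age:
pi_star = (mu - r) / (gamma * sigma**2)

# Optimal consumption is a constant fraction of wealth:
c_frac = (rho + (gamma - 1) * (r + (mu - r)**2 / (2 * gamma * sigma**2))) / gamma

print(f"Optimal risky share pi* = {pi_star:.2%}")
print(f"Optimal consumption rate = {c_frac:.2%} of wealth per year")
```

The real HJB solution bends these flat policies by age and wealth once mortality and finite horizons enter, which is exactly what the full domain is for.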
Domain #4 — Portfolio Longevity Heat Map (PLSR) Domain
Purpose:
Visualize portfolio longevity as a function of spend rate using large-scale Monte Carlo simulation.
Methodology:
Computes the PMF of ruin years across spend rates and displays as color-coded heat maps or 3-D surfaces.
Key Features:
- "Prefix-annuity first-passage" approach.
- Stratified spending simulations.
- Common random numbers for smoothness.
- Optional Savitzky–Golay smoothing and logistic-fit "cliff" analysis to identify sustainability boundaries.
- Default horizon ≈ 100 years (interpreted as ∞).
Outputs:
Heat maps and 3-D surfaces showing ruin-year probability density by spend rate.
Useful for visual intuition of the sustainability cliff where portfolios transition from sustainable to unsustainable.
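A minimal sketch of the grid behind the heat map, under assumed return parameters: one shared return matrix (common random numbers) and one ruin-year PMF column per spend rate. The plotting itself (imshow, contourf, or a 3-D surface) is left out.

```python
import numpy as np

# Illustrative grid for a portfolio-longevity heat map: the PMF of ruin
# year at each spend rate. One shared return matrix (common random
# numbers) keeps adjacent spend-rate columns smooth. Parameters assumed.
rng = np.random.default_rng(11)
n_paths, horizon = 20_000, 100            # ~100-year horizon read as "infinity"
mu, sigma = 0.04, 0.12
spend_rates = np.arange(0.02, 0.081, 0.005)

gross = rng.lognormal(mu - 0.5 * sigma**2, sigma, (n_paths, horizon))

pmf = np.zeros((len(spend_rates), horizon + 1))   # last column = never ruined
for j, s in enumerate(spend_rates):
    wealth = np.ones(n_paths)
    ruin_year = np.full(n_paths, horizon)          # sentinel: survived horizon
    alive = np.ones(n_paths, dtype=bool)
    for year in range(horizon):
        wealth = np.where(alive, wealth * gross[:, year] - s, wealth)
        ruined = alive & (wealth <= 0.0)
        ruin_year[ruined] = year
        alive &= ~ruined
    pmf[j] = np.bincount(ruin_year, minlength=horizon + 1) / n_paths

# pmf[j, t] = P(ruin in year t | spend rate j); feed to imshow/contourf
j4 = int(np.argmin(np.abs(spend_rates - 0.04)))
print("P(survive 100 years) at ~4% spend:", round(pmf[j4, horizon], 3))
```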
Domain #5 — Perfect Withdrawal Rate (PWR) Domain
Purpose:
Compute the constant withdrawal rate that exactly depletes wealth at the horizon (sustainable rate), with or without random lifetime, with or without bequest.
Components:
- Fixed Horizon Component -- formal definition per Suarez (2004), solving for the fixed-term rate that satisfies expected wealth = 0 at horizon N.
- Random Lifetime Component -- my extension incorporating stochastic longevity (Gompertz), generalizing PWR to life-contingent sustainability.
Outputs:
Distribution of PWR values under stochastic returns and random lifetime; percentile tables (e.g., P50, P80, P95); comparative plots vs portfolio models.
Interpretation:
Links annuity-style valuation and sustainability, serving as the rate counterpart to the SPV liability view.
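The fixed-horizon component, at least, reduces to a one-line closed form per return path, so a hedged sketch under assumed lognormal returns looks like this (the random-lifetime extension would swap the fixed N for Gompertz lifetime draws as in Domain #2):

```python
import numpy as np

# Illustrative PWR sketch: for each simulated return path, the perfect
# withdrawal rate is the constant withdrawal that lands wealth exactly
# at zero at horizon N. With end-of-year withdrawals it has a closed
# form per path: PWR = W0 / sum_{i=1..N} prod_{t=1..i} 1/(1+r_t).
# Parameters are assumptions; fixed-horizon component only.
rng = np.random.default_rng(3)
n_paths, N = 100_000, 30                  # fixed 30-year horizon
mu, sigma = 0.04, 0.12                    # real return parameters

gross = rng.lognormal(mu - 0.5 * sigma**2, sigma, (n_paths, N))
discount = np.cumprod(1.0 / gross, axis=1)    # prefix products of 1/(1+r_t)
pwr = 1.0 / discount.sum(axis=1)              # per initial dollar of wealth

# P95 here = rate sustainable on 95% of paths, i.e. the 5th percentile
for p in (50, 80, 95):
    print(f"P{p}: PWR = {np.percentile(pwr, 100 - p):.3%}")
```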
Domain #6 — Stochastic Present Value (SPV) Domain
Purpose:
Value a cashflow plan under stochastic discounting and survival weighting—i.e., compute the mortality-weighted stochastic present value distribution.
Framework (continuous form):
$$SPV(\omega) = \int_0^H C(t)\, {}_tp_x\, e^{-\int_0^t r_s(\omega)\,ds}\, dt$$
Discrete form used in simulation:
$$SPV = \sum_{t=1}^{H} C_t \, {}_tp_x \prod_{s=1}^{t} \frac{1}{1+r_s}$$
Assumptions:
- Gompertz mortality, first-year survival guaranteed.
- Horizon is clipped, or mooted, at age 121+.
- Standard portfolio inputs calibrated to lognormal gross returns.
- 100,000 Monte Carlo paths.
Interpretation / Feasibility Equation:
Roughly, the plan is feasible when current wealth covers the stochastic present value of planned spending at a chosen percentile, i.e., W ≥ SPV(plan) at, say, P80.
Extended Module: Stochastic Floor Allocation (A%) with autocorrelated real returns (ρ ≈ 0.4) and Gompertz longevity, simulating the implied asset share hypothetically needed to support a real consumption floor.
Charts:
- CDF and PDF (density) formats with color-coded μ labels, percentile markers (P50, P80), and parameter boxes showing the SPV formula and Gompertz model.
Conceptual Anchor:
The SPV domain is the valuation side of the feasibility equation, complementing PWR and LPR domains.
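A minimal sketch of the discrete SPV simulation above, with illustrative parameters (deterministic tpx weighting here; a variant could draw random lifetimes instead):

```python
import numpy as np

# Illustrative mortality-weighted stochastic present value (SPV) of a
# constant real spending plan, per the discrete form above. Survival
# weighting uses the deterministic tpx curve. Parameters assumed.
rng = np.random.default_rng(5)
n_paths, H = 100_000, 56                  # horizon: age 65 to 121
age, mode, disp = 65, 88.0, 9.0           # Gompertz parameters (assumed)
mu, sigma = 0.03, 0.10                    # real stochastic discounting process
spend = 40_000                            # constant real spending plan

t = np.arange(1, H + 1)
tpx = np.exp(np.exp((age - mode) / disp) * (1.0 - np.exp(t / disp)))
tpx[0] = 1.0                               # first-year survival guaranteed

gross = rng.lognormal(mu - 0.5 * sigma**2, sigma, (n_paths, H))
disc = np.cumprod(1.0 / gross, axis=1)     # stochastic discount factors
spv = (spend * tpx * disc).sum(axis=1)     # one SPV per simulated path

for p in (50, 80, 95):
    print(f"P{p} SPV ≈ {np.percentile(spv, p):,.0f}")
```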
----------
Not bad for a few hours' work after three years of indolence! To finish this mini project out I will attempt to wrap all this up into an integrated and executable "whole," i.e., some kind of strategic, conceptual, and functional wrapper where the six domains are the "dashboard" for the relatively rare moments when I want to take a peek at speed and fuel to see if I need to nudge anything along retirement road. I'll update here if I get around to the integration part.
Notes -----------------------------------
[1] A) The optimization prompt is the one I had to rewrite several times: no matter what externally stored script I used, the AI is so maddeningly fickle about changing what it does -- sometimes it uses HJB, other times it uses a proxy, sometimes it does x or y and I have no idea which -- that it is 100% not reliable, or maybe not even usable. B) The two-risk-asset surface version was something where the AI said "yeah I can do that, here is the ready-to-use executable prompt," and me: "ok, run it." AI: "can't, bro." Me: "why not?" AI: "too long and hard, I will time out." Me: "but you created it and said it was ready to roll." AI: <shrugs>. It defaulted to a Merton-based proxy. That's fine, but it is not HJB. Fair, tho. I built a real HJB once and it ran for hours. The same may happen with the other prompts... TBD.