1. FRET - Flexible Ruin Estimation Tool
I created a tractable, transparent, simple, and fast solution for estimating lifetime risk of ruin. The results map well to the solutions of a Kolmogorov partial differential equation (PDE) that evaluates the same underlying processes. My tool directly addresses P[T>=L] (the probability that lifetime exceeds portfolio longevity in years), which also satisfies the PDE. The average difference between the two approaches (using finite-difference schemes for the PDE) in estimated outcomes is zero, with some minor variation due to the use of simulation for one of the terms. The advantage of the sim-approximation, in addition to general transparency, is that the density and difficulty of partial differential equations are avoided while tractability in using non-normal return distributions is gained*. The estimator integrates two probability distributions: one for the chance of still being alive in n years (a Gompertz mortality model tuned to the SOA annuitant mortality table) and one for the chance of a net-wealth-process failure over an unbounded horizon. This approach models the underlying processes simply but with clarity, and it makes lifetime ruin risk easy to visualize for a given age, return-distribution assumption, and spend rate. Prototyped in Excel; written in R, October 2017.
- Process 3 - Portfolio Longevity
- Process 5 - Continuous Monitoring and Management Processes
- Notes in a 2001 German paper about ruin estimation math
- My ruin approximation tool against 96 test conditions comparing it to the K equation
- Here is the theory behind my joint probability approximation (JPA) tool
- One last look at my joint probability approximation tool for estimating lifetime risk of ruin
- A third look at a joint probability approximation for ruin risk
- A second look at a joint probability approximation for ruin risk
- Ahhhhh...Now I get it! Ruin risk is a joint probability of separate distributions
- On renaming my ruin estimation tool, my guess at its notation and some other considerations
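The joint-probability idea above can be sketched in a few lines. Below is a minimal Python illustration (the original tool was Excel then R), where lifetime ruin is approximated as the sum over years of P(still alive at year t) times the simulated probability the portfolio dies in exactly year t. Every parameter here -- Gompertz mode 88 and dispersion 9 at age 65, a 4%/12% return assumption, 25x wealth-to-spend -- is my own illustrative assumption, not the SOA-tuned values:

```python
import numpy as np

rng = np.random.default_rng(1)

def gompertz_survival(t, x=65, m=88.0, b=9.0):
    # P(alive at age x+t | alive at x), Gompertz with mode m, dispersion b
    return np.exp(np.exp((x - m) / b) * (1.0 - np.exp(np.asarray(t) / b)))

def portfolio_longevity_pmf(n_sims=20000, w0=25.0, spend=1.0,
                            mu=0.04, sigma=0.12, horizon=120):
    # simulate years until constant real spending depletes wealth
    rets = rng.normal(mu, sigma, size=(n_sims, horizon))
    w = np.full(n_sims, w0)
    years = np.full(n_sims, horizon + 1)       # sentinel: survived the horizon
    alive = np.ones(n_sims, dtype=bool)
    for t in range(1, horizon + 1):
        w = np.where(alive, (w - spend) * (1 + rets[:, t - 1]), w)
        newly = alive & (w <= 0)
        years[newly] = t
        alive &= ~newly
    pmf = np.bincount(years, minlength=horizon + 2) / n_sims
    return pmf[1:horizon + 1]                  # P(fail in exactly year t)

horizon = 120
fail_pmf = portfolio_longevity_pmf(horizon=horizon)
t = np.arange(1, horizon + 1)
# lifetime ruin ~ sum over t of P(alive at t) * P(portfolio dies in year t)
lifetime_ruin = float(np.sum(gompertz_survival(t) * fail_pmf))
```

The mortality term is analytic; only the portfolio-longevity term is simulated, which is where the minor run-to-run variation noted above comes from.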
2. Backward Induction via stochastic dynamic programming - Asset Allocation
I built a backward induction engine using stochastic dynamic programming in R. The proximal purpose was to learn how it is done; a secondary goal was to come up with an economically rigorous framework for asset allocation optimization that varies with plan year and portfolio size while also optimizing life-cycle planning with respect to both fail rates and fail magnitudes when the optimization results are fed back into simulation. All of this, by the way, grew out of my trying to replicate some work done by Gordon Irlam in "Portfolio Size Matters," Journal of Personal Finance, Vol. 13, Issue 2, 2014.
- On Hacking Out a (Really) Rough Asset Allocation Optimizer by Using Backward Induction and Dynamic Stochastic Programming
- Putting Optimized Dynamic Allocations Back Into a Simulator
- A Reasonableness Check on My Backward Induction Optimization
- An extension to my asset allocation optimization journey
- One more extension to my asset allocation optimization journey
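The backward-induction mechanics look roughly like this toy Python sketch (the real engine was in R, and this is not a reproduction of Irlam's model): start from a terminal value of wealth, then step backward through plan years, choosing at each wealth grid point the equity fraction that maximizes expected next-period value. The CRRA coefficient, grids, spend level, and discretized return outcomes are all my illustrative assumptions:

```python
import numpy as np

gamma = 3.0                        # CRRA risk aversion (assumed)
W = np.linspace(1.0, 100.0, 200)   # wealth grid
spend = 2.0                        # fixed real spend per year (assumed)
allocs = np.linspace(0.0, 1.0, 11) # candidate equity fractions
# crude three-point equity distribution and a riskless bond (assumed)
eq = np.array([-0.20, 0.06, 0.30]); pq = np.array([0.25, 0.5, 0.25])
bd = 0.01

def crra(w):
    return np.where(w > 0, np.maximum(w, 1e-9) ** (1 - gamma) / (1 - gamma), -1e6)

T = 30
V = crra(W)                        # terminal value of wealth
policy = np.zeros((T, len(W)))     # optimal equity fraction by (year, wealth)
for t in range(T - 1, -1, -1):
    newV = np.empty_like(V)
    for i, w in enumerate(W):
        best, best_a = -np.inf, 0.0
        for a in allocs:
            # next-period wealth under each discretized equity outcome
            wn = (w - spend) * (1 + a * eq + (1 - a) * bd)
            # off-grid-low (ruin) gets a harsh penalty via the left= fill
            ev = np.sum(pq * np.interp(wn, W, V, left=-1e6))
            if ev > best:
                best, best_a = ev, a
        newV[i], policy[t, i] = best, best_a
    V = newV
```

The `policy` table -- allocation as a function of plan year and portfolio size -- is exactly the kind of output that can then be fed back into a simulator, as the posts above describe.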
3. Backward Induction via Stochastic Dynamic Programming - Optimal Spend Rates
This was an attempt to use an "optimal control theory" technique (i.e., stochastic dynamic programming and backward induction - BI/SDP, related to Bellman equations) to evaluate life-cycle spending choice (or the decumulation half, anyway). In personal finance, this method has been attempted before for life-cycle allocation choice [see Irlam, G. (2014). Portfolio Size Matters. The Journal of Personal Finance, 13(2), 9-16.] but to the best of my knowledge has not been attempted for spend rates. Results were noisy but fairly consistent with life-cycle econ theory.
4. Reinforcement Learning and AI Applied to Optimal Age-Based Spend Rates
I built a rudimentary AI program to teach itself "optimal spend rates in decumulation" using a dispersion routine, a utility-based value function, a recursive wealth and memory process, and, of course, rewards. Depending on how we frame it, the machine tended to re-find optimal solutions that were already known before I started. On the other hand, it may or may not have some applicability to intractable problems or problems where solutions are not already known. RStudio and AWS; simple, probably reductive, no other sophisticated tools.
- Machine Spending [There are sublinked posts]
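To give a flavor of the reinforcement-learning framing, here is a rudimentary tabular Q-learning sketch in Python for an age- and wealth-dependent spend rate (this is an illustration of the general idea, not the author's program; the action set, reward, grids, and return process are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

n_w, n_t = 20, 30                  # wealth buckets, plan years
actions = np.array([0.02, 0.03, 0.04, 0.05, 0.06])  # candidate spend rates
Q = np.zeros((n_t, n_w, len(actions)))              # the "memory" table
alpha, eps, gamma_d = 0.1, 0.1, 0.97                # learn rate, explore, discount

def bucket(w):                     # map wealth to a grid index
    return int(np.clip(w / 5.0, 0, n_w - 1))

def utility(c):                    # CRRA reward with gamma = 2
    return -1.0 / max(c, 1e-6)

for episode in range(10000):
    w = 50.0                       # start each episode at the same wealth
    for t in range(n_t):
        s = bucket(w)
        # epsilon-greedy action choice: explore sometimes, else exploit
        a = rng.integers(len(actions)) if rng.random() < eps \
            else int(np.argmax(Q[t, s]))
        c = actions[a] * w
        w = (w - c) * (1 + rng.normal(0.04, 0.12))
        r = utility(c)
        if w <= 0 or t == n_t - 1:                  # terminal update
            Q[t, s, a] += alpha * (r - Q[t, s, a])
            break
        Q[t, s, a] += alpha * (r + gamma_d * np.max(Q[t + 1, bucket(w)]) - Q[t, s, a])

# greedy spend-rate policy by plan year for a mid-wealth state
policy = actions[np.argmax(Q[:, 10, :], axis=1)]
```

The machine's "learning" is nothing more than the repeated Q-table update; with enough episodes the greedy policy tends toward shapes that life-cycle theory already predicts, which is roughly what the posts above found.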
5. WDT - Simulation Framework for the "Expected Discounted Utility of Lifetime Consumption" given Special Awareness of Wealth Depletion Time Concepts
After about six months of reading opaque research papers on the concept, and the financial mathematics, of "wealth depletion time" (the span of the planning horizon where wealth is depleted, consumption is forced down to whatever pensionized income is available, and the value function is based on relative risk utility), I decided to instantiate the concept in a simple pedagogic model. The concept is typically represented in continuous-time math and differential equations, but it can also be represented, as here, with simulation. Features include life-cycle processes for consumption, fair (immediate/deferred and real) annuities, social security and pension, normal and fat-tailed return generation, auto-regressive stochastic inflation, relative risk utility, etc. The model, combined with an understanding of the analytic expressions and simulation software that is up to the task, can be a powerful teaching tool for the risks and (utility) rewards of choices to be made within the life-planning cycle.
- A WDT Model [sublinked posts]
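The core WDT mechanic is easy to see in a stripped-down Python sketch: simulate a random Gompertz lifetime, run wealth down under portfolio spending, and the moment wealth hits zero force consumption to the pensionized income floor; the objective is expected discounted CRRA utility of the whole consumption path. Every parameter below is my illustrative assumption, not the author's:

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_discounted_utility(w0=20.0, spend=1.2, income=0.6, gamma=2.0,
                                mu=0.03, sigma=0.11, rho=0.02, n_sims=5000):
    vals = np.empty(n_sims)
    for i in range(n_sims):
        # Gompertz remaining-lifetime draw from age 65 (mode 88, dispersion 9)
        u = rng.random()
        T = max(int(9.0 * np.log(1 - np.log(u) * np.exp((88.0 - 65.0) / 9.0))), 1)
        w, v = w0, 0.0
        for t in range(T):
            # wealth depletion time: after depletion, consumption is forced
            # down to the income floor alone
            c = income + (spend if w > 0 else 0.0)
            if w > 0:
                w = (w - spend) * (1 + rng.normal(mu, sigma))
            v += np.exp(-rho * t) * c ** (1 - gamma) / (1 - gamma)
        vals[i] = v
    return vals

vals = expected_discounted_utility()
edu = float(vals.mean())           # expected discounted utility of lifetime consumption
```

Raising the income floor (more annuitization) cushions the utility cost of depletion, which is exactly the trade-off the WDT literature formalizes.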
6. "5 Processes" - A paper that was a synthesis of what I knew in late 2019
This was an effort to structurally integrate four to five years of blogging by factoring my understanding of retirement finance in all its glory into what I consider the five core processes and then looking at them as quantitatively as I am capable of as an amateur:
0. Five retirement processes - Introduction
1. Return generation in multi-period time,
2. Stochastic consumption processes,
3. Portfolio longevity in its unconditional sense,
4. Human mortality and conditional survival probability,
5. Continuous management and monitoring processes
168 pages. Typos. Missing page numbers. First draft.
7. Stochastic Present Value of a Spend Liability for a Household Balance Sheet
Rather than using a simulator to calculate retirement ruin probabilities -- a vaguely unrealistic metric -- by projecting (constant?) spending and fake returns into an ersatz and unknowable future, this tool works within the context of a household balance sheet -- where asset values are (mostly) known with certainty -- to estimate the present value of a spend liability, but as a distribution rather than as a deterministic object. It does this by taking the current-time dollar spend amount, along with a custom-designed spend path based on expectations about future spending, and then simulating that plan out by: a) randomizing the simulated lifetime duration using Gompertz math for mortality probabilities, and b) chaining spending along with randomized inflation, the planned discontinuities, and randomized spend volatility. The simulated series is then discounted back to real terms and summed. The resulting distribution of NPVs can be used to select a value for the balance sheet liability just as one would with a deterministic PV (except here we have more choices: mean, median, pth percentile, whatever). In addition, the current assets that are available to fund, if not entirely defease, the liability can be located on the dollar spend distribution in order to estimate a type of probability of success. Since discount rates are a policy choice when they are not randomized into the SPV, the rate here is set up as an input variable designed to test ranges and sensitivities, though this may change later.
- Process 2 - Stochastic Consumption Processes [draft]
- Simulated stochastic present value of spending meets an efficient frontier
- More "theory reference points" (Yaari) for my custom spend calc and stochastic present values
- How might I project an SPV spend liability estimate out 3 or 10 years from today?
- How big of a deal is it to randomize the discount rate in a SPV liability calc?
- Impact of Longevity Expectations on the Stochastic PV of Spending
- Simple reality check vs. complex SPV
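A compact Python sketch of the SPV recipe described above -- random Gompertz lifetime, mean-reverting random inflation, noisy spending, discounting at a policy-chosen rate, then reading a liability value and a success-type probability off the distribution. All parameters (age 60, mode 90, dispersion 8.5, 2% inflation target, $3,000 of assets, etc.) are my illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

def spv_distribution(spend0=100.0, disc=0.03, n_sims=10000,
                     age=60, m=90.0, b=8.5):
    npvs = np.empty(n_sims)
    for i in range(n_sims):
        # randomize lifetime via a Gompertz inverse-transform draw
        u = rng.random()
        T = max(int(b * np.log(1 - np.log(u) * np.exp((m - age) / b))), 1)
        spend, npv, infl = spend0, 0.0, 0.02
        for t in range(1, T + 1):
            infl = 0.6 * infl + 0.4 * 0.02 + rng.normal(0, 0.005)  # mean-reverting inflation
            spend *= (1 + infl) * (1 + rng.normal(0, 0.02))        # spend volatility
            npv += spend / (1 + disc) ** t                         # discount and sum
        npvs[i] = npv
    return npvs

npvs = spv_distribution()
# pick a liability value off the distribution, e.g. the 75th percentile
liability = float(np.percentile(npvs, 75))
# locate available assets on the distribution for a success-type probability
assets = 3000.0
p_success = float(np.mean(npvs <= assets))
```

Note the discount rate `disc` is a plain input here, per the "policy choice" point above; randomizing it would fatten the right tail of the NPV distribution.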
8. Rewrite of a finite-differences solution to the Kolmogorov equation in R
I took some VBA code, written by someone else, that does a finite-differences solution/approximation to a Kolmogorov partial differential equation (the equation is used to evaluate the lifetime probability of ruin) and I rewrote it in R. While PDEs mostly escape me, I did this recode to help me understand how the equation works in practice and to enhance my understanding of the mathematics of retirement, especially since this equation is a gem of distillation of the retirement problem. The FD approximation, when run through its paces, lands close to both Monte Carlo simulation and "perfect withdrawal rates" (an analytic solution for withdrawal rates assuming a known sequence of returns over time) when run in a dynamic simulation.
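To show the flavor of such a finite-differences scheme, here is a simplified explicit solver in Python for the mortality-free backward equation for ruin-by-horizon, u_tau = (mu*w - 1)*u_w + 0.5*sigma^2*w^2*u_ww, with unit spending per year, u(w,0)=0 and u(0,tau)=1. This is a sketch of the technique, not the VBA author's scheme, and all parameters are my assumptions:

```python
import numpy as np

mu, sigma = 0.05, 0.15             # drift and volatility (assumed)
w_max, nw = 50.0, 101              # wealth grid: 0..50 units of annual spend
w = np.linspace(0.0, w_max, nw)
dw = w[1] - w[0]
horizon, dtau = 30.0, 0.002        # 30-year horizon; small step for stability
u = np.zeros(nw)                   # u(w, 0) = 0: no time left, no ruin
u[0] = 1.0                         # absorbed at zero wealth: ruin

drift = mu * w - 1.0               # (mu*w - 1): growth minus unit spending
diff = 0.5 * sigma ** 2 * w ** 2
for _ in range(int(horizon / dtau)):
    un = u.copy()
    # upwind first derivative for the drift term (stability near w = 0)
    fwd = (u[2:] - u[1:-1]) / dw
    bwd = (u[1:-1] - u[:-2]) / dw
    u_w = np.where(drift[1:-1] > 0, fwd, bwd)
    u_ww = (u[2:] - 2 * u[1:-1] + u[:-2]) / dw ** 2
    un[1:-1] = u[1:-1] + dtau * (drift[1:-1] * u_w + diff[1:-1] * u_ww)
    un[0] = 1.0                    # ruin boundary
    un[-1] = un[-2]                # reflecting far boundary
    u = un

# u[i] ~ P(ruin within 30 years | wealth w[i], spending 1 per year)
```

A lifetime (rather than fixed-horizon) version adds a mortality-rate term, which is the form the rewritten R code actually solved.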
9. The Perfect Withdrawal Rate Concept and its adaptation to Stochastic Longevity
I adapted a formula seen in several sources (several academic retirement papers and one web site) designed as an analytic solution to safe withdrawal rates and then I added an additional feature that I don't think is being done anywhere else: a stochastic duration of periods that fits a Gompertz model for human longevity. This was a challenge suggested in one of the papers that I couldn't pass up. The base equation is "the maximum withdrawal rate possible over a fixed period of time if one had perfect foresight of investment returns" (i.e., a "perfect withdrawal rate"). As a standalone analytic solution it is both powerful and elegant. It also enables one to visualize sequence of returns risk both when looking at the composition of the equation directly and also by looking at the distribution of withdrawal rates that result from it. To this foundation I added a randomly varying number of periods that are fitted to Gompertz distribution with Mode = M and dispersion = b. The analysis made possible from the resulting distribution of PWRs compares well to both Monte Carlo simulation and a Kolmogorov partial differential equation.
- Prelim. study of PWRs and stochastic longevity
- Trend Following Can Enhance Withdrawal Rates - Part 2
- Trend Following Can Enhance Withdrawal Rates - Part 3
- Trend Following Can Enhance Withdrawal Rates - Part 4
- Some PWR acknowledgements and comments
- Putting an adaptive PWR up against changing longevity estimates
- Visualizing Sequence of Returns Risk
- PWR v Kolmogorov v MC simulator
- Simulation vs PDEs and other analytic methods
- Trial run: effects on "Perfect Withdrawal Rates" from allocations to trend following
- Perfect Withdrawal Rates and Random Lifetime
- Perfect Withdrawal Rates with normal and fat-tailed return distributions
- Revisiting Perfect Withdrawal Rates But with Variable Duration
- Perfect Withdrawal Rates and Asset Allocation
- Perfect withdrawal rates, trend following, and stochastic longevity
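The base PWR formula plus the stochastic-longevity extension fit in a few lines of Python. Assuming withdraw-then-grow dynamics W_t = (W_{t-1} - w)(1 + r_t) with W_0 = 1 and W_n = 0, the perfect rate is w = 1 / sum over i of prod over j<i of 1/(1+r_j); the return and Gompertz parameters below are my illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def pwr(returns):
    # max constant withdrawal (fraction of start wealth) that lands wealth
    # exactly at zero after len(returns) periods, given perfect foresight
    inv = np.cumprod(1.0 / (1.0 + np.asarray(returns, dtype=float)))
    return 1.0 / (1.0 + inv[:-1].sum())

def gompertz_years(x=65, m=88.0, b=9.0):
    # random remaining lifetime via Gompertz inverse-transform sampling
    u = rng.random()
    return max(int(round(b * np.log(1 - np.log(u) * np.exp((m - x) / b)))), 1)

# distribution of PWRs under random returns AND random lifetime
pwrs = np.array([pwr(rng.normal(0.05, 0.12, size=gompertz_years()))
                 for _ in range(5000)])
p10 = float(np.percentile(pwrs, 10))   # e.g., a conservative spend choice
```

Sanity check on the closed form: with zero returns over n years the formula gives exactly 1/n, i.e., spend the pot down in equal installments.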
10. My own private spending rule of thumb that I call RH40
Given the immense amount of time I have put into multi-period life-cycle finance reading, research, and applied programming, I thought I'd try my hand at creating a custom rule of thumb that, among other things: 1) is easy to remember and portable, with no need for any tech or tables other than a simple calculator, 2) requires no input other than age, 3) generally respects the following concepts: early retirement has an amplified set of risks, longevity is variable, residual longevity and/or legacy require a non-zero terminal wealth plan, and risk aversion changes with age, 4) is a hyper-conservative starting point for an investigation of spend rates, and 5) has some plausible basis in econ theory and real-world, evidence-based application of the concept. In a series of tests, the formula, though probably a little conservative, shows pretty well. Withdrawal % = Age / (40 - Age/3)
- RiversHedge Unveils its Very Own New-to-the-World RH40 Retirement Spending Formula
- RH40 vs. analytically derived withdrawal using ERN math
- Excel PMT function vs RH40 for different mortality table assumptions
- RH40 in "multiplier" terms
- One More Thought on RH40 - Dynamic Longevity
- Playing Around With Some RH40 Math for context...
- Why my RH40 formula might actually be ok...
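For the calculator-averse, the rule is trivial to tabulate; a two-line Python rendering of the formula from the text:

```python
# RH40 rule of thumb from the text: Withdrawal % = Age / (40 - Age/3)
def rh40(age):
    return age / (40.0 - age / 3.0)

# quick table, ages 60-90 in 5-year steps
table = {age: round(rh40(age), 2) for age in range(60, 91, 5)}
```

At 60 it gives 3.0%, rising with age (e.g., 4.2% at 70), consistent with the rule's hyper-conservative-early, looser-later intent.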
11. Custom Monte Carlo Simulator
The MC tool integrates features that are not currently found together in free products, although many of them (not all) show up here and there individually. For my version, I include things like: stochastic longevity, stock-bond dependent return correlation, a correct total return bond formula, "regime" suppression of returns, historical return suppression, random spending variance and trends as well as large spending shocks, a customizable spending path, rudimentary but customizable tax and fee variables, output history retention, etc. Rewritten in late 2016 in R. 2017: added custom mortality distribution options, sampling methods, alternative custom return distributions, back-end analytics, rudimentary utility analysis of terminal wealth, and dynamic asset allocation based on the table output from a backward induction exercise.
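For orientation, here is what the skeleton of such a simulator looks like in Python with just two of the listed features (Gompertz stochastic longevity and random spending variance); this is a toy, not the author's R simulator, and all parameters are my illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(21)

def fail_rate(w0=1.0e6, spend=40_000.0, mu=0.05, sigma=0.11,
              age=65, m=88.0, b=9.0, n_sims=5000):
    fails = 0
    for _ in range(n_sims):
        # stochastic longevity: Gompertz inverse-transform lifetime draw
        u = rng.random()
        T = max(int(b * np.log(1 - np.log(u) * np.exp((m - age) / b))), 1)
        w = w0
        for t in range(T):
            # random spending variance around the planned spend, then returns
            w = (w - spend * (1 + rng.normal(0, 0.05))) * (1 + rng.normal(mu, sigma))
            if w <= 0:
                fails += 1     # ruin before the (random) end of life
                break
    return fails / n_sims

rate = fail_rate()
```

Everything else on the feature list (regime suppression, correlated stock-bond returns, taxes and fees, spending shocks) slots into the inner loop of a structure like this.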
As an aside: I no longer use MC sim in my own planning. The process is weak with respect to robust personal planning for reasons beyond this post. It may be useful as a broad-based dashboard metric, early warning indicator, or 2nd derivative phase-change detector. It is also uniquely well set up to evaluate custom spending plans or unusual tax considerations. This was among the first tools I ever built so is worthy of mention here.
One story before I leave the MC topic. In ~2012, when I was first becoming aware of the risk I carried, I asked an advisor at Wells Fargo to help me out. They ran for me (it took a team of bazillions a week to do whatever they were doing) what I now know to be a dressed-up MC sim. The news was I had a 20% fail risk for age. Hmmm I thought 1% was bad but what did I know then, 20 was ok I suppose but who knows. Then I had doubts. I asked for a re-run with updated parameters. "Uh, no the first was gratis; nbr 2 is $4,000.00." Me: "wtf, I've paid you $_________ in fees over the last 25 years." WFC: <crickets>. Thought about it for a month and while folding laundry: "Yup, I know how to do it." Did it. Re ran it and -- and have corroborated since that -- the fail rate was >=80%. So: 1) fail on customer service, 2) fail on basic analysis, 3) fail on customer retention, 4) fail on common sense (fail by me for overspending). I'm probably missing some other fail before I get to what they did to me during my divorce. Fail Fail Fail Fail. Fail.
12. Critical States Methods Applied to Personal Finance
After reading "Ubiquity - Why Catastrophes Happen" by Mark Buchanan I had an idea for how to model hits to a retirement plan that occur like avalanches in a sand pile -- or earthquakes or forest fires -- where there are few if any normal distributions or any kind of predictability around damage magnitude (also, I just finished an actuarial paper on "Extreme Value Theory" so my interest was engaged). This was my baby-step into complex systems and chaos theory as applied to personal finance.
- My baby steps into "critical states" in a decumulation model
- Comparing my naive complexity model to earthquakes
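The sand-pile intuition behind this can be demonstrated with the classic Bak-Tang-Wiesenfeld model: drop grains on a grid, topple any cell that exceeds a critical height onto its neighbors, and record avalanche sizes, which come out heavy-tailed rather than normal. A minimal Python sketch of that standard model (not the author's decumulation model; grid size and drop count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 15
grid = np.zeros((n, n), dtype=int)   # sand heights
sizes = []                           # avalanche sizes (topple counts)
for _ in range(8000):
    # drop one grain at a random site
    i, j = rng.integers(n), rng.integers(n)
    grid[i, j] += 1
    size = 0
    while True:
        over = np.argwhere(grid >= 4)        # cells at critical height
        if len(over) == 0:
            break
        for (a, c) in over:
            grid[a, c] -= 4
            size += 1
            for da, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, c + dc
                if 0 <= x < n and 0 <= y < n:
                    grid[x, y] += 1          # grains off the edge are lost
        # re-check: toppling may have pushed neighbors over threshold
    if size:
        sizes.append(size)

sizes = np.array(sizes)
```

Mapping avalanche sizes to damage magnitudes in a retirement plan is the "baby step" the posts above describe: once the system self-organizes to a critical state, small triggers occasionally cascade into very large events with no useful notion of an average.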