In a past post or two that I am too lazy to find for the links, I recklessly claimed that, for a broad range of retiree asset allocations in the center mass of risk, the exact allocation doesn't matter a ton, or at least doesn't matter much compared to bigger fish that can be fried, like spending. I at least made the claim in good faith, based on: a) a bunch of reading and interpretation of others' work, and b) my own sorta-semi-legit attempt to come up with an optimal allocation framework by age and wealth level using backward induction and stochastic dynamic programming (borrowed idea and implementation framework; my code), with the results of the optimization then fed into a simulation framework to see what worked best. What worked best, when non-rigorously optimized for both fail rate and fail magnitude in years, was a range of allocations (two simple asset classes were all I could pull off at the time) running from 40-50% equities to 70-80% equities. Over the whole range from zero to 100%: less than 40% was dramatically bad news; 40-70% was more or less all the same and good enough; and greater than 70-80% was ever so slightly less optimal than 40-70%, but nowhere near as bad as less than 40%, and in some cases, depending on age and level of wealth, was perfectly OK all the way to 100%. You'll have to take my word for now that this is, in fact, what I concluded at the time.
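For flavor, here is a toy sketch of what I mean by the backward-induction thing. This is emphatically not the original model; every parameter here (the return distributions, the wealth grid, the fixed real spend, the two asset classes) is an illustrative assumption of this sketch only. The idea is just: start at the end of retirement, and at each age and wealth level pick the equity weight that maximizes the probability of never running out of money.

```python
import numpy as np

# Toy sketch of backward induction over age and wealth -- NOT the original
# model. All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
SPEND = 4.0                                # fixed real spending per year
W_GRID = np.linspace(0.0, 200.0, 81)       # wealth grid (same units as SPEND)
ALLOCS = np.linspace(0.0, 1.0, 11)         # candidate equity weights, 0-100%
eq = rng.normal(0.05, 0.18, 2000)          # sampled real equity returns (assumed)
bd = rng.normal(0.015, 0.06, 2000)         # sampled real bond returns (assumed)

def solve(horizon=30):
    """Work backward from the end of retirement. V[i] is the probability of
    never failing from this age onward, starting at wealth W_GRID[i]; the
    policy at each age is the equity weight that maximizes that probability."""
    V = np.ones_like(W_GRID)               # surviving to the horizon = success
    policy = []
    for _ in range(horizon):
        newV = np.zeros_like(W_GRID)
        best = np.zeros_like(W_GRID)
        for i, w in enumerate(W_GRID):
            if w < SPEND:
                continue                   # can't cover spending: failure (prob 0)
            vals = []
            for a in ALLOCS:
                r = a * eq + (1.0 - a) * bd
                w_next = (w - SPEND) * (1.0 + r)
                # expected continuation value, interpolated on the wealth grid
                vals.append(np.interp(w_next, W_GRID, V).mean())
            vals = np.asarray(vals)
            newV[i] = vals.max()
            best[i] = ALLOCS[vals.argmax()]
        policy.append(best)
        V = newV
    return V, policy[::-1]                 # policy[0] = first year of retirement
```

The real exercise then pushed the resulting age-and-wealth policy through a separate simulation to score fail rate and fail magnitude; this sketch only covers the optimization half.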
I was reminded of all this when reading this morning Javier Estrada's article on Maximum Withdrawal Rates ("Maximum Withdrawal Rates: A Novel and Useful Tool," Journal of Applied Corporate Finance, Fall 2017), which can also be referred to as Perfect Withdrawal Rates. I did some studies on PWRs last year, by the way, a link to one of which is here. I was looking at his Table 2, which shows some summary statistics (to quote: "shows summary statistics for the distribution of MWRs for 11 asset allocations with stock-bond proportions between 100-0 (all stocks) and 0-100 (all bonds), over 86 rolling 30-year retirement periods, beginning with 1900-1929 and ending with 1985-2014. All strategies are based on a starting portfolio of $1,000, annual withdrawals adjusted by inflation, and annual rebalancing to the stock-bond allocations in the first row.").

Since I had been ruminating recently on Sharpe ratios, I started to wonder what his table data for mean MWR and standard deviation would look like if we calculated the statistic mean-MWR-per-unit-of-standard-deviation across the 11 allocations. I have no idea if this is legitimate or meaningful, but that is the bread and butter of RiversHedge: to take the risk of tackling illegitimate and/or meaningless things in retirement finance. Here is what it looked like when I added the missing row to his table:
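For the curious, the calculation behind all this is simple enough to sketch in a few lines of Python. This is my own illustrative code, not Estrada's: the PWR formula below assumes withdrawals taken at the start of each year against a known sequence of real returns (timing conventions vary across the MWR/PWR literature), and the ratio is just the extra row I computed, mean divided by standard deviation.

```python
import numpy as np

def perfect_withdrawal_rate(returns, capital=1000.0):
    """Constant withdrawal, taken at the start of each year, that exhausts
    `capital` exactly at the end of the horizon, given a known sequence of
    annual (real) returns. Start-of-year timing is my assumption here."""
    r = np.asarray(returns, dtype=float)
    # growth[j] = product of (1 + r_i) from year j through the final year
    growth = np.array([np.prod(1.0 + r[j:]) for j in range(len(r))])
    return capital * growth[0] / growth.sum()

def mean_per_unit_of_stdev(mwrs):
    """The Sharpe-like row added to the table: mean MWR across rolling
    periods divided by the standard deviation of those MWRs."""
    m = np.asarray(mwrs, dtype=float)
    return m.mean() / m.std(ddof=1)

# Sanity check: with zero returns over two years, a $100 pot supports $50/yr.
print(perfect_withdrawal_rate([0.0, 0.0], capital=100.0))  # → 50.0
```

Run against each of the 11 stock-bond mixes over the 86 rolling 30-year windows, the first function reproduces the kind of MWR distribution in Table 2, and the second collapses each column to the single ratio I was curious about.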
This seems like familiar territory. The thing that gets me, though, is that this took about three seconds to compute, while that whole "backward induction into simulation" thing took me weeks, maybe longer. That means I'll have to think about this one for a bit. Check out Prof. Estrada's article; it's a good, succinct cover of the topic. His footnote #4 references the same source material I used last quarter (Suarez, Clare, Blanchett), except that I had also added EarlyRetirementNow, who does the same thing and same math under a different guise, and Estrada himself, while Estrada adds Andrew Miller, to whom I have linked before but had not identified for MWR content.