Mar 5, 2017

Impact on Fail Rates of Volatility Reduction Strategies

I'm sure this has been done before, but it was easier to code it out than to look for the related articles. I wanted to see what happens to simulated fail rates when nothing changes but return volatility. This might be kind of obvious because, in the absence of random simulated inflation, if return vol were zero, for example (and the spend rate were less than the return rate), simulated retirement would almost certainly be successful. But the idea here, suggested by a reader, was: using a simulator, what is the nature of the effect on fail rates of adding to a portfolio asset classes, tactical allocation, or systematic rule-based strategies that might have a significant, material effect on portfolio volatility (when returns are held constant)? The answer can probably be intuited before the last sentence is completely read. Since sequence-of-returns risk (the damage done by drawing from a portfolio when returns are bad) is a bad deal for retirees, reducing the scale and frequency of the down-returns through volatility reduction strategies is likely to be a good deal. Increased vol would have the opposite effect. To cut to the chase, here is what it looks like in fake sim-world:

[Chart: simulated fail rate vs. portfolio return standard deviation, mean return held constant]

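To make the sequence-of-returns point concrete, here is a toy sketch (my own illustration, not the simulator's code; the starting balance, spend amount, and return sequence are all made up): two retirees earn the same annual returns in opposite orders while taking a fixed withdrawal, and the one who hits the bad years first ends up with less.

```python
def ending_wealth(returns, start=1_000_000, spend=40_000):
    """Withdraw a fixed amount, then apply the year's return."""
    w = start
    for r in returns:
        w = (w - spend) * (1 + r)
        if w <= 0:
            return 0.0  # portfolio failed
    return w

# Same five returns in opposite orders: same average, different outcome.
good_first = [0.20, 0.10, 0.05, -0.05, -0.15]
bad_first = list(reversed(good_first))

print(ending_wealth(good_first))  # bad years come last
print(ending_wealth(bad_first))   # bad years come first: ends poorer
```

With no withdrawals the two orderings end at exactly the same wealth, since pure returns compound in any order; the withdrawal is what makes the sequence matter.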
To do this I replaced the part of the simulator that used historical data to generate returns with a programmatic function that samples fake returns with a similar mean return, standard deviation, and skew. The historical data, when allocated 50/50 to risk/safe and run through the simulator 10k times using some generic assumptions[1], has a mean return of .076, a standard deviation of .097, a skew of -.53, and a fail rate of .165. That was the base case. Replacing the historical returns with a "return function" tuned to the same distribution shape produced similar fail-rate results and similar moments of the return distribution. Then it was just a matter of varying the standard deviation to see what happens, which is what is shown above. It was interesting (to me anyway) that toggling the skew parameter to zero had almost no effect at all, but I'll play around with that later.
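For what it's worth, a minimal version of that kind of return function can be built from a skew-normal distribution rescaled to the target moments and fed into a Monte Carlo loop. This is my own sketch, not the post's actual simulator: the horizon is fixed at 30 years rather than randomized longevity, the $40k spend on a $1M start is an assumption, and scipy's `skewnorm` stands in for whatever sampling method was actually used.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def skewnorm_shape(target_skew):
    """Solve for the skew-normal shape parameter that yields the
    target skewness (feasible roughly for |skew| < 0.99)."""
    return brentq(
        lambda a: stats.skewnorm.stats(a, moments="s") - target_skew,
        -50, 50)

def sample_returns(size, mean, sd, skew):
    """Draw returns from a skew-normal rescaled to the target
    mean and standard deviation."""
    a = skewnorm_shape(skew)
    m, v = stats.skewnorm.stats(a, moments="mv")
    scale = sd / np.sqrt(v)
    loc = mean - m * scale
    return stats.skewnorm.rvs(a, loc=loc, scale=scale,
                              size=size, random_state=rng)

def fail_rate(n_sims=2_000, years=30, mean=0.076, sd=0.097,
              skew=-0.53, start=1_000_000, spend=40_000):
    """Fraction of simulated retirements that hit zero."""
    R = sample_returns((n_sims, years), mean, sd, skew)
    w = np.full(n_sims, float(start))
    failed = np.zeros(n_sims, dtype=bool)
    for t in range(years):
        w = np.where(failed, w, (w - spend) * (1 + R[:, t]))
        failed |= w <= 0
    return failed.mean()

# Hold the mean constant and vary only the standard deviation:
for sd in (0.05, 0.097, 0.15):
    print(f"sd={sd:.3f}  fail rate={fail_rate(sd=sd):.3f}")
```

The fail rates themselves won't match the post's .165 base case (different longevity and spend assumptions), but the direction of the effect as sd rises should.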

In practical terms, I'm thinking this is useful information to me. My personal blended portfolio, over the last three years, has had something like a 6-7% standard deviation.[2] The systematic rules-based investment strategy I run, on the other hand, has had about a 4% standard deviation (and a better return). So, without even looking at a mean-variance map, I can know that, all else equal, my efforts are accretive, and that is before I even consider the covariance effects between strategy and portfolio … assuming the results of fake sim-world translate well into real life, which I do assume.
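The covariance point is just the two-asset variance formula. A quick sketch with made-up inputs (the 6.5% and 4% standard deviations echo the rough figures above; the 70/30 weights and the correlation values are pure assumptions) shows blended volatility falling below the weighted average of the parts whenever correlation is below one:

```python
import math

sd_p, sd_s = 0.065, 0.04   # portfolio and strategy sd (illustrative)
w_p, w_s = 0.7, 0.3        # assumed weights

for rho in (1.0, 0.5, 0.0):
    cov = rho * sd_p * sd_s
    blended_sd = math.sqrt((w_p * sd_p) ** 2
                           + (w_s * sd_s) ** 2
                           + 2 * w_p * w_s * cov)
    print(f"rho={rho:.1f}  blended sd={blended_sd:.4f}")

# At rho = 1 the blend equals the weighted average of the two sds;
# any lower correlation pulls blended volatility below that average.
```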

Notes-----------------------

[1] 10,000 runs, age 60 start, random longevity using the SS life table re-sampled for a 60-year-old, 50/50 (70/30 bond/cash) allocation, 4k constant inflation-adjusted spend, no return suppression, no SS, no spend variance, no spend shocks, no spend trend, etc.

[2] recalling here for a moment that "risk" in retirement is not really standard deviation, or for that matter maybe not even "fail rates," but perhaps something more like unforeseen economic death spirals that lead to bankruptcy.

