Nov 26, 2017

Hindsight 1: Monte Carlo simulation

If I were ever to depart these blogging shores (as in leaving blogging, not dying), which is always a possibility, I would want to document, for myself, some things I know now that I am either glad I know or wish I had known a little earlier.  I'm not sure how long that list is, but if it gets long it would probably be as tedious to read as I'm guessing it would be for me to sit down and write in one sitting.  For that reason I'm thinking of an open-ended series of short posts I'll call "hindsight." The audience is me, and the effort is intended to help me consolidate what I think I know about some of the stuff I've learned over the last couple of years. I'll seed the ground with this first one on Monte Carlo simulation.



------------------

It's not that MC simulation can't predict the future (it can't, of course); it's that it can sometimes be a sloppy planning tool in the present even if the modelling gets hyper-sophisticated. I'll give you a personal example, and I think this would be true even if the simulator in question were twice as sophisticated as it was back in 2010.  In that year, which I'll count as the year I unwisely slid into an early retirement, I had one of the top several retail financial institutions on the planet run a projection for me: a report that was probably 90% made up of the output of MC simulation, for which they wanted to charge me 4 grand if they were to do a second run, and around which it took a team of three people to manage the process (three?!?). I had them do this because I had a creeping fear of what I now know is ruin risk.  I didn't know how exposed I was, I had three kids to get through a whole lot of school years and myself through a lifetime, and I had been incautious in the decision to retire.  I figured this was an institution with no shortage of human and technical capital (true) and that I did not have the skills to figure it out on my own (not true), so they would be able to help me.

They came back with an answer, of course, from the mysterious and powerful MC behind the curtain.  I can't remember exactly, but it was something like an 80% success rate estimate.  That was both startling and reassuring. Startling because at the time I didn't understand simulation, or that 100% (which one really wants) is not a reasonable expectation, or that there is a non-linear cost to acquiring safety. On the other hand, it was reassuring because 80 was better than 70 or 60.  Here's my problem, though.  Fast forward to 2017.  I've now had the chance to build four or five simulators or pseudo-simulators (maybe even six, depending on how you define simulator) using different techniques, platforms, underlying modeling theory, languages, etc.  When I went back and applied my own 2017 capabilities to the problem, but now pretending it is 2010 with the then-relevant assumptions, the average across all my approaches looked something more like a 20% chance of success.  That's a pretty big difference between 80 and 20 (and 20 would have been terrifying...which is what should have happened).  That's why I call MC sims sloppy. The institution and I could have been reversed, or even come out the same.  It all depends on the tool, the customer, the input assumptions, the hidden embedded assumptions, the modelling approach, and probably which way the wind is blowing.  And even if we add more sophistication, like serial autocorrelation or regime switching or whatever, I still don't think that sloppiness problem goes away. It might, but someone would have to prove that to me. I'll at least keep an open mind.
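
(To make "80% success rate" concrete: a number like that typically comes out of a simulator by generating many random return paths, spending along each one, and counting the share of paths where the money never runs out. Here is a minimal sketch of that idea in R, since that's one of the languages I ended up using; the horizon, spending, and return numbers are illustrative assumptions of mine, not the institution's inputs or my 2010 ones.)

# minimal success-rate sketch; all parameters are illustrative assumptions
set.seed(1)
n_paths <- 10000    # simulated lifetimes
n_years <- 40       # assumed planning horizon
start_w <- 1e6      # assumed starting wealth
spend   <- 45000    # assumed annual real spending
mu      <- 0.05     # assumed mean real return
sigma   <- 0.12     # assumed return volatility

survives <- replicate(n_paths, {
  w <- start_w
  ok <- TRUE
  for (t in 1:n_years) {
    w <- w * (1 + rnorm(1, mu, sigma)) - spend   # grow the portfolio, then spend
    if (w <= 0) { ok <- FALSE; break }           # ran out of money on this path
  }
  ok
})
mean(survives)   # the "success rate" a report like that would quote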

So if it's sloppy, why would I bother with all the work I put into figuring out different ways of simulating? For one thing, it was mostly out of pique at the institution that helped me and at the thought of having to pay 4k for a second run when I was already paying super high fees (fees I no longer pay).  That, and I just wanted to see if I could do it; I was curious.  Also, I still think that when it is done well it is a good summary risk metric and something actionable as an early warning message.

I don't know what the official, academically sanctioned way to "do it well" is, but for my own purposes I do two things. First, I use many models to get some kind of technical consensus across different approaches and assumption sets. You can call this model averaging, but I don't really average; I unscientifically cherry-pick an answer that seems about right.  Second, I track that answer over time to see if it shifts too hard or too fast in a bad direction.  I guess that may not be practical for most retirees who don't do this kind of thing as a hobby, so I suppose the basic point here is that having a certain amount of skepticism about the tool being offered, and finding the energy to go out and get multiple points of view, are probably the important things to do.

Here's one last, maybe unrelated, thought on simulation.  I've now had the chance to come at this fail-risk thing several ways: from full-on programmatic simulation using different types of return and longevity modelling that runs to many, many pages of code (and is now a tangled mess, almost un-maintainable by me), to an animation of "perfect withdrawal rate" math, to a VBA instantiation of the Kolmogorov equation that I rewrote in R, to a one-off historical rolling thing, to (finally) a simple animation of "mw - 1" that takes up a few lines of code at most.  One would think that last one would be undesirable due to its almost grotesque (or beautiful, I'd rather say) simplicity.  It amuses me, though, that the output from that grotesquely simple one can come up with the same answers as the others, and do so with an unsettling degree of fidelity.  Complexity doesn't always buy you better answers.  And even if it does, whether simple or complex, it can still be a little sloppy.
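
(For the curious, here is roughly what I mean by the "mw - 1" version, as a minimal sketch in R with assumptions of my own: express wealth in units of one year's spending, multiply it each year by a random return factor m, and take one unit of spending out. Ruin is wealth hitting zero before the horizon does. The starting multiple, horizon, and return parameters below are illustrative only, not the ones I actually used.)

# "mw - 1" sketch: wealth in units of spending; parameters are illustrative assumptions
one_path <- function(w0 = 25, years = 40, mu = 0.05, sigma = 0.12) {
  w <- w0                                     # e.g. 25x spending, i.e. a 4% start (assumed)
  for (t in 1:years) {
    w <- w * (1 + rnorm(1, mu, sigma)) - 1    # m*w - 1: grow, then spend one unit
    if (w <= 0) return(TRUE)                  # ruin before the horizon
  }
  FALSE                                       # made it through
}
set.seed(1)
mean(replicate(10000, one_path()))            # estimated fail rate, in a few lines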

In hindsight, I'm glad I went through all this, though.  I don't necessarily have any better answers now than I did before, but having gone through all the research and building has allowed me to better judge financial technology (and financial marketing, and assumptions, hidden or otherwise). It has also allowed me to see much better the underlying real-world processes that are being modeled in these things.

