Sep 30, 2017

One last look at my joint probability approximation tool for estimating lifetime risk of ruin

This, hopefully, is the last look at the tool I created to approximate "lifetime ruin risk," which is the same thing the Kolmogorov equation evaluates (though with some opacity about what it is doing), as do most well-designed Monte Carlo simulators (also sometimes with a little opacity in the design assumptions, biases, and underlying process). My tool does the same thing as the K-equation and the simulators but it does it a little differently: it evaluates the joint probability of two things, the probability of being alive in any future year and the probability that an age-independent net-wealth process will go insolvent within the years that life represents.  Both of those can easily be derived empirically or by simple algorithms and then joined.
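Just to make the "join" concrete, here is a minimal R sketch of the idea as I've described it. The Gompertz mortality parameters, the lognormal return assumptions, the spend rate, and the horizon are all placeholders of mine for illustration, not the settings behind the tests below.

## Minimal sketch of the joint probability approximation (placeholder params)
set.seed(1)
start_age <- 65
horizon   <- 55          # years evaluated, i.e. out to age 120
w0        <- 25          # starting wealth in units of spend (a 4% rate)
spend     <- 1           # constant real spend per year
mu        <- 0.04        # real return assumptions
sigma     <- 0.12
n_sims    <- 10000       # size of the "mini simulation"

## 1) probability of still being alive t years from now (Gompertz survival)
gompertz_surv <- function(t, age, mode = 88, disp = 9) {
  exp(exp((age - mode) / disp) * (1 - exp(t / disp)))
}
p_alive <- gompertz_surv(1:horizon, start_age)

## 2) age-independent ruin process: probability the net-wealth process first
##    goes insolvent in year t, from a simple simulation
first_ruin <- replicate(n_sims, {
  w <- w0; ruin_yr <- NA
  for (t in 1:horizon) {
    w <- (w - spend) * exp(rnorm(1, mu, sigma))
    if (w <= 0) { ruin_yr <- t; break }
  }
  ruin_yr
})
p_ruin_at_t <- tabulate(first_ruin[!is.na(first_ruin)], nbins = horizon) / n_sims

## 3) join the two: ruin only matters in the years you are alive to see it
lifetime_ruin <- sum(p_ruin_at_t * p_alive)
lifetime_ruin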


Having just ported the Excel prototype to R and automated the mortality math, I didn't want to kill too much more time on this, but I at least wanted to see, a little more comprehensively, whether my tool is reasonably robust over a limited range that covers the parameters I probably want to work with in the near future.  Since I've already done comparisons between my MC sim (kinda slow) and the K-equation (fast) and found them reasonably well aligned, here I'll just compare the output of the joint probability approximator (JPA) to the K-equation (because of the speed, and because the JPA was spawned by and has some affinities with what I learned from K) and see if the results at least pass the smell test over the selected parameterization.

Here is the test framework. It's pretty limited for now but hey, I have kids to feed and drive around.


If you take those two estimation columns, sort them, and then chart them in side-by-side bars to look for any big or hidden variances, it looks like this (JPA in red):

There are some differences, of course, but they are neither very systematic nor very big; maybe some slight underestimation by the JPA. On the other hand, I have to remind myself that I'm not doing a moon landing here, so I'm calling this "more than passes the smell test."  The thing actually works.
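For what it's worth, that kind of sorted, side-by-side view only takes a few lines of base R to rebuild. The numbers below are synthetic stand-ins, not the actual test results; the real inputs would be the two estimation columns from the table above.

## Sorted side-by-side bars for the two estimates (synthetic stand-in data)
set.seed(2)
results <- data.frame(kolmogorov = sort(runif(12, 0.02, 0.35)))
results$jpa <- pmax(0, results$kolmogorov + rnorm(12, 0, 0.015))
barplot(t(as.matrix(results)), beside = TRUE,
        col = c("grey40", "red"), border = NA,
        names.arg = seq_len(nrow(results)),
        xlab = "test case (sorted)", ylab = "lifetime ruin estimate",
        legend.text = c("K-equation", "JPA"))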

Then again, a post like this raises the question of why anyone would really need another ruin-risk quasi-simulator.  I asked myself the same thing.  There are millions of them out there and there are also plenty of simple tools and rules of thumb to back them up.  If I had to dig deep I'd say the main reason I did it was just for the hell of it, to see if I could do it.  I also wanted to see if I understood the real-world process that I think we are working with here any better.  On those two counts I think I succeeded.  Here are some other pros and cons that, if pressed, I might say come into play as well:

Pros:

  • It's pretty fast. The first Excel simulator I ever built took hours to run and locked the PC down. In R that's down to 2 minutes per run, maybe more if I am pushing hard. This JPA thing takes three seconds, and its results are more or less indistinguishable from those of the other tools when push comes to shove (and extreme customization is not in play).
  • While it is simple, it is also very, very transparent in its assumptions, the process it is modeling, and how everything is accomplished.  My MC sim is many, many pages of code (a lot of comments and reporting stuff, not to mention newbie code; it is now almost impossible to fix or debug). The code for the K-equation is only about 3-4 pages, but I have no idea what it is really doing, even though I recoded it myself to find out.  The JPA is one page of code and I know exactly what it is doing and why.  I'm not always even sure I can say that about my simulator. No Bonini's paradox here, which has its benefits.
  • It helps me visualize the underlying probability process and tradeoffs really well; better, I think, than the traditional spaghetti charts that come out of a simulator (a rough sketch of that view follows this list).
  • Maybe this (JPA) has been done before or is an "oh yeah, I've seen that a million times in ____" kind of thing but I haven't seen it in the ret-fin lit before (or yet) even though I try to slog through quite a bit of it. So that's kind of cool.  For what it's worth. 
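To illustrate the visualization point above, here is one way to plot the two component curves, continuing the placeholder sketch from earlier in the post (again, my illustrative parameters, not the test settings):

## Survival curve and cumulative ruin curve on one set of axes
years <- 1:horizon
plot(years, p_alive, type = "l", lwd = 2, ylim = c(0, 1),
     xlab = "years from start age", ylab = "probability")
lines(years, cumsum(p_ruin_at_t), lwd = 2, col = "red")
legend("topright", legend = c("P(alive in year t)", "P(ruined by year t)"),
       lwd = 2, col = c("black", "red"))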

Cons:

  • At its heart it still leans on a mini simulation to make things work, which takes processing time and creates some variability in the results.
  • It does not lend itself very well, as far as I can tell, to extreme customization (think taxes or spend rules or return regimes or whatever you like) like an MC simulator can be programmed to do. But then again I'll also point to my post on the diminishing returns to complexity in financial modeling...
  • It is not truly portable or simple like equations or a rule of thumb.

No more coding or posts for today. This is enough even for me.



