Feb 26, 2019

Process 5 - Continuous Monitoring and Management Processes

---------------------
Note: as in the previous essays, this is a draft as I hone some of this content. Also, since I view these essays as consolidating and integrating what I've learned about ret-fin so far, I will continue to add to and update this provisional latticework over time in response to new findings or errors.
---------------------

This essay is a continuation of:

Five Retirement Processes - Introduction
Process 1 – Return Generation
Process 2 - Stochastic Consumption Processes
Process 3 - Portfolio Longevity
Process 4 - Human Mortality and Conditional Survival

Summary here.




Process 5 - Continuous Monitoring, Management and Improvement Processes 

Life can only be understood backwards; but it must be lived forwards. -Kierkegaard
"Irrespective of the investor’s initial portfolio management elections—‘buy-and-hold,’ ‘constant-mix,’ ‘floor + multiplier,’ ‘tactical asset allocation,’ ‘bottom-up security selection,’ ‘top-down strategic asset allocation,’ ‘glide-path,’ ‘passive investment management,’ ‘active investment management,’ ‘benchmark relative,’ ‘asset/liability match,’ etc.; and, irrespective of the initial elections for withdrawal management—‘rules based,’ ‘fixed monthly amounts,’ ‘percentage of corpus amounts,’ ‘longevity relative,’ etc., the critical objective is to assure that the portfolio can provide the required cash flows. Investors spend cash—not Information Ratios or Merton Optimums; and they need to know that the portfolio can sustain a suitable standard of living throughout their lifespan. The need to know whether the portfolio is in trouble is a primary justification for establishing an appropriate surveillance and monitoring program." -Collins (2015)

That's a longish epigraph or two to start out with but you must consider the alternative you could have faced. I was re-reading Collins (2015) “Monitoring and Managing a Retirement Income Portfolio” in order to prepare for this post while also keeping track of what I could use for quotes or to bolster my arguments. That approach, if followed to its logical conclusion, would have involved me copy/pasting all 34 pages of someone else's material into my post. Instead I recommend you go read it yourself. It's pretty good and is a highly competent and convincing cover of the topic and I am quite comfortable outsourcing to him the main topic of monitoring retirement income portfolios.  For my purposes here in this post I'll merely essay on: (a) some things that I think are often missing in evaluating and monitoring retirement plans in ongoing operation (as opposed to the initial design) and (b) some tools and methods I've vetted and either use in my own plan, have used and discarded but still like, or might consider using in the future for my own purposes.

I used to think retirement finance was "one thing:" a single number, an answer, an optimum, some kind of a monolith. The media, my past advisors, and a lot of the ret-fin literature did not entirely disabuse me of this notion. But it's not a monolith (it's more likely an infinity of things changing in every unstable instant). A couple years ago, in a blog post, I first tried to split the monolith, for my own purposes, into two retirements: early and traditional. But that split, while it was a pretty good separation since I had retired early and the differences are often palpable, was arbitrary and didn't capture the full force of some of the challenges encountered in thinking about risk and uncertainty over long retirements. So, after a few other attempts at breaking this down, I now try to view ret-fin through the more fractured lens of the "five processes" that I've been trying to work with in this series. That framework seems more coherent, intellectually grounded, and useful to me now, although I still like the early/traditional split at times, too. But the "five processes" approach, like "the monolith" or "the split," still hides, with a mask of equations and a facade of quantitative analytics, some uncomfortable aspects of the flow of a real retirement process as it is lived forward into real life. It may be a pretty good distillation of what goes into a rigorous analysis, but "the five processes" still, on the output side, do not produce a unified answer. That's because the five processes actually exist in the context of two different domains of uncertainty that we haven't even talked about yet: Domain 1 is the domain of the hard uncertainties while Domain 2 is where the really hard uncertainties are found. Let's see if I can unpack what I mean here, one domain at a time. Note that this kind of distinction is not all that original. It is more or less a repackaging of Taleb's "Mediocristan and Extremistan" from his Incerto series but now used for my retirement finance purposes.


Domain 1 Retirement - The “Normal” Hard Uncertainties.

I call Domain 1 (this is "normal" retirement risk) the "hard uncertainties" because normal retirement finance is pretty hard, hard enough that Bill Sharpe (Nobel laureate Bill Sharpe) once said that retirement income planning "is a really hard problem. It's the hardest problem I've ever looked at" and Richard Thaler opined that "For many people, being asked to solve their own retirement savings problems is like being asked to build their own cars." I was going to call Domain 1 the "easy" retirement but it's not.

Here is a Twitter conversation I had recently that can kick off our discussion of Domain 1 where “P” will be a person on Twitter and “Me” is me:

P1: What percent chance of success would you consider acceptable for a retirement plan? 

     [a Twitter survey nearby shows people choosing 80%+]

P2 -> P1: kind of surprised by results so far honestly, but probably shouldn't be. Most people are too conservative about this stuff and neglect to realize they have more control than projections assume.

Me -> P2: Having my own skin in the game in my 50s has concentrated my attention on this kind of thing. Russian roulette [an apt analogy that I picked up from Michael Zwecher's book on Retirement Portfolios] has a decent probability of working out just fine. Consequences are gnarly, tho.

P2 -> me: But this isn't Russian Roulette. You are still in charge. It's not like you wake up one day and go from 70% to 0% probability of success.

P3 -> P2: what's your answer?


P2 -> P3: Personally 60-70. For clients 75-80. Many aren’t comfortable at 70 [emphasis added]

Ignore for the moment that there is no known, or at least as far as I know, "accepted" threshold for success rates in Monte Carlo simulation. P2's risk positioning at a 40% fail rate seems really aggressive. And we haven't even gotten yet to the relatively long list of pros and cons for using simulated sustainability success/fail rates in the first place. I initially ascribed P2's answer to P3 to his being young, still having W2 income, and not having passed into a real retirement[a] but really the answer is that P2 is simply living and thinking in Domain 1 terms. He just doesn't realize it yet. This means that P2 is correct in the sense that in domain 1 one can, in fact, see what is coming and does, in fact, have time to react and correct. This kind of thinking is one of the better reasons for having a monitoring system in the first place…so that one can "see it coming" and then adjust. An extreme example of this adjustment (extreme in the sense that the course adjustment would happen on day 1 rather than later on) is something I did in my post on "playing a feasibility game against the 1970s," where evaluating a 4% rule starting in 1966 clearly shows infeasibility in year 0 (and then also in every year thereafter). Given that "heads-up" we got in year 0 by way of our diligent feasibility-monitoring system, we would obviously course-correct right then and there and have a more confident, higher-success, higher-utility plan by reducing spending to that which is "feasible." So, instead of 30 years of withdrawals, it would now be infinite, or at least infinite on a human-life scale. Evaluating future years would be no different in our feasibility-monitoring system because we would have the same sort of certainty, good information, and time to analyze, correct, and then recover in all those future years. All of this discussion supports P2's point, by the way.
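To make that year-0 feasibility test a little more concrete, here is a minimal sketch of the kind of check I mean. It is a toy, not my actual tool: the portfolio value, the spend, the horizon, and the flat 2% real discount rate are all illustrative assumptions, and "feasible" here just means the portfolio covers the present value of planned spending.

# Toy feasibility check: is the portfolio at least as big as the present
# value of planned lifetime spending? All numbers are illustrative only.

def pv_of_spending(annual_spend, years, discount_rate):
    """Present value of a level annual spend over a fixed horizon."""
    return sum(annual_spend / (1 + discount_rate) ** t for t in range(1, years + 1))

def feasibility_ratio(portfolio, annual_spend, years, discount_rate=0.02):
    """Ratio > 1.0 means spending is 'feasible' in this crude sense."""
    return portfolio / pv_of_spending(annual_spend, years, discount_rate)

if __name__ == "__main__":
    # Hypothetical retiree: $1M portfolio, 4% rule -> $40k/yr, 30-year horizon
    ratio = feasibility_ratio(portfolio=1_000_000, annual_spend=40_000,
                              years=30, discount_rate=0.02)
    print(f"feasibility ratio: {ratio:.2f}")  # < 1 would flag the kind of year-0 trouble described above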

The problem is Domain 2, which we'll get to in a bit, where you can, in fact, wake up one day and go from a "70% to 0% success rate." Maybe not literally in one day but maybe you can in a year. Here is an example from Dirk Cotton's blog (http://www.theretirementcafe.com/2015/12/positive-feedback-loops-other-road-to.html). P2 in the conversation above lives in domain 1 (and has no real skin in the game, btw) where everything works because we can see everything coming and have time to adjust. Me? I live in domain 2 and have skin in the game and am now and will always be almost utterly blind to some of the curveballs that the universe might throw at me. That's why I am not so sanguine about accepting 40% fail rates in my plan.

I won’t recapitulate all of the tools, techniques, and history of domain 1. That kind of thing is more than available in the current and deep ret-fin literature.  Domain 1 is what we’d call retirement finance as it is typically practiced by individuals, practitioners, and some academics. It is the world of either deterministic or normal probability models and math.  Here are some characteristics of Domain 1 retirement finance against which I will counterpose Domain 2 below:


Characteristics of Domain 1 (some of this list is redundant and it's clearly not exhaustive):
  • Normally distributed probability and/or deterministic math
  • Risk is hedge-able or insurable because it is known and manageable
  • Easy to model mathematically usually with formal models and/or closed form equations
  • Consequences are mild and anticipatable/manageable
  • Outcomes are predictable over medium to long horizons
  • The players in the game are economically rational
  • There are elegant optimal solutions which, while sometimes naïve, make sense
  • Choices and decisions are always consistent over time
  • This is an Induction-friendly environment
  • One can stand confidently at the very edge of modeled risk and not sweat
  • Set and forget world: optimal solutions work without change or review over the full horizon
  • Solutions are robust in domain 1 (but fragile in domain 2)
  • Symmetries abound in the math
  • Expectations are set and always met
  • Optimal solutions that say “you should borrow” mean “back the truck up to the lender”
  • Luck plays no or limited role
  • Standard deviation makes sense in terms of unlikely outcomes; rare events are really rare
  • Feedback loops and cascades do not occur; there is no chaos
  • Having a “margin of error” or some redundancy is mostly unnecessary or inefficient
  • Processes are smooth with no jumps or discontinuities
  • Power laws might exist for positive return outcomes but less so for spending


Domain 2 Retirement - The Really Hard Uncertainties.


Domain 2, on the other hand, a domain that is not often discussed in the ret-fin literature that I read, is a different world entirely. The closest I have come to seeing this covered is in some of the work by N Taleb (NNT) and a few others. To borrow from his language, this is the world of black swans. Usually this is understood as a market event like 2008, but NNT disclaims any conviction that 2008 was a black swan because he thinks it was clearly foreseeable. Black swans are events that are unpredictable and tacit or implicit. They are not known beforehand. They involve things like complexity and feedback loops. Rare events are substantially less rare in domain 2 than a normal distribution would imply. There are jumps and discontinuities. In a retirement finance context, those discontinuities can happen in any of the five processes, not just returns. Returns can have massive once-in-a-bazillion-year moves in less than a bazillion years. Spending can have massive shocks, feedback loops, and cascades leading to bankruptcy whenever it suits the universe to send them our way, usually when something else bad is happening at the same time. Even longevity, for that matter, could see a "shock" if someone happens to come up with a semi-immortality pill. Here are some characteristics of Domain 2:

Characteristics of Domain 2 (not sure I have this 100% right):
  • Probability is not distributed normally; it has fat tails and big skews; math is Mandelbrotian
  • There are processes that do not conform to probabilistic frameworks because they are so rare
  • Risk is not hedge-able or insurable because it is not known or foreseeable
  • Not easy to model mathematically; closed form elegant equations are naïve in domain 2
  • Consequences can be severe and life altering
  • Outcomes are predictable over no horizon, or only over very short ones
  • The players in the game are not economically rational and have biases and irrational responses
  • Elegant optimal solutions either make no sense or have to be continuously re-evaluated
  • Choices and decisions are inconsistent across time and similar decision events
  • This is an Induction-unfriendly environment. Past worst cases mean nothing for the future
  • One needs to be robust and step away from the edge of modeled risk in domain 2
  • There is a premium on monitoring systems and processes. Set-and-forget is fragile
  • Solutions are robust-ish in domain 2 (but considered stupid or inefficient in domain 1)
  • Asymmetries abound
  • Expectations mean little
  • Debt is a significant source of fragility
  • Luck reigns
  • Standard deviation makes little sense; rare events happen more often than expected
  • Feedback loops and cascades happen when they happen; chaotic processes unfold whenever
  • Margin of error or redundancy is necessary for robustness and survival
  • Processes are discontinuous with unexpected jumps and breaks
  • This is the land of unknown unknowns; prepare for ambiguity; common sense is a valuable commodity
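To put a rough number on the "rare events happen more often than expected" bullet, here is a quick, stylized simulation. It compares how often a 3-sigma-or-worse month shows up under a normal distribution versus a fat-tailed Student-t scaled to the same volatility; the 4% monthly volatility and the 3 degrees of freedom are arbitrary illustrations, not estimates of any real market.

# Rough illustration of fat tails: count 3-sigma (or worse) down moves under a
# normal vs. a Student-t distribution scaled to the same standard deviation.
import random
import math

random.seed(7)
N = 100_000
SIGMA = 0.04          # illustrative monthly volatility (an assumption)
DF = 3                # degrees of freedom for the fat-tailed alternative

def student_t(df):
    """Draw from a Student-t via the normal / chi-square construction."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

normal_draws = [random.gauss(0.0, SIGMA) for _ in range(N)]
# Rescale the t draws so both samples share (roughly) the same std deviation.
t_scale = SIGMA / math.sqrt(DF / (DF - 2))
t_draws = [student_t(DF) * t_scale for _ in range(N)]

threshold = -3 * SIGMA
print("3-sigma-or-worse months, normal  :", sum(x < threshold for x in normal_draws))
print("3-sigma-or-worse months, fat tail:", sum(x < threshold for x in t_draws))

On runs like this the fat-tailed version throws off several times as many 3-sigma months as the normal one, which is the whole point of the bullet above.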

Living in a Real Life Two Domain World


Domain 1 is, as we said, what we’d probably call “traditional retirement finance.” The literature on that domain is vast and deep[b]. But domain 1 is still pretty hard to manage. It’s hard for a lot of reasons that we can indirectly infer from the Thaler and Sharpe quotes above.  The tools and techniques in domain 1 can range from simple to complex and should be familiar to a large chunk of the advisory community that is tuned into retirement income solutions and analysis as opposed to only security selection or portfolio management.  Whether the tools and models are well adapted to helping people actually succeed in domain 1 is a judgement call depending on the person, the advisor, and the circumstances.  Whether anyone is well adapted to success in domain 2 seems doubtful at times.  The following points are some things I’ve been thinking about recently in terms of why I think the tools and models that I see in the literature can sometimes be mal-adapted to both domain 1 and also, and maybe especially, to domain 2. This is a lead-in, by the way, to the management and monitoring methods I use, or at least consider, for my own personal life in a domain 2 world. 

1. Using any one single model exposes us to model risk.  The models used to gauge retirement risk or spending choice are made by people. That means that they might have things like a point of view, hidden biases, and structural embellishments or lacunae that can influence what you see or don’t see in the output. Collins (2015) summarizes another issue with model-generated probability better than I can.  This is said with respect to Monte Carlo simulation techniques, but I think it’s an applicable comment to modeling in general: “model-based probability is not equivalent to ‘classical’ probability calculations which rely on observation of empirical results such as rolls of a die or tosses of a coin. Rather, model-based probability relies on outputs generated by computer algorithms that approximate, with varying degrees of accuracy, the processes that drive financial asset price changes. Probability assessments are only as good as the models upon which they are based—that is to say, assessments are prone to ‘model risk.’ Thus, a portfolio monitoring and surveillance program should not over rely on outputs produced by risk models; and, any model used to monitor the portfolio should be academically defensible.”

2. We tend to have a weak view of the future no matter what model we use. The past is an impoverished tool and our imagination is underused when it comes to conceiving some of the uncertainty we face. 1987, for example, while it had a good story of recovery afterwards, looked at the time nothing like the previous largest down day in history. Some of our futures will look nothing like the past, yet the past is often what we use to project. Models are often based on historical data, or boot-strapped off history, or modeled in simulation using our intuition and experience from what has happened before. I often get asked if I have x or y or z feature in my models and simulators, maybe things like auto-correlation or return-switching and regimes. Some of these suggestions are pretty good ideas (that I may or may not have the tech skill to implement); others are just asking me if I am modelling as closely as I possibly can to what the past looks like…which sometimes strikes me as going the wrong direction. From Taleb (2010): “Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned” and “but possible deviations from the course of the past are infinite.” McGoun (1995) calls it the reference class problem: “there are risks for which there are reliable statistics and those for which there are not…it is unquantifiable variation which creates uncertainty. This is the reference class problem—that there are economically important circumstances that are perceived as risky, but that are also perceived as being without relevant historical precedent.”

3. The models we use tend to be reductive. This may be a little redundant with the previous points on model risk and our impoverished view of the future but if so, then the repetition is a form of emphasis. Simulation-based models are often described as opaque, blunt-force ways to access the same dynamics implicit in “more elegant” closed form math equations where the variables and the relationships between the elements are transparent. That may or may not be true but certainly they (simulation and continuous time equations) are both more dynamic than the deterministic forms they seek to supplant. Whether “supplant” means “better” is another question altogether. But all three forms reduce the world a bit. Whether that is helpful or destructive depends on circumstances, but we should at least be aware that we have simplified reality for sometimes reasonable and sometimes unreasonable reasons that can influence how we might respond to and survive uncertainty. The most obvious example of reductiveness that I see in what I do is the common use -- for reasons of convenience and/or elegance -- of the normal probability distribution (Taleb: “The bell curve satisfies the reductionism of the deluded.”). Don’t get me wrong I use it too because it’s easy and sometimes close enough. But here’s Taleb (2010) again: “Any reduction of the world around us can have explosive consequences since it rules out some sources of uncertainty; it drives us to a misunderstanding of the fabric of the world” and “we ‘tunnel,’ that is, we focus on a few well-defined sources of uncertainty” and “the attributes of the uncertainty we face in real life have little connection to the sterilized ones we encounter in exams and games.”  McGoun (1995) quoting Fellner (1942) puts it this way: “…one may simplify the problem in such a manner as to render the ‘exact’ method applicable, in which case the difference existing between one’s simplified model and the real world has to be taken into account” and McGoun again, for the last word, paraphrasing Knight (1921) “a distribution is unable to capture the complexities of the concept of uncertainty.” Exactly.

4. The dynamic models may not be dynamic enough. Simulation-based models purport to mimic the dynamics of time and pretend to (dare I say) predict what might happen in the future. But these models are no more and no less than the representation of some lines of code written by a person. It’s neither the future nor a prediction, it’s a software game. Closed-form equations using continuous math have done the world a favor by representing dynamics in a way where we can transparently understand both the relationships and the shape of the movement within the model. But these, too, are games and the games encapsulate a reduced point of view (or a vague indication of forthcoming risk) that is only relevant to the present and the courses of action we can take now. But we live life in a continuous present and that present is unstable; there are no stable optima or even stable parameters in a lived life. Things change. We can calculate a Merton optimum or run a Monte Carlo simulation but in the next moment the inputs and conclusions can be very different. Age changes, risk aversion changes, life expectancy changes, goals change, families change, inflation skyrockets, health gets better or worse, someone finds the cure for cancer. So, neither a Merton formula nor a simulation takes the dynamics far enough for me. They could, however, take it to a “next level” by engaging in some type of continuous (or at least intermittent) evaluation. This would be a type of constant vigilance as a continuous recalculation. In addition, the model metrics themselves (fail rates, optimal consumption, etc.), if we were to use the language of calculus, might be more interesting anyway in their second-derivative form rather than their first. This is something that is rarely discussed. A fail rate or suggested spend rate is interesting but what does a 30% fail rate actually mean? 90% might get my attention as a stand-alone result but 30% means nothing to me. 30%, however, when seen in the context of a change of rates – say it was 10% last month – has more information content for me. Three readings in a row would be even better since it starts to give me a sense of both the trend (speed) and any acceleration. In fact, I think most retirement metrics would probably benefit from this kind of second-derivative perspective. I don’t know the mathematics of differential equations but I’m starting to get why people say that the 2nd derivative is much more interesting there.
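Here is a trivial sketch of what I mean by watching a metric’s velocity and acceleration rather than any single standalone reading; the quarterly fail-rate readings below are made-up numbers.

# Toy "second derivative" view of a monitored metric: look at the change
# (velocity) and the change-in-the-change (acceleration) of successive
# fail-rate readings instead of any single standalone number.

def diffs(series):
    """First differences of a list of readings."""
    return [b - a for a, b in zip(series, series[1:])]

# Hypothetical quarterly Monte Carlo fail-rate readings (made-up numbers).
fail_rates = [0.10, 0.12, 0.18, 0.30]

velocity = diffs(fail_rates)        # trend: how fast the metric is moving
acceleration = diffs(velocity)      # is the deterioration itself speeding up?

print("readings:     ", fail_rates)
print("velocity:     ", [round(v, 2) for v in velocity])
print("acceleration: ", [round(a, 2) for a in acceleration])
# A positive and growing acceleration is the "get my attention" signal here,
# even if the latest standalone reading still looks tolerable.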

5.  Many sources of risk come from outside the models. This is more or less the same thing as saying “models can be reductive” or “there might be model risk,” which are the risks that we described above. But here we are coming at those risks in a slightly different way or at least with a different emphasis. Let’s take Monte Carlo simulation, for example. When we say there is a 30% chance of failure, that failure is mostly connected to either inadequate return or the volatility of the return. We don’t think much about the spend risk. What comes from “outside” the model is something like a mammoth spend shock from something unexpected like, say, a medical or family crisis. Also outside the model are the possibilities for a debt-fueled self-reinforcing cascade of forced liquidation of income-producing assets that can lead to a bankruptcy. These two points have been consistently and repeatedly and helpfully made by Dirk Cotton at theretirementcafe.com, a site I recommend. Here are some more examples. In a closed form equation like the Merton math or in something like the Kolmogorov partial differential equation for lifetime probability of ruin, what comes from outside is the likelihood that market return “shocks” happen in bigger ways and more frequently than the normal probability assumptions allow…and they, the models or equations, make no allowance whatsoever for either highly variable spending or spending shock events. Taleb (2010) used the example of a casino where the “known risks” in the models were related to the probability math of gaming. The out-of-model near-business-ending risks, on the other hand, were things like (a) tax violations arising from anomalous employee action where tax documents were hidden under a desk or (b) the kidnapping of an owner’s daughter and the related use of casino funds. In a retirement context, the casino example might be stated like this: yes, you’ve modeled return volatility but you forgot to think about divorce.

6. Where is the Economics of the Lifecycle Model? Many models in retail retirement finance have tended in the past (it’s getting better) to either ignore or skim over other disciplines of financial behavior and analysis such as macro-economics and lifecycle model (LCM) considerations, not to mention behavioral finance. This may be for a variety of reasons: the disciplines (econ and retail finance) have not had a ton of cross-over until relatively recently; utility and risk aversion are opaque and abstract and hard to measure or interpret for retail use; assumptions about risk aversion being independent and stable might be hard to swallow; finance practitioners may (?) tend to focus more on markets, allocation strategies, and portfolio/wealth metrics than they do on consumption (where they have little control or influence and for which they have few incentives) or even the joint return/consumption choice endemic in LCM, etc. On the other hand, LCM and consumption utility have an important role in rigorously evaluating the joint portfolio/spending decisions made before and during retirement. A related point in this area is that many weak hands in model-making seem to be prone to back-testing and over-fitting ad-hoc rules that have no real economic rationale or mathematical necessity (a point made in Collins but something I can attest to from experience – myself and others -- years before I ever read Collins) and that are tested against too-short or too-simple historical lookbacks. This type of endeavor is called “curve fitting.” It usually doesn’t work so well out-of-sample in trading and is unlikely to do so in retirement plans either. Here is Collins et al (2015): “Reliance on a single historical path of realized returns to develop and codify rules for portfolio control variables such as asset allocation and distribution policy is, at the limit, an elaborate exercise in data mining…any model used to monitor the portfolio should be academically defensible…it may be dangerous to apply retirement withdrawal rules that lack ‘mathematical necessity’ and it is interesting to evaluate results when applying such rules to non-U.S. markets – e.g., to the Nikkei 225 stock market since its high water mark at the end of 1989.” No kidding.

7. Many models brush past simplifying assumptions for spending and longevity. The math of simulation and/or optimizing equations is pretty simple if returns are normal, spending is constant or based on simple ad-hoc rules (not necessarily grounded rigorously in mathematical or economic principles), and longevity is exactly thirty years. Longevity is, in fact, sometimes hard to model and hard to interpret in the output when it’s added. The absence of what Milevsky called the “term structure” of longevity skews the information needed for good strategy choice. Its presence in the model, on the other hand, can sometimes be mis-modeled and/or mis-parameterized, creating some non-trivial variability between different models or even within runs of a single model. Spending models, for their part, skim over actual spending observed in real life, spending variability, unpredictable shocks, and planned but lumpy future spend liabilities. These points are clearly part of the same reductiveness and model-risk discussions above; I am just calling these two (spending and longevity) out specifically here for special consideration. Again.
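As a small illustration of what that “term structure” adds, here is a sketch of conditional survival probabilities from a Gompertz-style mortality model; the modal age and dispersion parameters below are illustrative placeholders, not a calibration to any mortality table.

# Sketch of a longevity "term structure": conditional survival probabilities
# from a Gompertz mortality model, instead of a fixed 30-year horizon.
# Parameters m (modal age at death) and b (dispersion) are illustrative only.
import math

def gompertz_survival(age, t, m=88.0, b=10.0):
    """Probability that someone aged `age` survives another `t` years."""
    return math.exp(math.exp((age - m) / b) * (1.0 - math.exp(t / b)))

if __name__ == "__main__":
    age = 65
    for t in (10, 20, 30, 40):
        p = gompertz_survival(age, t)
        print(f"P(survive {t:2d} more years from {age}) = {p:.2f}")

The point is simply that “plan for exactly thirty years” and “plan against a whole distribution of lifespans” are different problems.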

8. Many risk models are both mono-dimensional and “single period.” It’s hard to get perspective from a single retirement finance “object” whether it’s an equation, simulation or something else. For example, a Merton Optimum spend rate tells me little about the concept of the actuarial feasibility of spending, either now or intra-plan. A Monte Carlo simulation (usually) tells me little about the magnitude of the fail either in terms of the number of years of the “fail state” or the degree to which lifestyle is compromised and for how long and how much. Neither of these examples says much, for that matter, about the intra-plan dynamics (touched on above) or the psychological states that might arise from awareness of the variable readings of the metrics over time. To say I fail with x% probability over a 30-year “single-period term” says little about what happens to effective spend rates, feasibility, or “fail” readings in the intermediate and discrete chunks of time between plan start and end. The unholy combination of model risk, mono-dimensionality, missing intra-term dynamics, and the inability to imagine the future well puts a premium, in my opinion, on the use of many models…frequently. I’d say continuously but that’d be a little OCD. “Triangulation” among many models and methods, done at some reasonable interval, would not be a totally unconstructive process for a retiree.

9. Many models in retirement finance are elegantly integrative but practically dumb.  The key inputs into retirement finance are well known: forthcoming (arithmetic) return and volatility expectations, spending expectations (and the wise consider spend variability and its path or shape), and longevity expectations, among other things and ignoring behavioral finance for now. The integration of these factors into beautiful models and hard-charging simulators is seen as demonstrative of deep knowledge and professional competence but often reminds me of a type of plumage whose beauty accrues mostly to tenure or higher fees. The reality is often that (a) the underlying processes are skimmed over and simplified (see reductiveness above), and (b) integrated models are prone to both model risk (above) and the underappreciated uncertainty explosion that can come from combining distributions of multiple random variables. Cotton (2019): “Combining the distributions of random variables increases the uncertainty but ignoring one or more of them is worse.” It may be this kind of thing that N Taleb had in mind when he used the phrase “naïve optimization.”


Trying to Set Up a Partial Framework for Success 
or My Own Private Idaho of an Amateur Methodology
“Cases of uncertainty where neither the mathematical nor the statistical basis for determining probability is available call for the exercise of what we call ‘judgement.’ These are cases where the events to be feared are so rare, or the difficulty of forming homogenous classes among them as a basis for statistical generalization is so great, that there is no adequate basis of experience for judging whether they will take place, or what is the probability of their taking place.” McGoun (1995)
If you buy in to my “setup” – that there are two retirement finance domains, one a lot more challenging than the other, that we have mal-adapted or sub-optimal tools and models at hand for both domains, and that there may be large consequences for making mistakes…especially in domain 2 – then what is a self-respecting retiree-quant to do? On the one hand, I suppose that I could invest more time and effort into ever more complex math and more sophisticated and highly integrated models that add more features, more complexity, and more real-life-ness. On the other hand, since getting more complex and more integrated seems like it might be doubling down on the problems listed above, maybe help would come more from something like a “methodology” than it would from math or models. (Taleb quoting Makridakis and Hibon: “statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.”)

Here, for better or worse, is the framework of a methodology that I use for myself to try to deal with domain 2 as well as deal with what I see as the flaws in the tools that are available to manage retirement finance risk, whether for “normal” domain 1 or for the harder stuff in the second domain:

1. I will remain skeptical. I look with a jaundiced eye at any one person, opinion, equation, assumption, model, recommendation, or “number.” I try to reflect on whether there might be bias, reduction or incentives at play. I look for type 1 and 2 errors: what’s there that shouldn’t be and what’s not there that should be? Results based on one over-fit run against history, coming from one under-informed person’s model, when the possible futures are effectively infinite, are always suspect. Ad-hoc rules that have a limited (or missing or unknown) foundation in math or economics are on probation until proven otherwise.

2. I try to engineer Taleb-ian robustness into my set-up “before the beginning” if I can. Whether I have done this in my real life is TBD. I’ll riff on what I think I mean by robustness below. Because I think robustness is a prerequisite to an ongoing operational process-view of retirement – i.e., managing a flow in motion – I won’t dwell on it much in this essay.

3. I almost always try to “triangulate.” I use multiple models. I gather different points of view. I re-run things. I synthesize opinions. I use different frameworks. I use more than just math by integrating a broad view of the world and different perspectives from disciplines outside of retirement finance. I’ll quote a little bit of Collins (2015) and Taleb and add some comments on this idea of triangulation in a section below.

4.  I plan on adapting as time goes by. One of the bigger superpowers we have as humans is the ability to adapt. A corollary to that is that “human capital” is often the unsung hero of the personal balance sheet, especially at younger ages but maybe at later ages, too. This ability to adapt is why a “4% constant spend” and the language of “ruin” have always seemed a little sketchy to me. We anticipate, we change, we nudge ourselves, we adjust lifestyle expectations, we (sometimes…and sometimes late) engineer lifetime income to create a floor of safety, we (if we can) go back to work, we ask for help. Though it can happen -- which is a reason for this essay…and my comments about Twitter above notwithstanding -- we rarely hit a “ruin state” full force without warning because we can, and do, adapt.

5.  I plan to monitor things as I go. I view retirement as a state of constant vigilance rather than a set-and-forget party. Because I spent quite a few years doing continuous process improvement in software development, I often view retirement processes through that lens. To me, it looks a little like a kind of industrial manufacturing process. One might be in a (retirement) process equilibrium but, like widgets on a production line, we can measure the process to look for trends, variance, and hidden costs coming from a lack of control. We optimize and tune. Black swans might destroy us out of the blue but maybe we can catch them early if we are looking. More on monitoring below because it is the meat of this essay.


On “why I have a methodology in the first place”

I have been periodically accused over the years of things like “over-thinking” or being “anal” (anal seems preferable to over-thinking, which I find to be a dreadful phrase), both of which might be true. On the other hand, the accusations have tended to come from past romantic relationships, with the emphasis on “past,” or friends and neighbors that may not know the subject area well. Also, the accusations arose from me engaging in what I considered to be perfectly reasonable behavior, so I don’t feel all that oppressed. I’ll set it up like this: if we were to over-simplify (under-think?) and say that there are three types of retirees – the non-feasible, the feasible…but just barely, and the very rich – then it’s my belief that only the middle cohort really cares about this subject of retirement income analysis and monitoring…and the closer one is to the line of infeasibility, the closer to “the edge,” the bigger a deal it is. For most of my early retirement I was absolutely, totally convinced (and proved to myself later) that I was right up against or past the line of infeasibility, or at least I was when considering the scale of my lifestyle against the resources deployed to defease it. The tools and models and processes I used at the time to understand and manage my risk (successfully, I might add…for now) were reasonable under the circumstances. Many of the things I have done or evaluated in retirement finance might seem unnecessary (anal) now in retrospect -- because my risk has abated due to the effort I put in to see, to understand, and to act (i.e., the methodology) -- but it was dead-serious-no-fun-and-games back then. If the content to follow seems over-thought to you, and more people than I care to mention have implied something like that to me, then I’m not sure what category you fall into but maybe in this case I’ll frame it, tongue-in-cheek, as “broke, rich, or oblivious.” If broke or rich, maybe none of the following is meaningful. If oblivious, then it depends. But either way, try to remember the figure below and maybe hold off on condescending to people that are near “the line” and who are trying to sort this risk out by any means possible. While I have cast aside a lot of these methods and the risk has receded, it was all rather important at one time or another.

Figure 1. Relative importance of Ret-fin

The other reason for me cooking up this amateur methodology on my own is that while the literature and practice of financial advisory, in its portfolio management, spend-rule, and security selection guise, is a million miles deep, the literature and practice of managing and monitoring a joint retirement spend/allocation “process-as-a-process-over-time” is awfully thin except in academia and some rarified areas of the advisory business. Your CFPs and CFAs are darn solid, but they don’t really go far enough for me. This was obvious to me from the first moment I saw my risk for what it was and started to ask questions. It’s been more than four years of study now, and almost 10 retired, and that opinion has not changed much. The closest thing out there in terms of a credential on this might be an RMA, but I have seen few RMAs out there and I am not too familiar with their curriculum. Well after I started to go down this path, I was confirmed in my “must monitor” bias by a crew (Collins, Lam, and Stampfli (2015)) whose work seems to bridge that practitioner-academic gap well and whom I now trust. I am not mono-focusing on them and their paper, but after reading a largish pile of ret-fin lit over four years, I think they have the best bead on this. Here’s an example from Collins et al:
“The need to know whether the portfolio is in trouble is a primary justification for establishing an appropriate surveillance and monitoring program. Money management encompasses ongoing monitoring; and effective monitoring helps the investor assess the continued feasibility of retirement objectives relative to financial resources at hand. There exists a substantial body of academic research evaluating the merits of various combinations of the portfolio management / withdrawal strategies / asset allocation approaches listed above. There is far less commentary on how to monitor the portfolio once it begins operations under the investment policy guidelines approved by the investor” [emphasis added]. Exactly.

 On “#2 – Robustness”

Let’s go back to my Twitter thread above. The CFP-trained correspondent “P2” who recommended running a 40% fail rate strategy (for himself) is, counterintuitively, kinda-sorta correct. Those kinds of plans can, in fact, raise lifestyle via higher spending rates and they can last for very long periods. This is a comment that I can’t believe I am writing. But then again, those stand-at-the-edge approaches are incredibly fragile in ways that I can’t prove analytically but know intuitively. They tend to be prone to cascades of positive-feedback loops that can lead, on the margin, to bankruptcy and/or penury. They accumulate risk slowly until they suddenly cascade quickly. Each slow step of risk looks reasonable until suddenly it doesn’t. It’s a little like having a “friend” advise you, from shore, to walk out on the ice towards the open water on a frozen pond. It can be done, and it might be a pretty walk, and the ice might hold the whole way…until it doesn’t. The risk is not linear all the way to the ice edge and he or she that is advising you is not sharing the risk with you[i]. I’ve seen this kind of thing, the un-shared-risk nudge, a thousand times from friends and advisors. I ignore friends on this, but advisors that tell clients to stand at the edge of the open water bother me. They can do it with their own family but not mine, and the thought that they are suggesting it to others is cringe-worthy. Maybe if they offered to backstop me out of their own pocket if I fail, that’d be one thing, but… I once fired an advisor for being too glib on this issue. The stakes for me are too high.

That means I sometimes feel like I am on my own with creating a “robust” plan.  Personally, I think that robustness is a precursor to the operationalization of a plan and resides not just in portfolio design and consumption planning but starts way before that. It starts when we first start earning and saving and setting our expectations and investing and planning over a full lifecycle. That means I won’t dwell on it much here since we are heading towards a “monitoring” theme.  What can we say? I don’t have a coherent structure for building robustness into a plan, but I’ll list at least a few things that I think are relevant.
 
- Plan to spend less than people say you can, at least until later into a retirement (“…errors of the estimates are reduced as we age and we experience diminishing uncertainty about the future.” Cotton 2019).

- Make more money before you retire or retire later; get your “multiple” higher than recommended.

- Win the lottery (just kidding, actually lottery winners are often marked for bankruptcy).

- Live shorter (just kidding again).

- Stay married and/or marry well.

- Engage in side hustles or part time work as long as possible.

- Make lifestyle spending dynamic in order to absorb return and spend-shock blows. There are limits to how much this can be done…and the lower part of the “dynamism” may last longer than you think…

- If you can still get one: a (well-funded and managed) pension plan that lasts a lifetime.

- Reallocate some wealth along the way to a lifetime income “floor.”

- Trust your kids to lend a hand if things go awry. Worked for centuries. Not so sure now.

- Create redundancies and eliminate single points of failure.

That last point captures a lot of the essence of this topic of robustness. Taleb (2010) made the point that if human bodies were run by economists, there might not be two kidneys (too inefficient) or even one kidney (still not entirely efficient) but there might be a communally shared kidney (now we’re talking). But an efficient communal kidney would make individual bodies fragile and more prone to individual death. So, “two kidneys” is not efficient but it is robust for survival, as are two eyes, two arms, two lungs, etc. That kind of goal, translated into retirement terms, might go like this: create redundancies in the financial and social structures that support us in retirement. I don’t have much to say about social structures but financially I can think of a few things. These might include building multiple streams of independent income or spreading assets and income across multiple platforms and providers. Another idea is to build redundant capital before you even start. That’s a fancy way of saying “save more.” It’s also a way of saying “have a lower spend rate on the ‘more’ you saved.” The way to visualize this is to maybe view it, cliché-like, as: retire at 65, save a multiple of 25 times a projected annual spend (that’s basically the 4% rule), and then, in addition to the main plan, create a side pool of capital that is “redundant” to some degree with the original pool in case some or all of the original pool gets spiked by circumstances beyond your control. This redundancy is exactly the same thing as saying: “save a lot more and spend less of it.” This kind of thinking is considered by many, and by many straight to my face, as inefficient, life-denying, soul-sucking, penny pinching. But it is also called redundancy…or staying away from the edge of the ice of a partially frozen lake. It’s completely inefficient and not as much fun but then again, I suppose, so are two kidneys. Donate one if you wish, which, by the way, in needful circumstances is not always a terrible idea. Other people sometimes need your kidney. But for retirement, don’t give it away without even knowing you are giving it away. Give it away from an informed point of view.


On “#3 – Triangulation: Vertical Triangulation, Phenomenology, and Consilience”

"The more complex the system, the greater the room for error." G Soros

Given the endless possibilities for model error, input/output sensitivity, inter-model disagreements[c], intra-model inconsistencies, irreconcilability or dissonance between fragmented optimization goals[d], the inaccessibility of the future (Taleb channeling Popper calls it “the fundamental, severe, and incurable unpredictability of the world”), and the often unexamined and underappreciated subjectivity of almost everything, one might be tempted to give up, go home, and watch Netflix. The other option might be to “triangulate,” by which I mean that we might get some good use out of multiple models, methods, perspectives, and academic disciplines, finance-related or otherwise, rather than just one model or perspective. The purpose of this triangulation would be to better tease out some understanding of our risk and circumstances in the present moment, a moment where we still have a chance to do something about our risk. This felt like second nature to me before I even knew half of what I know now and certainly before I read any academic or practitioner papers. Partly this was due to my skepticism, which is another word for mistrust. I trust no one and no model. Several models speaking in unison and saying the same thing has more power to convince me. Having been down this path on my own and being in total agreement with the Collins et al (2015) comments on this, I will let their comments speak for themselves:
“Econometricians often discuss model risk in terms of specification error. Errors may arise as a result of including irrelevant variables in the model, failure to incorporate relevant variables, and inaccurate estimation of input variable values. Specification errors may result in examining five different models each of which produces different outputs when considering the same problem. This is an underlying reason why any single retirement income risk model may be unable to provide a good assessment of retirement risk.” [emphasis added]  
[from a footnote] “The Society of Actuaries and The Actuarial Foundation review of a cross-section of financial planning software, concludes ‘…programs vary considerably regarding when the user runs out of assets, if at all. Because of this finding, the study recommends that people run multiple programs, use multiple scenarios within programs, and rerun the programs every few years to reassess their financial position.’ Turner, John A., and Witte, Hazel A., Retirement Planning Software and Post-Retirement Risks (Society of Actuaries, 2009), p. 20.” [emphasis added]  
“…it is the risk model that generates the distribution of future results; and, therefore, probability assessments are not independent of the model. These observations indicate that effective portfolio monitoring is multidimensional and encompasses an evaluative process which requires tracking numerous risk metrics. This is a primary reason for designing and implementing a credible retirement income portfolio monitoring system focused on both sustainability and feasibility risk metrics.” [emphasis added]

This strikes me as quite constructive. And familiar, too.  The familiarity probably comes from taking a B.A. in religion in college (liberal arts, not seminary) where the method du jour in 1979 was phenomenology. I won’t define that since it’s a definitional sink hole depending on the audience[e] but the street-practice version we used in 1979 was to approach any particular phenomenon from a variety of directions using a multiplicity of methodologies (say, maybe: scientific method, literary/art critical, historical inquiry, psychoanalytic, etc.) in order to come to some type of synthetic or accretive understanding of what we were looking at or considering. That’s pretty fuzzy, of course, and it was, but then again, the objects being pursued in that context were rather elusive as well. Elusive…as is any unified understanding of retirement finance risk, I might add.  N Taleb gives all of this kind of thing a different name. He uses the word “consilience.”  Here is the first paragraph from Wikipedia on consilience: 
“In science and history, consilience (also convergence of evidence or concordance of evidence) refers to the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.”
Consilience, when described like this, sounds like a STEM version of phenomenology (ignoring that there is a phenomenology of physics for now) and is also a fancy way of saying triangulation. But I do think it is a better way of saying it. Certainly, the idea of “convergence” and “convergence leading to strength in conclusions” is both laudable and useful if it can be achieved. If achieved, it can be like a retiree-quant superpower. Minus the cape.
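For what it’s worth, here is a crude sketch of what this kind of triangulation can look like when reduced to code: point a few deliberately different and deliberately simple lenses at the same plan and pay attention to whether they agree. Every number and model choice below – the return and volatility assumptions, the discount rate, the 30-year horizon, the 10,000-path simulation – is an illustrative assumption, not a recommendation.

# Sketch of "vertical triangulation": point three crude, independent lenses at
# the same plan and see whether they agree. Everything here (returns, rates,
# horizon, path count) is an illustrative assumption.
import random

random.seed(11)

PORTFOLIO = 1_000_000            # hypothetical starting assets
SPEND = 45_000                   # hypothetical planned annual real spend
YEARS = 30                       # planning horizon
REAL_RETURN, VOL = 0.04, 0.12    # assumed real return and volatility
DISCOUNT = 0.02                  # assumed real discount rate for the PV lens

# Lens 1: Monte Carlo sustainability -- fraction of paths that run dry.
def mc_fail_rate(paths=10_000):
    fails = 0
    for _ in range(paths):
        w = PORTFOLIO
        for _ in range(YEARS):
            w = (w - SPEND) * (1 + random.gauss(REAL_RETURN, VOL))
            if w <= 0:
                fails += 1
                break
    return fails / paths

# Lens 2: deterministic feasibility -- assets vs. present value of spending.
def feasibility_ratio():
    pv = sum(SPEND / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))
    return PORTFOLIO / pv

# Lens 3: simple amortization -- the spend an annuity-style payout would allow.
def amortized_spend():
    return PORTFOLIO * DISCOUNT / (1 - (1 + DISCOUNT) ** -YEARS)

print(f"MC fail rate      : {mc_fail_rate():.1%}")
print(f"feasibility ratio : {feasibility_ratio():.2f}  (>1 is 'feasible')")
print(f"amortized spend   : {amortized_spend():,.0f}  vs planned {SPEND:,}")
# If all the lenses frown at the same time, that agreement is the signal.

Convergence across the lenses is the thing to watch; any single output, as argued above, is just one model’s opinion.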


On “#3 – Triangulation: Horizontal Triangulation over Time as Contrasted to Vertical”

All that triangulation above is what I want to call “vertical” triangulation. That’s the type of consilience that is done across unrelated models, diverse disciplines, different analytic frameworks and methodologies, different parameterizations of the same model, or maybe even the consilience that can be done across iterations of the same, but unstable, model that is run many times. Another kind of triangulation is what I guess I now have to call “horizontal” triangulation, or that which is done across time. This kind of thing, in physics, might be considered the conventional detection and measurement of position, speed, acceleration, jerk, etc.  Personally, as applied to retirement, I think this is an underappreciated type of detection, measurement and source of reflective inference about risk.  I don’t see it too often in the lit.    

I started to mention this concept above, without naming it, in the context of Monte Carlo simulation and “fail rates.” A fail rate of 30% or 20% by itself is more or less meaningless. If an advisor tells you to pay attention to some x% rate as meaningful, ask him or her “why?” (unless it’s a 95% fail rate; that you can pay attention to). Here is Robinson and Tahani (2007) on the fuzziness: “What do we consider to be an acceptable risk of shortfall? That is a decision for every retiree or planner to think about, but our choice is 10% [I've also seen 15, 20 and 30%...and we saw the Twitter conversation above staking out 40%]. We think that many people would choose 5%, but we know of no formal evidence on this question.” [emphasis added; maybe there is evidence now but there wasn’t then]. The only case where a standalone fail rate is meaningful is if it is so absurdly high (like mine in 2011) that it’s obvious there is a problem. Then again, a better way to look at this might be to look at the “first derivative” of the rate (the change in fail rate over time). Saying it went from 4% to 30% tells us something more than “30%.” The 2nd derivative, the acceleration of the fail rate, is even more interesting. Acceleration would get my attention whether it is due to spending problems or changes in portfolio value.

Spending, taken in isolation, has the same problem, by the way. A constant spend is risky enough and I’ve made the case that a constant spend, the macro-economics of consumption smoothing notwithstanding, is an active risk-seeking posture. High spending can sometimes be dangerous. A relatively high spend that is also trending up (a first derivative of a spending “rate”) is even more dangerous and is known to have a dominating impact on expectations for retirement success. A spend rate that is accelerating (2nd derivative) can be downright destructive if not under control. This is “power law” spending and is to be feared (though I doubt this happens often in real life short of a bankruptcy spiral). And a large, 10x or 20x order-of-magnitude spend variation or “shock” (maybe call it a 3rd deriv? although we are not really in continuous math or regular probability-world anymore)? This may be chaos theory or something else altogether and it could also be life-altering.

I haven’t proved it here, but I’ll assert that both the potential for change (acceleration) and the consequences of what we are measuring risk-wise are not linear and that getting ahead of both the change and the consequences is probably better than letting it ride.  Here’s Taleb (2017). The quote is not exactly applicable to my point but it’s in the same neighborhood:   
“The beautiful thing we discovered is that everything that is fragile has to present a concave exposure [13] similar – if not identical – to the payoff of a short option, that is, a negative exposure to volatility. It is nonlinear, necessarily. It has to have harm that accelerates with intensity, up to the point of breaking. If I jump 10m I am harmed more than 10 times than if I jump one metre. That is a necessary property of fragility. We just need to look at acceleration in the tails.”
This is why, if I don’t explicitly mention it in the content below, my implicit bias in this essay on monitoring, and in my own personal planning, is for “second derivative” detection and measurement in retirement analytics where I can do it and where the indicator does not lag too far behind. Maybe we can say that in the land of blind retirement measurements, the one-eyed detection and measurement of acceleration is king. I’ll give Taleb (2017) the last word: “The new focus is on how to detect and measure convexity and concavity. This is much, much simpler than probability.” [emphasis added]


On “#5 – Continuous Process Monitoring and Improvement”

Given the quicksand-ness that exists around any confident, conclusive judgements about retirement finance, especially over really long horizons, as in an early retirement, and given that we just asserted without much proof that detection and measurement of risk-acceleration is important, keeping a weather eye on the financial environment and circumstances might be expected to pay more dividends than sticking to some fixed solution you might have ginned up 15 years ago and haven’t revisited since. I can’t prove that analytically, but it seems about right. Collins thought so. So did Taleb. As far as I can tell, so do the CFA Institute, the RIIA, Wade Pfau, Dirk Cotton, David Blanchett and a few others. But then again it is not something I see all that often in the literature in general. Not “never.” Just not often. The only other place I really see it is in talking to real non-quant people that have been retired for a while. To them it is often blindingly self-evident. It’s just not very formal to them. It’s more common sense.

The content that follows is not terribly exhaustive (see Collins (2016) for “exhaustive”). Also, I could not figure out a neat, efficient organizing principle. So the content presented here is more of an impressionistic riff on the tools and models I’ve used myself, might use, or have used and then discarded. These are the models and tools that I use or used to triangulate myself into some sense for where I am and where I am going. This is based on my own personal journey through ret-fin.  Any lacunae below are all mine.

 5A.  Family Financial Statements - The Balance Sheet, Part 1 and the Income Statement

Balance Sheet 
This balance sheet thing seems obvious, right? But evidently, it’s not as common as one might think.  Me? I’ve had a balance sheet forever. It’s hard for me to imagine working without one. I once worked alongside a medium-sized family office in MN and the very first thing they did on intake was build a family balance sheet. This was because: a) it was the core management tool to manage and evaluate risk and to connect with client goals, and b) very few families had one.  I’ll repeat: there were super high net worth families that did not have a household balance sheet to manage their financial life.  Wtf? I’ve asked around with more earthbound people and it’s still hit-and-miss. Maybe 50% have one. And of that 50%, for those with a good advisor, the advisor has typically been the person who created it.  I realize that we are subject to figure 1 here. For low resource households it might not matter and for really high resources who -- besides soon-to-be-bankrupt sports and rock stars -- cares?  Those on a razor’s edge, however, care. I cared.  But I’ll assert that for those to the right of “the edge,” a personal balance sheet for managing a household financial process over time is the sine qua non of personal decumulation finance tools. Here are three considerations. 
  1. First of all, it is a grounding document. If one has no liabilities, all assets are liquid, and all assets are in one account, maybe the account statement is the balance sheet. For anything more complex with debt or direct private investments or assets across multiple platforms, the BS conceptually consolidates a financial life. It should, like my family office friends’ version, list at least: asset, title, location, account # (ignoring data security for now), current value (observed or modeled), institution or platform, contact info, last date valued, etc. And it should have the usual suspects for assets and liabilities: cash, liquid assets, real assets, hard assets, IRAs, direct private investments, business interests, vehicles, tax liabilities, debt, and maybe some discrete near-future goals for which there is a reserve.
     
  2. Even if one does not have an estate plan (which is likely a good thing to have), the BS makes it a little bit easier for family and survivors to rep the estate if and when that is needed.
     
  3. If one has a modestly complex to complex BS and something as simple as the x% rule is being followed, one has to ask: x% of what? The denominator matters. This took me a while to figure out. The denominator can’t just be assets because there are claims prior to my retirement consumption: taxes, mortgages, educational commitments and so forth. And it can’t be “all” assets because I might have household goods on the BS that will never be monetized. It can’t be 100% of my residence for the same reason but it could maybe be 20% or more under duress. Again, for a simple liability-free estate it may not matter but otherwise I’ll call the denominator here “net monetizable assets,” i.e., liquid or monetizable assets that can serve retirement consumption net of liabilities. This would be a ret-fin filter over the base BS. I don’t know if that is too simple or over-thinking but it works for me for now (see the small sketch below).

Unless I create a technical appendix, I’ll leave the mechanics of building the BS to Google or any basic financial planning resource. 
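But since the “net monetizable assets” idea is really just a filter over the base BS, here is a tiny sketch of what I mean, with made-up categories, haircuts, and numbers (illustrative only, not a recommendation):

```python
# A minimal sketch of "net monetizable assets": a filter over the base balance
# sheet that keeps only what can realistically fund retirement consumption,
# net of prior claims. All items, values, and haircuts below are hypothetical.
balance_sheet = [
    # (item, current value, fraction monetizable for retirement consumption)
    ("taxable brokerage", 600_000, 1.00),
    ("IRA",               400_000, 1.00),
    ("residence",         350_000, 0.20),   # partially monetizable, under duress
    ("household goods",    40_000, 0.00),   # never monetized
    ("vehicles",           25_000, 0.00),
]
liabilities = [("mortgage", 150_000), ("deferred taxes", 60_000)]

nma = sum(v * f for _, v, f in balance_sheet) - sum(v for _, v in liabilities)
print(f"net monetizable assets: {nma:,.0f}")
print(f"an 'x% of what?' answer at 4%: {0.04 * nma:,.0f} per year")
```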

Income Statement
Concomitant with the Balance Sheet is the income statement. I have heard anecdotally that this is too much of a hassle for some people. Really? People must be richer than I think. I can't imagine doing any planning or adaptive change without knowing in great detail what is coming in and what is going out.  I do it monthly but a good case could be made for annually or quarterly.  I do it monthly because in 2011 I was so close to (or beyond) the edge that I had to cut my lifestyle in half. It took about six months and each month I looked, with great attention, at the details of what I was spending. In addition I view my monthly spending and a 12 month moving average as a percent of my net monetizable assets as early warning indicators on risk. More on that later. 

Big lumps of spending I have typically capitalized onto the balance sheet as a liability. That allows me to smooth recurring operational spending and to occasionally use debt as a smoothing mechanism too although the interest hits the income statement.  If I don't know what I spend and if I don't know my spend rate then I am not very familiar with my decumulation plan in operation and that, all by itself, is a risk.


5B. The Balance Sheet, Part 2 – Actuarial Balance Sheet (ABS) and Stochastic Present Value (SPV)

The ABS extends the BS to now include “flow” items like the PV of annuity streams, social security, and pensions as well as the PV of flow liabilities like a future spending process. I can’t be as thorough and deep and as comprehensive as what has already been written on this out in the world, so for a great resource on this I’ll recommend you to Ken Steiner at howmuchcanIaffordtospendinretirement.blogspot.com who covers the ABS with integrity.  The purpose of the ABS is to have a more comprehensive, well-informed view of the financial health of the retiree estate plus it is the foundation for the essential task of feasibility analysis, on which more below. 

The trick here, however, is to decide how far to take the analysis, especially for the SPV liability.  The nature of the SPV estimation can range from simple to complex, deterministic to working in distributions.  In the end, it is a valuation of a cash flow which is the bread and butter of finance and actuarial science and is obviously a well-trod path. For the purposes of setting up this essay, we can maybe say, if we step back and squint our eyes, that we can conflate all of the following while carefully remembering that they are clearly not the same:

1. The sum of the real, live inflation-influenced spending as it unfolds into the unknowable future
2. A current thumbnail estimate of the cost of the general spending “plan” or consumption strategy as viewed from today
3. A sum of the deterministic, discounted spending (cash flow) estimate of the plan over a fixed horizon
4. A deterministic discounted spending (cash flow) estimate weighted by a vector of survival probabilities conditional on age
5. A market-based nominal or real annuity price as a proxy for the income that would defease the cash flow or at least pretend to defease the cash flow. It’s sometimes trickier than it looks to get a decent inflation-adjusted SPIA price if you can find it.
6. A private math model for an annuity price that weights and discounts the cash flow and estimates the load levied by an insurance company (this is similar to B4)
7. A model that randomizes at least the discount rate in order to create a distribution of probability-weighted spending NPVs or what we might call a true “stochastic present value” (SPV)

There are probably some others. Here are some notes on the spend valuation variations:

B3. The basic model for B3 valuation of spending in a deterministic world is found in most finance textbooks. C(t) is the cash flow or spend estimate at time t, d is the discount rate which is a policy choice not explained here, and T* is the planning horizon:
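$$PV_{spend} = \sum_{t=1}^{T^*} \frac{C(t)}{(1+d)^t}$$

(this is just the standard discounted-cash-flow form, written in the notation above)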
B4 and B6. The model for B4, I’ll assert for now, is the same as it is for B6 with the proviso that for B6 the cash flow is typically framed as $1 to replicate an annuity pricing model and for B4 the cash flow “cf” could be a custom nominal “plan,” i.e., a custom vector of planned spending with odd shapes.  R is the discount rate (I have not been consistent in notation) and l is the load or, in the case of B4, a “margin of error.”  x is the attained age of the retiree. This model, of course, could also be used to value income streams: 
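$$a_x = (1+l)\sum_{t=1}^{\omega-x} {}_{t}p_{x}\,\frac{cf_t}{(1+R)^t}$$

(a survival-weighted, loaded discounting of the cash flow, where ω is the terminal age of the mortality table and tPx is the conditional survival probability for someone aged x; set cf = $1 to mimic the B6 annuity-pricing version)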
B5. The model for B5 isn’t a model, it’s “call your advisor” or get a proxy from the web on something like immediateannuities.com.


B7. There are different ways to do this.  Formally in Milevsky and Robinson (2005) or Robinson and Tahani (2007) it can be framed like this where the notation is not consistent with the previous formulas: 
Since the continuous form was not directly accessible here for me, the way I’ve done it in simulation in the past -- in a case where I was extracting the expected value of the NPV of spending -- was done like this, which more or less mimics the SPV in Mindlin (2009):
where “i” is the number of sim iterations, T is a planning horizon but could also be a random draw on life duration that follows the shape of a known or analytic mortality distribution, c is the cash flow and d is the randomized discount rate. The first thing to note is that this is, in a way, an inside-out Monte Carlo simulation where, rather than projecting randomness out into the model future, the future known cash flow is discounted to the present using randomized discounts to reflect uncertainty in period returns. The second thing to mention is that the random draw on t->T could be replaced by a conditional survival weighting on c. The third thing to mention is that when I did it in simulation d was distributed normally representing a type of expectation around forthcoming return assumptions but, as we saw in the past work of the Five Processes, that assumption of normality is a flawed assumption. The fourth thing to mention is that a spend valuation based on the expected value of the distribution, especially when lifetime is a random variable, is maybe not helpful since the resulting distribution is not a normal distribution. Milevsky (2005) points out that it is closer to a reciprocal gamma distribution. For this reason (I’m guessing), Collins (2015) uses the median, which is a logical go-to for non-normal distributions. Me? I’ll point out that the valuation-metric choice within a distribution is a policy choice and I personally, for my own spending estimation, am uncomfortable with the median since it seems less conservative than I want to be. Let’s just say that the Pth percentile choice can or should be 50% ≤ P < 100% where maybe 80% or 95% would be a policy that, in the name of robustness, adds some desired or necessary conservatism.
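To make the mechanics concrete, here is a stripped-down sketch of that inside-out simulation with placeholder parameters. Note that the normal draw for d carries exactly the flaw just mentioned; it is here for illustration, not endorsement:

```python
import numpy as np

# Sketch: discount a known spending "plan" back to the present using randomized
# discount rates, then read a conservative percentile off the resulting
# distribution. All parameters are placeholders.
rng = np.random.default_rng(11)
n_iter, horizon = 10_000, 30
spend = np.full(horizon, 45_000.0)     # flat cash-flow plan for simplicity
mu_d, sigma_d = 0.05, 0.11             # randomized discount rate assumptions

npv = np.empty(n_iter)
for i in range(n_iter):
    d = rng.normal(mu_d, sigma_d, horizon)
    discount = np.cumprod(1 + d)       # chained period discount factors
    npv[i] = np.sum(spend / discount)

for p in (50, 80, 95):                 # the percentile is a policy choice
    print(f"P{p} spend liability ~ {np.percentile(npv, p):,.0f}")
```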

5C. Feasibility Analysis

Feasibility is often contrasted with sustainability which is, in the end, a constructive distinction to make, I believe. Sustainability more or less asks the question: “how long will it last?” or “will I run out of money before I die?” Feasibility, on the other hand, is the question of whether I have enough money right now to retire, i.e., is there enough wealth now to “defease” my expected spend liability. Here is Collins (2015):
‘Sustainability’ differs from the concept of ‘feasibility.’ Feasibility depends on an actuarial calculation to determine if a retirement income portfolio is technically solvent—i.e., current market value of assets equals or exceeds the stochastic present value of lifetime cash-flow liabilities. If the current market value of assets is less than the cost of a lifetime annuity income stream, the targeted periodic distributions are greater than the resources available to fund them. The portfolio violates the feasibility condition. Determination of the feasibility of retirement income objectives is not subject to model risk because the determination rests on current observables—annuity cost vs. asset value—rather than projections of financial asset evolutions, inflation, and longevity. A prudent portfolio surveillance and monitoring tracks both risk metrics. -Collins (2016)
Let’s define this further before we dive in or critique any of this. I accept his definition at face value.  I’d modify assets to net-monetizable-assets before spending, though. We’ll call that “W” or wealth. The “stochastic present value of lifetime cash-flow liabilities” we just saw in section B.  Let’s call that “F” (for a feasibility constraint) and calculate F by whatever method suits you but for me, I’ll set it (for now, anyway) equal to what we saw in equation 2.  Basically, the concept of feasibility is really simple: W needs to be greater than F for the plan to be solvent at time zero. Or stated differently:

W/F > 1; F ~ a(t,x) | age x.
Eq 5a: The feasibility constraint

This is a reasonable conclusion to make. This kind of perspective often exposes the Achilles heel of arbitrary spend rules like the 4% rule or other ad hoc rules. Those rules may or may not have any economic or actuarial foundation and/or mathematical necessity. I showed in a recent post that a 4% rule starting, say, in 1966 was infeasible both initially and then thereafter forever. The portfolio funding that ad-hoc rule lasted 30 years, sure, but that is about all we can say.  See objection-to-models item #2 above. This is why feasibility is so important and why, in my narrative here, it precedes sustainability calcs like those that are typically done with MC simulation. 

Now let’s look at some minor quibbles.

1. Feasibility is not subject to model risk? Well, that “not subject to model risk” thing is based on the idea that wealth is observable in account statements and that an annuity price, as a proxy for the spend liability, is observable in the market. That’s correct enough, but the annuity is not spending (see section B above) and since it is not spending its use, though better than the fuzziness of MC simulation or SPV, has a type of implicit model risk when considering the potential mismatch of spending with annuity income. The mismatch can come from flawed discount rates, variable interest rates and inflation, spending variability, mismatches between spending shape or lumps and income, etc. I call this a type of model risk. It’s better than a simulation because the use of market observables in the present is powerful, but it’s still model risk if only implicit.

2. Feasibility =/= Sustainability? Collins (2016) makes a strong case for the distinction between feasibility and sustainability. Don’t get me wrong here. I think this is a useful and constructive distinction and I will go with Collins on this. But I also have to mention that Robinson and Tahani (2007), as did Milevsky (2000), showed that a net wealth process projected forward to time t (i.e., sustainability) can be considered the same, mathematically, as the net stochastic present value of spending with respect to wealth in the present.

3. Works perfectly in ongoing continuous operations? Feasibility -- Collins makes the same point so I am not actually contradicting him here -- works less well in isolation than it does combined with other tools. This is part of the triangulation argument above. Feasibility, applied continuously in future years, has a slight weakness over time due, I presume, to the foreshortening of longevity probabilities (TBD). Hewing to a “W/F = 1 continuously” rule creates suboptimal consumption when evaluated using alternative metrics like portfolio longevity or the “expected discounted utility of lifetime consumption.” Collins (2015) acknowledges this in section VII – “The Remainder Interest and the Steady State Ratio” where he attempts to balance lifetime consumption and bequest. He, like me, came to the conclusion that the feasibility ratio is better when adjusted by age: “the older the investor, the higher the required steady state coverage ratio [W/F].” His modeling showed the need to have the coverage ratio rise from 1 to ~2 by the end of 20 years. I achieved the same thing in my own modeling by putting a cap on consumption equal to the inflation-adjusted value of a policy choice about lifestyle. Both of these approaches allow the ratio to rise, but both seem arbitrary and require other triangulation and tools to get to a better consumption outcome. This could mean tying feasibility and sustainability at the hip (which makes sense), adding economic utility analysis, and/or maybe some other thing altogether.
So, feasibility is a powerful, reasonable, and rational framework. It is the place to start. It is, on the other hand, one of those “necessary but not sufficient” things.

As a side note, Collins (2015) offers an additional metric in the context of feasibility analysis. He calls it the wealth-to-surplus ratio and defines it as

Wealth / (wealth – PV consumption) [or W/(W-F) in our terms]

Eq 5b. Wealth to Surplus Ratio

The advantage here is that it factors in a consideration of bequest or the W-F term.  Also, as wealth declines the surplus shrinks and the ratio rises at an increasing rate.  Or as he states “Retirement portfolio management may be defined as a contest between consumption and bequest goals. As the surplus shrinks, risk to the periodic income stream and to terminal wealth increases at an increasing rate.” [his emphasis]

What I like about this is that it illustrates both the concept of acceleration and the concept of increasing risk as one approaches what I was calling “the edge.” A simple model can show what I mean.  In this case, let’s say we have a 500k SPV that we hold constant across scenarios. W we’ll call 1.5M and we’ll vary it across 10 scenarios by decreasing it in 100k increments.  When we do this the ratio looks like this. Wealth at 500k leaves the ratio undefined. Acceleration! A canary in the mine. 
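Here’s the toy version in a few lines so the acceleration is visible in the numbers themselves:

```python
# Collins's wealth-to-surplus ratio, W / (W - F), with the scenario from the text:
# F (the SPV of spending) held at 500k while W steps down from 1.5M in 100k increments.
F = 500_000
for W in range(1_500_000, 500_000, -100_000):
    print(f"W = {W:>9,}   W/(W-F) = {W / (W - F):5.2f}")
# the ratio climbs from ~1.5 to 6.0 over the ten steps and is undefined at W = 500k
```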


5D. Sustainability -- Monte Carlo Simulation and Fail Rate Analysis. 

Sustainability is well known in the academic and practitioner ret-fin literature and is sometimes, and not always correctly, framed as the main policy objective in retirement portfolios. We saw above that sustainability and feasibility can be considered, counterintuitively, as both the same and different; when treated as different, feasibility has a trump card, depending on how you look at it: access to observable real data like wealth (e.g., account statements) and the cost of spending (an annuity price as a proxy boundary for spending or lifestyle).  Here is Collins (2015) again, now on sustainability:
“Sustainability of adequate lifetime income is a critical portfolio objective for retired investors. Commentators often define sustainability in terms of (1) a portfolio’s ability to continue to make distributions throughout the applicable planning horizon, or (2) a portfolio’s ability to fund a minimum level of target income at every interval during the planning horizon. The first approach focuses on the likelihood of ending with positive wealth, or, if wealth is depleted prior to the end of the planning horizon, on the magnitude and duration of the shortfall; the second focuses on the likelihood of consistently meeting all period-by-period minimum cash flow requirements.” 
A major, but not the only, vehicle for evaluating sustainability as used in modern practice is Monte Carlo simulation and fail or ruin rates.  The literature on this is vast; I have a stack about five feet high in my house right now.  So, I will not recapitulate that literature here. The basic concept is well known: formulate a joint return(vol) and spending program, use artificial (often incorrect) randomness in-model, use time dynamics over a planning horizon (less often a random lifetime), and demonstrate: (a) importantly, that a net wealth process can break to or through zero with some probability (an artificial model-induced frequency) “P” before the end of a planning horizon T or random lifetime T*, (b) less importantly, that a net wealth process is non-ergodic and can diffuse very widely and usually not very realistically on the upside, and (c) sometimes, that income available from decumulated wealth falls short of lifestyle needs and/or that the magnitude of the fail or shortfall, in amounts or years, can be more severe or longer in some scenarios vs others.
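For anyone who hasn’t seen the guts, the whole thing can be boiled down to something like this (a deliberately bare-bones sketch with placeholder parameters and the usual, flawed, normal returns):

```python
import numpy as np

# A bare-bones Monte Carlo net-wealth process and fail rate. Parameters are
# assumptions; normal returns are used here knowing that normality is one of
# the objections listed below.
rng = np.random.default_rng(42)
n_iter, horizon = 10_000, 30
W0, spend, mu, sigma, infl = 1_000_000, 40_000, 0.05, 0.11, 0.02

W = np.full(n_iter, float(W0))
failed = np.zeros(n_iter, dtype=bool)
for t in range(horizon):
    r = rng.normal(mu, sigma, n_iter)
    W = (W - spend * (1 + infl) ** t) * (1 + r)
    failed |= W <= 0
    W = np.maximum(W, 0)              # once broke, stay broke
print(f"fail rate over {horizon} years: {failed.mean():.1%}")
```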

While it is easy enough to find people to do this type of analysis and while it is surprisingly easy to construct these models on one’s own, what is less well known are the drawbacks, other than the “observable data” complaint we listed before, of MC simulation. So, while I won’t try to recreate the lit on MC simulation, I think it would be useful to describe some of the issues I’ve run into with this tool. This is not exhaustive or necessarily even coherent. It is merely a laundry list that I could come up with while writing this essay.
  1. It can cost money. Wells Fargo once tried to charge me $4,000.00 for one run of what was essentially a dressed-up Monte Carlo simulation. This is silly for a bunch of reasons: others can and will do it for free, there are free models on the internet, or one can create one’s own. There are even deterministic formulas for estimating this kind of thing. But those…they are all too simple, they say! No, we saw above that adding complexity in these fake constructs does not always lead to better conclusions about the future and it may have some disadvantages. See some of the Taleb quotes above.
  2. Fail rates are often hard to understand or are mis-understood. It is sometimes hard to explain what fail rate means and why a 100% success rate is neither achievable at less than infinite cost nor is it reasonable. This is especially true if the planning horizon is randomized.
  3. There are no real academically supported hard bright lines for fail rate thresholds that I know of. Perhaps there are now but in four years of reading this stuff I have not seen anything (yet) about hard lines. I’ve seen 5%, 10%, 20%, 30%. In the end it’s a policy choice. I’ve even seen, as in the twitter dialogue above, that some advisors will go with as much as 40%. Do you trust him on that?
  4. The effort to fine tune a plan to some kind of optimal fail rate is a trial and error process. There is no economically rigorous way to iterate the joint return/spend/horizon choice. It’s a little ad-hoc. It’s also a pain in the neck timewise. 
  5. One of the most common and useful critiques of MC simulation and ruin risk is that it is a mono-dimensional metric. One does not see the “magnitude” of the fail: how many years did I spend in a fail state or by how much and for how long is my lifestyle compromised. In the context of a random lifetime this is a pretty strong critique.
  6. It is a forward hypothetical. It predicts nothing and the conclusions and predictive quality degrade fast. Dirk Cotton once estimated that in a chaos-theory context, the prediction horizon was good for about a year and more or less useless after that. That doesn’t mean that MC is not a useful general risk metric, just that it is useless in isolation and when not repeated over time. Cotton (2019): “a spending rule estimate is good for perhaps a year. They should be recalculated at least annually. Retirement plans based heavily on spending rules have a one-year planning horizon.”
  7. It is typically opaque in terms of the parameters and their dynamic relationships to each other. It is a black box that can be prone to the modeler’s bias or lack of skill. Real retirement intuition, pedagogical or otherwise, in this type of situation is either challenging or inaccessible.
  8. It is prone to the model risk and reductive-ness described above.
  9. Tuning a plan for the lowest fail rates tends to lose the forest for the trees. When a consumption plan is constructed jointly with return and volatility assumptions and then evaluated with some degree of academic rigor using, say, lifetime consumption utility, it can be counter-intuitive but more optimal to (with the presence of a decent floor of lifetime income) deplete wealth earlier rather than later, i.e., fail big and fail early.  There is almost no way for a poorly designed MC sim to know that.
  10. “Fail” or “Ruin” is an abstract mathematical concept that is not always seen in real life.  People adapt given enough warning, they go back to work, they spend less, they lean on family or social services or a natural community, with foresight they sometimes purchase lifetime income while wealth can afford to do so, and pensions might and social security should be available etc. Cascades of risk in catastrophic feedback loops, when risk accumulates slowly then suddenly and quickly, can send us into bankruptcy but bankruptcy is different than mathematical ruin.
  11. The full shape of the “unconditional” distribution of portfolio longevity in years, something we saw in Process-3, can be obscured by the artificial fixed horizon common in MC.
  12. The fixed horizon fails to visualize and communicate the full term-structure of mortality and adding random lifetime to MC can sometimes confuse things.
  13. Fail rates, even with “magnitude” metrics, ignore dynamics. Sure, MC is run over many years “in” the model but how about doing it each year or once a quarter?  Then we can get at the 1st and 2nd derivatives of fail: the change in the rate and the change of the change of the rate (acceleration).
  14. In dynamic mode, it will sometimes be a surprise that MC simulation and the derivatives of fail are pretty sensitive to market moves and also that there is a psychological/behavioral component to seeing fail rates rise (and fall) over long periods of time. This goes back to the issue of thresholds in D3. When is it ok to accept the status quo and hunker down and when is a plan really failing? It is also in this sense that I have made the case in the past that a constant spend plan can be an active risk-taking posture, because it puts us in that exact state of confusion.
  15. When looking at MC output it is forgotten that the risk is coming only from market volatility, that the volatility comes only from inside the model, and that the volatility is often modeled incorrectly when it comes to what Taleb calls Extremistan or what I was calling “domain 2.” There is no consideration for either spend volatility or extreme unexpected surprises.
  16. Most models have a limited (or no) ability to look at the “shapes” of a spend plan over a lifetime. Some do but some don’t. 
  17. Results from multiple runs with one model and across different models that are custom or proprietary can often show a fair amount of inconsistency. What, exactly, is calibrated to what? And how many iterations are needed? My own estimate in 2011, with my second (well-designed I might add) simulator, of my fail rate exceeding 80% did not match Wells Fargo’s estimate of 20%. Something was wrong here and there was no real solid standard against which to judge or calibrate. 
I’m sure there are other reasons to be skeptical about MC-based sustainability analysis but that’s at least a start. It’s not that MC-based sustainability analysis is bad; it’s just that it needs to be re-positioned as a more generalized and less important risk metric that gets more utility from being used in some kind of vertical and horizontal triangulation process than it does as a stand-alone metric. “We ran it 10000 times and your plan failed 27% of the time” I find not to be all that helpful.

5E. Sustainability – Lifetime Probability of Ruin (LPR).  
“Human beings have an unknown lifespan, and retirement planning should account for this uncertainty…the same questions apply to investment return R… The aim is not to guess or take point estimates but, rather, to actually account for this uncertainty within the model itself. In a lecture at Stanford University, Nobel Laureate William F. Sharpe amusingly called the (misleading) approach that uses fixed returns and fixed dates of death “financial planning in fantasyland.” Milevsky (2005)
E.1 LPR solved with simulation and/or finite differences approximations to PDEs
Where MC simulation can be an easy but brute force method of accessing the more transparent but harder to implement insights from closed-form math or from partial differential equations (PDE), the lifetime probability of ruin, which can be done with or without some kind of simulation, is somewhere in-between. The distinction I’ll make, and I’m not sure if I’m on solid ground here, is that this approach starts by working in probability distributions rather than ending there. Not sure if I can say it like that.

I tried to make clear in a previous section that the formal definition of LPR can be described as, say, a Kolmogorov PDE for lifetime probability of ruin (See Process 3). I also showed that if one does a simple simulation to get a distribution of portfolio longevity in years P(PL) or what was referred to by Albrecht and Maurer (2005) as the full “constellation of asset-exhaustion” to infinity (i.e., unconstrained or unconditional with respect to horizon) and then if one also constructs a mortality distribution (survival probabilities out to infinity) conditional on current age (call it P(S) or what Milevsky called the full term-structure of longevity) then the Kolmogorov PDE for LPR can be satisfied by a simple combination of the two distributions:

LPR = sum{0-infinity}[P(PL)*P(S)]. 

Eq 6. LPR with simulation

This notation is a little botched but the output of the effort matches what one could extract from the PDE with a finite-differences solution approach (I call that simulation by any other name).  Both approaches result in lifetime “ruin rates” that are, in broad strokes, very similar to what we would get from MC sim…and with many of the same objections. On the other hand, we now have access to the two important distributions: (1) portfolio longevity or what the joint return/spend choice or “net wealth process” does to itself over infinity unconstrained by a fixed horizon or even random lifetime, and (2) conditional survival probability to infinity or at least age 120.  Both of the distributions and their graphical visualization are available prior to the integration via eq 6.  Also, because we have access to the full distributions across all time, LPR can, in the right hands, be, in a way, more convincing and satisfying. In addition, and to the extent that “magnitude” is important, having both distributions available means that we can make a policy choice about the points within each distribution from which to mark off in years the magnitude of the mismatch between portfolio longevity and mortality. This statistical flexibility is probably not well known or is at least underappreciated. 
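In sketch form, with a Gompertz-style placeholder standing in for a real mortality table and assumed return/spend parameters, eq 6 looks something like this:

```python
import numpy as np

# Eq 6 in code: combine (1) an unconditional portfolio-longevity distribution,
# P(PL = t), simulated out to a far horizon, with (2) conditional survival
# probabilities tPx for a retiree aged x. All parameters are assumptions and
# the survival curve is a Gompertz-style placeholder, not a real table.
rng = np.random.default_rng(3)
n_iter, t_max = 20_000, 70
W0, spend, mu, sigma = 1_000_000, 45_000, 0.05, 0.11

# (1) portfolio longevity in years, unconstrained by any planning horizon
pl = np.full(n_iter, t_max)                 # default: lasted the whole window
W = np.full(n_iter, float(W0))
for t in range(1, t_max + 1):
    W = (W - spend) * (1 + rng.normal(mu, sigma, n_iter))
    newly_broke = (W <= 0) & (pl == t_max)
    pl[newly_broke] = t
p_pl = np.bincount(pl, minlength=t_max + 1) / n_iter    # P(PL = t)

# (2) conditional survival tPx for age x = 65 (Gompertz placeholder)
age, m, b = 65, 89.0, 9.5
t = np.arange(t_max + 1)
tpx = np.exp(np.exp((age - m) / b) * (1 - np.exp(t / b)))

# ruin happens when the portfolio dies in year t AND the retiree is still alive
lpr = np.sum(p_pl * tpx)
print(f"lifetime probability of ruin ~ {lpr:.1%}")
```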

E.2 LPR approximated with closed-form and (mostly) transparent equations

No doubt reacting to the long list of objections to MC simulation and even some of the weaknesses of the approaches described in E.1 (i.e., still requires a little simulation) Milevsky (2005) offers an analytic approach that promises to: enhance replicability, solve an iteration-deficiency problem that I don’t think really exists, provide pedagogical intuition in trade-offs between retirement risk and return in a transparent equation, and create a coherent framework on which to base decisions. He uses a continuous-time approximation and the SPV approach we discussed above and -- given that Robinson and Tahani (2007) showed the affinity between a “dynamic net wealth process failing over some time t” and the “evaluation of wealth vs SPV at time zero” – I think this is a credible and useful addition to the literature and I think it clearly achieves his goals. Jumping over the background and math in the paper (which I recommend as a to-read) we can summarize the gist of the idea by quoting at length:

This is easily implementable in Excel which I’ve done at least once. And while it is an approximation to ruin math, Milevsky points out it is (closer to) exact if the lifetime is at infinity. Without perseverating on the innards of the paper I can agree that this framework is worthy for the reasons he stated in the paper as well as providing a decent answer to some of the 17 objections to Monte Carlo simulation above. What is achieved can be described, somewhat redundantly with what has already been said before, like this. The reciprocal gamma approach: 
  • Creates a tool that is transparent and where the relationships between the main variables, especially longevity, can be seen on the surface. “It can also explain the link between the three fundamental variables affecting retirement planning…The formula makes clear that increasing the mortality hazard rate…has the same effect as increasing the portfolio rate of return and decreasing portfolio volatility…”
     
  • Does not require the opacity and inconsistency and biases sometimes inherent in Monte Carlo simulation approaches.
     
  • Provides an independent tool for calibrating outcomes across many models (I called this triangulation) or vs complex models.
     
  • Not mentioned, and underappreciated, is that this would be a natural way to create an efficient, dynamic programmatic module for evaluating future fail-risk estimation inside MC sims as they step through years within an iteration.

5F. Spending Process Control and Control Charts. 

I will freely admit that some of what follows could be considered a little anal, and even I am starting to recoil from the work effort I do for myself, but I think it is useful and, in reference to figure 1 above, the closer one is to “the edge” the more important this might be.  If we recall our review of stochastic spending processes (in Process 2) perhaps we can stipulate, and I realize not everyone will agree with me here, that a consumption pattern that is:
- Highly irregular and/or volatile
- Trending higher against a plan (i.e., lifestyle creep)
- High relative to ambient wealth
- Un-organized with respect to income flow or lumpy future liabilities
- 180deg out-of-phase with asset value (return) cycles | share redemptions to fund spending
- High (unplanned) during an early and adverse sequence of returns

can be a little more destructive than you think it is and more destructive than what most ret-fin lit seems to reveal, because that lit is more focused on things like volatility or (return) sequence risk.  Therefore, it is my personal contention that: (a) being aware of the current spending baseline, (b) making sure it is brought under control, in terms of scale and volatility, in an iterative process over time, and then (c) monitoring and optimizing the spend process in an ongoing continuous process can pay dividends that are paid out in a denomination called “portfolio longevity,” all else being equal.  These dividends are probably irrelevant to the rich or the broke or maybe even a late-cycle retiree.  On the other hand, if one were to be un-wealthy and close to “the edge” and/or an early retiree, then there is probably some value to mine here.  

If one were to happen to also be conversant with things like ops management or statistical control methods or 6-sigma or ISO 9000 processes, then what I am suggesting should look vaguely familiar. That’s because an ongoing spending process shares a lot in common with an industrial process where something like, say, widgets are being produced with a high error rate resulting in high product-returns, shrinking sales and market share, and a high cost of production operations. In that situation, the basic objective of most quality control methodologies would be vaguely similar:

1. Measure and establish a baseline
2. Through some methodology like a Deming cycle, the process would be improved iteratively
   a. Plan (figure out the changes that might work)
   b. Do (implement)
   c. Study (measure the results, usually via statistical process control charts)
   d. Act (fix problems and enhance opportunities)
   e. Repeat…
3. Optimize and monitor; repeat previous processes if necessary

This was grossly over-simplified, but you get the idea.  In practice in a statistical control chart it looks like this. This SPC is for a widget (flange) production process that I pulled off google images. The first third of the chart is #1 above, the second third is #2 and the final third is probably #3.


When translated to a retirement spending process context it would look like this below where the “widget” is now a spend rate and an out of control spend rate has a cost, just like the uncontrolled widget: earlier risk of failure or lifestyle destruction. A “controlled” spend rate means longer portfolio life and longer portfolio support of desired lifestyle. I borrowed this from “a friend:”


This is broken into the three-phase approach just like we did above.  The y axis is scrubbed to protect “my friend’s” personal data. The lines are as follows:

Dotted lines: median (measured in stage 1, policy thereafter) and upper and lower control limits (policy choice)
Blue:    monthly spend rate = c/NMA; c = consumption and NMA is net monetizable assets
Black:  12 month simple moving average of blue
Green:  +/- 2 standard deviations of black and blue
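In code, the series above are nothing fancy; a sketch with placeholder data (real spending and NMA would come from the financial statements in 5A) looks something like this:

```python
import numpy as np
import pandas as pd

# Sketch of the spend-rate control chart series: blue = monthly spend rate,
# black = 12-month moving average, green = +/- 2 standard deviation bands.
# The spending and NMA series below are made-up placeholders.
rng = np.random.default_rng(1)
months = pd.period_range("2015-01", periods=60, freq="M")
spend = 4_000 + rng.normal(0, 400, len(months))                        # $/month
nma = 1_000_000 + np.cumsum(rng.normal(2_000, 15_000, len(months)))    # $

df = pd.DataFrame({"spend_rate": spend / nma}, index=months)   # blue
df["sma_12"] = df["spend_rate"].rolling(12).mean()             # black
sd = df["spend_rate"].rolling(12).std()
df["upper"] = df["sma_12"] + 2 * sd                            # green bands
df["lower"] = df["sma_12"] - 2 * sd
stage1_median = df["spend_rate"].iloc[:12].median()            # dotted line (policy thereafter)
print(df.tail())
```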

Like I mentioned before, this is a little anal but maybe necessary, even though some of the variance drops out if measured annually. What you don’t see is a precipitous drop in the ongoing fail rate estimates that occurred over this period via spend reduction and control.  You also can’t see that the control limits, especially the upper and median bounds, would rise with age in the future.  Also, it’s hard to see that the rate is affected not just by the numerator ($spend) but also by the denominator, which is subject to market forces. In a recession the spend rate would rise, but that is still not totally un-useful information since it is an early indicator of risk.  The main purposes for a process control and improvement technique like this, purposes that might be considered reasonable, are:

  • It is a generalized risk detector
  • It keeps a vigilant eye on one of the highest impact and important variables in retirement finance. It focuses one’s attention
  • It’s like a canary in the coal mine or a seismograph. It can pick up early changes in an equilibrium state when the stakes are high
  • It shows more than just the “position” of the spend rate. One can see speed (trend) and acceleration, too. 

On the other hand, it has some downsides: 

  • Requires data collection on spending and manipulation to render the chart
  • Takes some time and effort
  • The denominator is out of one’s control
  • The control boundaries are subjective and moving targets
  • Spending on an annual horizon or a plan horizon may not care about this level of control.

I have used this kind of thing for myself, but it remains to be seen if I’ll have the energy ongoing to keep at this. We’ll see.


5G. Expected Discounted Utility of Lifetime Consumption (ULC)

There is a fair amount of literature out there on the evaluation of consumption utility over remaining lifetime, from macro-econ textbooks to Yaari’s seminal 1965 paper. There are some compelling pros and cons for using economic utility in retail personal finance but it still seems pretty uncommon in the practitioner financial literature which is unfortunate.  ULC has some advantages once one can get over the difficulty in measuring risk aversion or contemplating its stability over time. ULC:
  • Focuses first on consumption rather than asset prices or asset volatility, which is generally what we care about
  • It factors in random lifetime and subjective time preferences
  • The concave utility math is uniquely good at evaluating changes in consumption over a lifetime
  • Strategy comparisons have a rigorous-econ-foundation feel rather than something merely ad-hoc
  • It’s crystal clear about the advantages of either having or buying lifetime income over the lifecycle

I’ll refer you to the literature for more on this or to those sections of Process 2 (Section E part IV Spending evaluation using lifetime consumption utility) or Process 3 (Portfolio Longevity Evaluation and Use - Life-Cycle Utility) where this has been discussed before.  To recapitulate in a small form just to visualize, here is what one might typically see in a grad-macro text (e.g. Volrath (2007)) in continuous form notation for the value function for the utility of lifetime consumption.
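Something like this (notation varies by text):

$$V = \max_{c(t)} \int_{0}^{T} e^{-\theta t}\, U\big(c(t)\big)\, dt$$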
where U is a utility function of consumption c at time t. U is often framed as CRRA utility in a form similar to this: 
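$$U(c) = \frac{c^{1-\gamma}}{1-\gamma}$$

(with γ the coefficient of relative risk aversion)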
Eq 9. CRRA Utility
It’s more complex, of course, when random lifetime and subjective time discounts are involved, not to mention if one were to want to instantiate it in a program or spreadsheet within a surrounding net-wealth process.  I’ll refer the reader to Yaari (1965) or Google for more on alternative forms and notation. For me, I have used some of the following discrete forms for a more real-worldly type of evaluation:
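$$\text{ULC} = \sum_{t=1}^{T} \; {}_{t}p_{x}\,\frac{U(c_t)}{(1+\theta)^{t}}$$

(this is the general shape I have in mind; the exact notation varies)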
Eq 10. ULC with random life and subjective discount
Where tPx is the conditional survival probability for a retiree aged x at time t, theta is the subjective time preference and g is CRRA utility. In simulation mode I might do it like this: 

Where the 2nd term related to bequest is ignored here and omega basically stands in for tPx and alpha, because I was too lazy to change the notation, is the former theta or the subjective discount. What is not seen in this is that there is an interaction between consumption and wealth and that in those iterations where wealth depletes before distant ages and end of life, there is a potential big jump down in consumption that has a jarring effect on utility. That means that this form, in simulation, is in a more complex context not far removed from MC simulation but with a type of utility evaluation overlay.  See my posts on Wealth Depletion time here. https://rivershedge.blogspot.com/p/test.html
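For what it’s worth, the simulation version, with the wealth-depletion jump included, can be sketched like this (the floor, survival curve, and return assumptions are all placeholders):

```python
import numpy as np

# Sketch: evaluate the discrete ULC along simulated consumption paths, including
# the jarring drop to an income floor when wealth depletes. Returns, the floor,
# the Gompertz-style survival curve, and risk aversion are placeholder assumptions.
rng = np.random.default_rng(5)
n_iter, t_max = 5_000, 50
W0, plan_spend, floor = 1_000_000, 55_000, 20_000    # floor ~ social security
mu, sigma = 0.05, 0.11
gamma, theta = 2.0, 0.005                            # CRRA coefficient, time preference
age, m, b = 65, 89.0, 9.5

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

t = np.arange(1, t_max + 1)
tpx = np.exp(np.exp((age - m) / b) * (1 - np.exp(t / b)))   # conditional survival

ulc = np.zeros(n_iter)
for i in range(n_iter):
    W = float(W0)
    for k, year in enumerate(t):
        c = plan_spend if W > 0 else floor                  # the wealth-depletion jump
        if W > 0:
            W = max((W - (c - floor)) * (1 + rng.normal(mu, sigma)), 0.0)
        ulc[i] += tpx[k] * u(c) / (1 + theta) ** year
print(f"mean discounted utility of lifetime consumption: {ulc.mean():.6f}")
```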

What is also missing, and that I have mentioned before, is that endogenous (to the model) purchases of lifetime income from wealth, before it falls too far to be able to execute the purchase, generally have dramatic positive effects on ULC. This was the conclusion of Yaari (1965) given the assumptions at the time.  Things have changed a bit since then, and the analysis has been nudged forward, but the advantage of lifetime income is still a very strong conclusion even in a world with high insurance loads and low interest rates.  That is another post…

5H. Perfect Withdrawal Rates

The concept of a perfect withdrawal rate is relatively straightforward. I’ll pull this definition from the paper by Suarez, Suarez, and Waltz (2015) that first laid it out: 
“We now posit that for any given series of annual returns there is one and only one constant withdrawal amount that will leave the desired final balance on the account after n years (the planning horizon). This can be verified by solving a problem that is formally equivalent to that of finding the fixed-amount payment that will fully pay off a variable-rate loan after n years. In other words, we re-derive the traditional PMT() formula found in financial calculators, but with three amendments: (a) interest rates are not fixed but change in every period, (b) the desired ending value is not necessarily zero, and (c) we are dealing with drawdowns from an asset instead of payments to a liability.”

Ignore the fact, for now, that random lifetime is still a problem in this framework.  Cutting to the chase, the math of this way of thinking looks like this:
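$$w \;=\; \frac{K_{S}\,\prod_{i=1}^{n}(1+r_i)\;-\;K_{E}}{\sum_{i=1}^{n}\prod_{j=i}^{n}(1+r_j)}$$

(my rendering of the idea, with the constant withdrawal w taken at the start of each of the n years)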
Where K(s) is the endowment, K(E) is the bequest, r(i) is the return in period i, and j allows for the geometric chaining of returns.  If the endowment is set to $1 and there is no bequest, a point we made in Process 2, the equation simplifies to
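$$w \;=\; \frac{\prod_{i=1}^{n}(1+r_i)}{\sum_{i=1}^{n}\prod_{j=i}^{n}(1+r_j)}$$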
Eq 12b. PWR without bequest, Endowment = $1
Why do this? That’s a good question, especially since if you look carefully, this is not radically different from MC simulation when it is dynamized via randomness of r and multiple iterations (with all the model risk and other objections that one would see in MC simulation).  I can think of at least a few reasons I’d like to have this in my arsenal: 

1. It inverts MC simulation. MC will hold the spend rate constant and see what happens to the terminal distribution of wealth and what percent of states of wealth over time T “fail.” PWR, on the other hand, holds terminal wealth over horizon T constant (zero) and lets the spend rates fall where they may. When dynamized, what it does is create a distribution of spend rates and, as we have seen, a distribution is a useful informational tool.  For example, when working with the spend rates within a distribution that would have been less than some percentile threshold, we effectively have a type of fail rate detector and my own investigations show that the results are pretty consistent with MC simulation across a broad set of parameters. You’ll have to trust me; I don’t show it here.

2. #1 gives us a good new tool for our project in triangulation, consilience and calibration especially since it inverts another tool we’ve used before.  Unfortunately, almost all the tools we’ve seen, except maybe consumption utility, and even that sometimes, are all playing with the same exact variables (spending, returns, vol, lifetime) so that I’m not sure how much true consilience is going on.  Calibration? Sure. Interdisciplinary consilience? Probably not.

3. Because PWR does not trade much in terminal wealth distributions, and since it shows the necessary connection between return/vol profiles and the resulting spend rate distributions, PWR seems uniquely positioned to demonstrate the capabilities of alternative asset allocations, e.g., those like trend-following (assuming that their return profiles will continue to be stable over time) where large drawdowns are tamped down and there is some evidence of a shift up and left (same return, lower vol) in efficient frontiers. Since the right side of the PWR distribution tends to be boring (of course it’s cool to live in hypothetical-worlds where we can spend a ton, those where we can’t – i.e., the left side -- are of seriously unique interest), playing with alternative allocations that enhance this efficiency effect can demonstrate some quite positive changes in capacity to spend on the left side of the distribution. 

4. When wealth at time T is constrained to zero, as it can be in PWR, the model can show the hyper-reliance of full horizon spend rates on returns and return sequences. This means we can show directly in the formula the pernicious impact of return sequences on the capacity to spend. Back in Process 2 we showed the visual illustration of PWR math in sequence-of-returns terms like this, which is worth a repeat. This was where I framed PWR, or “w” in the figure below, as what I called “spend capacity:”
Figure 4. PWR and Sequence Risk
To save myself from writing more or writing the same thing again, I’ll quote myself, which is always a little weird:
“One can see in this that the capacity to spend (i.e., PWR = w) is entirely a function of returns and how they "stack" in sequence. Just looking mechanically, there are more “r”s at the end (look "vertically." This is Suarez’s point.) so that low returns early and high late makes a big number which would make the PWR lower. It may also be helpful to think of early spending as an opportunity cost of compounding capital (if I have it right. Look "horizontally.") that hurts us because we could have captured some of the late high returns with money that was otherwise spent early.”
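A quick sketch makes the stacking visible with made-up numbers, and the “dynamized” version at the end just re-runs the same little function on simulated return sequences:

```python
import numpy as np

# Perfect withdrawal rate for a given return sequence: the constant withdrawal
# (as a fraction of the starting endowment, taken at the start of each year)
# that leaves exactly k_end after the last year. Returns below are made up.
def pwr(returns, k_start=1.0, k_end=0.0):
    growth = np.cumprod((1 + np.asarray(returns))[::-1])[::-1]  # prod_{j=i..n}(1+r_j)
    return (k_start * growth[0] - k_end) / growth.sum()

good_early = [0.20, 0.10, 0.05, -0.05, -0.15]
bad_early = list(reversed(good_early))
print(f"PWR, good returns early: {pwr(good_early):.1%}")   # higher spend capacity
print(f"PWR, bad returns early:  {pwr(bad_early):.1%}")    # lower, same average return

# dynamized: a distribution of PWRs over simulated 30-year sequences
rng = np.random.default_rng(9)
dist = np.array([pwr(rng.normal(0.05, 0.11, 30)) for _ in range(10_000)])
print(f"5th percentile PWR (a fail-rate-like left tail): {np.percentile(dist, 5):.1%}")
```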
5I. Estrada and Kritzman’s “Coverage Ratio”

This approach was in a recent paper by Estrada and Kritzman (2018).  I covered this a bit in Process-3 but it is probably worth mentioning again here since we are looking at monitoring and management methodologies.  The Coverage Ratio is something that captures the number of years of withdrawals supported by a strategy relative to the length of the retirement period considered. E&K(2018) define it like this: If Yt is the number of years inflation-adjusted withdrawals are sustained by a strategy and L is the length of the period under review then the coverage ratio Ct they propose is

Ct = Yt/L ;      Eq. 13 – coverage ratio


where a ratio of 1 is like hitting the runway right on the numbers, <1 is bad, and >1 puts us in bequest territory.  Since this approach does not really capture either the diminishing returns to bequest utility or the full force of high magnitude shortfalls, they also propose a utility overlay where the U function is kinked at a ratio of 1 (where the portfolio would have covered withdrawals to precisely the terminal date). Their kinked function looks like this:


Like I mentioned in Process-3, I have reserved judgement on whether this "coverage ratio" approach adds anything to (a) the direct examination of the Portfolio Longevity distribution (especially since you have to come up with some type of portfolio longevity calculation anyway), (b) the feasibility evaluation which it greatly resembles for obvious reasons, or (c) the conventional life-cycle utility value function that is typically used and that has at least 60 years of historical weight behind it (see above). My guess, as before, is "no," but there may be some “pros” to this approach:

1. The ratio itself has a ton of communication value above and beyond any technical analytic insights that might be revealed. The pedagogical value alone may be worth it.

2. Yt has to come from somewhere so at least one is forced to confront the question of “years.”

3. Random lifetime has not been addressed but we can suspend that to focus on the useful narrative of “coverage”

4. The triangulation and consilience project may have been enhanced a tiny bit.

This may fade in my toolbox in the presence of more powerful tools, but I thought it made sense to put it here.


5J. Closed Form Optimization Equations


I am typically wary of optimization equations in all their elegant closed-form glory and not just because they are hard and I don’t know the math, though that may be part of it.  The reason I’m skeptical is for many of the reasons we’ve seen above. While they seem to help with tenure considerations and they do, in fact, nicely and transparently lay out relationships and dynamics between variables, they are also:  

  • Overly integrative in the sense that they disrespect a fuller understanding of the core underlying processes in all their real-life messiness, 
  • They lack respect for the dynamism inherent in a real, lived retirement over time: “nice result today, now tell me, what will the optimum be tomorrow!?”
  • They brush under the rug the explosive uncertainty that comes from combining more than a couple probability distributions,
  • They are prone to the same model error and reductionism we have mentioned several times. Fine to reduce ourselves to a perfect world where perfect questions can be followed by perfect answers, but the obligation is maybe then on the “perfect equation writers” to hand us on a silver platter a discussion of the parts of the world that they have left behind…

This is maybe why, among other reasons, N. Taleb calls these kinds of objects “naïve optimization.”  On the other hand, we should not ignore them if only for their pedagogical value, however opaque that might be.  I have not made a deep study of these things, but we can point out at least two for illustration purposes.

1. Merton Optimum Allocation and Spending. The best-known example of these types of objects might be the Merton Optimum. I have no history on this, so I recommend Google or Wikipedia. The formula, when pulled off Wiki, looks like this:

First for stock allocation:
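$$\pi^{*} \;=\; \frac{\mu - r}{\gamma\,\sigma^{2}}$$

(the fraction of wealth in the risky asset, with μ the risky return, r the risk-free rate, σ² the variance and γ the coefficient of risk aversion)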
and then for consumption
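$$c^{*}(t) \;=\; \frac{\nu\,W(t)}{1 + (\nu\epsilon - 1)\,e^{-\nu(T-t)}}\,, \qquad \nu \;=\; \frac{\rho - (1-\gamma)\left(\frac{(\mu-r)^{2}}{2\sigma^{2}\gamma} + r\right)}{\gamma}$$

(at least as I read the Wiki entry; ε is the bequest parameter and ν is the “v” I fumble with below)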

What can we say about this? Not much since it is impenetrable for me for the most part. I’ll take a shot though and see who beats me down on this:

- There are two equations: i.e., it’s a joint solution between consumption and allocation. We kinda knew that. That makes taking a shot at a solution hard and iterative and dynamic.

- Given the continuous time context, it feels surprising there are even any closed form solutions at all

- Since the expression “v” solves to something, even at limits, somewhere between the risk free rate and a full risk-based discount depending on risk aversion, it starts to make sense. Over infinity we can spend v of W. For less than infinity with no risk aversion we can more or less spend wealth divided by time, though there’s a pesky bequest factor in there. Then for less than infinity with some risk aversion involved we can spend (with what I have to call in my own flawed terms) the equivalent of a levered amount of W until W runs out, which might be before T. Not sure I got that right, but my point is that someone somewhere other than me should be able to dilate on the relationships here to give us a better understanding…while maybe ignoring the fact that tomorrow we’d have to do it all over again.

- I can spread-sheet these and the results are intuitive. On the other hand, (a) there is a wide dispersion of outcomes for different parameterizations…but we predicted that given all the implied joint distributions, and (b) the results seem a little generous to me, but I’m guessing that’s because it is sandbagging on domain-2-type uncertainty.

- This kind of thing, while fun to play with, and of marginal use for me, is also helpfully confirmatory for calibration and triangulation purposes. Again, I’m not exactly sure how much “real” consilience we’ve brought to the table.

2. Milevsky and Huang’s (2011) Optimal Consumption Rate. This addition to the essay will go without commentary. That is because I have not tried to understand or interpret the math here. My main goal is to just point out that there are other (probably quite a few other) closed-form mathematical constructs when it comes to optimal consumption and that there are maybe other ways we might be able to calibrate and triangulate what we think we know.

Optimal consumption while the wealth trajectory is still > 0
Solving for the initial consumption rate c*(0) we get 


Again, I have not interpreted this nor will I define variables; this is kind of a cheat, but I just wanted to show that something like this exists. If pressed I could swag an interpretive guess and some of it is, in fact, “readable” given what I’ve learned, but that’s also like saying I can read a little bit of juvie French, less French adult literature, and I will never speak either out loud in public. All of which is true.

5K. Other.


We’ve only scratched the surface of the tools and methods and models involved in a continuous retirement monitoring and management process.  I wanted to get some of the main ones on paper.  As I get to new ret-fin objects or solutions I will add them here or I will bury them in a technical appendix if and when I get to that.  Some candidates for this might include things like:

1. Deterministic formulas

When I first started working in tech in 1987 or so, during the first giant wave of converting “atoms to bits,” a mentor once reminded me that I should not underestimate the power of paper-based systems in a computer-obsessed world.  The same thing could be said of deterministic formulas in a random-math and simulation obsessed world. Simple formulas for things like portfolio longevity, annuities, present values, PMT() functions, etc. are still quite useful and especially so if one were to not be besotted by simulation and formal, integrated continuous time mathematics.  Maybe better to understand a couple things simply and well than many-things-integrated-poorly. 
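For example, the time it takes a constant spend c to exhaust wealth W at a constant return r (assuming end-of-year withdrawals and c > rW, otherwise the money never runs out) is just:

$$N \;=\; \frac{\ln\!\left(\frac{c}{\,c - rW\,}\right)}{\ln(1+r)}$$

One line on a napkin, no simulator required.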

2. Optimization Using Backward Induction and Stochastic Dynamic Programming.

I did this technique once for evaluating optimal asset allocation by age and level of wealth.  I was using a framework I borrowed from Gordon Irlam who, I assume, was borrowing from past work of others with Bellman equations and the like.  The basic principle is that the forward combinatorics of the variables involved in optimizing an asset allocation choice is too explosive.  It’s much easier to start from the optimal allocation in the final year and then work backwards to time zero using the probabilities that can be chained on the walk backwards.  It’s like deciding when to leave for the airport by working backwards.  For me, it was hard to construct and program and it was also reasonably hard to interpret, but the output created a really nice tool to enhance the consilience project from a direction outside the normal parameters I usually work with.  This was more consilience/triangulation than it was calibration and I will continue to keep my eyes open for tools and approaches like this. Knowing an optimal map for asset allocation by year and by level of wealth is a useful monitoring tool. 
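A toy version of the backward walk, just to show the shape of the thing (grid sizes, return scenarios, spend, and risk aversion are all placeholder assumptions, and only terminal-wealth utility is valued to keep it short):

```python
import numpy as np

# Minimal backward-induction sketch: at each age and wealth level, pick the stock
# weight that maximizes the expected value of next year's value function.
# Terminal value is CRRA utility of ending wealth; everything here is illustrative.
gamma, spend, years = 3.0, 40_000, 30
weights = np.linspace(0, 1, 11)                       # candidate stock allocations
wealth_grid = np.linspace(1, 3_000_000, 301)
stock_r = np.array([-0.20, 0.00, 0.07, 0.15, 0.30])   # equally weighted scenarios
bond_r = np.array([0.00, 0.01, 0.02, 0.03, 0.04])

def u(w):
    return np.maximum(w, 1e-9) ** (1 - gamma) / (1 - gamma)

V = u(wealth_grid)                                    # value function at the horizon
policy = np.zeros((years, len(wealth_grid)))
for t in range(years - 1, -1, -1):                    # walk backwards to time zero
    V_new = np.empty_like(V)
    for i, w0 in enumerate(wealth_grid):
        best, best_w = -np.inf, 0.0
        for wt in weights:
            w1 = np.maximum((w0 - spend) * (1 + wt * stock_r + (1 - wt) * bond_r), 1e-9)
            ev = np.interp(w1, wealth_grid, V).mean()   # expected continuation value
            if ev > best:
                best, best_w = ev, wt
        V_new[i], policy[t, i] = best, best_w
    V = V_new
# policy[t, i] ~ the (approximately) optimal stock weight at year t, wealth level i
```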

3. Real-Option Pricing Methodologies. 
 
I recall a recent Twitter dialogue where a Twitter friend re-tweeted something with a comment on his enthusiasm for the original tweet. The original tweet was a rudimentary, intuitive description of “option pricing 101.” The tweet was something along the lines of: “discounted, probability-weighted volatility-dispersed arb-free forward price above a strike level = the option.” If I got that right. He immediately got ding-ed by a pedant that chimed in with a delta-hedging argument. The pedant was too clever by half because: 1) my friend was just saying it was good general intuition, 2) my friend was right, 3) delta hedging arguments make for good closed form equations and Nobels and for hedging a book of option business but are terrible at creating a broad-based and supple framework for valuing things like “real options.” The latter I once did in a simulation framework to try to validate some of the intuition from an article from M Milevsky on the optimality of waiting to annuitize wealth.  And I did. Validate. I think. It also works pretty well for valuing stock options, too. It works even better if one can play around with customized distributions beyond normal, for which I know there is math out there that is totally beyond me.  But the ability to value a real option in a simple way in the context of monitoring a retirement process over multi-period time (e.g., with respect to an annuity or a lifestyle boundary) I think will become more useful in the future rather than less. It also adds some real consilience in a continuous monitoring process because it is not a framework typically used by the other models that all seem to be breathing the same air.  

I’ll add an example. The way I used the real option approach was to take the actuarial balance sheet we did above and then project it into the future in two ways: (a) the dispersion of the joint return/spending net wealth process, and (b) the SPV of spending, here framed as an annuity price, priced conditionally on achieving the future age and for a then-inflated cash flow.  This was an interesting reframing since the fail state is no longer wealth falling to or through zero. The fail state is falling through the ability to permanently and (maybe) forever lock in lifetime income that defeases spending. Running out of money is one thing; walking away from the one chance you might have had to save yourself forever is another. So, in this context the annuity “boundary” is the option strike and net wealth is what it is. An option can be priced, and that knowledge can be used either to delay the annuitization choice or to detect speed and acceleration that might push us into taking action now or soon. Either way this is clearly an addition to the management and monitoring process.
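Here is a hedged, stripped-down sketch of that kind of calculation: wealth paths net of spending are simulated forward, the “strike” is a crude stand-in for an age-conditional annuity price that defeases a then-inflated spend, and the option-like value is the discounted expected surplus above that boundary. The annuity factor, return parameters, and horizon are illustrative assumptions, not the numbers I actually used.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions only
n_paths, years = 100_000, 10
mu, sigma      = 0.05, 0.12          # real portfolio return and volatility
spend, infl    = 45_000, 0.02        # current spend; boundary priced on a then-inflated spend
wealth0        = 1_200_000
disc           = 0.02                # discount rate applied to the payoff

# Simulate net wealth forward: grow at a random return, then withdraw, each year
W = np.full(n_paths, wealth0, dtype=float)
for _ in range(years):
    r = rng.normal(mu, sigma, n_paths)
    W = np.maximum(W * (1 + r) - spend, 0.0)

# Crude "strike": a stand-in annuity price at the future age that defeases the
# then-inflated spend; 14.0 is a hypothetical age-conditional annuity factor
annuity_factor = 14.0
strike = spend * (1 + infl) ** years * annuity_factor

payoff       = np.maximum(W - strike, 0.0)               # surplus above the annuity boundary
option_value = np.exp(-disc * years) * payoff.mean()     # discounted expected surplus
p_breach     = (W < strike).mean()                       # chance of being below the boundary

print(f"strike ~{strike:,.0f}  option value ~{option_value:,.0f}  P(below boundary) ~{p_breach:.1%}")

Watched over time, the trend in that breach probability is the “speed and acceleration” signal mentioned above.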

4. Freebie Retirement Calculators and Rules of Thumb

These are a dime a dozen, but if one understands the math, the potential biases, and the model error, and one knows how to triangulate, then these are not totally useless.  In fact, they are additive to the triangulation process. The 4% rule may be flawed, for example, but an age-adjusted version of the rule is easy to remember, and if I am at dinner with someone who is in a rut, I don’t need a computer; I don’t really even need a calculator.  If we view monitoring not as a once-per-horizon activity, nor as something done every five years or even every year, but as a continuous process in each instant (ok, that might be a bit much), then being continually tuned into the questions and answers of retirement via simple rules of thumb is not all bad, right? Just remember the Collins (2016) warning that these are better if they have some economic rationalization or mathematical necessity. Otherwise it may just be "an exercise in data mining."
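As an illustration only, and emphatically not my specific rule, here is a generic age-adjusted spend rule of thumb in the divide-by-remaining-horizon (RMD-ish) family, with an arbitrary floor and cap and a hypothetical planning age:

# A generic stand-in, not my actual rule: spend roughly 1 / (years of horizon
# remaining), with an arbitrary floor and cap; plan_to_age is a hypothetical
# planning age, not a mortality estimate
def age_adjusted_spend_rate(age, plan_to_age=95, floor=0.03, cap=0.08):
    years_left = max(plan_to_age - age, 1)
    return min(max(1.0 / years_left, floor), cap)

for age in (55, 65, 75, 85):
    print(age, f"{age_adjusted_spend_rate(age):.1%}")   # 3.0%, 3.3%, 5.0%, 8.0%

Simple enough to do in your head at that dinner, which is the whole point.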

5. Ongoing Portfolio Analysis and Optimization

This section is deferred for now. Portfolio design and optimization is, to some extent, the beating heart of modern financial practice. The literature goes beyond vast. I mean, there is asset allocation advice in the Old Testament…which still sounds good. I’m not sure how much I can add, and anyway I left it out of the scope of these essays because it often precedes the operational monitoring task. When I get the chance, I’d like to tackle at least something about the ongoing awareness and evaluation of alternative risk strategies that have a lot of potential for enhancing the utility of consumption and the portfolio longevity of retirement portfolios. For example, if return distributions were fat-(left)-tailed monstrosities, and adding an allocation to, say, a trend-following strategy could leave returns roughly where they were while clipping the left tail, then that change to the portfolio allocation, which might not have been available at the time the portfolio was initially designed, is a tactical optimizing move that is very friendly to a retiree, especially an early one.  More later. In the meantime, cruise the work at https://blog.thinknewfound.com/ where they know the quant stuff AND the lifecycle impact and, if I recall correctly, proved the point I made above.  Impressive. 

6. Geometric Mean Analysis

Monte Carlo simulation gets all the glory these days, but people often forget that a basic understanding of multiplicative return processes over multi-period time can not only substitute for simulation but can also add transparency and be quite pedagogical in explaining the ins and outs of the return generation process that affects us all in retirement, something we dwelt on for a lot of pages way back in Process-1. Since Process-1 went into such depth I won’t re-walk that trail, but recall that “in the long run, one gets the geometric mean return, not the arithmetic mean return” (Markowitz) and that the geometric mean is framed like this:

GM = [ (1 + R1)(1 + R2) … (1 + RN) ]^(1/N) – 1    ; Eq 17. N-period Geometric Mean

and that additionally

Median of terminal wealth = (1 + GM)^N    ; Eq 18.


then knowing just those two things will go a long way to showing how geometric mean analysis can provide a decent framework without using simulation, since the median in Eq 18 is the same median, net of consumption, we’d see in a simulator. And it’s easier to see and explain. Check out Process-1 for more on multi-period return processes or take a look at Michaud (2003), which I consider not only a must-read but also a must-re-read. I’ll let Michaud (2003) speak for himself:
“Since the multiperiod terminal wealth distribution is typically highly right-skewed, the median of terminal wealth, rather than the mean, represents the more practical investment criterion for many institutional asset managers, trustees of financial institutions, and sophisticated investors.21 As a consequence, the expected geometric mean is a useful and convenient tool for understanding the multiperiod consequences of single-period investment decisions on the median of terminal wealth.”  
"Properties of the geometric mean also provide the mathematical foundation of the Monte Carlo simulation financial planning process"  
"The advantage of Monte Carlo simulation financial planning is its extreme flexibility. Monte Carlo simulation can include return distribution assumptions and decision rules that vary by period or are contingent on previous results or forecasts of future events. However, path dependency is prone to unrealistic or unreliable assumptions. In addition, Monte Carlo financial planning without an analytical framework is a trial and error process for finding satisfactory portfolios. Monte Carlo methods are also necessarily distribution specific, often the lognormal distribution."  
"Geometric mean analysis is an analytical framework that is easier to understand, computationally efficient, always convergent, statistically rigorous, and less error prone. It also provides an analytical framework for Monte Carlo studies. An analyst armed with geometric mean formulas will be able to approximate the conclusions of many Monte Carlo studies."

"For many financial planning situations, geometric mean analysis is the method of choice. A knowledgeable advisor with suitable geometric mean analysis software may be able to assess an appropriate risk level for an investor from an efficient set in a regular office visit. However, in cases involving reliably forecastable path-dependent conditions, or for whatif planning exercises, supplementing geometric mean analysis with Monte Carlo methods may be required."

Concluding Remarks

My concluding sense is that most of this essay is long and sometimes a little overwrought. No small number of people are plainly fine in their retirement, and a lot of them have told me so directly. They have often mentioned that I over-think things too much and too often (I won’t name names…yet).  But America also has a retirement crisis, so there is also a non-trivial cohort of people that can’t retire and/or will suffer when they do or soon thereafter.  That crowd in the middle is my interest and the ones that are close to the edge are my real interest. I used to be there myself and I half-expect to be there again. When? The future is impenetrable and none of the tools I have at hand can tell me. There might be market ups and downs and I might spend too much or too little. Over time I might be just fine, but I might also have a sudden bankruptcy from an accumulation of risk I didn’t see because I wasn’t looking. We, collectively, might, as I recently read in a book on hedging by someone who lived through a war in my lifetime, even end up either (a) handing piles of value to our kids from our retirement home or (b) being refugees in a time of war with jewels sewn into our hems as our only asset and a gun held to our heads to take it away.  Who knows?  At a minimum, a lack of “paying attention” seems like a luxury, a luxury I don’t feel I can afford.  Maybe you do. 


Most of this post was geared towards articulating a view of the world that includes some impenetrable and difficult-to-manage uncertainty that continuously unfolds in an unstable present (“For now we see through a glass, darkly”). My opinion is that this impenetrable uncertainty will continue to put a premium on monitoring and continuous management and improvement processes more than it will on single-trick solutions that are often proffered by 30-year-old advisors who have no real human conception of time and risk…yet. [a]  I’ll give Dirk Cotton some room on this here since I trust his judgement and he’s my age and retired: 
The key is to recognize that a spending rule estimate is good for perhaps a year. They should be recalculated at least annually. Retirement plans based heavily on spending rules have a one-year planning horizon. Managing with a one-year retirement planning horizon is like driving while looking only at the road immediately in front of your car. When we can't see clearly what lies ahead, on foggy days perhaps, most of us respond by becoming less confident and driving more conservatively. [and by watching the unfolding road and conditions with great care … my addition] Cotton (2019)

So, in my opinion, a careful and skeptical methodology for evaluating where we are and where we are going at each moment in time, combined with a bias for action and intervention, along with a willingness to adapt our lives even to our own short-term discomfort, is as close as we can get to an “answer,” academic “perfect integrated solutions” notwithstanding. 



Notes
----------------------------------------------------------------
[a] This is unfair to younger advisors. But I’ve talked to retirement-quant bloggers that have retired and they share with me the opinion that it is very hard to understand the visceral feeling of retirement risk until one has, in fact, retired. Human capital depletes faster than you think and the lack of a safety net other than one’s own portfolio gets your attention completely and utterly; margins for error compress more than one would like. 

[b] A resource for this vast literature, one that I have not even begun to mine, is in Collins (2016) “Annotated Bibliography on the Topic of ‘Longevity Risk and Portfolio Sustainability’” which, at 567 pages, makes me nauseous with anticipatory fear and the dawning sense that I know a lot less than I ever thought I did.

[c] a good example of this inter-model inconsistency might be the difference between an MC sim at a large national bank and one I custom-built based on a relatively sophisticated view of how these things work, having built a bunch of them. The former said, at one point, 20% risk of failure while the latter said 80%. Frankly, I don’t put much stock in single readings from random sims, but the gaping hole between these two was worthy of consideration. That lack of convergence means that one’s homework is not done; it has merely begun. 

[d] e.g., “The desire to provide for a longer life together with the desire for more certainty by consuming now pull in opposite directions” – Levhari & Mirman (1977), or “Under more realistic conditions, ‘a straightforward relationship between riskiness and optimal consumption does not exist…’ In some cases, uncertainty elicits greater consumption; in other cases, greater savings.” – Collins (2016) quoting Levhari.

[e] there was a Teflon-like elusivity to the definition of phenomenology back in 1979. That hasn’t changed much. Here’s Wikipedia quoting Gabriela Farina: “A unique and final definition of phenomenology is dangerous and perhaps even paradoxical as it lacks a thematic focus. In fact, it is not a doctrine, nor a philosophical school, but rather a style of thought, a method, an open and ever-renewed experience having different results, and this may disorient anyone wishing to define the meaning of phenomenology.”

[f] yes, I too detest people that use casual Latin in papers and essays. My professors used to toss out phrases like “ceteris paribus” or “inter alia” like it was candy or beads at Mardi Gras and they couldn’t somehow say “all else being equal” or “among other things” in English. I generally try to avoid that kind of thing, but it seemed to make sense here.


[g] In their appendix on page 12, Robinson and Tahani (2007) show the following which is my point on the rough equivalence, at least in their 2nd equation, of feasibility and sustainability.  
[h] whether or not it’s fair to call finite-difference approximations for solving PDEs a form of simulation is beside the point. The explosion of the matrix required in time and wealth units, and the equations and iterations required to come to a conclusion, make simulation look like 2+2. Let’s call FD simulation-but-worse.


[i] Collins uses the analogy of ice for a side trip into the physics of boundaries with random variation. It’s not that I am borrowing or stealing the image, it’s that I grew up in Minnesota so I can own the metaphor for risk. 


References
----------------------------------------------------------------
Collins, P., Lam, H., Stampfli, J. (2015) Monitoring and Managing a Retirement Income Portfolio
Collins, P (2016) Annotated Bibliography on the Topic of ‘Longevity Risk and Portfolio Sustainability’ http://www.schultzcollins.com/static/uploads/2015/07/Annotated-Bibliography.pdf
Cotton, D. (2019) Negotiating the Fog of Retirement Uncertainty, Forbes 2019 https://www.forbes.com/sites/dirkcotton/2019/02/22/negotiating-the-fog-of-retirement-uncertainty
Estrada, J., Kritzman, M. (2018) Toward Determining the Optimal Investment Strategy for Retirement. IESE Business School and Windham Capital Management.
Fellner, W. (1943) “Monetary Policies and Hoarding,” The Journal of Political Economy, Vol 51
Knight, F. H., (1921) Risk, Uncertainty, and Profit, Houghton Mifflin Co
McGoun, E. (1995) The History of Risk “Measurement,” Critical Perspectives in Accounting 6, 511-532 Academic Press Limited
Michaud, Richard (2003, 2015) A Practical Framework for Portfolio Choice.
Milevsky, M and Huang H (2011), Spending Retirement on Planet Vulcan: The Impact of Longevity Risk Aversion on Optimal Withdrawal Rates.
Milevsky, M., Robinson, C. (2000) Is Your Standard of Living Sustainable During Retirement? Ruin Probabilities, Asian Options, and Life Annuities. SOA Retirement Needs Framework.
Milevsky, M. and Robinson, C. (2005), A Sustainable Spending Rate without Simulation FAJ Vol. 61 No. 6 CFA Institute.
Mindlin, D (2009), The Case for Stochastic Present Values, CDI Advisors
Robinson, C., Tahani, N. (2007) Sustainable Retirement Income for the Socialite, the Gardener and the Uninsured. York U.
Suarez E., Suarez A., Walz D, (2015) The Perfect Withdrawal Amount: A Methodology for Creating Retirement Account Distribution Strategies. Trinity Univ.
Taleb, N. (2010, 2007) The Black Swan, 2nd Edition. Random House
Taleb, N. (2014) AntiFragile
Taleb, N. (2017) Darwin college lecture: Probability, Risk, and Extremes.  http://fooledbyrandomness.com/DarwinCollege.pdf
Vollrath, D (2007), Graduate Macroeconomics I http://www.uh.edu/~devollra/gradmacrobook07.pdf
Yaari, M. (1965), Uncertain Lifetime, Life Insurance, and the Theory of the Consumer













