1. The Problem Zone
If you've been following the blog you know I've been playing
around with some spending simulation and math lately. This has been in order to get myself more
familiar with the more advanced spending PV calculations one might do in a
plausibly academically-grounded and robust household balance sheet (HHBS). The
balance sheet is a tool I lean on pretty heavily for managing my retirement in what
looks like an increasingly post-human-capital world. In addition, given the several types of
simulation tools I've built over the last year or two, along with the recent
HHBS work, I wanted to give myself a much better intuitive sense of some of the
differences between the concepts of feasibility and sustainability. In this self-directed learning process I have
built a couple of rudimentary spending simulators that kick out a giant pile of
spending NPVs so that I can do statistics on them, such as the mean (often used
when defining stochastic present value [SPV]), percentiles, or rank. Since the
distribution can take on decidedly non-normal shapes in scenarios with high
variance, summary stats other than the mean can provide interesting insights
in those particular situations.
Since I don't have a serious background in statistics,
probability, or calculus, and my last formal math class was in about 1977 (if you
ignore grad school), I've been casting about, on and off and here and there, for a
year or more for help on notation and concepts that are probably a step or two above what
I can extract from Google. It's amazing how little access I have these days to
the level of help I need and how often I have to revert to a search engine. But I
have had some successes of late. In addition to the feedback I've received from
the small wins, I have also written notes to myself along the way or drafted
emails using some of the material below in order to get prepared to communicate
my questions to others (e.g., a pension risk director at PwC or a retired
mathematician). This post is a more organized
version of the sum of the various self-notes and emails. All of this is likely of interest to no one
but myself but the process of integrating and posting it helps me codify this
stuff in my own head. If someone else
finds it useful, so much the better. Some of this is redundant with a past post or two.
2. The Simulation Zone
The simulation I was trying to do most recently takes a
"spending plan" and animates the terms with random variables to
create a distribution of net present values or, when a summary stat filter is
added (e.g., expected value or mean), we can call this a "stochastic
present value." Any of the
individual paths of a spending or consumption plan in the simulation (call the
consumption plan as a whole "c" or ct in period t) could
be: a) a constant ($1 or 40k say), b) a series (say geometric) that trends up
or down in a deterministic way, c) an arbitrary fixed custom path plugged into
the sim, d) some kind of random walk with or without trends or discontinuities;
this variance would include inflation, planned age-based inflections, rules,
shocks, etc.; or e) "other" because, well, there is always
other. I am working with "d" for now.
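To make option "d" concrete, here is a minimal sketch of one way to generate such a path (Python for compactness, though my working tools are Excel and R). Every parameter name and value here is an illustrative placeholder, not anything canonical:

```python
import random

def spend_path(c0=40_000, years=30, infl_mu=0.03, infl_sd=0.01,
               noise_sd=0.02, seed=None):
    """One random-walk spend path (option 'd'): spending drifts with a
    random inflation draw each year plus some idiosyncratic noise.
    All parameter names and values are illustrative placeholders."""
    rng = random.Random(seed)
    c, path = float(c0), []
    for _ in range(years):
        path.append(c)
        # next year's spend: this year's spend grown by a random inflation
        # draw plus extra "just plain other" spending randomness
        c *= 1 + rng.gauss(infl_mu, infl_sd) + rng.gauss(0.0, noise_sd)
    return path
```

Trends, inflections, rules, and shocks would be layered on top of this basic loop.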
I try to create randomness in the following variables: expected lifetime
in years, some elements of the spend path, and the discount rates. In the past
I have also tried to introduce spend shocks that sit outside the normally (or
non-normally) distributed, probability-based spending variance (see Dirk Cotton
on this) but not in this version. The randomization of the discount rates has
some precedents in the pension actuarial business but I gather that it has had a
bit of controversy associated with it depending on the audience. I'm going with
it for now, though.
3. My Attempt at the Expected Value Notation for the
Spending Simulation
Here is my attempt at the expected value notation and the
terms. This was pushed forward quite a bit with a little help from David Cantor
and his crew at PwC:
- x is the number of iterations of the simulation, say 10,000
- N is a random variable for lifetime in years that is selected for each sim iteration. The number of years is chosen using a non-normal probability distribution for mortality that can be extracted empirically from an actuarial life table (Society of Actuaries or Social Security, for example) or can be approximated with Gompertz-Makeham math. I am doing the latter, where the underlying process can be described by ln[p] = (1 - e^(t/b)) · e^((x-m)/b), where p is the conditional survival probability, x is age, t is time in years, m is the mode, and b is a dispersion factor (see Milevsky's 7 Equations book, Ch. 2).
- Ct — for now, for simplicity, think of c as a constant, say $1 or $40k, with or without inflation adjustments.
In practice C is an arbitrary path that starts with some kind of
choice and intent and then, over subsequent time periods, changes with things
like random inflation, predictable age-based inflections and/or planned
decision rules, straight-up just plain "other" randomness in
spending, the force of spending or portfolio shocks, and so on. That makes Ct not
so much a random variable -- which it no doubt is without the rules or
inflections or shocks -- but rather a chopped-up, arbitrary, unpredictable, and
sometimes chaotic path. In the simulation I can either program that path
to vary mostly as described from iteration to iteration, or alternatively I
can feed the simulation a fixed vector of a future spending path that
represents one version of the plan submitted for contemplation -- a
contemplation that might be necessary when it comes to variability in
rates or longevity, not to mention the concept of chaos (search theretirementcafe.com for that last one). So let's call C =
f(x1, x2, x3, x4, …, xn) so it is clear that it is not a simple constant (buh-bye
4% rule); it's a complex and sometimes arbitrary -- if not what can feel
like mysterious -- process.
- D is a randomized discount rate. Certain pension actuaries have advocated for this approach given the uncertainty in the returns from the reference rates and portfolios used for planning purposes. Not every actuary or pension expert is all-in on this idea, I hear. The denominator of the equation can be rendered with either (1+d)^t or e^(dt). The standard deviation of the randomized D can also be pushed towards zero so that the term approaches a deterministic (1+d)^t. An alternative sim that I've done goes with the (1+d)^t.
- EV(npv) - While the EV is cool and all, and it sometimes closely matches the deterministic outcomes of other versions of this effort (see one below), if you ditch the EV on the left and the division by x on the right then what you get is a giant pile of 10,000 NPVs coming out of the simulation. If we were to play around with the various parameters we could change the shape of the distribution (it moves towards a more lognormal-looking kind of thing when the model is highly variable). That means we might often be more interested in the median or the Pth percentile, say the 75th or 95th, than we would be in the EV.
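Putting the pieces together, a stripped-down sketch of the whole simulation might look like the following. The Gompertz mode m, dispersion b, and discount-rate parameters are placeholders I made up for illustration, and I draw a single discount rate per iteration as a simplification:

```python
import math
import random
import statistics

def gompertz_years(age, m=90.0, b=8.5, rng=random):
    """Sample a remaining lifetime in years by inverse transform from the
    Gompertz survival form above: ln[p] = (1 - e^(t/b)) * e^((age-m)/b).
    Setting p to a uniform draw u and solving for t gives the line below."""
    u = rng.random()
    return b * math.log(1 - math.log(u) * math.exp(-(age - m) / b))

def sim_npvs(age=65, c=40_000, d_mu=0.04, d_sd=0.02, x=10_000, seed=1):
    """The giant pile of x spending NPVs: a random lifetime N per iteration,
    constant spend c, and one randomized discount-rate draw per iteration."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(x):
        n = int(gompertz_years(age, rng=rng))  # random lifetime, whole years
        d = rng.gauss(d_mu, d_sd)              # randomized discount rate D
        npvs.append(sum(c / (1 + d) ** t for t in range(1, n + 1)))
    return npvs

npvs = sim_npvs()
ev = statistics.mean(npvs)                    # EV(npv)
med = statistics.median(npvs)                 # median, for non-normal shapes
p95 = statistics.quantiles(npvs, n=100)[94]   # 95th percentile
```

Swapping the constant c for a randomized spend path, or tightening d_sd towards zero, changes the shape of the resulting distribution as described above.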
4. Context and Antecedents
A. A deterministic version of the equation might look something
like this one that I made up:
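In rough notation, summing survival-weighted, discounted spending over time:

CSL = Σ_{t=1}^{∞} [ ct · tPx / (1+d)^t ]

(the optional hyperbolic discounting parameters would go into the discount term in the denominator)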
where CSL is a custom spend liability or estimate, ct is
the spend path, tPx is a conditional survival probability for someone
age x at time t, extracted from the same lifetime process described
above, and d is the discount rate (with some superfluous hyperbolic-discounting
parameters in there to capture some time subjectivity, should one want to do
that kind of thing). If you ditch the hyperbolic terms and change ct
to 1, this is basically an annuity formula. Also, the infinity term could be
changed to 120 − age without much effect because the conditional survival probability goes
towards zero around that point.
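As a sketch of that deterministic calculation, with the hyperbolic terms dropped, here is a minimal version. The Gompertz parameters and the 4% discount rate are illustrative placeholders:

```python
import math

def survival(t, age=65, m=90.0, b=8.5):
    """Conditional survival tPx from the Gompertz form used earlier:
    ln[tPx] = (1 - e^(t/b)) * e^((age - m)/b). m and b are placeholders."""
    return math.exp((1 - math.exp(t / b)) * math.exp((age - m) / b))

def csl(c=40_000, age=65, d=0.04, max_age=120):
    """Deterministic custom spend liability: survival-weighted, discounted
    constant spending summed out to age 120 (past which tPx is ~zero)."""
    return sum(c * survival(t, age) / (1 + d) ** t
               for t in range(1, max_age - age + 1))
```

With c set to 1 this collapses to the annuity factor mentioned above.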
B. If the simulation equation I'm pitching up above makes
sense to you and if you were to then envelop Ct with a utility function -- and thus
render the EV as a utility function that one might want to maximize
after screening out spend paths that are non-admissible due to some kind of
constraint -- then you’d be much, much closer to something called the "lifetime
utility of wealth without annuities" proposed by M. Yaari ("Uncertain
Lifetime, Life Insurance, and the Theory of the Consumer") in 1965 and seconded
by Milevsky and others later. One of the four "cases" in Yaari's paper -- case A, the one with a wealth
constraint but without annuities -- ends up (after some derivational twists and turns)
looking like this:
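In rough notation, writing the subjective discounting term as e^(-αt) and using the terms defined next:

V(c) = ∫_0^T̄ Ω(t) · e^(-αt) · g[c(t)] dt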
where [0,Tbar] is the uncertain life interval, omega is a
term for the probability that the consumer will be alive at time t, alpha is a
subjective discounting term, and g[c(t)] is the utility of an arbitrary
consumption path that has constraints like wealth being >= 0 for all t. This
is also where (for his case A and what he calls the Fisher problem) the ultimate
task is to "find an admissible plan c* such that V(c*) >= V(c) for all
admissible plans c." Note that V(c) ends up random because T is random,
which was the whole point of his paper. More recent academics have covered this
topic in different ways, notably Moshe Milevsky in various books and papers. Let's just say that in this type of context my
amateur-hack simulation and equation would perhaps represent (or describe), in
vaguely human terms, some c or set of c rather than c*. That was my
whole point.
C. Yaari has hard admissibility constraints (e.g., wealth
>= 0 for all t) for consumption plans, plus utility functions and an optimization
assumption for the version of the utility function he describes in his case A.
That means that we are not really in Yaari-world when using my
amateur math. But we are working in the context of a household balance
sheet and we are working in discrete (not continuous) time. In that
world, if the EV[stochastic present value] (or alternatively the Pth percentile?)
of a randomized spending plan (let's call this the spend liability or
maybe, like Dimitry Mindlin does, a "commitment") were to be more
than available assets (or alternatively assets are less than the cost of a life
time annuity…which, by the way, is merely an imperfect proxy for the PV of the
consumption plan), then we would likely have an infeasible plan and it would be
rejected or at least modified (there is the Yaari constraint. Trial and
error, yes, of course, but a constraint nonetheless). Let's call this effort using
present value of spending (or an observable annuity "proxy") plus
currently observable asset values: feasibility[1]. There is not much I can say here
yet about utility functions or optimization or maximization processes for plans
that pass the admissibility test. Future
post maybe? Note that there is also a case for knowing not just things like the
constraints, admissibility, and the optimal plan but also the "process"
version of all of this, i.e., we should know something about both distance and
vector -- speed and direction of movement towards or away -- relative to the
constraint(s) over time where the inputs are unstable. This, if I have
read things rightly, is a type of free boundary problem, about which I know
almost nothing; but it is at least worth noting.
D. Contra the idea of initial feasibility that was broached in the last paragraph,
the "sustainability" of a plan is a different topic entirely (or is
it?!). It is usually referred to as the risk of ruin and has a well-known
history starting, for my purposes, with Kolmogorov, but it can now be considered
to be in the capable hands of academics like Milevsky and Huang
(http://www.math.yorku.ca/Who/Faculty/hhuang/papers/Ruined.pdf) or advanced
practitioners like Collins[1]. To me, based on the last two
years of thinking about this, the basic idea behind sustainability analysis is
that by connecting a forthcoming expected (at least initially independent[4])
spending process -- and here we are in a forward dynamic simulation rather
than a backward PV analysis[2] -- to an initial wealth state and an independent
return generating process[3], we have created an expectation for a third joint
process we can call "portfolio longevity in years" that has some predictable
statistical properties unique to itself. Ruin risk, then, would
be the estimate of a failure probability (running out of money before we run
out of life) that comes from joining the probability distribution of the
"portfolio longevity process" with a fourth
process: longevity, or rather "conditional survival probability" (is it really independent?
Mostly, but I also read a paper recently about feedback loops between human
longevity, spending, and highly sustainable portfolios; I can't find it, of
course). While this type of ruin analysis
is by its nature fake and does not really predict the future, it does have some
usefulness in the sense that complex and dynamic interactions between wealth,
longevity, spending decisions, taxes, etc. can be modeled in ways that are hard to figure
out elsewhere. So let's do this. Let's call
"sustainability" a soft methodology of process analysis while
feasibility tells you something today about whether you can even start by using,
as Collins says, hard and objective "current observables." That is a good enough distinction for now.
On the other hand I do have to say that the further I go
into this whole stochastic present value thinking thing (let's assume you agree
with this approach…but you don't have to), especially when it comes to adding uncertainty
to discount rates as a proxy for knowing something about the ebb and flow of future
return generation processes, the closer I get to thinking spv (i.e., feasibility)
is really a type of simulation (i.e., sustainability) by another name. The big thing
missing in feasibility, of course, might be the dynamic decision-making
interaction process that happens between spending, wealth and longevity
over multiple (modeled) periods. But note that since we have, in the Yaari sense, a
constraint that wealth is >= 0 for all t, we have already
discarded in the first place plans that would likely not have been sustainable
in the "note[1]" sense. In other words, the spending/feasibility
analysis in this "Yaari Case A" mode has a certain degree of
sustainability already baked into it by way of what is admitted and
discarded. Collins[1] happens to make a
stark distinction between feasibility and sustainability, which is useful and likely correct, but perhaps I'll call
this distinction something more like "shades of semantic grey," which
sounds like a movie.
As a side note: I once simulated the ruin concept directly (but
it wasn't Monte Carlo in that case,
it was a type of joint probability approximation of a PDE) with successful
results that satisfied Kolmogorov's PDE and compared well with a finite
differences solution to his equation. That was in addition to the several "real"
simulators I built before, which means I am no stranger to sustainability analysis.
Also, just for fun, here is Yaari's version of a vote for
feasibility analysis in his case A scenario with-constraint-without-insurance: "some people think that the institutional framework makes
it virtually impossible for a man in our society to die with a negative net
worth. For this reason it is of interest to see what the consumer's optimal
plan looks like given that the constraint S(T) >= 0 must hold with
probability one."
E. Since my spending plan variable "c" is roughly the
same as (or at least contained within) the consumption plan "c" we
find in more elegant and better informed calculus that goes back to at least
1965 (before then, really, I just don't know the scope) it seems like my
equation and efforts might be considered redundant or superfluous or even amateur-hackish.
Probably true for the first two; most certainly true for the last. But consider this: 1) I don't know calculus
so I need something else, 2) I need something tractable and implementable that
I can actually work with in Excel or R, 3) I want to be able to efficiently communicate
to others what I am doing, and 4) the retirement finance literature I read seems
woefully short -- except as noted above and some work by Dirk Cotton and a few
others -- on respect for spending as an independent[4] and funky
process, a process that is neither constant nor static nor truly random in the
probability theory sense. The literature
is replete, rather, with studies that gloss over the challenges of modeling spending
by just assuming it away as constant or constant with inflation adjustments or
a percentage of a portfolio or a rule tied to portfolio and age; those seem
like the easy way out (even if we layer on ever more complex spending rules). Spending
in this easy sense becomes merely the portfolio's annoying but otherwise
uninteresting little brother. But
spending, as an independent and unknowable and possibly bankrupting process, is
worthy, in my opinion, of consideration on its very own terms.
------------------------------------------------------
[1] I am not personally very deep on the formal definitions
of feasibility and sustainability. I
guess I am using them in the commonsense sense.
Yaari refers to feasibility this way: "Unfortunately, the wealth
constraint, S(T) >= 0, also depends upon T so that a given consumption plan
may be admissible for one value of T but inadmissible for other values. This
difficulty is sometimes called the feasibility problem." On the other hand
here is Patrick Collins on the subject:
"Sustainability of adequate
lifetime income is a critical portfolio objective for retired investors.
Commentators often define sustainability in terms of (1) a portfolio’s ability
to continue to make withdrawals throughout the applicable planning horizon, or
(2) a portfolio’s ability to fund a minimum level of target income at every
interval during the planning horizon. The first approach focuses on the
likelihood of ending with positive wealth, or, if wealth is depleted prior to
the end of the planning horizon, on the magnitude and duration of the
shortfall; the second focuses on the likelihood of consistently meeting all
period-by-period minimum cash flow requirements…‘Sustainability’ differs from
the concept of ‘feasibility.’ Feasibility depends on an actuarial calculation
to determine if a retirement income portfolio is technically solvent—current
market value of assets equals or exceeds the stochastic present value of the
cash-flow liabilities. If the current market value of assets is less than the
cost of a lifetime annuity, the targeted periodic withdrawals exceed the
resources available to fund them. In short, the portfolio violates the
feasibility condition. Determination of the feasibility of retirement income
objectives is not subject to model risk because the determination rests on current
observables—annuity cost vs. asset value—rather than on projections of
financial asset evolutions and the distribution of longevity. Although it is
important to track both risk metrics—sustainability and feasibility—as part of
prudent portfolio surveillance and monitoring, the remainder of this article
focuses on the sustainability / shortfall probability risk metric."
[2] Note that while spending by its nature starts as, and
can continue to be, an independent process here in simulation, as in
real life, it doesn't have to be independent and can in fact (should) respond
to changes in the portfolio (and longevity) up to the point when the constraint
of zero wealth is hit. See more in note 4.
[3] Usually modeled in fake sim world with some type of
geometric Brownian motion process with drift (return) and diffusion (variance)
parameters. I read some critiques of GBM used in this way once but otherwise it
seems to work pretty well, at least in fake-world.
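A bare-bones annual-step version of that drift-and-diffusion process, with purely illustrative parameter values:

```python
import math
import random

def gbm_path(s0=1.0, mu=0.05, sigma=0.12, years=30, rng=random):
    """Annual-step geometric Brownian motion: drift mu (return) and
    diffusion sigma (variance). Values are illustrative placeholders."""
    path = [s0]
    for _ in range(years):
        # each step multiplies by a lognormal one-year growth factor
        path.append(path[-1] * math.exp(rng.gauss(mu - sigma ** 2 / 2, sigma)))
    return path
```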
[4] I haven't thought about this "spending is
independent" thing very carefully and no doubt I am on shaky ground in the
academic definitional sense. But as an
amateur I get to do that so let's think this through. While it is true that a return generating
process is independent and knows nothing of longevity or spending it is
not really or necessarily true that spending is independent of either returns
or longevity. Ignoring the latter, I can maybe say that it would defy common
sense that spending would not change in behavioral terms if either past or
expected forthcoming returns were bad or expected to be bad. No one willingly
spends themselves into oblivion except for the oblivious and maybe addicts and
teenagers. But this is a behavioral or
cultural argument, not a finance argument as such. Then, in finance terms, since we are
discounting using rate assumptions from somewhere that are either deterministic
or not, the present value of spending or its stochastic present value distribution
instantiation certainly does know something about return generating
processes. So is it really independent? Probably not.
On the other hand when I go to the grocery store or the gas station or
break my leg or 1-click on amazon, those events really know nothing about
return generating processes and at least they feel independent. A counter-counter argument might be that in
choosing lifestyles we have already rejected choices that are likely to fail in
either the feasibility or sustainability sense both of which do know a lot about
both return generation and longevity hence spending is not really independent
at all because return/longevity risk is already baked into spending choices.
But! A counter-counter-counter argument might be that for the
"admitted" range of lifestyle choices after we have rejected the ones
that are not convivial to avoiding the abyss, those paths are mostly
independent. I think. Maybe. I'm calling
it independent then because it sorta kinda is most of the time or at least
it is for my current personal path. Or it is until something bad happens and
then it isn't because then I'll have to reevaluate all of this all over
again. Independent for now. Tomorrow I
will change my mind.