"Past a certain level of income, what you need is just what sits below your ego." - Morgan Housel
I have fiddled around a lot with spreadsheets, simulators, and formulas over the last several years exploring things like the impact of asset allocation, return volatility, or path dependence (sequence risk) on retirement outcomes. These are all important subjects, of course, but the topic of spending in retirement towers over all of them, both in the way it can ruin outcomes and in the way it is one of the few levers we really, truly can control. Spending, when we read about it in the academic literature, ranges from the constant-dollar spending of the 4% rule -- or, as Dirk Cotton once wrote: "In fact, constant-dollar spending is the only widely acknowledged spending strategy that results in portfolio ruin under reasonable spending assumptions." -- to various adaptive systems (like Waring and Siegel's ARVA) to no small number of conscious decision-rule overlays (like Guyton and Klinger or the Kitces ratchet rule; I have not explored rules-based systems much…yet).
But academic spending-speak never feels quite right to me. It's a cool and distant equation on a page rather than lived experience (except from retiree bloggers like Cotton or Darrow Kirkpatrick). It also, especially in its adaptive or rule-based forms, presupposes that change is easy, that downward change in particular is both easy and achievable, and that spending doesn't jump around like crazy sometimes -- i.e., it presupposes spending is under control, a bold assumption. Personal experience tells me that control is elusive, change is achievable but not easy, and spending is quite a bit more variable than it looks on paper. So, rather than a survey of spending systems or SWR methods, which others have done quite well, this post is merely a small attempt at a mini case study about the control of both the direction and variability of spending and how one might evaluate the evolution of spending over time.
A Side Trip on Utility.
In addition to simulated fail rates (or the utility of terminal wealth), a really common way I see researchers evaluate retirement consumption and consumption volatility is via economic utility math. That's fine as far as it goes, but there are objections. Some will say that there are an infinite number of utility curves, that there is no generic utility curve for everyone, that even if one can be customized it may or may not reflect real behavior, and that even if it does, no mortal retiree would really "get it" anyway. All likely true, but even beyond all that one can still object.
While the concept makes sense -- the satisfaction gained from consuming an extra unit of whatever is probably monotonically positive, incrementally declining, and describable by some equation -- I'm not so sure the standard concept of utility, from its origins in the St. Petersburg paradox of 1713 all the way to the constant relative risk aversion math used in random AdvisorPerspectives articles, works very well for a retiree.
[See note 1 here: a big fat disclaimer about any of this post
being either true or useful or even modestly well informed].
Yes, going from starvation to consuming a scrap of bread has
very high utility. And going from a junker car to something reliable has
positive incremental utility but maybe less than the scrap of bread increment.
And I suppose going from one Lear jet to two might have an immeasurably small
(I hope small) jump in utility. But for at
least one retiree, that model makes no sense.
Going to the second Lear jet is well beyond where consumption would make sense (or happen) for normal people, so my guess is that the utility of consumption declines quite rapidly after some inflection point rather than continuing to rise -- probably somewhere around the spending budget, which is (or should be) created using some prior knowledge of retirement math.
But before we go there, let's look at constant relative risk aversion (CRRA, one of a family of utility functions) first because it shows up in the lit quite often. It does a good job of capturing marginal changes and uncertainty in consumption. The formula as generally presented in retirement research is some form of: U(Ct) = Ct^(1-g) / (1-g), where Ct is consumption at time t and g is gamma, the coefficient of risk aversion. I've read, but cannot cite, that a "normal" risk aversion coefficient is somewhere around 2-4. If applied to a monthly spending vector using a gamma of 4 it might look like this:
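To make that concrete, here is a minimal sketch of the calculation in Python. The 60-month spending vector is a made-up stand-in (normal noise around $10k); only the CRRA formula itself comes from the text above.

```python
import numpy as np

def crra_utility(c, gamma=4.0):
    """CRRA utility U(c) = c^(1-gamma) / (1-gamma); log utility at gamma = 1."""
    c = np.asarray(c, dtype=float)
    if np.isclose(gamma, 1.0):
        return np.log(c)
    return c ** (1.0 - gamma) / (1.0 - gamma)

# Hypothetical monthly spending vector: ~$10k per month with month-to-month noise
rng = np.random.default_rng(0)
monthly_spend = rng.normal(10_000, 1_500, size=60)

u = crra_utility(monthly_spend, gamma=4.0)
print(u[:5])        # per-month utility (tiny negative numbers when gamma > 1)
print(u.mean())     # average ("expected") utility over the 60 months
```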
But it is not usually presented like that. Another way that I have found useful, and that researchers often use, is to evaluate periodic consumption in terms of utility, average or weight it to an expected value, and then invert the CRRA math to solve for the "certainty equivalent" (CE) of consumption: the certain level of consumption one might trade for the more uncertain and volatile stream (see https://en.wikipedia.org/wiki/Risk_premium). Here, for example, is 10k of monthly consumption rendered as a certainty equivalent at different levels of spending volatility.
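Here is a rough sketch of that CE calculation. Lognormal month-to-month noise is assumed so that consumption draws stay positive; the distribution, seed, and volatility levels are my own stand-ins, not the post's exact setup.

```python
import numpy as np

def crra_utility(c, gamma=4.0):
    c = np.asarray(c, dtype=float)
    return np.log(c) if np.isclose(gamma, 1.0) else c ** (1.0 - gamma) / (1.0 - gamma)

def certainty_equivalent(c, gamma=4.0):
    """Average the per-period utilities, then invert CRRA to get the CE of consumption."""
    eu = crra_utility(c, gamma).mean()
    if np.isclose(gamma, 1.0):
        return np.exp(eu)
    return ((1.0 - gamma) * eu) ** (1.0 / (1.0 - gamma))

rng = np.random.default_rng(1)
for vol in [0.05, 0.15, 0.30, 0.50]:
    # 10k of mean monthly consumption with lognormal volatility around it
    c = 10_000 * rng.lognormal(-0.5 * vol**2, vol, size=10_000)
    print(f"spend vol {vol:.0%} -> CE ≈ {certainty_equivalent(c, gamma=4.0):,.0f}")
```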
I have no idea how to interpret a certainty equivalent of zero, but I get that one might trade off a risky level of consumption for a lower level of certain consumption, and that volatile spending would at least cause anxiety, which may be the proper behavioral interpretation of the math. But this is absurd in ret-world for a few reasons: 1) it doesn't necessarily penalize a high, volatile level of spending that we know can destroy us (if the mean is high enough) compared to lower mean spending, unless the vol of the lower spending is somehow insanely higher, 2) it hides the fact that something like monthly spending volatility basically disappears when looked at annually, or when it is simulated generically and the return volatility of the portfolio comes into play, and 3) for a variety of debatable and "soft and not necessarily quantitative" reasons, spending volatility, even if it "disappears," is not always desirable in and of itself. Let's see if I can make some sense of this that anyone might possibly buy.
A Small, Maybe Weak, Case Against Constant Relative Risk
Aversion.
1. High, volatile spending can be incorrectly said to have higher utility than lower, less volatile spending.
We saw that in CE terms we'd trade off higher, uncertain spending for lower, more certain spending. But what if high and volatile spending is high enough that it overcomes the math of the CE, so that it evaluates to "better" but leads to higher fail rates? Take, for example, a situation where there are two spend rates: one is 10k per month with 15% vol and the other is 15k per month with 30% vol. The CE for the 10k, depending on the sim and the number of periods (60 here), is ~9.7k and the CE for the 15k is ~13k. While I acknowledge that the CE for 15k may drift down as the number of periods evaluated gets large, I used 60 periods as a reasonable range that a retiree might actually examine; bias, but reasonable bias. What that means is that while 15k may be ruinous as a spend rate, and the higher volatility should take it down another notch due to the utility math, in the end it gets a higher utility score because it's "higher enough" on average and we are using relatively short time frames to evaluate.
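A hedged sketch of that comparison is below. The exact CEs depend on the seed and on the (assumed lognormal) shape of the monthly draws, but with gamma = 4 they land near the ~9.7k and ~13k figures above.

```python
import numpy as np

def ce(c, gamma=4.0):
    """Certainty equivalent of a consumption vector under CRRA utility."""
    eu = (c ** (1.0 - gamma) / (1.0 - gamma)).mean()
    return ((1.0 - gamma) * eu) ** (1.0 / (1.0 - gamma))

rng = np.random.default_rng(2)
n_months = 60  # the relatively short window used above

# Hypothetical spend paths (lognormal so the draws stay positive)
low_spend  = 10_000 * rng.lognormal(-0.5 * 0.15**2, 0.15, n_months)
high_spend = 15_000 * rng.lognormal(-0.5 * 0.30**2, 0.30, n_months)

print(f"CE of 10k @ 15% vol ≈ {ce(low_spend):,.0f}")    # roughly 9.7k
print(f"CE of 15k @ 30% vol ≈ {ce(high_spend):,.0f}")   # roughly 13k
```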
2. Outside of CRRA the effects of higher-volatility spending can disappear if we are not careful.
This perhaps gives CRRA more power than it deserves, if you look at it a certain way. First of all, high monthly spending volatility can disappear when looking at annual data. I find this in my own spending: monthly spending can jump around, but over the years it washes out to a more or less steady annual clip. That does not mean volatile monthly spending is "ok" (to me, anyway; I honestly don't know how universalizable any of this is). Also, when simulated, high random draws are countered by low random draws. We know that is hard to pull off in real life. There, a high random draw is generally followed by, well, normal spending. That means volatile spending should really just be called what it is: "higher spending." Just for fun, though, let's look at what it looks like in a simple simulation. Here is a back-of-the-envelope sim where there is a $1M endowment and a 40k spend. Nothing but spending or the rate of return is allowed to vary randomly; everything else is constant. Run 100k times, this is what terminal wealth looks like when spending is allowed to vary, for different levels of return volatility:
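A minimal sketch of that kind of sim is below. The horizon, mean return, and the particular volatility levels are my own assumptions; the post doesn't spell out its exact settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs, n_years = 100_000, 30               # horizon assumed
start_wealth, base_spend = 1_000_000, 40_000
mean_return = 0.05                          # assumed constant expected return

def terminal_wealth(spend_vol, return_vol):
    """Terminal wealth across runs when spending and/or returns vary randomly."""
    w = np.full(n_runs, float(start_wealth))
    for _ in range(n_years):
        spend = base_spend * (1 + spend_vol * rng.standard_normal(n_runs))
        r = mean_return + return_vol * rng.standard_normal(n_runs)
        w = np.maximum((w - spend) * (1 + r), 0.0)   # floor at ruin
    return w

for rv in [0.0, 0.10]:                      # constant vs. volatile returns
    for sv in [0.0, 0.10, 0.25]:            # levels of spending volatility
        tw = terminal_wealth(sv, rv)
        print(f"return vol {rv:.0%}, spend vol {sv:.0%}: "
              f"median terminal wealth ≈ {np.median(tw):,.0f}")
```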
This shows that, under limited assumptions, if rates are held constant there is a tiny negative effect of spending vol (top panel), but otherwise it disappears when returns vary, either due to the overwhelming effect of the rate vol, or due to the counterbalancing effects of low and high spending (which we know may be unrealistic), or both. Kinda bogus, but it was a fun sim. Spending volatility really does matter, in my book, and I'll try to make the case, though I'm not sure how convincing it'll be. With respect to CRRA, the interaction here with the measured spend volatility is mixed or confusing, which I guess was my point.
3. For a variety of debatable and "soft and not necessarily quantitative" reasons, spending volatility in and of itself, even if it "disappears," is not always desirable. This is where I probably go off the deep end, but bear with me.
First of all, anything I'm about to assert would probably be shot down by an economist in a millisecond; I have no idea what I'm doing except playing a blogger game. Second, I think that a proper economist might deal with this stuff by also evaluating things like the utility of terminal wealth or other long-game optimization criteria. Here, however, I am going in a different direction. I say that in evaluating the utility of monthly retirement consumption, the fact that high (and low) consumption is above or below some level that is supposed to make sense for a retiree (i.e., the budget) means that it should have lower utility the further away it gets…if, that is, one is playing a utility game, which is debatable from the start. The problem for me is that in normal utility (or rather CRRA utility), higher absolute consumption is better under certain assumptions, and even high-vol consumption can be better, as I showed above. Here I am saying relative consumption (relative to the plan/budget) matters, and both high and low variance, within reason, should be evaluated as worse than plan.
Why would/should higher consumption be utility-penalized? Well, a few reasons: a) it might be that "above plan" spending has more to do with ego (see the epigraph) or lifestyle creep than real need or reasonableness, which might make it what I'll call (with some risk) morally un-rationalized, un-anchored spending[2], or if not that then at least "inefficient" or wasteful; b) incremental consumption, whether we like it or not, has social costs in my opinion. Every time I buy an incremental doodad (let's say it's the newest, greatest electronic thing of the year) that I have not budgeted and do not really need, I am also doing things like mining rare earth metals that don't need to be mined by anonymous exploited locals, throwing unnecessary packaging unnecessarily into the waste stream, requesting transportation and logistics carbon to be expended just for me, etc. c) Even if you don't buy a and b, I will assert without evidence, but from personal experience, that spending volatility is the first sign, or at least the planted seed, of a potential spend-up-trend which might be called lifestyle creep. While spend vol may "disappear" in a simulation, a sequence of high-spend months in real life would be as pernicious as a sequence of bad returns, except that a series of high spends is exactly what it sounds like: high spending. And if it lasts long enough it is nothing more than, again, high spending, and the first evidence of lifestyle creep. That means vol = bad…or at least potentially bad.
Root out the bad seed before the roots get too deep, I say. And d) if nothing else, highly volatile spending contributes to retirement uncertainty, anxiety, and potential instability, especially if one were to throw in chaotic feedback loops tied to high spending at the wrong time, a topic Dirk Cotton has covered quite well. Some or all of this is probably the original point of the declining part of declining-marginal-utility math; I just don't think it is captured well enough for us in its CRRA form.
But then why would under-consumption be utility-penalized? First, it is lower, and we all like to consume "more" (up to a point, as previously mentioned). But also, lower spending, past a certain point, is a form of unnecessary self-denial. One may be correctly and prudently "saving" to reserve for unknown liabilities like future non-recurring expenses or legacy budgets, but if those have already been factored in, it really is a form of self-denial at some point. There are complexities when it comes to retirement scenarios with guaranteed income and such that I have not explored, but in simple terms, spending too little in some forms under some assumptions is nothing other than an unnecessary crouch of retirement fear. In utility terms I'd ding it. At least today, in this post.
A Case for Amateur-Hack Utility.
I have no idea if this makes sense or has been done before, but to deal with my issues above I propose an amateur-hack utility-of-consumption function that might look like this (forgive me if it has been done before, or better yet…tell me where or how):
A. The budget target, the plan
B. Some reasonable level of variance
C. A penalty in utility terms for:
- monthly budget control fail,
- higher implied simulated fail-rate risk,
- ego fail for spending on what one doesn’t really need,
- the social cost fail idea,
- risk of a spend trend or lifestyle creep, etc.
D. A penalty in utility terms for:
- self-denial above and beyond reserving for late retirement risk and legacy,
- unnecessarily hidden or deferred spending
And, combining C and D, a penalty for variance in and of itself, for all of the above.
Ok, so that looks like a nice little quadratic function. I don't know what the prevailing wisdom in econ world is on quadratic utility, but let's wing it for now. Take one of the examples I used above with the 10k and 15k in spending, where the 15k "won" in CE terms even though it had higher volatility and higher fails, because it was "higher enough" in utility terms. So here we'll try it again. Let's set the "budget target" to 10k and then make up a fake utility function like U(Ct) = -5e-06*x^2 + 0.1*x, where x is monthly spending (this one happens to peak right at the 10k target). Now add a third and fourth scenario: a 10k spend with 10% vol rather than 15%, and an 8k spend with 10% vol. Using the fake formula it would come out like this in terms of spending, spend volatility, and utility "score:"
Spend    Vol    U(Ct)
15k      30%    292
10k      15%    484
10k      10%    494
8k       10%    474
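A quick sketch of how a score like that might be produced: the fake quadratic is averaged over noisy monthly draws, so the exact numbers wobble with the seed and the 60-draw window (both of which are my assumptions).

```python
import numpy as np

def hack_utility(x):
    """Made-up quadratic utility; it peaks at the 10k budget target (0.1 / (2 * 5e-06))."""
    return -5e-06 * x**2 + 0.1 * x

rng = np.random.default_rng(4)
scenarios = [(15_000, 0.30), (10_000, 0.15), (10_000, 0.10), (8_000, 0.10)]

for mean_spend, vol in scenarios:
    draws = rng.normal(mean_spend, vol * mean_spend, size=60)  # 60 monthly spend draws
    score = hack_utility(draws).mean()
    print(f"{mean_spend/1000:.0f}k spend, {vol:.0%} vol -> U(Ct) ≈ {score:.0f}")
```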
That's a little simplistic, not too thorough, and not very scientific, but you get the point (someday I'll actually study utility so I know what I'm talking about). But isn't this over-engineering things? Couldn't we just use standard deviation relative to a budget as a measure of non-control? Yes and yes, but I also want some malleability and I will need some asymmetry at some point. Downside spending does not offend me as much as upside. A perpetuity might be problematic in terms of self-denial, but that would also be an elegant problem to have and probably shouldn’t be penalized as harshly as over-spending. Also, the distance from the budget level to the max penalty on the downside is probably smaller than the distance on the upside. A little over-engineering in the math to make things asymmetrical can go a long way toward dealing with this, especially if and when we add higher-order terms like x^3 or x^4...
A Spending Control Case Study.
Now let's do a case study of spending control and make a case for at least thinking about a rigorous type of spending control (not "rules" or SWR, but control) borrowed from industrial process improvement methodologies to bend a spending process to our needs and situation. Then maybe we can layer on some of this utility stuff to evaluate the results.
If you buy the idea that spending variability is a bad idea, and I think there are a lot of reasons to do so even though it may not fall out of simulation with such a bad rap, then spending control becomes not just a good idea but essential. This is especially true since we don't live 10,000 simulated lives and then pick the best one. It's one bite of the apple or one whack at the cat, however you want to describe it. Better to control, and maybe over-control and under-spend in general, than to blow it on our one run.
They say that running out of money has infinite disutility and I believe
"them." So let's look at process control since spending is, for
better or worse, a process that has an outsize impact on retirement outcomes.
I spent the better part of the 90s building software or
managing software development projects or building companies that built and
operated software processes. We were heavily invested in process control and
improvement. It went by many names back in the day -- six sigma, ISO x000, continuous process improvement, etc. -- and was a core task. Mistakes were really, really expensive to fix after the fact, so we had to flush them out early by creating extremely efficient processes.
We borrowed old concepts from industrial process improvement where
variability in manufacturing "widgets" means either expensive
non-competitive production or dissatisfied and departing customers or both. The old manufacturing guys used statistical
process control to deal with this. The
basic idea was: 1) be aware that a process exists in the first place, 2) start
to measure it, 3) create a baseline, 4) seek to improve the process
through data analysis and improvement methods, 5) measure again, improve again,
measure again, etc. When it is stable and demonstrably improved, consider
optimizing the process one way or another. Here is Wikipedia:
Statistical process control
(SPC) is a method of quality control which uses statistical methods. SPC is
applied in order to monitor and control a process. Monitoring and controlling
the process ensures that it operates at its full potential. At its full
potential, the process can make as much conforming product as possible with a
minimum (if not an elimination) of waste (rework or scrap). SPC can be applied
to any process where the "conforming product" (product meeting
specifications) output can be measured. Key tools used in SPC include control
charts; a focus on continuous improvement; and the design of experiments. An
example of a process where SPC is applied is manufacturing lines.
While we are on SPC let's also talk about Kaizen. This is an idea tied to Japan but
originates out of early 20th century American industrial experience if not
before. It is usually but not always tied at the hip to SPC. W. Edwards Deming
gets the main credit for this kind of thing but there are other players.
Underlying Kaizen (or continuous improvement, or six sigma) is the basic principle of an iterative Deming-style improvement cycle that looks like this: plan, do, check (inspect), act (fix), repeat. Here is Wiki again:
Kaizen, Japanese for
"improvement." When used in the business sense and applied to the
workplace, kaizen refers to activities that continuously improve all functions
and involve all employees from the CEO to the assembly line workers. It also
applies to processes, such as purchasing and logistics, that cross
organizational boundaries into the supply chain. It has been applied in healthcare,
psychotherapy, life-coaching, government, banking, and other industries.
By improving standardized activities and processes, kaizen aims to eliminate
waste.
One of the major tools of SPC, though of course not the only one, is the process control chart. It is used to visualize the process and gauge the extent and consistency of improvements. It is this kind of chart/process that I think can apply directly to retirement spending. One can Google "process control chart" and find a ton of examples as applied in a variety of industries. Here is one example for a "flange manufacturing" process I lifted off of Google images that captures the basic idea:
This shows the basic steps pretty well:
Production (baseline)
- measure a baseline and determine the mean and standard deviation
Change 1
- try to improve the mean and standard deviation by whatever means
- measure again and show the results
Change 2
- try to improve (or optimize) the mean and standard deviation again
- measure again, show it again, and then do it iteratively
Ok, so the point here, if it wasn't clear, is that spending is a process, variance is wasteful or inefficient, and using statistical process control is probably a good idea, even if most people seem to hate spending too much time understanding what they spend. Which reminds me that all of this, and what follows, presupposes that one actually has an income statement and spending data and has a bias for assertive action on spending. Without that, anything else below here will be meaningless.
It’s pointless to try to figure out how much you’ll need in [retirement] savings or income if you don’t have a good understanding of how much it costs for you to live. - Ben Carlson
So, let's look at a case study in spending to see what it looks like and whether the amateur-hack utility math adds any value. Nominal amounts will be real but buried to protect the innocent. We will also borrow the simplistic SPC method of: 1) establish a baseline, 2) measure, control, and improve, and 3) iterate and optimize. Using this framework, here are seven years of spending data split into its three stages, in a case where a conscious choice to use a process-control approach was applied. The spending "budget" is not specified exactly, but you can view it as a goal of x% of the portfolio where x is < 4%, because 4% would be a bit insane for early retirees in 2017. Over those seven years it looked like this:
red - Monthly spend as a % of portfolio
blue - 12-month moving average of the red line
grey - +/- 2 standard deviations (12-month) of the red line around the blue line
red dotted - upper and lower control limits
grey dotted - mean expected value or "budget" for the time frame
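For anyone who wants to build the same kind of chart from their own data, here is a rough sketch of the series behind those lines. The monthly spend and portfolio values are made-up stand-ins (the case study's real numbers are buried), the budget level and control limits are assumed, and annualizing the monthly rate is my own choice.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for seven years of monthly data
rng = np.random.default_rng(5)
months = pd.period_range("2011-01", periods=84, freq="M")
portfolio = 1_000_000 * np.cumprod(1 + rng.normal(0.005, 0.02, len(months)))
spend = rng.normal(3_500, 700, len(months)) * np.linspace(1.4, 0.9, len(months))

df = pd.DataFrame({"portfolio": portfolio, "spend": spend}, index=months)
df["spend_pct"] = 12 * df["spend"] / df["portfolio"]     # red: monthly spend, annualized, as % of portfolio
df["ma12"] = df["spend_pct"].rolling(12).mean()          # blue: 12-month moving average
sd12 = df["spend_pct"].rolling(12).std()
df["upper_band"] = df["ma12"] + 2 * sd12                 # grey: +/- 2 std dev around the blue line
df["lower_band"] = df["ma12"] - 2 * sd12

budget = 0.035                                           # grey dotted: the "budget" (assumed)
ucl, lcl = budget + 0.010, budget - 0.010                # red dotted: control limits (assumed)

print(df[["spend_pct", "ma12", "upper_band", "lower_band"]].tail())
```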
Stage one.
Stage one is clearly the unsustainable, out-of-control process, i.e., the baseline. Measurement is the first act and it is revealing. Let's start with the idea that, in a retirement context, much of this can be analyzed via simulated fail rates rather than standard deviation or utility. And in fact, when we look at the fail rate estimates of this case based on annualized data over the period, we can see that stage 1 is very high risk.
A fail rate of 80%!?! Really? Yes, really. How's that for a wake-up call? Remember that number the next time someone criticizes simulation as a waste of time or incomprehensible to laymen. Simulation has its uses in extreme circumstances, and in this case its use, so often hard to interpret, is as a call to action.
Stage two.
Stage two makes heavy use of the income statement, an open
mind, and a release of any anchoring to the past in terms of lifestyle or
consumption relative to friends, peers, family, etc. Changes are made, cuts are deep,
expectations are adapted. Measurement continues and feeds back to the "change,
cuts, and expectations" in a loop. The process is finally stabilized and
deemed both under control and more efficient than in the past. Fail rate estimates have dropped
precipitously and the volatility of monthly spending has collapsed to a manageable range.
Stage three.
Stage three builds on stage two. Once under control, the
process can be optimized or at least bent in one direction or another so that
it can be demonstrated that not only volatility but direction is under the firm
control of the "user." New or changed assumptions about optimality
are rationalized and implemented. Fail rate estimates (partly due to market effects)
are at their lowest in seven years. In a retirement context it is only now
that SWR "rules" can be reasonably expected to make sense.
Going A Little Too Far Beyond Process Control and Fail
Rates.
In terms of the case study fail rates: stage 1 is bad, stage 2 better, stage 3 best. So this should be enough, right? In one sense, yes, of course. This type of process control chart and effort is probably well beyond what most people would consider reasonable or necessary. And fail rates by themselves, with or without the SPC context, can get abstract when misused, but they can be indicative. And I believe there are no small number of people in retirement who do not use an income statement at all or examine spending closely. For them a process control approach would be a very significant endeavor that would get them to "way more than enough," especially if they understand the interaction between spending and long-term outcomes.
In another sense, at least for the OCD: no, it's not really enough, because variance, even in a steady controlled state, can still be a problem, and the low optimized stage 3 spending, though it "wins" in fail-rate terms, is pretty deceptive. All we are seeing is that spending is going down and fail rates are going down (though the CE form of utility is jerking us around). Much of this could be considered obvious stuff. In fact, if spending goes to zero, fail rates will go to zero, but as we've pointed out, that is a little wanky because a spend rate of zero is not necessarily great if lifestyle is the goal. More would be better at that point, but again, only up to a point.
As far as me calling the chart deceptive, that's because: 1) the stage-one lack of control was influenced by externalities that would never have persisted, which unfairly makes it look worse than it probably was (but it at least makes a hell of a good case study), and the heroic efforts taken to reduce and control spending (which were real) may be over-dramatized here, 2) the bull market of 2010-2017 was helping to drive down the spend rate as presented here, because the spend rate is in percent form and not in absolute dollars, so the process looks better than it really is even though absolute spending levels are in fact being driven down, and 3) the stage 3 simulated success rates and the low, clearly "controlled and optimized" spending, while no doubt admirable achievements, hide the fact that the spend and fail-rate improvements are being "purchased" not just in terms of the soft costs of self-denial but also in real costs, such as deferred maintenance, that will come back to bite this case study's spending later. It just can't be seen here, and again, more spending would, counter-intuitively, be better.
So, declining spend rates, falling fail-rate estimates, and even
standard deviation aren't enough to convey what needs to be conveyed even
though all that tells a pretty good story.
That, I think, is where a legitimate use of utility math comes in. It's just that the function I've seen used the most doesn't quite seem to get at it the right way either. For example, using CRRA utility on this case study -- in its 'CE of expected utility' form -- the math ranks it this way for a coefficient of risk aversion of 4:
Stage 1 - best
Stage 2 - 2nd best
Stage 3 - worst
And that holds true all the way up to a coefficient of ~22, where stages 2 and 3 tie and stage 3 overtakes stage 2 based on the data I have, while stage 1 always wins. Except I have no idea what a risk coefficient of 22 really means. But that doesn't even matter, because we know that stage 1 is a flat-out fail due to the high average spend, the radical volatility, and the sky-high fail rates. It should never "win" in any terms, CRRA utility or otherwise.
Now let's try an amateur hack. Who knows if this is even close to something
legitimate. I'm going to try it
anyway. First, though, I'll set some
boundaries. I'll say that spend rates
below about 2% are probably in perpetuity territory with big legacy
implications. Since that's better than running out of money (nice problem to
have, eh?) notwithstanding all the self-denial talk, I'll give anything below
2% some minimum but constant positive utility. I'll also say that anything above about 5.6% has a constant utility of zero (I did another post/study that had a max withdrawal rate in that range, so I am arbitrarily picking a number near there). For spend rates inside that defined range I'll give them some kind of quadratic utility with an optimum around 3.5% (probably should have made it 3% these days…). We will ignore different risk aversion levels in this game. To hack this out I used an Excel trendline to get an equation to match a curve in that neighborhood and came up with this indefensible rookie monstrosity: U(Ct) = 7e-11*x^3 - 6.85e-06*x^2 + 0.1396*x - 565.56. Using a "max" function below around 2% and a different max function above 5.6%, it might look like this, in red:
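Since the Excel trendline was fit in chart units the post doesn't spell out, the sketch below uses its own made-up curve with the same structure rather than the fitted coefficients: a small flat floor below 2%, a hump peaking near 3.5%, and zero above 5.6% (the floor value and hump width are pure assumptions).

```python
def hack_rate_utility(spend_rate):
    """Piecewise 'egg' utility of an annual spend rate (e.g., 0.035 for 3.5%).

    Stand-in for the fitted curve above: constant minimum below 2%,
    a quadratic hump peaking near 3.5%, and zero above 5.6%.
    """
    FLOOR = 0.1                   # assumed minimum utility in perpetuity territory
    if spend_rate <= 0.02:
        return FLOOR
    if spend_rate >= 0.056:
        return 0.0                # beyond the assumed max withdrawal rate
    return max(FLOOR, 1.0 - ((spend_rate - 0.035) / 0.018) ** 2)

for rate in [0.015, 0.025, 0.035, 0.045, 0.060]:
    print(f"{rate:.1%} spend rate -> utility ≈ {hack_rate_utility(rate):.2f}")
```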
I think this kind of thing would make sense to absolutely no
one but the case study spender or the post writer. It looks like an egg or some geologic cross
section. Since it is probably not
academically legit, let's try it out just for the hell of it, a joy ride if you
will. To protect the innocent again,
I'll skip over the exact spend values. The stages, when applying this, are ranked:
Stage 1 - worst
Stage 2 - best
Stage 3 - second best
Good, that seems about right, but that may just be due to over-engineering and curve fitting. We'll also ignore for now that the absolute dollar "budget" would change each year or two. The annual spending and inflation and the endowment are constant enough in this case that we can fudge it a bit. But at least the ranking of the stages makes sense because of the way I defined utility, so that's pretty good.
Is it useful? Maybe. The best I can say is that it might be useful when evaluating the design and implementation of "process control experiments" after the fact. For fast, timely evaluation it's harder to say. In addition to the chart and basic spend rates, which really should be enough all by themselves[3] without adding all the funky layman's U-math, maybe a trailing 12-month utility calc to monitor the process[4], in addition to direct process monitoring, might be of some use, but it is probably useless in the end. It's also plausible that I could use this in a simulator to evaluate spending rules, and it's there that the richest opportunity lies, but I haven't looked at spending rules yet. My guess for myself is that it is unlikely this would make it into daily or monthly practice in anything like its current form anytime soon. Too complex, and I would be forever forgetting what it means. It was a fun game, though, and I guess I convinced myself in the end, just by talking myself through this, that spending control, just by being aware of it and monitoring it in any form, is more than likely worth the effort, if I believe the various assumptions I've made.
---------------------------------------
[1] I am not an economist and not really all that good at
math and I do not have access to economists, math teachers, professors,
professionals, staff, students or colleagues.
I have access only to me and Google.
All that aside, please do not take any of this at face value or as true
or as advice. This is just a game I'm playing.
While there might be some value for me (only) to playing this game,
that's about as far as it goes.
[2] Yes, I realize that the instant I say the word moral,
eyebrows raise. But really, that's what it is, right? Maybe it's just because I
come from a northern state and grew up among the descendants of Scandinavian
and German Lutherans for whom excess consumption was a moral issue. In my new state of FL, excess consumption is,
rather, the state sport and why some wags have said self-interest should be the
state motto and have claimed our biggest export is financial fraud. I'm sure there is utility math for Florida
that would look different.
[3] The process control chart by itself is a little anal and
I doubt most retirees would take it that far in the first place. If they did the chart all by itself is
probably more than sufficient as a boundary management and process control tool. Utility calcs are probably a step or two (at
least) too far.
[4] I'm not even sure this makes sense, but if it did, a rolling 12-month average of U(Ct) might look like this:
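A minimal sketch of that rolling calc, with a made-up declining spend-rate series and the same stand-in utility curve sketched earlier (everything here is hypothetical):

```python
import numpy as np
import pandas as pd

def hack_rate_utility(x):
    # same stand-in piecewise curve sketched earlier
    if x <= 0.02:
        return 0.1
    if x >= 0.056:
        return 0.0
    return max(0.1, 1.0 - ((x - 0.035) / 0.018) ** 2)

rng = np.random.default_rng(6)
# Hypothetical annualized monthly spend rates drifting down over seven years
spend_rate = pd.Series(
    np.linspace(0.055, 0.028, 84) + rng.normal(0, 0.004, 84),
    index=pd.period_range("2011-01", periods=84, freq="M"),
)

rolling_u = spend_rate.apply(hack_rate_utility).rolling(12).mean()
print(rolling_u.tail())
```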
What does this tell me? Right now it says the case study
spender should spend more but that's kind of a
broad brush.