Intro
The short answer to the title is that the machine's output looks like it shifts from finance to economics. That confused me at first, but I think I have a bead on it now. First we'll look at where we've been with: a) lower risk aversion (small, error-prone sampling), and b) slightly higher risk aversion (again with the smaller sampling). Then I'll change the sampling a bit to see what happens. Finally I'll try to explain what I think I'm seeing.
What do I mean by sampling and outliers?
As the machine/model walks through the meta-sim -- where "1 iteration = 1 life" and it then steps year by year within each life -- it checks, at each age and for whatever wealth level and spend rate it happens to be at, a forward consumption-utility simulation that estimates lifetime consumption utility. It does this in order to compare a course of action (changing the spending) to a baseline (what it would have done absent the change). Since that is processor-heavy and since I was just playing around, I originally kept the iterations for that internal mini-sim low, say 100. That is "the sample," and since it is technically sampling from infinity, it is a laughably small sample. In this post I increased it to 300, which is still laughably small but also painfully slow: on AWS with 4x4-core (so 16 CPUs) it takes about 50 minutes for 1000 iterations of the meta-sim. I later nudged it down to 200 out of impatience, but that didn't change the conclusions much.
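Roughly, the nesting looks something like the sketch below. This is a minimal, hypothetical rendering, not the actual model: the return process, the CRRA utility, the spend-change rule, and all the parameter values are stand-ins. The point is just where the inner sample size sits, and why it gets multiplied by every age of every simulated life.

```python
import numpy as np

# --- hypothetical stand-ins; not the actual model ---
N_LIVES = 50        # meta-sim iterations ("1 iteration = 1 life"); small here so the sketch runs fast
N_INNER = 100       # inner mini-sim sample size (100 -> 300 -> 200 in the post)
HORIZON = 40        # years of retirement, illustrative only
GAMMA = 2.0         # risk aversion, illustrative only

rng = np.random.default_rng(0)

def crra_utility(c, gamma=GAMMA):
    """Constant-relative-risk-aversion utility of consumption."""
    return np.log(c) if gamma == 1 else (c ** (1 - gamma) - 1) / (1 - gamma)

def lifetime_utility_estimate(wealth, spend, years_left, n=N_INNER):
    """Forward mini-sim: crude Monte Carlo estimate of remaining
    lifetime consumption utility for a given wealth / spend policy."""
    total = np.zeros(n)
    w = np.full(n, wealth, dtype=float)
    for _ in range(years_left):
        c = np.minimum(spend, w)              # consume the planned spend (or whatever is left)
        total += crra_utility(np.maximum(c, 1e-9))
        r = rng.normal(0.05, 0.12, size=n)    # hypothetical return process
        w = (w - c) * (1 + r)
    return total.mean()

# meta-sim: at each age, compare a candidate spend change to the baseline
for life in range(N_LIVES):
    wealth, spend = 1_000_000.0, 40_000.0
    for age in range(65, 65 + HORIZON):
        years_left = 65 + HORIZON - age
        base  = lifetime_utility_estimate(wealth, spend, years_left)
        trial = lifetime_utility_estimate(wealth, spend * 1.05, years_left)
        if trial > base:
            spend *= 1.05                     # adopt the change if it looks better
        # ... evolve wealth one year and continue (omitted) ...
```

Even in this stripped-down form you can see the cost: the inner estimate is called twice per age per life, so bumping N_INNER from 100 to 300 triples the bulk of the run time.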
The main difference, an obvious statistical point, is that the dispersion of the sampling distribution narrows a bit, so the relative impact of outliers (in lifetime consumption utility) is dampened. I'll try to interpret that later.
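A toy illustration of that narrowing, using a made-up skewed distribution (not the model's actual utility distribution) as a stand-in for the lifetime-utility draws:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_lifetime_utility(n):
    # hypothetical skewed, fat-tailed stand-in so occasional outliers dominate small samples
    return -np.exp(rng.normal(0.0, 1.5, size=n))

def sd_of_mini_sim_estimate(n, reps=5000):
    """Std dev of the mini-sim's output, i.e. the mean of n draws, across many repeats."""
    means = np.array([draw_lifetime_utility(n).mean() for _ in range(reps)])
    return means.std()

for n in (100, 200, 300):
    print(n, round(sd_of_mini_sim_estimate(n), 4))
# dispersion of the estimate shrinks roughly like 1/sqrt(n), so a lucky or unlucky
# outlier moves the 300-draw estimate less than it moves the 100-draw one
```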