Roy Spencer’s Latest Silver Bullet
Posted on 22 May 2011 by bbickmore
Barry Bickmore is a geochemistry professor at Brigham Young University who runs the blog Anti-Climate Change Extremism in Utah. He has graciously allowed us to re-publish his excellent post on Roy Spencer's latest "silver bullet" claim that climate sensitivity is low.
Summary: Roy Spencer has come up with yet another “silver bullet” to show that climate sensitivity is lower than IPCC estimates. That is, he fits a simple 1-box climate model to the net flux of heat into the upper 700 m of the ocean, and infers a climate sensitivity of only about 1 °C (2x CO2). There are several flaws in his methods: inconsistent initial conditions, failure to use the appropriate data, and failure to account for ocean heating deeper than 700 m. (He partially fixed the last one in an update.) All of these flaws pushed his model toward a lower climate sensitivity estimate. When the flaws are corrected, the model estimates climate sensitivities of at least 3 °C, which is the IPCC’s central estimate. In any case, a simple 1-box climate model does not appear to be adequate for this kind of analysis over only a few decades. But while Spencer’s latest effort doesn’t really do any damage to the consensus position, it turns out that it does directly contradict the work he promoted in The Great Global Warming Blunder.
Since I launched my blog a year ago, I’ve had the chance to examine the claims of a number of climate contrarians. One thing that has surprised me is how often the contrarians seem to think they’ve come up with a “silver bullet” to shoot down mainstream scientific views about how sensitive the climate is to forcing by greenhouse gases, etc. By “silver bullet” I mean a very simple, direct, and clear demonstration that the mainstream is flatly wrong. Why is this so surprising? Because even though climate response is a complicated subject, overall sensitivity has been estimated at least nine different ways, and many of these methods are based on paleoclimate data, rather than model output. Since all of these methods have produced roughly the same answer, the vast majority of climate scientists have concluded that the climate is pretty unlikely to be insensitive enough to avoid major problems if we continue to burn fossil fuels like we have been. In other words, the possibility of a silver bullet in this case seems remote.
In my career as a scientist I’ve come up with a couple of “silver bullets,” although they weren’t about anything so dramatic. The fact is, though, that nine times out of ten when I’ve gotten excited because I THOUGHT I had one, further investigation showed that I had made some minor mistake, and the scientists whose work I was examining were in fact right. What bothers me about the climate contrarians is that it doesn’t seem to occur to many of them to keep digging once they get an answer they like.
Roy Spencer is a prime example of a contrarian scientist who exhibits this tendency. As I noted in my recent review of Spencer’s The Great Global Warming Blunder, he has a history of publishing dramatic new evidence for low climate sensitivity… and then letting others sort out the serial errors he’s made. E.g., he said the reason he published his book was that he could virtually prove climate sensitivity is low and recent warming has been the result of chaotic natural variation, but establishment scientists had blocked publication of this silver bullet in the peer-reviewed literature. I took apart his work, however, and found out that he was only able to get the answer he liked by plugging physically unreasonable parameter values into his model. What’s more, his results depended on a numerical fitting procedure that I hesitate to call “statistical,” because no statistician would ever suggest such a thing. Spencer’s answer? He says he’s too busy writing papers for the peer-reviewed literature to respond to a critique on a blog.
Well, in a new blog post, Roy claims to have yet another silver bullet that is supposed to trump those nine other methods for estimating climate sensitivity. That is, he has modified a “simple climate model” (i.e., a zero-dimensional or 1-box climate model) to produce a climate sensitivity estimate based on recently published ocean heat content (OHC) data. Here again, I’m going to closely examine Spencer’s methods. Unsurprisingly, it turns out that this particular silver bullet doesn’t do any damage to the consensus on climate sensitivity. However, my readers might be a little surprised to find out that Spencer’s discussion of his new approach provides a very clear demonstration of why the modeling results he promoted in his book were meaningless. Maybe it is a silver bullet after all.
Ocean Heat Content and Net Radiative Flux
Spencer focuses on the OHC data series published by Sydney Levitus and colleagues. (Here is the explanatory page for the data set.) The series estimates the total heat content of the 0-700 m layer of the ocean since 1955, in units of 10^22 J. Now, if you take into account the total surface area of the ocean (~3.6 x 10^14 m^2) it’s easy to calculate the net flux of energy going into the upper 700 m of the oceans.
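Since that unit conversion is where people often slip up, here is a minimal Python sketch of it. The OHC values in the example are illustrative placeholders, not the actual Levitus data.

```python
# Sketch: convert an annual 0-700 m ocean heat content series (in 10^22 J)
# into a net surface heat flux (W/m^2) by differencing successive years and
# dividing by the ocean area and the number of seconds in a year.
OCEAN_AREA_M2 = 3.6e14          # approximate global ocean surface area
SECONDS_PER_YEAR = 3.15576e7

def ohc_to_flux(ohc_1e22_joules):
    """Net heat flux (W/m^2) implied by year-to-year OHC changes."""
    fluxes = []
    for prev, curr in zip(ohc_1e22_joules, ohc_1e22_joules[1:]):
        d_joules = (curr - prev) * 1e22
        fluxes.append(d_joules / (OCEAN_AREA_M2 * SECONDS_PER_YEAR))
    return fluxes

# A rise of 1 x 10^22 J in one year works out to roughly 0.88 W/m^2.
print(ohc_to_flux([10.0, 11.0]))
```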
Figure 1 shows the net heat flux into the 0-700 m layer of the ocean, calculated as explained above. I’ve also digitized Spencer’s calculated flux from this figure and plotted it. Notice that both lines plot exactly on top of one another. This shows that both he and I were at least using 1) the same OHC data, and 2) the same ocean area.
Spencer realized that he could interpret this data through the lens of a modified version of his simple climate model. (I explained the original version more thoroughly here.) The modified model is shown in Eqn. 1,

Cp (dT/dt) = F − a Ts     (Eqn. 1)

where Cp is the total heat capacity of a 1 x 1 x 700 m column of water, T is the average temperature anomaly of the 0-700 m layer of the ocean relative to some equilibrium, dt is the time step, F is the radiative forcing of the system relative to equilibrium, a is the feedback factor, and Ts is the average surface temperature anomaly of the ocean relative to equilibrium.
Let’s restate that in English. On the left side of the equation, we multiply the heat capacity of a 700 m column of water that is 1 m^2 on top (700 m * 4180000 J/m^3/K) by the rate of change of the average temperature of the 0-700 m layer (units = K/s). Multiplying those out gives the net flux of heat energy into the ocean (units = W/m^2), so the left side of the equation represents the same thing we plotted in Fig. 1. On the right side, the net flux of heat into the ocean equals the forcing on the system (units = W/m^2) plus the feedback (units = W/m^2). The feedback is just the response of the system to a temperature change, in terms of a change in the amount of radiation sent back into space. In Spencer’s model, the feedback term includes the Planck response (the change in black-body radiation with a change in temperature) and whatever else responds to a temperature change (changes in cloud cover, water vapor, and so on). The feedback is represented as the negative of the feedback factor multiplied by the surface temperature anomaly.
Now, James Hansen at NASA has published estimates of climate forcing (both human and natural) since 1880, and Spencer noted that the HadSST2 sea surface temperature series seems to be rising about 3.5x as fast as T. Spencer’s idea was that if he knew the net heat flux into the ocean and the forcing on the system, and could link the ocean surface temperature to the 0-700 m layer temperature, all he had to do was run Eqn. 1 in a finite difference model and adjust the feedback factor until the net heat flux in the model output adequately mimicked the real net heat flux data (Fig. 1).
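To make the mechanics concrete, here is a minimal Python sketch of that procedure, under my assumptions about the setup (this is not Spencer's actual code, and the forcing values are made-up placeholders rather than Hansen's series).

```python
# Sketch of Eqn. 1 stepped forward one year at a time (Euler method):
#   Cp * dT/dt = F - a * Ts,  with Ts assumed to scale as 3.5 * T.
SECONDS_PER_YEAR = 3.15576e7
CP = 700.0 * 4.18e6            # heat capacity of a 700 m column, J/m^2/K

def run_model(forcings, a, surface_ratio=3.5):
    """Return the modeled net heat flux into the ocean (W/m^2) each year."""
    T = 0.0                    # 0-700 m temperature anomaly; starts at equilibrium
    fluxes = []
    for F in forcings:
        net_flux = F - a * surface_ratio * T       # right side of Eqn. 1
        T += net_flux * SECONDS_PER_YEAR / CP      # Euler step, dt = 1 year
        fluxes.append(net_flux)
    return fluxes

fluxes = run_model([0.1, 0.2, 0.3, 0.4], a=4.0)
```

In a fit to real data, one would feed in the full forcing series and adjust `a` until the modeled fluxes matched the observed ones.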
Fig. 2 plots the net heat flux output from Spencer’s model along with a 10-year smoothed version of the data in Fig. 1. Readers will note that the model output doesn’t seem like that good a fit to the smoothed version of the real heat flux data. However, Spencer doesn’t appear to have been fitting the model output to the entire data series. Instead, he appears to have been trying to reproduce the same average flux value of 0.2 W/m^2 over the whole series, and an average of about 0.6 W/m^2 for the period 1995-2005.
The a value he came up with was 4 W/m^2/K, which is equivalent to a climate sensitivity of about 1 °C per doubling of CO2. Since the IPCC puts a lower limit on climate sensitivity of 1.5 °C and a probable range of 2-4.5 °C, Spencer asks,
"Now, what I do not fully understand is why the IPCC claims that the ocean heat content increases indeed ARE consistent with the climate models, despite the relatively high sensitivity of all of the IPCC models."
Actually, the IPCC is more equivocal about whether the GCMs they use account for OHC data. They say,
"Although the heat uptake in the ocean cannot be explained without invoking anthropogenic forcing, there is some evidence that the models have overestimated how rapidly heat has penetrated below the ocean’s mixed layer (Forest et al., 2006; see also Figure 9.15). In simulations that include natural forcings in addition to anthropogenic forcings, eight coupled climate models simulate heat uptake of 0.26 ± 0.06 W m–2 (±1 standard deviation) for 1961 to 2003, whereas observations of ocean temperature changes indicate a heat uptake of 0.21 ± 0.04 W m–2 (Section 5.2.2.1). These could be consistent within their uncertainties but might indicate a tendency of climate models to overestimate ocean heat uptake."
They go on to explain that there could be problems with how the models handle heat transfer in the ocean, or with the OHC data itself. (E.g., the upper 300 m of the ocean is fairly well observed, and the models agree well with the data there, but there isn’t nearly as much data for greater depths so there could be sampling artefacts.)
In fact, James Hansen has recently brought up this issue, as well. In any case, the IPCC does at least indicate that their models get in the same ballpark as the OHC data, so the dilemma Spencer poses is reasonable to bring up.
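For reference, converting a feedback parameter a into an equilibrium climate sensitivity is just a matter of dividing the forcing from doubled CO2 by a. A one-line sketch, assuming the standard ~3.7 W/m^2 value for that forcing:

```python
# Equilibrium climate sensitivity implied by a net feedback parameter a,
# assuming ~3.7 W/m^2 of forcing for a doubling of CO2.
F_2XCO2 = 3.7   # W/m^2

def equilibrium_sensitivity(a):
    """Equilibrium warming (°C) for doubled CO2, given a in W/m^2/K."""
    return F_2XCO2 / a

print(equilibrium_sensitivity(4.0))   # Spencer's a value: about 0.9 °C
print(equilibrium_sensitivity(1.1))   # about 3.4 °C, near the IPCC central value
```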
The Easy Answer
There are a number of ways we can answer Roy Spencer’s dilemma, but perhaps the easiest one is simply to note that a 1-box climate model probably isn’t adequate to answer the kinds of questions he is using it for. Regarding this issue, Spencer characterized the climate modeling community in an unflattering light in The Great Global Warming Blunder (p. xxii).
"Climate models are built up from many components, or subsystems, each representing different parts of the climate system. The expectation of the modelers is that the greater the complexity in the models, the more accurate their forecasts of climate change will be. But they are deceiving themselves. The truth is that the more complex the system that is modeled, the greater the chance that the model will produce unrealistic behavior."
In my experience, this is a gross mischaracterization of the climate modeling community. The truth is that modelers generally understand “Einstein’s Razor” very well, i.e., that theories and models “should be made as simple as possible, but no simpler.” In their textbook, Mathematical Modeling of Earth’s Dynamic Systems, Rudy Slingerland and Lee Kump put it this way: “Bad models are too complex or too uneconomical or, in other cases, too simple” (p. 2).
In this case, a potential problem with a model like Eqn. 1 is that all the feedbacks are lumped together, but in reality different climate feedbacks operate on different timescales. E.g., look at Fig. 2 again, and note that the forcing series has a number of sharp downward spikes, which are due to major volcanic eruptions, and almost identical spikes appear in the heat flux model output. And yet, no such spikes appear in the real data. Why? Because if different feedbacks operate on different timescales, then those kinds of short-term perturbations tend to get smoothed out in the overall climate response.
In a recent blog post, climatologist Isaac Held examined just this issue. He actually took a model exactly like Spencer’s and fit it to the output from a GCM that he uses. Then he estimated the equilibrium climate sensitivity of both models by running them with a constant forcing equivalent to doubling the CO2 in the atmosphere, and finding the temperature they settled into after a while. It turned out that whereas the GCM had a sensitivity of about 3.4 °C (2x CO2), the simple model only had a sensitivity of about 1.5 °C. This is interesting, because the simple model was parameterized specifically to mimic the output of the GCM. Could it be that the issue of overall climate sensitivity is more complex than a model like Eqn. 1 can address, at least over timescales of a few decades? Could it be that such a model would consistently low-ball its estimated climate sensitivity? One thing is clear–Roy Spencer hasn’t asked questions like these, whereas bona fide climate modelers like Isaac Held have.
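Held's point can be illustrated with a toy calculation. The sketch below is not Held's actual model; the amplitudes and timescales are invented for illustration, chosen only so the short-term and equilibrium responses echo the 1.5 °C vs 3.4 °C contrast he found.

```python
# Illustrative sketch: a climate with a fast and a slow response component
# reacts to a step forcing with a quick initial rise plus a long slow creep.
# A simple 1-box model fit to only a few decades of output sees mostly the
# fast part, and so underestimates the eventual equilibrium warming.
import math

def two_timescale_response(t_years, fast_amp=1.5, fast_tau=4.0,
                           slow_amp=1.9, slow_tau=300.0):
    """Temperature anomaly (°C) after t years of constant 2xCO2 forcing."""
    return (fast_amp * (1 - math.exp(-t_years / fast_tau)) +
            slow_amp * (1 - math.exp(-t_years / slow_tau)))

print(two_timescale_response(30))     # after 30 years: mostly the fast part
print(two_timescale_response(3000))   # near equilibrium: fast + slow together
```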
Spencer’s Flawed Modeling
What if the opposite is true, however? That is, what if more complex models like GCMs consistently overestimate climate sensitivity, all the while fitting the same data as a more simple model? I’m not really in a position to address that question, but let’s assume for the moment that Spencer’s simple model is adequate to the task. Now the question is, did he apply the model correctly?
First, take another look at Fig. 2 and Eqn. 1. The temperature terms in Eqn. 1 are temperature anomalies, i.e., in a model like this they refer to the difference between the temperature and some assumed equilibrium state. The larger the absolute value of the temperature anomaly, the larger the feedback term becomes, providing a strong pull back toward the equilibrium temperature. The feedback curve in Fig. 2 starts at 0 W/m^2 in 1955, so it is apparent that Spencer began his model in equilibrium in 1955. We can argue about whether the climate was really in any kind of “equilibrium” in 1955, but the trouble doesn’t end there. “Forcing” refers to a change in the energy input relative to some assumed “normal” state. Obviously, if Spencer assumed the climate was in equilibrium in 1955, he should have adjusted his forcing numbers to zero in 1955 as well, but instead he started his model run at 0.34 W/m^2 forcing. Why? Because Hansen’s climate forcing estimate is 0.34 W/m^2 in 1955, but Hansen’s series is referenced to the year 1880. In other words, the entire forcing series is 0.34 W/m^2 too high, given Spencer’s other initial conditions.
Am I nitpicking, here? Remember that the average net heat flux over 1955-2010 is only about 0.2 W/m^2, which is significantly less than the error in forcing. What’s more, if the forcing is uniformly too high, the feedback has to be more negative by the same amount to produce the same net heat flux result. In other words, the a parameter in Eqn. 1 has to be larger than it otherwise would be, indicating a less sensitive climate.
Second, Spencer’s feedback series shown in Fig. 2 is model output, but if the feedback depends on Ts (see Eqn. 1), how can the model spit out BOTH the feedback AND the net heat flux, which depends on changes in T? Spencer gave us a clue when he noted that the rate of change in the HadSST2 global average sea surface temperature was about 3.5x the rate of change in T from 1955-2010. Therefore, I guessed that he had linked the two temperatures together in the model like so:

Ts = 3.5 T     (Eqn. 2)
I ran this model by the finite difference method in Microsoft Excel, using 1-year time steps, and came up with a feedback curve whose general trend is pretty close to Spencer’s (see Fig. 3). My curve is a lot noisier, which could be because I used a different time step, because Roy smoothed the data, or for some similar reason. (I sent two requests to see his code, which he has so far ignored. I also asked him whether I was right that he linked T and Ts by a factor of 3.5 in his model, but he hasn’t responded to that, either. He was kind enough to answer a couple of my other questions, however, which was very helpful.) In any case, it’s clear that my model is quite similar to whatever Spencer did.
Even after I asked Roy and he told me the feedback was model output, it took me a while to hit on the idea of linking the two temperatures by a constant factor. I initially assumed that he would have used the HadSST2 temperature anomaly as input to the model; if you already have data for Ts, why make the model generate it? In any case, since Spencer had his model generate the feedback, all we have to do is compare the model feedback values with the HadSST2 data for Ts, zeroed at 1955 and multiplied by 4 W/m^2/K (see Eqn. 1). This comparison is shown in Fig. 4.
Whereas Spencer adjusted the a parameter in his model to fit the net heat flux data, he apparently paid no attention to how well his model reproduced the average ocean surface temperature. Spencer’s feedback curve is never as strong as the one generated using real ocean surface temperatures, so it’s clear that his model-generated ocean surface temperatures are uniformly too low. Therefore, to get the same feedback, his model has to adopt a larger value of a, once again making the climate seem less sensitive than it should be.
Another odd thing about Roy Spencer’s latest modeling adventure is that he doesn’t seem to have taken the normal route of fitting the model to the OHC data via a least-squares regression. Rather, it appears he was trying to make his model produce the same average heat flux over the entire period as the real data (0.2 W/m^2). Given the corrections I outlined above (using real ocean surface temperatures and zeroing the forcing at 1955), I decided to try both methods to see what kind of feedback parameters I would get. When I did a least-squares regression, varying the feedback factor to fit the OHC data, I got an a value of 1.1 W/m^2/K, which amounts to an equilibrium climate sensitivity of about 3.4 °C. This is not only within the probable range given by the IPCC, it’s very close to their central estimate of 3 °C. When I adjusted the model to produce an average heat flux of 0.2 W/m^2 for 1955-2010, I got an a value of 0.7 W/m^2/K, i.e., a climate sensitivity of about 5.2 °C, which is above the IPCC’s most likely range. I have plotted the smoothed heat flux data along with Spencer’s model output and the output from both my model runs in Fig. 5. Readers should note that all the models (including mine) pretty much suck at fitting the data, reinforcing the point that a simple 1-box climate model probably isn’t the best tool for this job.
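For readers who want the flavor of the least-squares step, here is a minimal Python sketch of it: vary a until the modeled flux best matches the observed flux. This is my own illustration, not Spencer's or my actual spreadsheet, and all the numbers are synthetic placeholders.

```python
# Sketch of the fitting step: grid-search the feedback parameter a for the
# least-squares best fit between modeled and "observed" net heat fluxes.

def model_fluxes(forcings, surface_temps, a):
    """Modeled net heat flux each year: F - a*Ts, with Ts from observations."""
    return [F - a * Ts for F, Ts in zip(forcings, surface_temps)]

def fit_a(forcings, surface_temps, observed_fluxes):
    """Search a from 0.01 to 8.00 W/m^2/K for the smallest squared misfit."""
    best_a, best_sse = None, float("inf")
    for i in range(1, 801):
        a = i / 100.0
        modeled = model_fluxes(forcings, surface_temps, a)
        sse = sum((m - o) ** 2 for m, o in zip(modeled, observed_fluxes))
        if sse < best_sse:
            best_a, best_sse = a, sse
    return best_a

# Synthetic check: data generated with a = 1.1 should be recovered exactly.
forcings = [0.5, 1.0, 1.5]
surface_temps = [0.1, 0.2, 0.3]
observed = [F - 1.1 * Ts for F, Ts in zip(forcings, surface_temps)]
print(fit_a(forcings, surface_temps, observed))   # 1.1
```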
Finally, Spencer’s readers pointed out in the comments on his blog that the ocean has been heating up below 700 m, as well. Spencer posted an update to his blog entry that says he took into account heating down to 2000 m, and he came up with a climate sensitivity of 1.3 °C, which is a bit larger than his original estimate of 1 °C.
The fact is that a simple 1-box climate model like Spencer’s probably isn’t a very good tool for interpreting the meaning of a few decades of OHC data. But if we try it anyway, we come up with climate sensitivity estimates that are at least as high as the IPCC’s central estimate of 3 °C. In contrast, Spencer’s erroneous estimate is less than half of that at 1.3 °C. Roy Spencer was only able to come up with low sensitivity estimates by committing serial errors that all pushed his estimates in the direction of lower sensitivity. These errors include: inconsistent initial conditions, failure to use the appropriate data, and failure to account for ocean heating deeper than 700 m. (He fixed the last one in an update.) So the fact that Roy Spencer’s analysis contradicts at least nine other methods for estimating climate sensitivity is pretty meaningless.
This is becoming a familiar story. Roy Spencer comes up with some great idea, all but proving the establishment wrong. Instead of trying to work on his idea until it’s in good enough shape to pass peer review, he posts it on a blog or publishes it in a book, and then accuses the other scientists of ignoring evidence. His true believers shower him with accolades (see the comments on his blog), and he leaves it up to others to point out the flaws in his methods.
I will say this in Roy Spencer’s defense, however. At least he sometimes has the grace to admit he’s wrong when someone goes to the trouble to point out his errors. For example, in my recent review of his book, The Great Global Warming Blunder, I pointed out another serious abuse of the same simple climate model he used in his latest blog post. One of the problems I pointed out was that his model assumed a 700 m mixed layer in the ocean, when it is really something more like 100 m. In other words, his model assumed that the entire top 700 m of the ocean heats up and cools down at the same rate, which is nonsense. Afterward, Roy mentioned on his blog that many of his readers had been asking him to respond to my review, but he said he was too busy trying to publish peer-reviewed science to waste his time responding to blog critiques. And yet, in the blog post I’m critiquing here, I found the following frank admissions that I was right.
"The reason why we need to use 2 temperatures is that the surface has reportedly warmed about 3.5 times faster than the 0-700 meter ocean layer does, and radiative feedback will be controlled by changes in the temperature of the sea surface and the atmosphere that is convectively coupled to it….
While some might claim that it is because warming is actually occurring much deeper in the ocean than 700 m, the vertical profiles I have seen suggest warming decreases rapidly with depth, and has been negligible at a depth of 700 m."
Ok, so he didn’t specifically acknowledge that he had been wrong about this before, but I’m sure he meant to say that.