

How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
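The idea of averaging weather into climate can be illustrated in a few lines of code. The sketch below is purely illustrative (synthetic numbers, not any real temperature record): it applies a 30-year running mean to a noisy annual series, showing how extreme-but-rare individual years are smoothed out while the underlying trend remains.

```python
import numpy as np

def climate_trend(annual_temps, window=30):
    """Running mean over `window` years: smooths out single
    extreme-but-rare years, leaving the climate trend."""
    temps = np.asarray(annual_temps, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(temps, kernel, mode="valid")

# Synthetic example: a 0.02 C/yr trend buried in noisy "weather".
rng = np.random.default_rng(0)
years = np.arange(1900, 2020)
weather = 0.02 * (years - 1900) + rng.normal(0, 0.5, years.size)
trend = climate_trend(weather)
# The smoothed series recovers the underlying warming far more
# cleanly than any comparison of individual years could.
```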

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the rise in temperature over the last thirty years, while none of them are capable of explaining the rise in the past thirty years. CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Basic rebuttal written by GPWayne

Last updated on 26 February 2014 by LarryM.


Further reading

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Comments


Comments 551 to 584 out of 584:

  1. Hi JasonB, you provide a very interesting perspective there, and I think you make the most important point as well:
    It certainly wouldn't have been less trustworthy just because it wasn't written by somebody with a CS degree.
    This is the key issue - is the code any less trustworthy because it was written by somebody who wasn't, at core, a CS specialist? I concur with your answer that it is not.

    I don't doubt that (for example) code I wrote was a lot more 'clunky', poorly commented, inefficient and all the rest than a CS specialist's code! (though clunky 3D graphics were quite fun to do). Equally I suspect the coders of big GCMs are much more skilled at efficient algorithm generation than I ever was, as they need to be, running large computationally expensive programs. The core algorithms that controlled the scientific part of my programs were as you describe them - transcriptions of mathematical expressions, and computationally relatively straightforward to implement. Some algorithms are harder than others, of course! Ensuring they are correctly implemented is where detailed testing and validation comes in, to make sure the mathematics and physics is as good as you can make it. These are then documented and described in relevant publications, as with all good science. All part of the scientific coder's life. Thanks for your perspective.
  2. JasonB:

    Yes, an interesting and illuminating example.

    It seems that Clyde is "on hiatus", but to continue the discussion a bit:

    "Climate Models" (of the numerical/computer-based type) date back to the 1960s, when all computing was done on mainframes. Individuals wrote portions of code, but the mainframes also typically had installed libraries of common mathematical routines. The one I remember from my mainframe days is IMSL, which (from the Wikipedia page linked to) appeared on the scene in 1970, and is still actively developed. Such libraries were typically highly optimized for the systems they were running on, and brought state-of-the-art code to the masses. (When I hear object-oriented aficionados talk about "reusable code" as if it is some novel concept, I think back to the days when I created my own linkable libraries of routines for use in different programs, long before "object oriented" was a gleam in someone's eye.)

    Of course, "state-of-the-art" was based on hardware that would compare badly to the processing power of a smart phone these days, and "the masses" were a small number of people that had access to universities or research institutes with computers. When I was an undergraduate student, one of my instructors had been a Masters student at Dalhousie University in Halifax (east coast) when it got the first computer in Canada east of Montreal. The university I attended provided computing resources to another university in Ontario that did not have a computer of its own.

    JasonB's description of developing algorithms and such is just what doing scientific computing is all about. The branch of mathematics/computing that is relevant is Numerical Methods, or Numerical Analysis, and it is a well-developed field of its own. It's not user interfaces and pretty graphs or animations (although those make programs easier to run and data easier to visualize), and a lot of what needs to be known won't be part of a current CS program. (My local university has four courses listed in its CS program that relate to numerical methods, and three of them are cross-references to the Mathematics department.) This is a quite specialized area of CS - just as climate is a specialized area of atmospheric science (or science in general).

    The idea that "climate experts" have gone about developing "climate models" without knowing anything about computers is just plain nonsense.
  3. I would also describe myself as a computer modeller (though not in climate, but petroleum basins). My qualifications are geology, maths and yes, a few CS papers, notably postgrad numerical analysis. My main concerns are about the numerical methods to solve the equations in the code; their speed, accuracy and robustness. Validation is a huge issue. We also have CS-qualified software engineers who tirelessly work on the code as well. What they bring to the picture is rigorous code-testing procedures (as opposed to model testing, which is not the same thing), and massive improvement in code maintainability. Not to mention some incredibly useful insights into the tricky business of debugging code on large parallel MPI systems, and some fancy front-ends. The modelling and software engineering are overlapping domains that work well together. I suspect Clyde thought climate modellers were not programmers at all, imagining people tinkering with pre-built packages. So much skepticism is built on believing things that are not true.
  4. My initial comment on here and firstly thanks to the site for a well-moderated and open forum.

    I am a hydrologist (Engineering and Science degrees) with a corresponding professional interest in understanding the basics (in comparison to GCMs, etc) of climate and potential changes therein. My main area of work is in the strategic planning of water supply for urban centres and understanding risk in terms of security of supply, scheduled augmentation and drought response. I have also spent the past 20 years developing both my scientific understanding of the hydrologic cycle as well as modelling techniques that appropriately capture that understanding and underpinning science.

    Having come in late on this post, I have a series of key questions I need answered to place some boundaries and clarity on the subject. But I'll limit myself to the first and (in my mind) most important.

    A fundamental question in all this debate is whether global mean temperature is increasing. This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty.

    So, my first question that I'd appreciate some feedback from Posters is:

    Q: Is there a commonly accepted (from all sides of the debate) dataset or datasets that the predictive models are being calibrated/validated against?

    Also happy to be corrected on any specific terminology (e.g. GMT).
  5. opd68 Rather than validate against a single dataset it is better to compare with a range of datasets as this helps to account for the uncertainty in estimating the actual global mean temperature (i.e. none of the products are the gold standard, the differences between them generally reflect genuine uncertainties or differences in scientific opinion in the way the direct observations should be adjusted to cater for known biases and averaged).
  6. opd68 @554, no, there is not a universally accepted measure of Global Mean Surface Temperature (GMST) accepted by all sides. HadCRUT3 and now HadCRUT4, NCDC, and Gistemp are all accepted as being approximately accurate by climate scientists in general with a few very specific exceptions. In general, any theory that is not falsified by any one of these four has not been falsified within the limits of available evidence. In contrast, any theory falsified by all four has been falsified.

    The few exceptions (and they are very few within climate science) are all very determined AGW "skeptics". They tend to insist that the satellite record is more accurate than the surface record because adjustments are required to develop the surface record (as if no adjustments were required to develop the satellite record /sarc). So far as I can determine, the mere fact of adjustments is sufficient to prove the adjustments are invalid, in their mind. In contrast, in their mind the (particularly) UAH satellite record is always considered accurate. Even though it has gone through many revisions to correct for detected error, at any given time these skeptics are confident that the current version of UAH is entirely accurate, and proves the surface record to be fundamentally flawed. They are, as the saying goes, always certain, but often wrong.
  7. Models do not predict just one variable - they necessarily compute a wide range of variables which can all be checked. With the Argo network in place, the predictions of Ocean Heat Content will become more important. They also do not (without post-processing) predict global trends, but values for cells so you can compare regional trends as well as the post-processed global trends.

    Note, too, that satellite MSU products like UAH and RSS measure something different from surface temperature indices like GISS and HadCRUT, and thus different model outputs.
  8. Another thought on "model validation". Validation of climate theory is not dependent on GCM outputs - arguably other measures are better. However, models (including hydrological models) are usually assessed in terms of model skill - their ability to make more accurate predictions than simple naive assumptions. For example, GCMs have no worthwhile skill in predicting temperatures etc. on decadal or shorter timescales. They have considerable skill in predicting 20-year+ trends.
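As a rough illustration of what "model skill" means, the sketch below (using made-up anomaly numbers, not real observations) computes a Murphy-style skill score: how much a model's squared error improves on a naive baseline such as "every year equals the long-term mean".

```python
import numpy as np

def skill_score(predictions, observations, baseline):
    """Murphy-style skill score: 1 = perfect, 0 = no better than
    the naive baseline, negative = worse than the baseline."""
    predictions, observations, baseline = map(
        np.asarray, (predictions, observations, baseline))
    mse_model = np.mean((predictions - observations) ** 2)
    mse_naive = np.mean((baseline - observations) ** 2)
    return 1.0 - mse_model / mse_naive

# Hypothetical anomalies (illustrative only):
obs = np.array([0.10, 0.20, 0.35, 0.40, 0.55])
model = np.array([0.12, 0.22, 0.30, 0.45, 0.50])
# Naive assumption: "no trend" -- every year equals the record mean.
naive = np.full_like(obs, obs.mean())
ss = skill_score(model, obs, naive)  # positive: the model adds skill
```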
  9. Thanks all for the feedback - much appreciated.

    For clarification, my use of the terms 'calibration' and 'validation' can be explained as:
    - We calibrate our models against available data and then use these models to predict an outcome.
    - We then compare these predicted outcomes against data that was not used in the calibration. This can be data from the past (i.e. by splitting your available data into calibration and validation subsets) or data that we subsequently record over time following the predictive run.
    - So validation of our predictive models should be able to be undertaken against the data we have collected since the predictive run.
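The calibration/validation split described above can be sketched in code. Everything here is illustrative (synthetic data, an arbitrary split year, and a simple linear-trend "model" standing in for a real one):

```python
import numpy as np

def split_calibration_validation(years, values, split_year):
    """Split a record at `split_year`: calibrate on the earlier
    part, hold the later part back for validation."""
    years = np.asarray(years)
    values = np.asarray(values)
    mask = years < split_year
    return (years[mask], values[mask]), (years[~mask], values[~mask])

# Synthetic record: calibrate a linear trend pre-1990, then test
# its predictions against the held-back post-1990 data.
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
temps = 0.015 * (years - 1950) + rng.normal(0, 0.1, years.size)
(cal_y, cal_t), (val_y, val_t) = split_calibration_validation(years, temps, 1990)
slope, intercept = np.polyfit(cal_y, cal_t, 1)
predicted = slope * val_y + intercept  # compare against val_t
```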

    Dikran & scaddenp – totally agree re: importance of validation against a series of outcomes wherever possible, however I feel that in this case the first step we need to be able to communicate with confidence and clarity is that we understand the links between CO2 and GMT and can demonstrate this against real, accepted data. As such, in the first instance, whatever data was used to calibrate/develop our model(s) is what we need to use in our ongoing validation.

    Tom Curtis – thanks for that. The four you mention seem to be the most scientifically justifiable and accepted. In terms of satellite vs surface record (as per paragraph above) whatever data type was used to calibrate/develop the specific model being used is what should be used to then assess its predictive performance.

    From my reading and understanding, a key component of the ongoing debate is:
    - Our predictive models show that with rising CO2 will (or has) come rising GMT (along with other effects such as increased sea levels, increased storm intensity, etc).
    - To have confidence in our findings we must be able to show that these predictive models have appropriately estimated GMT changes as they have now occurred (i.e. since the model runs were first undertaken).

    As an example, using the Hansen work referenced in the Intermediate tab of this Topic, the 1988 paper describes three (3) Scenarios (A, B and C) as:

    - “Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely” (increasing rate of emissions - quotes an annual growth rate of about 1.5% of current (1988) emissions)
    - “Scenario B has decreasing trace gas growth rates such that the annual increase in greenhouse forcing remains approximately constant at the present level” (constant increase in emissions)
    - “Scenario C drastically reduces trace gas growth between 1990 and 2000 such that greenhouse climate forcing ceases to increase after 2000”

    From Figure 2 in his 2006 paper, the reported predictive outcomes haven’t changed (i.e. versus Fig 3(a) in 1988 paper) which means that the 1988 models remained valid to 2006 (and presumably since?). So we should now be in a position to compare actual versus predicted GMT between 1988 and 2011/12.

    Again, I appreciate that this is merely one of the many potential variables/outcomes against which to validate the model(s) however it is chosen here as a direct reference to the posted Topic material.
  10. opd68,

    I think part of your difficulty is in understanding both the complexity of the inputs and the complexity of measuring those inputs in the real world.

    For example, dimming aerosols have a huge effect on outcomes. Actual dimming aerosols are difficult to measure, let alone to project into their overall effect on the climate. At the same time, moving forward, the amount of aerosols which will exist requires predictions of world economies and volcanic eruptions and major droughts. So you have an obfuscating factor which is very difficult to predict and very difficult to measure and somewhat difficult to apply in the model.

    This means that in the short run (as scaddenp said, less than 20 years) it is very, very hard to come close to the mark. You need dozens (hundreds?) of runs to come up with a "model mean" (with error bars) to show the range of likely outcomes. But even then, in the short time frame the results are unlikely to bear much resemblance to reality. You have to look beyond that.
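The "model mean with error bars" idea might be sketched like this: many hypothetical runs sharing the same forced trend but differing in simulated internal variability, summarised by their mean and spread (all numbers illustrative).

```python
import numpy as np

def ensemble_summary(runs):
    """Mean and 5-95% spread across model runs (rows = runs,
    columns = years): the 'model mean' with error bars."""
    runs = np.asarray(runs)
    mean = runs.mean(axis=0)
    lo = np.percentile(runs, 5, axis=0)
    hi = np.percentile(runs, 95, axis=0)
    return mean, lo, hi

# 100 hypothetical runs: same underlying trend, different noise
# standing in for internal variability.
rng = np.random.default_rng(2)
years = np.arange(2000, 2021)
runs = 0.02 * (years - 2000) + rng.normal(0, 0.15, (100, years.size))
mean, lo, hi = ensemble_summary(runs)
# Any single run can wander outside the envelope for a while;
# the envelope, not one run, is what the models actually project.
```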

    But when you compare your predictions to the outcome... you now need to also adjust for the random factors that didn't turn out the way you'd randomized them. And you can't even necessarily measure the real-world inputs properly to tease out what really happened, and so what you should input into the model.

    You may look at this and say "oh, then the models are worthless." Absolutely not. They're a tool, and you must use them for what they are meant for. They can be used to study the effects of increasing or decreasing aerosols and any number of other avenues of study. They can be used to help improve our confidence level in climate sensitivity, in concert with other means (observational, proxy, etc.). They can also be used to help us refine our understanding of the physics, and to look for gaps in our knowledge.

    They can also be used to some degree to determine if other factors could be having a larger effect than expected.

    But this statement of yours is untrue:
    This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty.
    Not at all. We have measured global temperatures and they are increasing. They continue to increase even when all other possible factors are on the decline. The reality is that without CO2 we would be in a noticeable cooling trend right now.

    There are also other ways (beyond models) to isolate which factors are influencing climate:

    Huber and Knutti Quantify Man-Made Global Warming
    The Human Fingerprint in Global Warming
    Gleckler et al Confirm the Human Fingerprint in Global Ocean Warming
  11. opd68,

    Your definitions of calibration and validation are pretty standard but I'd like to make a few points that reflect my understanding of GCMs (which could be wrong):

    1. GCMs don't need to be calibrated on any portion of the global temperature record to work. Rather, they take as input historical forcings (i.e. known CO2 concentrations, solar emissions, aerosols, etc.) and are expected to produce both historical temperature records as well as forecast future temperatures (among other things) according to a prescribed future emissions scenario (which fundamentally cannot be predicted because we don't know what measures we will take in future to limit greenhouse gases -- so modellers just show what the consequences of a range of scenarios will be so we can do a cost-benefit analysis and decide which one is the optimal one to aim for -- and which we then ignore because we like fossil fuels too much). There is some ability to "tune" the models in this sense due to the uncertainty relating to historical aerosol emissions (which some "skeptics" take advantage of, e.g. by assuming that if we don't know precisely what they were then we can safely assume with certainty that they were exactly zero) but this is actually pretty limited because the models must still obey the laws of physics; it's not an arbitrary parameter-fitting exercise like training a neural net would be.

    2. GCMs are expected to demonstrate skill on a lot more than just global temperatures. Many known phenomena are expected to be emergent behaviour from a well-functioning model, not provided as inputs.

    3. Even without sophisticated modelling you can actually get quite close using just a basic energy balance model. This is because over longer time periods the Earth has to obey the laws of conservation of energy so while on short time scales the temperature may go up and down as energy is moved around the system, over longer terms these have to cancel out. Charney's 1979 paper really is quite remarkable in that respect -- the range of climate sensitivities proposed is almost exactly the same as the modern range after 30+ years of modelling refinement. Even Arrhenius was in the ballpark over 100 years ago!
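For the curious, a minimal version of such an energy balance model fits in a few lines. The feedback and heat-capacity values below are illustrative round numbers (chosen so the equilibrium response to a CO2 doubling comes out near 3 K), not tuned to any dataset:

```python
import numpy as np

def energy_balance(forcing, feedback=1.23, heat_capacity=8.0, dt=1.0):
    """Zero-dimensional energy balance model:
        C * dT/dt = F(t) - lambda * T
    `feedback` (lambda) in W/m^2/K, `heat_capacity` in W yr/m^2/K.
    Simple forward-Euler integration, one step per year."""
    T = 0.0
    out = []
    for F in forcing:
        T += dt * (F - feedback * T) / heat_capacity
        out.append(T)
    return np.array(out)

# Step forcing of 3.7 W/m^2 (roughly a CO2 doubling), held 200 years:
F = np.full(200, 3.7)
T = energy_balance(F)
# T rises quickly at first, then asymptotes toward the equilibrium
# response F / lambda = 3.7 / 1.23, i.e. about 3 K.
```

Even this toy captures the point made above: conservation of energy forces the long-term response, however much the short-term path wiggles.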
  12. opd68. Your process of calibrate, predict, validate does not capture climate modelling at all well. This is a better description of statistical modelling, not physical modelling. Broadly speaking, if your model doesn't predict the observations, you don't fiddle with calibration parameters; you add more physics instead. That said, there are parameterizations used in the climate models to cope with sub-scale phenomena (e.g. evaporation versus windspeed). However, the empirical relationship used is based on fitting measured evaporation rate to measured wind speed, not fiddling with a parameter to match a temperature trend. In this sense they are not calibrated to any temperature series at all. You can find more about that in the modelling FAQ at RealClimate (and ask questions there of the modellers).

    SkS did a series of articles on past predictions. Look for the Lessons from past predictions series.
  13. Once again, many thanks for the replies. Hopefully I'll address each of your comments to some degree, but feel free to take me to task if not. It also appears that I should take a step back into the underlying principles of our scientific 'model' (i.e. understanding) - for example how CO2 affects climate and how that has been adopted in our models. Sphaerica - thanks for the links.

    Totally recognise the complexity of the system being modelled, and understand the difference between physically-based, statistical and conceptual modelling. I agree that it is difficult and complex and as such we need to be very confident in what we are communicating due to the decisions that the outcomes are being applied to and the consequences of late action or, indeed, over-reaction.

    The GCMs, etc. that are still our best way of assessing and communicating potential changes into the future are based on our understanding of these physical processes, and so our concepts need to be absolutely, scientifically justifiable if we expect acceptance of our predictions. Yes, we have observed rising temperatures and have a scientific model that can explain them in terms of trace gas emissions. No problem there, it is good scientific research. Once we start using that model to predict future impacts and advise policy then we must expect to be asked to demonstrate the predictive capability of that model, especially when the predicted impacts are so significant.

    Possibly generalising, however my opinion is that the acceptance of science is almost always evidence-based. As such, to gain acceptance (outside those who truly understand all the complexities or those who accept them regardless) we realistically need to robustly and directly demonstrate the predictive capability of our models against data that either wasn't used or wasn't in existence when we undertook the prediction. In everyday terms this means comparing our model predictions (or range thereof) to some form of measured data, which is why I asked my original question. Tom C, thanks for the specifics.

    So, my next question is:

    Q: There are models referred to in the Topic that show predictions up to 2020 from Hansen (1988 and 2006), and I was wondering if we have assessed these predictions against appropriate data from one of these four datasets up to the present?
  14. The IPCC report compares model predictions with observations to date and a formal process is likely to be part of AR5 next year. You can get informal comparison from the modellers here.

    One thing you can see from the IPCC reports, though, is that tying down climate sensitivity is still difficult. Best estimates are in the range 2-4.5, with (I think) something around 3 being the model mean. Climate sensitivity is a model output (not an input) and this is the range at present. Hansen's earlier model had a sensitivity of 4.5, which on current analysis is too high (see the Lessons series on why), whereas Broecker's 1975 estimate of 2.4 looks too low. In terms of trying to understand what the science will predict for the future, we have to live with that uncertainty for now.

    I still think you're too fixated on surface temperature for validation. It's one of many variables affected by anthropogenic change. How about things like the GHG-driven change in radiation leaving the planet or received at the surface? How about OHC?
  15. Whoops, latest model/data comparison at RC is here
  16. Thanks scaddenp. That link is exactly what I was after. And not fixated, just referring to what is provided and communicated most often. Always best to start simple I find.

    My point about prediction is really what the models are about - if we aren't able to have confidence in their predictions (even if it's a range) then we will struggle to gain acceptance of our science re: the underlying processes.

    And the question of climate sensitivity is really the key to this whole area of science - i.e. we know that CO2 is increasing and can make some scientifically robust predictions about rates of increase and potential future levels. But that isn't an issue unless it affects our climate. So, the question is then: if we (say) double CO2, what will happen to our climate and what implications does that have for us? If we have confidence in our predictive models we can then give well-founded advice for policy makers.

    And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback).

    (ps. When you talk about climate sensitivity, I gather the values are referring to delta T for doubled CO2?)
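The usual convention - delta T for doubled CO2 - can be sketched numerically. The snippet below uses the standard simplified forcing expression dF = 5.35 ln(C/C0) W/m^2; the 280 ppm baseline and the sensitivity of 3 are illustrative choices, not results from any particular model:

```python
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) from a CO2 change, using the
    standard simplified expression dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c / c0)

def warming(c, sensitivity=3.0, c0=280.0):
    """Equilibrium warming: `sensitivity` is delta-T for doubled CO2,
    so scale the forcing by sensitivity per unit of doubling-forcing."""
    return sensitivity * co2_forcing(c, c0) / co2_forcing(2 * c0, c0)

# Doubling CO2 (280 -> 560 ppm) gives about 3.7 W/m^2 of forcing
# and, with a sensitivity of 3 per doubling, 3 degrees of
# equilibrium warming; the logarithm means each successive
# doubling adds the same increment.
```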
  17. opd68,

    The Intermediate form of this post contains six figures (including Tamino's) demonstrating the results of exactly the kinds of tests you are talking about. The first one, Figure 1, even shows what should have happened in the absence of human influence. Since the models aren't "tuned" to the actual historical temperature record, the fact that they can "predict" the 20th century temperature record using only natural and anthropogenic forcings seems to be exactly the kind of demonstration of predictive capability that you are looking for.

    The objection usually raised with regards to that is that we don't know for certain exactly what the aerosol emissions were during that time, and so there is some scope for "tuning" in that regard. But I think it's important to understand that the aerosols, while not certain, are still constrained by reality (so they can't be arbitrarily adjusted until the output "looks good", the modellers have to take as input the range of plausible values produced by other researchers) and there are limits to how much tuning they really allow to the output anyway due to the laws of physics.

    I think that if anyone really wants to argue that there is nothing to worry about, they need to come up with a model that is based on the known laws of physics, that can take as input the range of plausible forcings during the 20th century, that can predict the temperature trend of the 20th century using those inputs at least as skillfully as the existing models, and has a much lower climate sensitivity than the existing models do and therefore shows the 21st century will not have a problem under BAU.

    Simply saying that the existing models, which have passed all those tests, aren't "good enough" to justify action is ignoring the fact that they are the most skillful models we have and there are no models of comparable skill that give meaningfully different results. Due to the consequences of late action, those who argue there is nothing to worry about should be making sure that their predictions are absolutely, scientifically justifiable if they expect acceptance of their predictions, rather than just saying they "aren't convinced". In the absence of competing, equally-skillful models, how can they not be?

    Regarding climate sensitivity, which you are correct in assuming is usually given as delta T for doubled CO2, the models aren't even the tightest constraint on the range of possible values anyway. If you look at the SkS post on climate sensitivity you'll see that the "Instrumental Period" in Figure 4 actually has quite a wide range compared to e.g. the Last Glacial Maximum.

    This is because the signal:noise ratio during the instrumental period is quite low. We know the values of the various forcings during that period more accurately than during any other period in Earth's history, but the change in those values and the resulting change in temperature is relatively small. Furthermore, the climate is not currently in equilibrium, so the full change resulting in that change in forcings is not yet evident in the temperatures.

    In contrast, we have less accurate figures for the change in forcings between the last glacial maximum and today, but the magnitude of that change was so great and the time so long that we actually get a more accurate measure of climate sensitivity from that change than we do from the instrumental period.

    So it is completely unnecessary to rely on modern temperature records to come up with an estimate of climate sensitivity that is good enough to justify action. In fact, if you look at the final sensitivity estimate that is a result of combining all the different lines of evidence, you'll see that it is hardly any better than what we already get just by looking at the change since the last glacial maximum. The contribution to our knowledge of climate sensitivity from modelling the temperature trend during the 20th century is almost negligible. (Sorry modellers!)

    So again, if anyone really wants to argue that there is nothing to worry about, they also need a plausible explanation for why the climate sensitivity implied by the empirical data is much larger than what their hypothetical model indicates.

    And just to be clear:

    And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback).


    No. It is influenced by some of the inputs that go into the models, but those inputs must be reasonable and either measured or constrained by measurements and/or physics. And the models constrain it less precisely than the empirical observations of the change since the last glacial maximum anyway -- without using GCMs at all we get almost exactly the same estimate of climate sensitivity as what we get when adding them to the range of independent lines of evidence.
  18. JasonB - all clear and understood, and I agree completely that the same clarity and scientific justification is required for the opposite hypothesis of increased CO2 having no significant effect on our climate. Science is the same whichever side you are on.

    I spend my working life having people try to discredit my models and science in court cases, and doing the same to theirs. I therefore think very clearly about what is and what is not scientifically justifiable and am careful to state only that which I know can be demonstrated. If it can't be, I am only able to describe the science and processes behind my predictions/statements, which by necessity become less certain the more I am asked to comment on conditions outside those that have been observed at some stage.

    I entered this conversation because I keep hearing that the science is settled and I want to see that science. From what I have learned here (thank you!) the key question for me (which I will start looking through at the climate sensitivity post) is:

    - Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?
  19. opd68,

    Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?


    The forcing resulting from increasing CO2 levels is very accurately known from both physics and direct measurement. By itself it accounts for about 1.2 C per doubling. The feedback from water vapour in response to warming is also quite well known from both physics and direct measurement and, together with the CO2, amounts to about 2 C per doubling. Other feedbacks are less well known, but apart from clouds, almost all seem to be worryingly positive. As for clouds, they are basically unknown, but I think a very strong case can be made that the reason they are unknown is precisely because they're neither strongly positive nor strongly negative. As such, any attempt to claim that they are strongly negative and will therefore counteract all the positive feedbacks seems like wishful thinking that's not supported by evidence. If anything, the most recent evidence seems to suggest a slightly positive feedback.
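    The "about 1.2 C per doubling" figure for CO2 alone can be checked from the widely used simplified forcing expression F = 5.35 ln(C/C0) W/m² together with an approximate no-feedback (Planck) response of roughly 0.3 K per W/m². A quick sketch (the 0.31 value is an assumed round number, so treat the result as approximate):

```python
import math

# Simplified CO2 radiative forcing fit: F = 5.35 * ln(C/C0), in W/m^2
def co2_forcing(concentration_ratio):
    return 5.35 * math.log(concentration_ratio)

PLANCK_RESPONSE = 0.31  # K per W/m^2; approximate no-feedback response (assumed)

f2x = co2_forcing(2.0)                  # forcing from doubled CO2
dt_no_feedback = PLANCK_RESPONSE * f2x  # warming with no feedbacks at all

print(f"Forcing per doubling: {f2x:.2f} W/m^2")       # ~3.7 W/m^2
print(f"No-feedback warming:  {dt_no_feedback:.2f} C")  # ~1.1-1.2 C
```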

    One way to avoid all these complications is to simply use the paleoclimate record. That already includes all feedbacks because you're looking at the end result, not trying to work it out by adding together all the little pieces. Because the changes were so large, the uncertainty in the forcings is swamped by the signal. Because the timescales are long, there is enough time for equilibrium to be reached. The most compelling piece of evidence, for me, is the fact that the best way to explain the last half billion years of Earth's climate history is with a climate sensitivity of about 2.8 C, and if you deviate too much from that figure then nothing makes sense. (Richard Alley's AGU talk from 2009 covers this very well, if you haven't seen that video yet then I strongly recommend you do so.)

    Look at what the evidence tells us the Earth was like during earlier times with similar conditions to today. This is a little bit complicated because you have to go a really long way back to get anywhere near today's CO2 levels, but if you do that then you'll find that, if anything, our current predictions are very conservative. (Which we already suspected anyway -- compare the 2007 IPCC report's prediction on Arctic sea ice with what's actually happened, for example.)

    No matter which way you look at it, the answer keeps coming up the same. Various people have attempted to argue for low climate sensitivity, but in every case they have looked at just one piece of evidence (e.g. the instrumental record) and then made a fundamental mistake in using that evidence (e.g. ignoring the fact that the Earth has not yet reached equilibrium, so calculating climate sensitivity by comparing the current increase in CO2 with the current increase in temperature is like predicting the final temperature of a pot of water a few seconds after turning the stove on) and ignored all of the other completely independent lines of evidence that conflict with the result they obtained. If they think that clouds will exert a strong negative feedback to save us in the near future, for example, they need to explain why clouds didn't exert a strong negative feedback during the Paleocene-Eocene Thermal Maximum when global temperatures reached 6 C higher than today and the surface temperature of the Arctic Ocean was over 22 C.

    My view is that the default starting position should be that we assume the result will be the same as what the evidence suggests happened in the past. That's the "no models, no science, no understanding" position. If you want to move away from that position, and argue that things will be different this time, the only way to do so is with scientifically justifiable explanations for why it will be different. Some people seem to think the default position should be "things will be the same as the past thousand years" and insist on proof that things will change in unacceptable ways before agreeing to limit behaviour that basic physics and empirical evidence shows must cause things to change, while at the same time ignoring all the different lines of evidence that should be that proof. I find that hard to understand.
  20. opd68 - you are correct that obviously sensitivity is ultimately a function of the model construction. "our input assumptions" of course are known physics.

    You ask "are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?". The answer is yes, but I wonder what you are looking for that could give you that assurance? From memory, it's dealt with rather exhaustively in Ch. 9 of the IPCC AR4 report. If that didn't convince you, then what are you looking for? These forcings and responses can be verified independently of GCMs.
  21. JasonB - clearest response/conversation I have had on that ever. Thank you.

    scaddenp - what I am looking for is each 'against' argument dealt with rationally and thoughtfully, which is why I'm working through this as I am.
  22. 563, opd68,

    Sorry, I've been too busy to follow the conversation and get caught up on everything that's been said, but this one comment struck me (and it's wrong):
    Once we start using that model to predict future impacts and advise policy then we must expect to be asked to demonstrate the predictive capability of that model, especially when the predicted impacts are so significant.
    We're not entirely using models to predict and advise. It's one tool of many, and really, if we wanted to we could throw them out (at least, the complex GCMs, I mean -- after all, all human knowledge is in the form of models, so we can't really throw that out).

    The bottom line is:

    1) The physics predicts the change, and predicted the change before it occurred, and observations support those predictions

    2) Multiple, disparate lines of investigation (observations, paleoclimate, models, etc.) point to a climate sensitivity of between 2 C and 4.5 C for a doubling of CO2.

    3) None of this requires models -- yes, they add to the strength of the assessment in #2, but you could drop them and you'd still have the same answer.

    The models are an immensely valuable tool, but there is no reason to apply the exceptional caveat that they must be proven accurate to use them as a policy tool. Poppycock. Human decisions, life-and-death decisions, are made with far, far less knowledge (conduct of wars, economies, advances in technology, etc.). To say that we need even more certainty when dealing with what may turn out to be the most dangerous threat faced by man in the past 50,000 years is... silly.
  23. Sphaerica,

    Whatever we use to illustrate and communicate our science must, in my opinion, be valid and justified. Otherwise we are simply gilding the lily. The fact that life and death decisions can be made with a paucity of information does not mean that we would be better off not doing so if we can.

    My opinion is simply that if we are using models to predict outcomes and inform our decisions then if we are confident in them and can demonstrate that to others:
    (1) we will more easily gain acceptance of the need for and impacts of our decisions, and
    (2) our decisions are more likely to be good ones.

    If the models can be so easily discarded, then we have spent a very long time and a lot of money & effort that could have been better employed elsewhere. If, however, they are a key element in improving our understanding and ability to communicate the problem then we can't afford to discount the need for them to be robust and demonstrably so.

    My point, which I'm still not sure was either wrong or silly, was simply that since we are using these tools I was interested in seeing how they were performing, because that is how I increase my confidence in other people's knowledge and build my own. Your point (1) in 'the bottom line' indicates to me that you think exactly the same way: a model of physics predicted the change and the observations supported those predictions - and you use this evidence to support your knowledge.
    For the modellers and the funders of modellers, the point is not to see if the science is right - we are way past that and, as Sphaerica says, it is not needed. What the models can do that other lines of evidence cannot is evaluate the difference in outcomes between different scenarios; estimate rates of change; and predict the likely regional changes in drought/rainfall, temperature, snow line, seasons and so on. Convincing a reluctant joe public that there is a problem is not the main purpose. And yes, I do agree that we need to understand the limitations, but the IPCC reports seem to be paragons of caution in that regard.
  25. Moving the discussion of models from posts by Eric here and here.

    I had asked here "What kind of model could be used to support a claim of "significant contributing factor", but would not also have an estimate of sensitivity built into it?"

    [For context, since we're jumping threads, we're talking about humans being a significant factor in recent warming, but Eric is questioning predictions of the future, in particular the idea that the sensitivity of doubling CO2 is most likely in the 3C range.]

    Eric has made the comment "Using a model without sensitivity built in: the rise in CO2 is 6% per decade so the rise in forcing from CO2 per decade is 5.35 * ln (1.06) which without any feedback (lambda is 0.28 K/W/m2) means 0.087C per decade rise due to CO2." (Look at the first link above to see the complete comment.)

    Don't you realize you've just assumed your conclusion? You've made the erroneous assumption that the model you present (rate of temperature increase dependent on rate of CO2 increase) does not have a built-in sensitivity. Any model that creates a T(t)=f(CO2(t)) relationship (t being time) also has a "built-in" relationship between temperature and CO2 levels at different equilibrium values that will, by necessity, imply a particular "sensitivity".

    As far as I can see, your answer is a tautology. You've "demonstrated" a transient model that doesn't provide a sensitivity value by simply saying that your transient model doesn't provide a sensitivity value. This falls into the "not even wrong" class.
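    To make that concrete: plugging the quoted numbers (lambda = 0.28 K/(W/m²) and the same F = 5.35 ln(C/C0) forcing formula) into the same arithmetic immediately yields the equilibrium sensitivity that is supposedly absent:

```python
import math

LAMBDA = 0.28               # K per W/m^2, taken from the quoted comment
F_2X = 5.35 * math.log(2)   # forcing for doubled CO2, same formula as the comment

# The per-decade transient figure from the comment (6% CO2 rise per decade):
per_decade = LAMBDA * 5.35 * math.log(1.06)

# The very same lambda pins down an equilibrium (no-feedback) sensitivity:
implied_sensitivity = LAMBDA * F_2X

print(f"Warming per decade:  {per_decade:.3f} C")  # ~0.087 C, matching the comment
print(f"Implied sensitivity: {implied_sensitivity:.2f} C per doubling")
```

So the "no model" calculation carries an implied sensitivity of about 1 C per doubling; it was built in from the moment a value of lambda was chosen.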

    I'm not sure, but perhaps your second comment (second link above) is admitting this error, where you say "I should point out here that using the same simple (no model) equations I get 2.4C per doubling of CO2. I'm sure someone else will point this out, but fast feedback in my post above disproves my claim of low sensitivity that I made on the other thread." The issue in this sentence is the failure to realize that your "no model equation" is indeed a model.

    I really, fundamentally, think that you do not understand what a "climate model" is, how they are used to examine transient or equilibrium climate conditions, or how a "sensitivity" is determined.

    To reiterate what others have said: you seem to have a psychological block that "models are bad", and even though you follow much of the science (and agree with it), at the end you stick up the "model boogie man" and declare it all invalid.
  26. Bob, as you saw in my second comment, I realized after my first comment that I had in fact assumed a model. My model assumes the rise in CO2 is causing a rise in water vapor and a larger addition to global temperature (57% vs 43%) than the CO2. The question is how well this model applies to the future. There could be long term positive feedback from other sources (e.g. methane) which I am not considering. I am just looking at short term WV feedback and whether the present feedback will continue.

    I had a lot longer post about WV feedback in models, but the preview erased my comments, possibly due to some bad format code in text I cut/pasted from two different papers. I referred to a report entitled "Climate Models: An Assessment of Strengths and Limitations" and they answer one of my concerns about the unevenness of WV in a sidebar on p. 24. They refer to a paper by Ramaswamy which I could not find, but found a similar paper by Held and Soden (2000): http://www.dgf.uchile.cl/~ronda/GF3004/helandsod00.pdf

    On p. 450 in HS00, they talk about the importance of circulation in determining the distribution of water vapor. I agree with their final remark indicating that satellite measurements of the distribution of WV should validate modeled WV distribution by 2010 or very likely by 2020 (they wrote that in 2000). There should be recent papers on that topic which I need to look for. What it boils down to is if water vapor is unevenly distributed then there will be less WV feedback and that will be determined by circulation (in reality and in the models).
  27. Eric: The question is how well this model applies to the future.

    No, the question I was asking was "What kind of model could be used to support a claim of "significant contributing factor", but would not also have an estimate of sensitivity built into it?"

    Perhaps you are now accepting that any models used in the short-term will have a sensitivity "built in"? Perhaps what you are wanting to do is to argue about the uncertainty in those results?

    The question of whether that model works well is a different issue from whether or not it has a "sensitivity built into it". Any model that quantifies the extent to which human activities (i.e., CO2 increase) have contributed to current warming must also have an associated "sensitivity built into it". It's simply a question of how you run the model and how you process the results.

    You are diverting the discussion into an evaluation of different feedbacks. The models, when used to look at recent warming, are basically the same models that are used to estimate 2xCO2 sensitivity. They incorporate the same feedbacks. They incorporate the same uncertainties in feedbacks. I have grabbed the Climate Models report you refer to, and it does talk about water vapour feedback uncertainty, but the question (in my mind) is:

    ...why do you decide that all the uncertainties are wrong, and climate sensitivity of the models (which is what is used to decide on the uncertainty) can't be trusted - and indeed you are convinced the sensitivity is too high? You seem to trust the models in the short-term, you seem to feel that something is not handled properly in the long-term, and then you use that lack of trust to argue for a greater certainty/less uncertainty (at the low end) than the scientists come up with.

    It appears to me that you accept the WV feedback in the short term, and are convinced that the models do it wrong in the longer term, and then conclude that the only possible correct answer is the one at the low end of the uncertainty. The documents you mention are expressions of that uncertainty, not an argument that the correct answer is at the low end. Your decision at the end of the logic/evidence chain, that the sensitivity is at the low end of the scientists' range, looks like you're just applying magic.

    [posting note: I've found that if you forget to close an href tag (i.e., leave off the closing > after the link), the editor will drop everything after that point in the text box. I've made the habit of doing ^A ^C to select everything and copy it to the clipboard before I hit "Preview". When I'm feeling particularly unsure, I paste it into a text editor to be sure I've got a copy.]
    Response: [Sph] If this happens, the content of your comment will probably still be there, but just be invisible. Simply post another comment asking a moderator to repair your post, and it will probably be done fairly quickly.
    Bob, first I agree with your posting note, some of my best arguments have remained hidden inside unclosed HTML tags. To answer your "It appears to me" paragraph, I accept WV feedback in the short term, by which I mean the last few decades in total, since WV can fluctuate naturally in shorter intervals. Running the models for the longer run into the future results in circulation pattern changes and associated localized weather changes. Some of the uncertainty in those changes is known to be at the lower end of sensitivity. For example, the models underestimate the intensity of precipitation, they underestimate the penetration of cold air masses, and they underestimate storm intensity compared to finer resolution models. These all result in underestimation of negative feedback, in particular underestimated latent heat flux.
  29. Eric: "Some of the uncertainty in those changes are known to be at the lower end of sensitivity."

    Alas, asserting this beyond the knowledge of the scientists is a "dog that won't hunt". If these things are "known to be at the lower end of the sensitivity", then they aren't "some of the uncertainty". You are engaged entirely in wishful thinking.
  30. Bob,

    "...wishful thinking."

    Hmmmmmm.
  31. New study of seven climate models finds skill demonstrated for periods of 30 years and longer, at geographic scales of continent and larger: Sakaguchi, Zeng, and Brunke.
  32. dvaytw, other than Christy I have not heard of anyone using that for a baseline referent, let alone any climate models. Climate model referents are typically based on 30-year periods or more.

    Suggestion: Have your friend cite a source for that claim. Because it reeks like bunkum.
  33. I'm sorry if someone has already brought this up, but I don't want to read the entire comments section. An AGW denier friend of mine has brought up this question:

    "Why oh why do so many models use 1979-1982 as a base?"

    I dismissed his question for the anomaly-hunting it is, but I am curious if, assuming his observation is correct, anyone here happens to know the answer.
  34. dvaytw,

    Can you ask your friend if he can cite a single instance of a model that uses a 1979-1982 base? I have never heard of such a baseline. Usually climate models use a thirty year base. Hansen uses 1951-1980 and others use more recent data. A few special data sets use a 20 year base because the baseline is changing so fast, due to AGW, that 30 years is not representative of the true baseline.

    It is difficult to counter an argument that is completely non factual.
  35. On another thread, Snorbert Zangox asked:
    I wonder why, if that works so well, that the models cannot reproduce the past 16 years of temperatures not following carbon dioxide concentrations.
    This one's easy. They can.


    This is the output from a very simple 2-box model. I wrote it in R in ~20 lines of code. All it does is find the response function which matches forcing to temperature over the past 130 years, with an additional term to account for the substantial impact of ENSO on temperatures. Red is model, blue is GISTEMP. You can see that the model also shows a similar 1998 peak with a higher trend before and lower trend after. Why? Because there have been more La Ninas over the past few years, and the difference between an El Nino and La Nina is roughly equivalent to 15 years of warming (see for example this article). The model reproduces reality very well indeed.
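    The original R code isn't shown here, but the general shape of such a response-function model can be sketched in Python. Everything below is illustrative: the forcing ramp and the ENSO index are synthetic, and the box parameters are invented, not the fitted values behind the figure:

```python
import math
import random

# Illustrative two-box response model: temperature is past forcing convolved
# with two exponential response functions (a fast box and a slow box), plus a
# term for ENSO. All parameter values and input series below are invented for
# demonstration purposes only.

def two_box_temperature(forcing, enso, a_fast=0.4, tau_fast=4.0,
                        a_slow=0.3, tau_slow=80.0, c_enso=0.1):
    temps = []
    for t in range(len(forcing)):
        resp = 0.0
        for s in range(t + 1):
            age = t - s
            kernel = (a_fast / tau_fast * math.exp(-age / tau_fast)
                      + a_slow / tau_slow * math.exp(-age / tau_slow))
            resp += forcing[s] * kernel
        temps.append(resp + c_enso * enso[t])
    return temps

random.seed(0)
years = 130
forcing = [2.5 * t / years for t in range(years)]  # synthetic forcing ramp, W/m^2
enso = [random.gauss(0, 1) for _ in range(years)]  # stand-in ENSO index

temps = two_box_temperature(forcing, enso)
print(f"Final anomaly: {temps[-1]:.2f} C")
```

In a real fit the response parameters would be tuned so the output matches the observed temperature record; the point is only that a handful of lines suffices to capture a forced trend with ENSO wiggles superimposed.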

    Now, since the ENSO cycle is chaotic, we can't predict when a run of La Ninas or El Ninos will occur, so this couldn't be predicted in advance. But if you look in the model runs for real climate models which reproduce ENSO well, you see exactly this sort of behaviour. The models predict it will happen from time to time, but not when.

    There is a second aspect to your question, which you reveal in the 16 year figure. I guess you are referring to the viral '16 years of no warming' story. Ask yourself the following two questions: 'Why do these stories always use HadCRUT and not GISTEMP?' and 'Why does no-one ever show a comparison of the gridded data?' Now look at this image which shows the change in temperature between the beginning and the end of the period from various sources. Which datasets have the best coverage? What is going on in the regions omitted in the HadCRUT data? You should now understand why HadCRUT shows less warming than GISTEMP over this period.

    I also wonder, admittedly without having thoroughly read the papers, whether we are using an elaborate logical scheme that is circular.
    No, because we are talking about completely different models. If a climate model was being used to determine the aerosol forcing, you would have a potential case; however, we are talking about completely different models. Atmospheric chemistry models are used to describe the behaviour of gases in the atmosphere and are based on physics and chemistry which is observed in the laboratory. The results are combined with radar, IR, microwave and optical measurements to determine the state of the atmosphere - so far everything is empirical. This empirical data is tested against economic variables to determine how well the atmospheric chemistry is predicted by industrial activity. The robust agreement provides a basis for reconstructing atmospheric data from industrial activity before the observation period. The chain of inference is linear, not circular. Furthermore, no climate models are involved.

    (There appear to be several other approaches. Some involve climate models as a consistency check.)
  36. So, if I am to accept the science presented here, I must accept that:
    1) We have a comprehensive understanding of all the inputs, feedbacks, timing, and interactions of the global climate,
    2) We have defined this in a computer model identically with that understanding,
    3) We have no bugs or unintended effects programmed into our computer models (Microsoft is very jealous),
    4) We have an absolutely accurate understanding of the current climate, and
    5) we have done all of this without opening up the models for public review by those who might disagree with them.
    If not, we have, at best, an approximation with lots of guesses and estimations that might be full of holes and bugs, yet we used it to make predictions that we trust enough to propose spending Trillions of dollars in response. Of course, at worst, we have crappy code that can cause more harm than good by convincing us of things that we don't know enough about how they came about to doubt.
    Please forgive me if I am still skeptical of the computer models. I do enough modeling in computers to be dubious of anything you get out of an imperfect model.
  37. Jack... You're making completely erroneous assumptions. Even Ben Santer says that models don't do a great job. That is why they rely on model ensembles and multiple model runs. Santer also says that some models are better than others, but ensembles perform better than even the better models. AND weighting the better models makes the ensembles perform even better.

    It sounds more to me like you are looking for reasons to dismiss the science rather than honestly attempting to understand the science.
  38. Sigh. Are the models, in fact, untestable? Are they unable to make valid predictions? Let's review the record. Global Climate Models have successfully predicted:

    • That the globe would warm, and about how fast, and about how much.
    • That the troposphere would warm and the stratosphere would cool.
    • That nighttime temperatures would increase more than daytime temperatures.
    • That winter temperatures would increase more than summer temperatures.
    • Polar amplification (greater temperature increase as you move toward the poles).
    • That the Arctic would warm faster than the Antarctic.
    • The magnitude (0.3 K) and duration (two years) of the cooling from the Mt. Pinatubo eruption.
    • They made a retrodiction for Last Glacial Maximum sea surface temperatures which was inconsistent with the paleo evidence, and better paleo evidence showed the models were right.
    • They predicted a trend significantly different and differently signed from UAH satellite temperatures, and then a bug was found in the satellite data.
    • The amount of water vapor feedback due to ENSO.
    • The response of southern ocean winds to the ozone hole.
    • The expansion of the Hadley cells.
    • The poleward movement of storm tracks.
    • The rising of the tropopause and the effective radiating altitude.
    • The clear sky super greenhouse effect from increased water vapor in the tropics.
    • The near constancy of relative humidity on global average.
    • That coastal upwelling of ocean water would increase.

    Seventeen correct predictions? Looks like a pretty good track record to me.
  39. Another useful page on model reliability which provides model prediction, the papers that made it, the data that verifies it.

    Incomplete models with varying degrees of known and unknown uncertainties are just part of life - you should see the ones I use to help petroleum companies make multi-million dollar drilling decisions. A model is useful if it has skill - able to outperform a naive prediction. The trick is understanding what those uncertainties are and what robust predictions can be made. The models have no skill at decadal or sub-decadal climate (unless there is a very strong forcing). They have considerable skill in long term trends.

    You need a mathematical model to calculate detailed climate change, but you don't need a complicated model to see the underlying physics and its implications, nor to observe the changes in climate.
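    The notion of "skill" above, outperforming a naive prediction, is commonly quantified with a mean-squared-error skill score, where the reference is a no-change forecast. A minimal sketch with invented numbers:

```python
# Skill score: 1 = perfect, 0 = no better than the naive reference, < 0 = worse.
# The anomaly series below are made up purely for illustration.

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, naive_pred, obs):
    return 1.0 - mse(model_pred, obs) / mse(naive_pred, obs)

obs   = [0.10, 0.18, 0.25, 0.35, 0.48, 0.60]  # hypothetical decadal anomalies (C)
model = [0.12, 0.16, 0.27, 0.33, 0.45, 0.62]  # hypothetical model output
naive = [obs[0]] * len(obs)                    # "no change" reference forecast

print(f"Skill score: {skill_score(model, naive, obs):.2f}")
```

A model that tracks the trend closely scores near 1 against a no-change reference on multi-decadal data, while on short sub-decadal stretches the same model can easily score near zero, which is the distinction drawn above.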
  40. Jack, come on. You ask for precision and then vomit a bunch of hearsay.

    On models, a hypothetical: you're walking down the street carrying a cake you've spent hours making. You see someone step out of a car 900 yards down the street. The person aims what looks to be a 30-06 in your direction. You see a flash from the muzzle. What do you do?

    You're arguing to do nothing, because the precise trajectory of the slug is unknown -- the aim of the person may be bad, or you may not be the target, or the bullet may have been inferior, or other conditions might significantly affect the trajectory.

    You wouldn't actually do nothing, though. The only question is whether you'd drop the cake to roll to safety, or whether you'd try to roll with the cake. Your instinctual modeling would calculate a probable range for each variable, and the resulting range of probable outcomes would be fairly limited, and most of those outcomes would be unpleasant.

    So it is with climate modeling. The variables are variable, but the range for each is limited. Climate modelers don't choose one run and say, "Here's our prediction." They work through the range of probable scenarios. Solar output, for example, is likely not going to drop or rise beyond certain limits. We know the power of the various GHGs, and their forcing is likely going to be not far from the known power (see, for example, Puckrin et al. 2004). Climate feedbacks aren't going to go beyond certain bounds. Even the range of net cloud feedback--still understudied--is fairly well-established.

    To put it back in terms of the analogy, it's not like the shooter is wearing a blindfold, or is non-human, or the gun is a toy, or the shooter keeps changing from a couch to a sock to a human to breakfast cereal. It's not like the shooter is working under a different physical model than the target. And it's not like people haven't been shot before. There's plenty of geologic timescale precedence that supports the theorized behavior of atmospheric CO2.

    Finally, your assumption that we must drop the cake (spend alleged trillions, with the assumption being that these alleged trillions will not re-enter the economy in various ways) is a bad one. We don't have to drop the cake. We just have to be really smart about our moves. We have to have vision, and that is sorely lacking under the current economic and political regime(s).
    Daniel Bailey & scaddenp: Thank you, that's helpful to me to see that. However, if Foster and Rahmstorf are correct, then the models are wrong recently b/c they missed some critical information. One of them has to be 'wrong', since they either explain why warming is hidden or predict that warming happened and isn't hidden. I'm not throwing the GCMs out the window b/c they need better tuning, but shouldn't we support identifying their weaknesses and correcting them so the GCMs can make better predictions with their next run?
    I'm not a scientist, I don't play one on TV, but I'm trying hard to better understand all this. I will try to ask better questions as I learn more, but I'm thick skinned enough to tolerate being berated when I ask a stupid one.
    However, I am an economist by training, and I do a lot of computer modeling in my job, so I am quite familiar with those aspects of this topic. That's also why the economic arguments about a lack of 'real' costs to changing policies is one I dismiss easily. It's the classic 'broken window' proposition, thinking that breaking a window benefits the economy by getting a glass repairman paid to fix the window and then he spends that money on a new TV, which means the worker who made the TV spends his increased wages on a...
    It only works if you assume that the money to pay the glass repairman was magically created and didn't devalue the remaining currency. Otherwise, you are pulling money from an investment that can increase economic efficiency to spend on a repair to get back to the same level of efficiency you were before the glass was broken.
    It has been shown in numerous manners that it is a flawed proposition, and it also doesn't 'make sense' (no economy has been helped by being bombed by the US). Yet, it get repeated often to justify spending money on things that don't increase efficiency but cost a lot.
    I freely admit there are times when it makes sense to spend the money that is being proposed here, but don't try to pretend that there aren't real financial costs.
  42. JackO'Fall @591,

    1) If you are an economist, you know that the true cost to the economy of a fee and dividend carbon tax (or similar) is not measured by the cost of the fee alone, and is indeed a small fraction of it. Your characterizing such costs in terms of "trillions" of dollars is, therefore, unwarranted (to be polite).

    2) If you are an economist worth your salt, you will recognize that uncompensated negative externalities make the economy inefficient, and would be advocating a carbon tax to fund the medical costs, plus costs in lost income for those affected, associated with the burning of coal irrespective of your opinions on global warming.

    3) If you were a modeler worth your salt, you would recognize the difference between a prediction and a conditional prediction premised on a particular forcing scenario. A slight difference between such a conditional prediction and observations, when the actual forcings differed from those in the scenario, does not make the model wrong. It just means the modelers were not perfect at predicting political and economic activity ten or more years into the future. The test of the model is the comparison between prediction and observations once the model is run on actual forcings.
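    The logic of that test can be sketched with a toy zero-dimensional energy-balance model. All numbers below are illustrative assumptions, not values from any real GCM; the point is only the difference between a scenario-driven projection and a re-run on the forcings that actually occurred:

    ```python
    # Toy energy-balance model: C * dT/dt = F - T / sensitivity,
    # stepped forward one "year" at a time with a simple Euler update.
    # sensitivity and heat capacity c are illustrative, not tuned values.

    def run_model(forcings, sensitivity=0.8, c=8.0):
        """Return the temperature response to a list of annual forcings (W/m^2)."""
        temps = []
        T = 0.0
        for F in forcings:
            T += (F - T / sensitivity) / c  # one-year Euler step
            temps.append(T)
        return temps

    # Forcing scenario assumed when the projection was made...
    scenario = [0.03 * yr for yr in range(1, 21)]
    # ...versus the (hypothetical) actual forcing, which came in lower,
    # e.g. because of unforeseen volcanic aerosols or slower emissions growth.
    actual = [0.02 * yr for yr in range(1, 21)]

    projected = run_model(scenario)  # the original conditional prediction
    rerun = run_model(actual)        # the same model, re-run on actual forcings

    # A fair test compares observations against `rerun`, not `projected`:
    # the gap between the two reflects the forcing scenario, not model physics.
    ```

    The same model produces different trajectories under the two forcing histories, so a mismatch between observations and the original projection does not by itself indict the model.
    
    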
  43. Tom Curtis @592,
    Re 1): Tax and dividend policies are deceptive, in my view. It is not just a wealth transfer, but it changes behavior (as you intend it to do). This change in behavior has ripples, and the ripples have efficiency costs. For example, a carbon tax in India or China would prevent most of the new coal plants from opening (if it didn't, I think it's fair to call the tax a failure). There is no 'second best' option available to replace those plants at a price that is viable (today). Thus, the tax is retarding the economic growth and advancement of millions of our most poor people without actually collecting any revenue from those power plants. There would be no redistribution of wealth as a result, instead there would be a lack of growth and no taxation to show for it. To produce that missing power with a 'greener' technology will indeed push the price tag into the trillions. Unless you have a cheaper way to produce that volume of power. (side note: I don't want them to build those coal plants for a number of health reasons, but I recognize the economics of it for them and that until they are at a higher economic level, clean isn't a concern for them)
    2) Uncompensated or not, ALL negative externalities cause inefficiencies. Taxing them is helpful in reducing the net effect, but compensating for them is actually counterproductive (it creates an incentive to be 'harmed' and eliminates the incentive to avoid harm). To pay for the costs associated with coal, I would fully support a targeted tax on what causes the medical issues (clean coal [in spite of being a misnomer] produces far fewer harmful byproducts than dirty coal, but little difference in CO2). Taxing the carbon would be a very inefficient way to deal with that problem, compared to taxing the release of specific combustion byproducts. But yes, in a general sense, taxing externalities, such as coal byproducts, is an efficient way to try to compensate for the negative consequences.
    3) You are 100% correct that the method you propose to test a GCM would be the best. However, I have never seen a model used that way. Hindcasting is a distant cousin, at best, since the models were developed to account for the known inputs and known climate. Taking the exact models used in AR4 and updating all the variables that were unknown to the modeler at the time of finishing the model, specifically CO2 emissions (as opposed to CO2 levels), volcanic activity, and solar output for the following 8 years, you should be able to eliminate the range of results normally produced and create a single predictive result. That should be much more useful to compare than a range of predictions that covers the uncertainty. Comparing that 'prediction' with actual measurements would be the best way to test the GCMs and would even provide a result that 'could' be proven wrong. Having the possibility of being proven wrong by observations is actually a needed step for AGW, otherwise it hardly fits the definition of a theory. Though I suspect the climate models of today are more accurate than those used in AR4 (at least I hope they are, otherwise our process is really broken).
    OTOH, if you are a modeler worth your salt, you will freely admit the range of shortcomings in models, the inherent problems with any computer model, the difficulty with trying to model as chaotic and complex a system as our climate, and the dangers introduced with any assumptions included. At least, those are the caveats I accept in my modeling (except for the difficulty modeling the climate; I have much more simple tasks, but ones with more immediate and absolute feedback to test my predictions).
  44. Jack O'Fall:

    Please move any further discussion on the economics of climate change to an appropriate thread. It is completely off-topic on this thread. If you have references to economic analyses supporting your position please provide them on an appropriate thread; otherwise you are engaged in unsubstantiated assertion, which will not get you very far here.

    (All: Any responses to Jack regarding economics should also be on an appropriate thread.)

    As far as the rest of the wall of text goes, please note that climate models are attempts to create forecasts based on the known physics, existing empirical data, and the reconstructed paleoclimate record. If you want to "disprove" AGW, meaningfully, you must show the physics is wrong, not the models.
  45. JackO'Fall, probably you do not realize how fundamentally wrong some of your contentions are, due to your admitted lack of background in climate modeling, climatology, and science in general. You are overconfident in your experience with modeling in general, too. Your expressed overconfidence is going to trigger some strong reactions. I hope you do not take offense and retreat, but instead get some humility and learn. Many lay people new to the global warming discussions enter with similar overconfidence.

    You need to learn the fundamentals. Start with this short set of short videos from the National Academy of Sciences: Climate Modeling 101.

    Your implication that climate modelers do not want to, or have not considered, improving their models is not just offensive to them, but reflects poorly on your understanding and ascribing of motivations. Modelers are consumed by the desire to improve their models, which you would know if you had even a passing familiarity with their work; every paper on models describes ideas for how the models can be improved. Just one example is the National Strategy for Advancing Climate Modeling.

    Then look at the Further Reading green box at the bottom of the original post on this page (right above the comments).

    Your contention that models are not open for public inspection is wildly wrong. A handy set of links to model code is the "Model Codes" section on the Data Sources page at RealClimate. Also see the last bullet, "Can I use a climate model myself?", on the RealClimate page FAQ on Climate Models. (You should also read the FAQs.) The Clear Climate Code project is an open source, volunteer rewriting of climate models. So far it has successfully reproduced the results of the GISTEMP model.

    But computer models are not needed for the fundamental predictions, as Tamino nicely demonstrated. Successful predictions were made long before computers existed, as Ray Pierrehumbert recently explained concisely in his AGU lecture, Successful Predictions.

    The vertical line in this first graph separates hindcasts from forecasts by a bunch of models.

    Your baseless, snide remark about "crappy code" reveals your ignorance of software engineering. You can start to learn from Steve Easterbrook's site. Start with this post, but continue to browse through some of his others.
  46. JackO'Fall

    Tom Curtis - "The test of the model is the comparison between prediction and observations once the model is run on actual forcings."

    JackO'Fall - "You are 100% correct that the method you propose to test a GCM would be the best. However, I have never seen a model used that way."


    Then I would suggest looking at the performance of various models here on SkS, both of "skeptics" and of climate researchers, as well as the considerable resources on Realclimate:

    Evaluation of Hansen 1981
    Evaluation of Hansen 1988
    2011 Updates to model-data comparisons

    And even a cursory look via Google Scholar provides a few items worth considering (2.4 million results?). I find your statement quite surprising, and suggest you read further.

    "OTOH, if you are a modeler worth your salt, you will freely admit the range of shortcomings in models, the inherent problems with any computer model, the difficulty with trying to model as chaotic and complex a system as our climate, and the dangers introduced with any assumptions included."

    If you are a modeler worth your salt, you will know that all models are wrong, but many are close enough to be useful. And the record of reasonable models in predicting/tracking climate is quite good.

    Your statements regarding models appear to be Arguments from Uncertainty - the existence of uncertainty does not mean we know nothing at all.
  47. No such thing as a "throwaway remark" in our brave new world; you can't just roll down the window and toss out litter without finding it stubbornly affixed to your reputation, later on.

    I hope Jack will deal with Tom and KR's remarks; judging by those I count perhaps a half-dozen assertions on Jack's part that appear to be baseless.
  48. "if Foster and Rahmstorf are correct"

    If they are correct, then short-term surface temperatures are dominated by ENSO, and so for climate models to have the accuracy you want, they would have to make accurate predictions about ENSO decades in advance. In practice, ENSO is difficult to predict even months in advance. Is the current rash of La Niñas unusual historically? No, nor would a rash of El Niños be, but I will bet that if that happens there won't be complaints about models underestimating the rate of warming.

    Climate models have no skill at decadal prediction, nor do they pretend to. It's a fake-skeptic trick to try to prove models wrong by looking at short-term trends. Climate is 30-year weather trends, and those are the robust outputs of models.

    There are attempts at decadal predictions - look at Keenlyside et al 2008. How is this one working out?

    If surface temperatures are dominated by chaotic weather, then instead of looking at surface temperature to validate models, how about looking at indicators that are long-term integrators, e.g. ocean heat content (OHC) and global glacial volume?
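    The point about 30-year trends can be sketched numerically. The trend rate and noise amplitude below are illustrative assumptions standing in for warming plus ENSO-scale interannual variability; the exercise shows why a short fitting window is unreliable while a climate-length window is not:

    ```python
    # Synthetic "temperature" series: a steady trend plus interannual noise
    # of a few tenths of a degree, loosely mimicking ENSO variability.
    import random

    random.seed(42)

    TREND = 0.017  # degC per year (illustrative)
    years = list(range(60))
    temps = [TREND * y + random.gauss(0, 0.15) for y in years]

    def ols_slope(xs, ys):
        """Ordinary least-squares slope of ys against xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    short = ols_slope(years[-8:], temps[-8:])    # 8-year window
    long_ = ols_slope(years[-30:], temps[-30:])  # 30-year, climate-length window

    # The 30-year slope typically lands close to TREND, while the
    # 8-year slope can wander far from it, in either direction.
    ```

    With the same underlying trend, re-running this with different seeds shows the short-window slope scattering widely while the 30-year slope stays near the true value, which is why short windows have little power to confirm or refute a model.
    
    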
  49. @ Tom Dayton: I never said a modeler doesn't want to improve their model, just that I would like them to continuously improve and that the need to include new feedbacks is not bad. I threw that in there in hopes of showing that I'm not rooting against the models. Apparently I missed getting that across. My apologies.
    My time is limited, so I know I miss a lot of the data out there (and don't have a chance to reply to much of what gets written back at me). However, I looked at the GISTEMP link; it doesn't look like a climate model, but rather an attempt to recreate the corrective adjustments applied to the raw temperature-station data to produce the GISS results. Still, it's cool that they are doing it. What I was referring to is a lack of documented source code for the GCMs. If that exists, I am clearly wrong and fully retract the statement.
    I also read the RealClimate FAQs on GCMs, as suggested. They seemed to agree with many of my basic contentions: the models include estimations that are known to be off, they don't include everything we know, they are prone to drift (though less than in the past), and they are primarily tuned by trial and error, not by scientific principles [please note the word 'tuned'].
    @KR: While both of Hansen's graphs seem to do a good job estimating future temperatures, that's not what I was referring to. I believed Tom Curtis was proposing re-running a 2004 scenario (for example), yet adding in known 'future' levels of things like CO2 and volcano emissions. If that exists, please let me know.
    Of course, the other link was inconclusive, to be polite. The range of uncertainty for those models is so large that it doesn't really tell us anything. A result so broad that it would be difficult for it to ever be wrong is also not very right.
    @scaddenp: At the very least, if the ENSO correction is more natural variability than previously understood, that is very helpful. In terms of modeling that, it will probably increase the uncertainty range, but would allow for a better run at what I believe Tom Curtis proposed. That may be more helpful in time scales of less than 30 years (I'm not sure anyone will wait 30 years to see if the current models accurately predicted the future).

    My apologies for off-topic discussion. I should not have made my initial off-the-cuff economic response, and certainly should not have replied more extensively.
  50. Jack,
    Anyone who has looked at this issue at all should know that GISTEMP has two web links. One gives their code and documentation for determining the anomaly of surface temperatures, and the other gives code and documentation for their climate model. That includes all the source code and documentation you could want, including old models. You need to do your homework before you criticize hard-working scientists' efforts. Look at GISS again and find the climate model link.
    Response: [TD] Typo: "GISTEMP has two web links" should be plain "GISS."
