






How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
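The "averaged out over time" idea can be sketched in a few lines of Python. The anomaly series below is synthetic, invented purely for illustration; it is not real temperature data:

```python
# Sketch of "climate is weather averaged out over time": a 30-year
# trailing mean smooths out single extreme years so the underlying
# trend shows through.  The anomaly series is synthetic.
import random

random.seed(0)
years = list(range(1900, 2021))
# hypothetical anomalies: a slow warming trend plus noisy year-to-year weather
anomalies = [0.008 * (y - 1900) + random.gauss(0, 0.3) for y in years]

def running_mean(values, window=30):
    """Trailing mean over the previous `window` values."""
    return [sum(values[i - window:i]) / window
            for i in range(window, len(values) + 1)]

trend = running_mean(anomalies)
# the smoothed series varies far less year-to-year than the raw values
```

The smoothed series spans a much narrower range than the raw yearly values, which is exactly why trends rather than single years are what climate models aim at.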

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate to explain temperature variations prior to the last thirty years, while none of them can explain the rise over the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
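The hindcasting logic described above, rerunning the past with and without the extra CO2 and seeing which run matches observations, can be sketched with toy numbers. Every value below is invented for illustration; none of it is real forcing or temperature data:

```python
# Hindcasting sketch (toy numbers): a "model" is run from a past start
# date and its output compared with the observed record, once with the
# CO2 forcing included and once without it.

def toy_model(natural_forcing, co2_forcing, include_co2):
    """Return simulated anomalies as the sum of the supplied forcings."""
    return [n + (c if include_co2 else 0.0)
            for n, c in zip(natural_forcing, co2_forcing)]

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

natural  = [0.00, 0.05, -0.02, 0.03, 0.01]   # hypothetical solar/volcanic
co2      = [0.00, 0.10,  0.20, 0.30, 0.40]   # hypothetical GHG forcing
observed = [0.01, 0.16,  0.17, 0.34, 0.40]   # hypothetical record

with_co2    = rmse(toy_model(natural, co2, True), observed)
without_co2 = rmse(toy_model(natural, co2, False), observed)
# only the run that includes the CO2 term tracks the "observed" warming
```

The run without CO2 misses the observed rise badly; only the run with CO2 included matches it, which is the shape of the argument made in the paragraph above.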

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Basic rebuttal written by GPWayne

Last updated on 26 February 2014 by LarryM. View Archives


Further reading

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Comments


Comments 151 to 200 out of 505:

  1. cloneof,
    well, it's hard to make any comment on a paper if you do not give a reference. Assuming it's J. Clim. 2008, 21, 5624, it has nothing to do with models. It's on the empirical (mainly from satellite data) determination of feedback operation.
  2. cloneof,
    i'd like to add that, as a rule of thumb, when a paper is ignored by other scientists you can safely assume that it is considered of no value, not even worth a reply or a citation.
  3. Riccardo

    As I have just a few minutes, let me first apologize for not giving the reference. It was almost irresponsible to brag about a paper that I didn't even give the reference for.

    I will give you a response later today.
  4. Riccardo

    I was going to write about the paper, but I actually found a good video where Spencer explains his paper.

    And at least he gives the impression that it is supposed to have something to do with climate models.

    http://www.youtube.com/watch?v=LpFk0zTW-ik
  5. cloneof,
    the video is a sort of short version of the paper. He calculated a possible cause of error in empirical sensitivity estimates based on random variability of some internal forcing factor. Tuning the variability with just satellite tropical data he found a lower climate sensitivity than the global sensitivities obtained by various GCMs.
    This is all it has to do with models, Spencer compares his sort of "tropical sensitivity" with GCMs. While the effect of random variability on short term empirical estimates of the sensitivity might in principle make sense, it does not make any sense when he compares his tropical sensitivity with GCMs.
    This paper is on the very same track as the more recent (and wrong) Lindzen and Choi 2009 paper, similar wrong reasoning that they sell as the definitive proof.
  6. Riccardo

    Alright, thank you Riccardo. That totally cleared things up for me.
  7. cloneof,
    you might be interested in this paper in press which specifically addresses Spencer and Braswell 2009 and to this discussion.
    Thanks to Ari Jokimäki at AGWObserver for informing us on this not yet published paper.
  8. Riccardo:

    Oh really? Sweet, I've got to see that when it gets published.
    Thanks for the link, even though I'm a few weeks late.
  9. transferred over from "CO2 is not the only driver of climate"

    re yocta at 09:53 AM on 4 June, 2010

    You say:
    ...and as can be seen with the IPCC tracked models there is quite a divergence...
    Please quantify this statement

    These projections from the IPCC Fourth Assessment Report must surely be the most recognisable of any, and quantify the divergence sought.


    Figure 10.5. Time series of globally averaged (left) surface warming (surface air temperature change, °C) and (right) precipitation change (%) from the various global coupled models for the scenarios A2 (top), A1B (middle) and B1 (bottom). Numbers in parentheses following the scenario name represent the number of simulations shown. Values are annual means, relative to the 1980 to 1999 average from the corresponding 20th-century simulations, with any linear trends in the corresponding control run simulations removed. A three-point smoothing was applied. Multi-model (ensemble) mean series are marked with black dots.


    just the opinion of the forecasters as to which one was most likely to eventuate.
    Can you provide evidence of the forecaster's opinion?

    If weather was relevant to your livelihood rather than merely a subject of academic interest or topic of conversation, then you would surely follow professional forecasters rather than those who present it as part of the evening entertainment.
    By following the professional services, the processes by which forecasts are developed will over time become clearer as forecasts are continually updated as situations develop and the forecast period shortens.

    all the models should begin converging until about 24 hours out they all should be fairly well aligned.
    why 24 hours? What physical basis do you have for this?

    See above.

    However there is another scenario that can and does occur, they are all proved wrong. It is obviously impossible for them all to be proved right.
    This statement is too vague.

    See above.
  10. johnd - we (if I recall correctly) got into this topic over on CO2 is not the only driver of climate, where we were discussing weather. In weather a 24 hour outlook is part of forecasting, with declining accuracy over longer periods due to the non-linear chaotic system of atmospheric physics - deterministic (not stochastic, sorry about that in an earlier post) progression but complex and with extreme sensitivity to starting conditions, which we don't absolutely know.

    The IPCC models and predictions you give above are for climate changes; the temporal progression of averages over the period of years. 24 hour time periods make absolutely no sense whatsoever with regard to climate predictions.

    As to the differences between the models - these are related to different estimates on feedback (active research to establish amounts and time constants), and different human future actions (how much CO2 do we continue to put into the air, how many aerosols?). They are multiple year "what if" scenarios. Since they're dependent on feedback refinement and how we respond to the issue, they of course are different in predictions.

    Weather and climate are not the same thing, not in time scale, variability, or predictive range. It's important not to confuse them.
  11. A clarification - the differences shown between the means in the three scenarios are due to different what-if postulations, while the spread of predictions under different assumptions is the difference in multiple-year predictions of different models, with different estimations of feedback and sensitivity. Again, the exact values for feedback and time constants are under refinement, which is where climate scientists get to write papers.

    You'll note, however, that all of the models go "up"... regardless of assumptions.
  12. KR at 23:29 PM on 4 June, 2010, the point I started out to make was that models, be they forecasting the weather or the climate, should be within themselves 100% valid.
    That is, the combination of assumptions cannot be shown to be incorrect. If they could be then that particular model would be flawed and should not be used.

    Because each individual model is based on valid assumptions then it has as much chance of being correct as any other individual model.
    With the IPCC they take the mean as being the most likely outcome.
    With weather forecasters the process is similar, with a number of different models all being run simultaneously with a range of different outcomes. When the forecasters are required to give a forecast for an extended outlook, they use their best JUDGEMENT to select the output of whatever model they think at that time is most likely to eventuate.
    As I mentioned earlier, this at times has resulted in different agencies simultaneously issuing forecasts totally opposing each other. Obviously someone's best judgement is different from someone else's. They both can't be right, just as all models, be they weather or climate models, cannot all be right. Only one can hope to be right.
    HOWEVER, as does happen with weather forecast models, at times ALL can be wrong.
    There is no fundamental reason why all the climate models tracked by the IPCC cannot all be wrong.
    After all, weather forecasting provides much of the data that is plugged into GCM's that end up being plugged into all the climate models.
  13. After all, weather forecasting provides much of the data that is plugged into GCM's that end up being plugged into all the climate models.

    No.
  14. Johnd, forecasting is not what climate models do. You're probably on top of that but it's an important distinction for folks less up on the topic, frequently the source of confusion.
  15. doug_bostrom at 07:57 AM, doug, which part do you disagree with, that weather data is plugged into GCM's, or that GCM's are plugged into climate models?
  16. johnd - when dealing with models, it's really not a black-and-white issue of right/wrong. What's important is the predictive capability of the model, which is a sliding scale: how close is the prediction to the actual outcome? Newtonian physics is "wrong" according to General Relativity - but accurate enough to compute the orbital paths of every planet except Mercury.

    Each of these models is 'valid' for the assumptions used - the relationships, the feedbacks, time scales, input values, etc. These assumptions can be shown to be incorrect - if a feedback value is incorrect, or an important relationship neglected, discovering the more accurate value or relationship can lead to abandoning or modifying a model. And if your assumptions are wildly off, your model is as well.

    These different models all disagree where questions about actual values (current and future research questions!) are still open. If the climate sensitivity is somewhere between 2-4.5 degrees C for a CO2 doubling, then any assumption in that range is in itself valid, and the models' predictions will vary. This doesn't mean that the climate sensitivity is therefore 0.1 or 15! The models are close.

    None of the models are perfect - they are not exactly right on the input assumptions, input conditions, relationships, etc. The only complete model would be a copy of the Earth! But after sufficient testing (multiple runs with historic data compared to present, future predictions checked after a couple of years, etc.), they are close, or they are abandoned. And if they are close, they are useful for decision making.

    In my opinion (for whatever that's worth) weather predictions are far more likely to be wrong than climate models, given equal accuracy on assumptions - weather is a short term non-linear chaotic system, and the smallest bit of error in starting conditions, or insufficient granularity of the model, will result in the weather departing from the model after a time. Climate, on the other hand, is far less chaotic - long term averaging overrides any short term non-linear variance.

    And as doug_bostrom said, detailed weather forecasting models have nothing to do with the GCM's - only the long term average measurements are inputs to GCM's.
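The "sliding scale" of predictive capability that KR describes in comment 16 is often quantified as a skill score against a naive baseline. A minimal sketch, with made-up numbers:

```python
# A model being "right" is a matter of degree: skill measures how much
# better its predictions are than a naive baseline (here, always
# predicting the long-term mean).  All numbers are invented.

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

observed   = [0.1, 0.3, 0.2, 0.5, 0.4]
model_pred = [0.15, 0.25, 0.25, 0.45, 0.40]
baseline   = [sum(observed) / len(observed)] * len(observed)  # "climatology"

skill = 1 - rmse(model_pred, observed) / rmse(baseline, observed)
# skill near 1: far better than the baseline; skill <= 0: no better at all
```

On this scale a model is never simply right or wrong; the question is how much of the gap between guessing and perfection it closes.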
  17. Johnd, I suspect you made a simple typographical error. Weather forecast outputs are not a part of general circulation model inputs.

    Meanwhile, GCM's -are- climate models; the full term is "general circulation model of climate."

    Finally, just to be extra clear for bystanders, climate models do not produce forecasts nor is that the purpose of such models.

    For the curious, see background information on general circulation models here.
  18. doug - thanks for the link! Fascinating reading... I hadn't realized the complexity of the models used.

    My apologies for inaccuracies, johnd - looks like GCM models have some similarities to short term weather forecasts, but are far more extensive and detailed. I'll repeat, though, that each refinement brings the GCM's closer to matching actual world behaviour, and makes them more and more useful for looking at "what-if" scenarios.
  19. doug_bostrom at 08:17 AM, the link below may help illustrate the overlap of weather data and climate modelling.
    Note that the various forecasts used are grouped as Coupled GCM's, Ensembles, and Statistical, and are identified with each agency that produces each.
    I personally favour the Japanese Sintex model as one of the most accurate, often identifying any change in trends well ahead of the others.
    Until May last year they were extremely accurate, correctly forecasting conditions completely opposed to those of the more recognised agencies, which generally had rather more dismal success.
    They then updated their computer system which, without any changes at all to the models or the data being input, began throwing up forecasts more in line with other agencies. Even when they ran old data through, the results turned out different from the forecasts produced on the old system, even though the original forecasts were extremely accurate.
    I think they are still trying to identify why this has occurred, but it does make one wonder: if all agencies use similar computing systems, is there some inbuilt logic in the computer itself that will influence how the data is processed?

  20. doug_bostrom at 08:17 AM, thanks for an interesting article.
    Especially interesting that the article should mention the following:-

    "It was now evident, in particular, where clouds brought warming and where they made for cooling. Overall, it turned out that clouds tended to cool the planet — strongly enough so that small changes in cloudiness would have a serious feedback on climate.(89)"
  21. Johnd, let me reiterate that general circulation models are not used to produce forecasts in the sense that we use the word to describe predicting weather. GCM utility lies in predicting tendencies. There's a huge difference between the two objectives.

    With regard to clouds, from all that I've read any real skeptic would do well to zero in on those as the single largest possible weakness of GCM's. But don't get your hopes up.
  22. doug_bostrom at 10:28 AM, with regards to your last comment. They do, and we have.
  23. This review article is a little long in the tooth but is pleasingly boggy in terms of showing the difficulty wading through the complexity of cloud treatments. It's also a nice illustration why so few skeptics are capable of emerging from the other side of the cloud swamp bearing useful contributions to the problem; one might say the "Cloud Swamp" is a test capable of identifying what real skepticism looks like.

    Cloud feedbacks in the climate system: A critical review
  24. johnd, with regard to the Sintex model and changes based on computer platform - it might be worthwhile for them to look at any differences in floating point calculations: IEEE compliant or not, single versus double precision, compiler/math library updates, etc. That kind of change is enough to make a difference on these scales.

    The original work on chaos and the Lorenz attractor came out of a very simplified weather model (3 variables, planar planet, etc.) that exhibited chaotic behavior - extreme dependence on starting conditions. Lorenz found that restarting his simulation with values rounded by 1/1000 (from a printout) was sufficient to get entirely different results! That result, from the early 1960s, was sufficient to jump-start non-linear system analysis and chaos theory.
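Lorenz's rounding experiment is easy to reproduce in miniature with the logistic map, a textbook chaotic system (the parameter value below is a standard illustrative choice, not anything from Lorenz's model):

```python
# Lorenz's rounding experiment in miniature: in the logistic map, two
# starting points differing by 1/1000 soon produce completely
# different trajectories.

def trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400)
b = trajectory(0.401)  # the "rounded" restart, 1/1000 away
# early on the runs agree closely; later they bear no resemblance
early_gap = abs(a[1] - b[1])
max_gap   = max(abs(x - y) for x, y in zip(a, b))
```

The two runs track each other for the first handful of steps and then diverge completely, which is the signature of an initial-value (weather-type) problem.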
  25. Most relevant to this thread about climate models, is this snippet from the Spencer Weart site that Doug linked to:
    That was a fundamentally different type of problem from forecasting. Weather prediction is what physicists and mathematicians call an "initial value" problem, where you start with the particular set of conditions found at one moment and compute how the system evolves, getting less and less accurate results as you push forward in time. Calculating the climate is a "boundary value" problem, where you define a set of unchanging conditions, the physics of air and sunlight and the geography of mountains and oceans, and compute the unchanging average of the weather that these conditions determine.
  26. Explanation of "initial value" (weather) versus "boundary value" (climate) models is provided by Steve Easterbrook at his site Serendipity.
  27. KR at 11:51 AM on 5 June, 2010, KR, I have copied an email exchange relating to the change of computer which may be of interest to you.

    The email was not to me so I blanked out the recipient, but they are available on the internet to view. The most recent reply is on top.

    From: Jing-Jia Luo [mailto:jingjia.luo@...com]
    Sent: Monday, 22 June 2009 2:35 PM
    To: ======================
    Cc: Toshio Yamagata
    Subject: Re: Seasonal forecasts from 1 June 2009 (monthly mean maps)

    Dear Peter,

    Nothing except the computer has changed since 1 April 2009; the forecast model is the same as before. We repeated the forecasts initiated from 1 March 2009 (with the same model and initial conditions), 9-ensemble mean did show certain differences as I mentioned before. I am still not quite sure what the actual reasons for this difference are. One possible factor can be due to the different FORTRAN compiler. This means the executable codes of the coupled model are different now though the source code itself has no any change.

    I asked NEC system engineering. The answer is that it is basically no way to get the same results on the new Earth Simulator (like chaos). Theoretically, if we have infinite ensembles, the results may be equal if the new compiler does not change the code systematically. But who knows (sometimes, bug fix in the compiler can induce big changes in the model results).

    We are planning to redo the hindcast step by step (we are facing another technical problem. Our model speed become slower despite the much faster new machine).

    Best regards, Jing-Jia



    On Mon, Jun 22, 2009 at 1:08 PM, wrote:

    Dear Jing-Jia

    I regularly talk to wheat farmers in NW Victoria, Australia, at a place called Birchip. The Birchip Cropping Group are the most active farmer group in Australia, and they hold their annual Grains Expo in early July each year. This year, Australia's Governor-General will be attending. Over the years I have given talks about the various climate models, including SINTEX, and they have come to trust SINTEX forecasts. As you know, SINTEX has been successful at predicting the three positive IOD events recently, and the IOD seems to be the most important effect on rainfall at Birchip.

    I will certainly get questions regarding the change of forecast in SINTEX this year, and I would like to be able to answer as clearly as possible. Can you explain to me why the SINTEX forecasts changed so much? I don't understand why changing computers would make such a big difference. Normally one would expect very minor changes going from one computer to another. Were software changes required in order to change computers? Did data sets change? Any information you can give me will be helpful.

    Regards, Peter.

    Dr Peter==============
    Centre for Australian Weather and Climate Research (CAWCR)
    CSIRO Marine Laboratories



    From: Jing-Jia Luo
    Date: Wed, Jun 17, 2009 at 10:06 PM
    Subject: Re: no skill for predicting the IOD before June [SEC=UNCLASSIFIED]
    To: Harry ========
    Cc: David Jones, Toshio Yamagata, Grant Beard, Oscar Alves

    Dear Harry,

    So we are reaching some agreements. The averaged hindcast skill just gives you a rough reference. If you do carefully for the real time forecasts, I believe you should go beyond this (even without the increase of ensemble members); you have much more information/analysis than the mean hindcast skill tells.

    Concerning the smoothing issue: When we look at the monthly prediction plumes of 12 target months, we will focus on the signal beyond the intraseasonal disturbance. And we will look at the consecutive forecasts performed during several months. In this sense, we are also doing the smoothing. Or like IRI, we can directly do 3-month average to remove the noise in the prediction plumes.

    Because of the uncertainty caused by the new Earth Simulator, I do not know how much we can still trust the SINTEX-F model forecast, particularly for the current IOD prediction. I hope POAMA model forecast would be correct. Let's see the forecasts in following months and see what will happen in the real ocean.

    Best regards,
    Jing-Jia


    It is said that the widespread use of Microsoft software will result in most people only able to complete tasks in one way, that being the Microsoft way.
    I wonder if computers somehow exert the same power when it comes to processing data.

    Incidentally, my son quickly discovered that the reputedly leading secondary school we sent him to expected all the students to do all things the same way - the school's way.
    He is now progressing better in a school that is more accepting of, and better able to cultivate, a diversity of thought, thankfully missing out on the opportunity to claim membership of the old boys' club of what many old boys, and their parents, consider an elite school.
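A mundane mechanism consistent with the "same model, new computer, different forecasts" mystery in the correspondence above is that floating-point addition is not associative, so a compiler that merely reorders a sum can change the low-order bits of a result; a chaotic model then amplifies those bits into visibly different runs. A minimal demonstration:

```python
# Floating-point addition is not associative: reordering a sum can
# change the result outright.  In a chaotic simulation such tiny
# differences grow until the trajectories no longer match.

a, b, c = 1e16, -1e16, 1.0
left  = (a + b) + c  # the 1.0 survives: 0.0 + 1.0
right = a + (b + c)  # the 1.0 is absorbed into -1e16 and lost
```

Here `left` is 1.0 while `right` is 0.0, even though the mathematics says they should be identical; a new compiler or chip that reorders arithmetic can trigger exactly this kind of change.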
  28. doug_bostrom at 11:06 AM, just adding to my earlier reply to you.
    It may be of even more value for non-sceptics than sceptics to wade through the "Cloud Swamp" and become more familiar with the complexities of clouds.
    I would be surprised if many do, yet it is probably more vital to understanding climate change than many, perhaps most, other issues, given it is the least understood.
  29. Tom Dayton at 11:52 AM, there is a fundamental similarity however, in that weather is about redistributing heat imbalances. But where does it start and stop in limiting the rate of incoming heat or removing heat from the system?
  30. Johnd, I'm wondering how a problem with a seasonal weather or interannual climate forecasting application of a GCM relates to the application of GCM's to produce projections of global climate? The two objectives are not the same. The initializing conditions allowing the model to produce specific regional forecasts of seasonal and interannual climate behavior will cause this application to suffer from the same issues as other weather forecasting systems. More, they'll be sensitive to small perturbations such as those alluded to in the correspondence you quote. In all probability the issue there is w/the compiler change, btw.

    The goal of the SINTEX application you're worrying over is that of producing -forecasts- of specific weather and climate behavior in specific regions of the globe. That's not the same objective of GCM application to describe gross behavior of the global climate over multi-decade periods.

    In short, your concern with this problem w/SINTEX is not relevant, or at the least you've not shown how it is.

    More here on SINTEX applications for seasonal and interannual climate forecasts, for the curious:

    Seasonal Climate Predictability Using the SINTEX-F1 Coupled GCM
  31. doug_bostrom at 13:36 PM, Doug, where my interest really lies with the Japanese researchers is the work they are doing with the Indian Ocean Dipole. They are the ones that identified it about a decade ago and it's relevance to the Australian climate, and beyond, is gradually being appreciated.
    Previously our most eminent researchers had hopped on the El-Nino wagon and it became identified as supposedly the dominating influence over most of Australia.
    However, for those whose understanding of the Australian weather and climate was based on what can actually be observed on the ground, rather than on what is said in the media, or even in peer-reviewed papers, a lot simply didn't reflect what was being, or had been, observed for generations.
    Then some other independent researchers and forecasters started working the IOD into their calculations and models and suddenly a lot of what had been attributed to ENSO began to appear as being due to the IOD, at least over wide parts of Australia.
    This began to show how the independent cycles of each system could at times either enhance or offset each other, or remain neutral, thus throwing a completely different understanding into the picture.
    Now as I understand it, research is being carried out in other parts of the world to determine if the ENSO system is as dominant an influence as previously considered.
    Given that ENSO is relevant to the climate research and how the climate is modelled, particularly with models having to be validated by backcasting, if the understanding of ENSO changes, that may require some aspects of the models to change.
    That's where I see the relevance.
  32. Johnd, I think I see what you're driving at. ENSO is a driver of natural variability on a large regional scale, much of the Northern hemisphere really, and thus is of relevance on a fairly short interannual time scale. So models work better for modeling regional climate on short time scales as ENSO is better simulated. But on an interdecadal scale ENSO does not seem as though it is an important factor; ENSO does not offer a means of shedding heat from the globe, only rearranging it. That being said, the better models handle ENSO the better they'll work in finer time and space resolution. That about it?
  33. doug_bostrom at 16:23 PM, doug, basically yes. I'm not sure whether we will ever understand on how all the complexities interrelate or even if we can be sure we have the relationships in the right perspective, clear on cause and effect.

    Just an aside which relates to El-Nino and the IOD.
    El-Nino was given greater significance by Australian scientists when it was found that a high proportion of El-Nino events coincided with drought years in Australia, which was true, thus El-Nino became a supposedly reliable indicator for forecasting droughts in Australia.
    However those who observed what was actually occurring on the ground noted that a much smaller proportion of all drought years happened to coincide with El-Nino events, indicating that perhaps El-Nino was instead a relatively poor indicator to forecast droughts, less than an even chance.
    When the appropriate IOD cycles were worked into consideration, the correlation jumped substantially, to about 80% I think.
    It was interesting comparing the initial conclusions reached by those whose primary focus was El-Nino, as against those whose primary focus was droughts.
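The asymmetry johnd describes, most El Nino years being droughts while most droughts are not El Nino years, is conditional probability running in two directions. A sketch with invented counts (not the actual Australian statistics):

```python
# Conditional probabilities run in two directions: with these invented
# counts, most El Nino years are droughts, yet most droughts are not
# El Nino years -- so El Nino alone is a weak drought predictor.

el_nino_years         = 20
el_nino_drought_years = 16   # hypothetical: 80% of El Nino years are dry
total_drought_years   = 50   # hypothetical total droughts in the record

p_drought_given_el_nino = el_nino_drought_years / el_nino_years        # 0.8
p_el_nino_given_drought = el_nino_drought_years / total_drought_years  # 0.32
```

A strong forward correlation (El Nino implies drought) does not by itself make El Nino a reliable drought forecast, which is why folding in the IOD raised the correlation so much.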
  34. johnd,
    correct me if I'm wrong, I do not know much about precipitation in Australia. As far as I can understand, IOD affects mainly the south and south-east of Australia while ENSO affects central and north Australia, or something like that.
    This should be the reason why, in general, we see different patterns in different regions. Australia is huge, after all, and bounded by two different oceans.
  35. Riccardo at 18:13 PM, basically the weather comes from the west and the northern tropics across the country, at times north western cyclones bring rain to the south east.
    The eastern regions bounded by the Pacific Ocean do come under the influence of systems originating there, but the mountains that follow the east coast all the way down, the Great Dividing Range, are aptly named and provide some barrier to systems heading inland.
    The effects of systems originating in the Indian Ocean mean that the weather over some of Australia, Indonesia, India and Africa is all interconnected, something observed from the early days of settlement, when settlers in the north who had lived in other regions bounded by the Indian Ocean made the connection.
    This is now being recognised more since the identification of the IOD about a decade ago, and it is all still being digested, still with some differences of opinion as to how it all relates. Given that some cycles take many decades to complete, the debate may continue for a long time yet.
    The El-Nino phenomenon is historically identified with South America, Peru, which is logical given how systems move around the globe.
    The Southern Ocean is also being given more consideration as to how it all helps influence the mix.
  36. Just adding to earlier posts, a new type of El-Nino has been identified in recent years and is being worked into the models used by the Japanese who work on the Sintex forecasts and research.
    I believe it was again these Japanese researchers who first identified it, which is likely contributing to the reliability of their forecasts.
    It is a modified form of the ENSO pattern and called ENSO Modoki or El Nino Modoki. The link below provides some information.
    The researchers believe that perhaps the conventional El-Nino is evolving into something different.
    However only time will tell whether this is something new evolving, or just part of an even bigger cycle where these changes may be periodic, and it is our understanding instead that is in a state of evolving.
    El Niño Modoki
    http://www.agu.org/pubs/crossref/2007/2006JC003798.shtml
  37. John has asked me to make my comments on this thread rather than on his more recent “Rebutting skeptic arguments in a single line” thread (please see those I made today – Note A)

    Let me kick off here by posting a comment that software developer James Annan, who is presently involved with the Coupled Model Intercomparison Project (CMIP) (Note B) declined to post on his "Penn State Live - Investigation of climate scientist at Penn State complete" thread (Note C).

    ***************
    I have been engaged in an interesting discussion over at William Connolley’s blog (Note 1) about the relevance of established VV&T procedures to climate models and Steve Easterbrook appeared to be suggesting that CMIP was the appropriate solution. I understand that CMIP is a project for comparing climate model outputs and I asked IPCC reviewer Dr. Vincent Gray for his views on this and he rejects the notion that the inter-comparison of the outputs of different models is anything to do with validation. As you may be aware, Dr. Gray is author of “The Greenhouse Delusion”, a member of the New Zealand Climate Science Coalition (Note 1) and was responsible for having the IPCC admit that climate models had never been properly validated, despite the IPCC trying to suggest otherwise. (In response to his comment the word "validation" was replaced by "evaluation" no less than 50 times in the chapter on "Climate Models - Validation" in an early draft of the IPCC's "The Science of Climate Change".)

    Since you are a member of the Global Change Projection Research Programme at the Research Institute for Global Change working on CMIP3 with an eye on AR5 you may be aware of the extent to which VV&T procedures will be applied. Are you able to point me in the direction of the team responsible for these activities?

    William seems reluctant to let me participate further on his blog since I commented on his Green Party activism and his reported Wikipedia activities (Notes 2 & 3), so I hope that you will take up the debate about the relevance of VV&T for climate models. I get the impression some of you involved in climate modelling see little further than your software engineering and the quality of that software. VV&T, as applied in the development of telecommunications support systems when I was involved in it, considered the full picture from end-user requirements definition through to final system integration and operation. (Contrary to what is claimed in “Engineering the Software for Understanding Climate Change” by Steve Easterbrook and Timothy Johns (Note 4), in the case of climate modelling systems the primary end user is not the scientists who develop these systems.)

    Although VV&T alone will not produce quality software, I recall plenty of instances when professionally applied, independent VV&T procedures identified defects in system performance due to deficient software engineering. Consequently, deficiencies were rectified much earlier (and much more cheaply) than would have occurred if they had been left to the software engineers and only identified during operation. It is possible but highly unlikely that VV&T doubled the cost of the software, as claimed by one software engineer on William’s blog, but it would certainly have cost many times more if those defects had remained undetected until operational use. I don’t expect to be the only person who has had such experiences. Rectification of these deficiencies may not have produced “quality” software, but it did lead to software and operational systems that more closely satisfied the end users’ requirements, used throughout the system development program as the prime objective.

    Steve Easterbrook and Timothy Johns said (Note 4) “V&V practices rely on the fact that the developers are also the primary users”. It could be argued that the prime users are the policymakers who are guided by the IPCC’s SPMs which depend upon the projections of those climate models. Steve and Timothy “hypothesized that .. the developers will gradually evolve a set of processes that are highly customized to their context, irrespective of the advice of the software engineering literature .. ”.

    I prefer the hypothesis of Post & Votta who say in their excellent 2005 paper “Computational Science Demands a New Paradigm” (Note 5) that “ .. computational science needs a new paradigm to address the prediction challenge .. They point out that most fields of computational science lack a mature, systematic software validation process that would give confidence in predictions made from computational models”. What they say about VV&T aligns with my own experience, including “ .. Verification, validation, and quality management, we found, are all crucial to the success of a large-scale code-writing project. Although some computational science projects—those illustrated by figures 1–4, for example—stress all three requirements, many other current and planned projects give them insufficient attention. In the absence of any one of those requirements, one doesn’t have the assurance of independent assessment, confirmation, and repeatability of results. Because it’s impossible to judge the validity of such results, they often have little credibility and no impact ..”.

    Relevant to climate models they say “A computational simulation is only a model of physical reality. Such models may not accurately reflect the phenomena of interest. By verification we mean the determination that the code solves the chosen model correctly. Validation, on the other hand, is the determination that the model itself captures the essential physical phenomena with adequate fidelity. Without adequate verification and validation, computational results are not credible .. ”.

    I agree with Steve that “further research into such comparisons is needed to investigate these observations” and suggest that in the meantime the VV&T procedures should be applied as understood by Post and Votta and currently practised successfully outside of the climate modelling community.

    NOTES:
    1) see http://nzclimatescience.net/index.php?option=com_content&task=view&id=374&Itemid=1
    2) see http://www.spectator.co.uk/columnists/all/6099208/part_3/i-feel-the-need-to-offer-wikipedia-some-ammunition-in-its-quest-to-discredit-me.thtml
    3) see http://www.thedailybell.com/683/Wikipedia-as-Elite-Propaganda-Mill.html
    4) see http://www.cs.toronto.edu/~sme/papers/2008/Easterbrook-Johns-2008.pdf
    5) see http://www.highproductivity.org/vol58no1p35_41.pdf

    **************

    I’m disappointed (but not surprised) by the reluctance of software developers like James, William Connolley and Steve Easterbrook to have open debate with sceptics about the extent to which their climate models have been validated.

    NOTES:
    A) see http://www.skepticalscience.com/Rebutting-skeptic-arguments-in-a-single-line.html#comments
    B) see http://www-pcmdi.llnl.gov/projects/cmip/index.php
    C) see https://www.blogger.com/comment.g?blogID=9959776&postID=2466855496959474407

    Best regards, Pete Ridley.
  38. Pete Ridley (whoever you are) - it seems your points were addressed on the other thread - yet you repeat them here. Can you state your beef in a sentence or two, instead of a fully annotated paper?

    I think the strongest rebuttal to the skepticism of the model-making process is that they can both hindcast (the easy part) and predict - 22 years isn't as good as 5 decades, but how many years of successful modeling will you need before you are satisfied?
    Response: Quick comment - they're repeated here because I emailed Peter and asked him to move the discussion to this more relevant thread.
  39. actually thoughtfull and Pete Ridley, I suggest avoiding carrying over Pete's comments about user names from the previous thread. As far as I know there is no policy here of privileging "realistic-sounding full names" over others. Comments that disparage the choice to use a pseudonym are offtopic and will probably be deleted.
  40. Pete Ridley, evidence of predictive ability was described in the original post at the top of this page. I suggest you read it.
  41. And why do you want climate modelers to debate, other than as yet another delay tactic? Why don't you create a model that accounts for everything the AGW models do, and prove to the world that man-made CO2 is simply not an issue? The debate of ideas, not names, not titles, not one-liners is the debate that matters to me.

    So Pete Ridley, where is your competing, complete non-AGW theory of climate change, complete with models that have excellent hind-cast ability, and whose ability we can compare to Hansen 1988, or any of the newer, better models that have come out in the last 22 years?
  42. Pete Ridley at 22:54 PM on 21 July, 2010

    I was intrigued by your note 2) which contains this:

    "A meta-analysis of all the articles written on the subject showed that the vast majority of experts believe that not only was the MWP widespread but that average temperatures were warmer than they are now"

    Perhaps this needs a bit of VV&T...?
  43. Pete, I do however have more than a passing acquaintance with this type of physical modelling. You ask for evidence of models' predictive skill. You are pointed at comparisons between model predictions and actual data. I don't know what other kind of evidence you could mean. Hansen (and everybody else) can't predict what future CO2 emissions will be, so of course he works with scenarios. "If you emit this, you will get this climate". You verify by comparing actual forcings (not just emissions but volcanoes, solar etc.) with the model's prediction for the scenario that most closely matches those forcings.


    Please also note that you need to distinguish between people giving you "opinions" versus people giving you verifiable facts. The source of facts is important, not the person giving it to you.
  44. Pete Ridley - Hansen et al 2006 is quite clear; Hansen's most likely scenario (scenario B), with a particular CO2 increase plugged in, actually predicted temperatures over the last 22 years quite well. If you take Hansen's model and put in the actual CO2 numbers (5-10% less than his scenario) his model is even more accurate.

    This is discussed in the paper in the section labeled Early Climate Change Predictions, pages 1-3 of the PDF. It's a major part of the paper!

    The quality of a model lies in whether it makes correct predictions based on various inputs (given 'A', you will get 'B'). His scenarios covered a range of different inputs (CO2 production), and the prediction given closely matches the real-world result of that range of CO2 numbers.

    You can't ask for much more - it's a decent model, and it predicts the correct result given a particular set of our actions, even at its 1988 level of simplicity. That's what a good model does!
  45. Hi folks, thanks to those of you who have responded with an attempt to debate in a reasonable manner the issue of evidence that is supposed to support the claim that climate model predictions/projections have been validated. I will respond to each of you in turn in subsequent comments. First, since for some reason my first comment on this blog (on the “Rebutting skeptic arguments in a single line” thread) has been removed, let me repeat it here.

    The IPCC and other supporters of The (significant human-made global climate change) Hypothesis depend very much upon the “projections” of the computerised climate models. The validity of those projections has been challenged repeatedly by sceptics but they are still depended upon to support the notion that our use of fossil fuels will cause catastrophic global climate change.

    I have been debating this on several blogs with software developers with an interest in this subject, such as Steve Easterbrook, William Connolley (Note 2) and James Annan (Note 3) but it seemed that as soon as I mentioned Dr. Vincent Gray the debate stopped. Dr. Gray is author of “The Greenhouse Delusion”, a member of the New Zealand Climate Science Coalition (Note 4) and was responsible for having the IPCC admit that climate models had never been properly validated, despite the IPCC trying to suggest otherwise. (In response to his comment the word "validation" was replaced by "evaluation" no less than 50 times in the chapter on "Climate Models - Validation" in an early draft of the IPCC's "The Science of Climate Change".)

    Yesterday I sent the following comment to Steve Easterbrook’s blog (Note 5) but he refused to post it. Is Skeptical Science prepared to?
    ________

    Climate models are inadequate because of our poor understanding of the fundamental global climate processes and drivers. Improving the quality of the software will not improve their performance. You can't make a silk purse out of a sow's ear.

    Popular Technology has put together a list of respected scientists who recognise this fact. One of these, Freeman Dyson says "My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models."

    Have a read of what the rest have to say (http://www.populartechnology.net/2010/07/eminent-physicists-skeptical-of-agw.html).

    IPCC reviewer Dr. Vincent Gray is putting together an article on the subject of climate model validation. I’ll let you know when it’s available.
    --------------

    There is no one-liner that will rebut this criticism by a true climate science sceptic.

    NOTES:
    1) see http://www.thefreedictionary.com/rebut
    2) see http://scienceblogs.com/stoat/2010/06/engineering_the_software_for_u.php
    3) see http://julesandjames.blogspot.com/2010/07/penn-state-live-investigation-of.html
    4) see http://nzclimatescience.net/index.php?option=com_content&task=view&id=374&Itemid=1
    5) see http://www.easterbrook.ca/steve/?p=1785&cpage=1#comment-3436

    Best regards, Pete Ridley
  46. Pete Ridley - a 'scientific model' is a simplified system for making reasonable projections and exploring system interactions, especially useful when it's not practical to subject the real system to repeated tests and inputs.

    Evaluating a model takes into consideration several things:

    - Ability to match previous observations (historic data)
    - Ability to predict future observations
    - Ability to estimate different future states based on different inputs (Given 'A', predict 'B')
    - Match of model internal relationships to known physical phenomena
    - Simplicity (no nested 'crystal spheres' for epicycles)
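
    The first two criteria can be sketched as a simple skill check: compare the model's error against a no-skill baseline (the mean of the observations). All numbers below are invented for illustration, not real climate data:

    ```python
    import numpy as np

    # Hypothetical observed and modelled temperature anomalies (deg C).
    obs       = np.array([0.10, 0.15, 0.12, 0.22, 0.30, 0.28])
    model     = np.array([0.08, 0.17, 0.14, 0.20, 0.27, 0.31])
    reference = np.full_like(obs, obs.mean())  # "no-skill" baseline

    # Root-mean-square error of model vs baseline.
    rmse_model = np.sqrt(np.mean((model - obs) ** 2))
    rmse_ref   = np.sqrt(np.mean((reference - obs) ** 2))

    skill = 1.0 - rmse_model / rmse_ref  # > 0 means better than baseline
    print(f"skill score: {skill:.2f}")
    ```

    A positive skill score says the model tracks the observations better than simply guessing the long-term mean; real model evaluation uses far richer diagnostics, but this is the basic shape of the test.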

    The 1988 Hansen model was, by current standards, fairly simple. Ocean heat content/circulation, ice melt rates, some additional aerosol information, etc., weren't in it. But it still shows close predictive agreement with inputs matched to what has happened since 1988! That's a pretty decent model.

    And no, it's not 1-to-1 agreement. Short term variation (a couple of years) is really weather, not climate. You need to make running averages of >10 years to average out the short term fluctuations and identify the climate trend.
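
    A minimal sketch of that smoothing, using a synthetic series (an assumed linear trend plus random year-to-year noise, not real temperature data):

    ```python
    import numpy as np

    # Synthetic annual anomalies: slow warming trend + "weather" noise.
    rng = np.random.default_rng(0)
    years = np.arange(1900, 2011)
    trend = 0.007 * (years - 1900)            # deg C per year, illustrative
    noise = rng.normal(0.0, 0.1, years.size)  # short-term fluctuations
    anomaly = trend + noise

    # 11-year centered running mean: averages out the noise so the
    # underlying climate trend becomes visible.
    window = 11
    smooth = np.convolve(anomaly, np.ones(window) / window, mode="valid")
    # smooth[i] is the mean of anomaly[i] .. anomaly[i + window - 1]
    print(f"smoothed points: {smooth.size}")  # 111 - 11 + 1 = 101
    ```

    In the raw series the noise can mask the trend over any short stretch; in the smoothed series the trend dominates, which is why single warm or cool years say little about climate.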


    On a side note, you complain about reliability of surface temperature measures. That's a fairly common skeptic argument, and has been discussed here and here, as well as in a very recent topic on cherry picking data. The surface temperature measures are reliable - that argument really doesn't hold water.
  47. Pete Ridley at 00:59 AM on 24 July, 2010

    You state "contrary to reality which is a turning point or even downturn in global temperature trend". I assume you mean surface or lower tropospheric temperature trends as measured by land/vessel based stations, satellites or radiosondes? Could you please explain how you arrive at your conclusion? I have quite a few data sets available, and it is difficult to see any turning points or downturns in trend unless you are extremely selective or narrow on start and end times.
  48. Pete Ridley - A question for you. The Hansen 1988 model appears to satisfy all the criteria I know of for a reasonable scientific model. You seem to disagree.

    Can you tell us where the Hansen model fails these criteria? Or perhaps tell us what your definition of a scientific model might be?
  49. Peter, your faith in Vince is touching. Perhaps you should google for some other opinions? (Or look at his review comments at IPCC and editor's response). I've known Vince all my working life and I would trust him to do a coal analysis for me.

    You still seem to think Hansen's model is somehow flawed because it deals with "fictitious scenarios". Would you complain about, say, an automobile model not predicting speed because it can't tell how hard you press the accelerator? With the Hansen model, however, you can rerun it with ACTUAL forcings instead of the scenario. What else can one demand of a model? You are also ignoring all the other model/observation matches in the above article. Where have the models failed?
  50. Pete Ridley, since you are failing to understand the repeated explanations of how Hansen's model has been shown to successfully predict temperatures, you should read the more detailed posting at RealClimate, "Hansen's 1988 Projections."

    With regard to your more general confusions about models, you should go to RealClimate's Index, scroll down to the section "Climate Modelling," and read all of the posts listed there. For example, in the post "Is Climate Modelling Science?", there appears:
    "I use the term validating not in the sense of ‘proving true’ (an impossibility), but in the sense of ‘being good enough to be useful’). In essence, the validation must be done for the whole system if we are to have any confidence in the predictions about the whole system in the future. This validation is what most climate modellers spend almost all their time doing."

© Copyright 2014 John Cook