

How climate skeptics mislead

Posted on 13 June 2010 by John Cook

In science, the only thing better than measurements made in the real world is multiple sets of measurements – all pointing to the same answer. That’s what we find with climate change. The case for human-caused global warming is based on many independent lines of evidence. Our understanding of climate comes from considering all this evidence. In contrast, global warming skepticism focuses on narrow pieces of the puzzle while neglecting the full picture.

What is the full picture? Humans are emitting around 30 billion tonnes of carbon dioxide into the air every year. This is leaving a distinct human fingerprint.

Signs of warming are found all over the globe.

On the question of human-caused global warming, there’s not just a consensus of scientists – there’s a consensus of evidence. In the face of an overwhelming body of evidence, the most common approach of climate skepticism is to focus on narrow pieces of data while neglecting the full picture.

Let's look at an example. One popular skeptic argument has been to cast doubt on the surface temperature record. Skeptics claim thermometers are unreliable because surroundings can influence the reading. They reinforce this by showing photo after photo of weather stations positioned near warming influences like air conditioners, barbeques and carparks. The Skeptics Handbook goes so far as to say "the main 'cause' of global warming is air conditioners".

This myopic approach fails to recognise that air conditioners aren't melting the ice sheets. Carparks aren't causing the sea levels to rise and glaciers to retreat. The thousands of biological changes being observed all over the world aren't happening because someone placed a weather station near an air conditioner. When you step back and survey the full array of evidence, you see inescapable evidence of warming happening throughout our planet.

Our understanding of climate doesn't come from a single line of evidence. We use multiple sets of measurements, using independent methods, to further our understanding. Satellites find similar temperature trends to thermometer measurements. This is despite the fact that no carpark or barbeque has ever been found in space. Prominent skeptic Roy Spencer (head of the team that collects the satellite data) concluded about the HadCRUT surface record:

“Frankly our data set agrees with his, so unless we are all making the same mistake we’re not likely to find out anything new from the data anyway”

Our climate is changing and we are a major cause through our emissions of greenhouse gases. Considering all the facts about climate change is essential for us to understand the world around us, and to make informed decisions about the future.



Comments


Comments 151 to 200 out of 229:

  1. What appears to be most misleading is that all the examples shown in the lead post, the independent lines of evidence, the signs of warming, are simply just that, signs of warming, but that is not what should be in dispute. The misleading part comes about by declaring that these are signs of AGW and dismissing that they may be due to natural warming. Where is the evidence that CO2 leads temperature and it is not temperature that leads CO2? Where is the evidence that clouds are temperature dependent and not that temperature is cloud dependent? This question of clouds is vitally important. According to NASA, "The balance between the cooling and warming actions of clouds is very close although, overall, averaging the effects of all the clouds around the globe, cooling predominates", COOLING PREDOMINATES. http://earthobservatory.nasa.gov/Features/Clouds/ We know also that clouds increase over winter, yet are not responsible for the winter. So let's not be misled into the debate as to whether the planet is warming or not, but instead examine the evidence that can prove it is due to man and not natural causes.
  2. e - excellent comment, I like that Popper article. Berényi, you might also look at this article on inductive science by Wesley Salmon - he was an accomplished philosopher of science, not to mention a really nice guy. I'll note that purely deductive, self-contained logic is excellent for absolute proofs. However, correct inductive inferences allow you to learn new things, even if you cannot view every case, follow every lead, examine every single example in the observable universe and beyond. That's where new knowledge comes from. I spent several years studying epistemology and the philosophy of science - I'm well aware of the differences, and your tone is quite insulting. Let's step back a bit. The whole skeptic issue with UHI is calling into doubt the temperature record, and hence questioning AGW. While I cannot agree with your criticism of the GHCN data for any number of reasons (not least of which Spencer's data errors and the repeated validations of said data over many variations of location and subset), put that to the side. Even if your critique proves issues with the GHCN data it doesn't invalidate any of the independent temperature records or other evidence indicating AGW. I think all of AGW has been a subtext of this discussion, and I wanted to make that separation explicit. The UHI issue is tied specifically and entirely to the GHCN data and data sets dependent on it - and even Spencer notes that the MSU data are not calibrated using the surface temperatures. The inductive and robust part - The idea of AGW is based on multiple lines of evidence indicating a common conclusion, providing a probabilistic (with a tied assumption of a reasonable uniformity of nature and results) inductive support for the idea that our carbon emissions are increasing the radiative greenhouse effect, causing long term climate warming. And as a tie-in, the GHCN data agrees.
Supporting premises for GHCN are the (deductive) reasoning for area, UHI, and statistical effects and their correction, repeatable results with multiple subsets, AND each of the dovetailing independent data sets, which provide additional premises for a separate inductive inference that the GHCN data is correct. Induction is NOT perfect by any means. But deduction cannot teach you anything you don't already know.
  3. Ah - correction to my last post, the temperature records and radiative balances indicate global warming. The rates, and much additional evidence (carbon isotopes, deductive reasoning about energy use, ocean pH, paleo evidence about forcings, etc.) point toward AGW.
  4. johnd, that was a pretty rich post to make at this site. How about What happened to evidence for man-made warming and CO2 lead temperature and There is no empirical evidence If we have "natural warming", then what is causing it? Please remember that energy must be conserved so surface temperature increases must imply a flow of energy from some other source. How have we missed it? But then what about TOA energy imbalance? Everything in climate must have a physical causation whether long term climate or tomorrow's weather and I don't buy "as yet undiscovered natural energy flows" compared to the existing perfectly good theory of climate which matches what we observe.
  5. johnd. On a more general level...Why prefer some mysterious and unknown hypothesis over a perfectly legitimate explanation that is consistent with our preexisting understanding, compatible with the observations and predictive of novel patterns? What scientists would do that? Who even does that in their everyday lives? If my bank account was losing money and a budget exercise showed that my spending was greater than my income, it would be wishful thinking (not to mention irresponsible) to blame my losses on unpredictable variation in interest rates or some conspiracy on the part of the bankers, or the fickleness of the gods. There is certainly much to learn and I for one am excited to watch how our knowledge of climate develops in the future. But we shouldn't ignore what the evidence is showing us right now in hopes that some ghost in the machine will show up to the party unannounced.
  6. I must apologize for all my mispellings now and in the future. I'm a truly abysmal typist.
  7. scaddenp at 12:03 PM, Phil, my comments were in response to the title of this thread, and how the thread itself was led, misled, into a debate over the magnitude of a symptom, warming, rather than what are the reasons for it, which in itself is neglecting the full picture, a charge made in the lead post against the sceptics generally. Even the statement in the lead post "we see more heat being trapped by carbon dioxide" is misleading as it is accepted that water vapour is directly responsible whilst CO2 and other greenhouse gases are merely the forcing agents and only responsible for a very small part of direct warming. But how much attention is devoted to the other greenhouse gases? Ongoing research is being conducted on nitrous oxide which is supposedly 300 times as potent as CO2 and accounts for about 5% of Australia's national emissions. The rise in the global use of nitrogenous fertilisers traces a similar path to the global temperature rise. Is that coincidence or a factor? Is it misleading to leave it out of any calculations relating CO2 to temperatures? Action required to curb N2O will be different to that required to curb CO2. With regards to your comment about undiscovered energy flows. The recent discovery of a major deep sea current of Antarctic bottom water east of the Kerguelen plateau which deposits cold oxygen-rich water in the deep ocean basins further north must displace an equal quantity of warmer water. Could this be a newly discovered energy flow? How does the displaced energy in the water manifest itself elsewhere? Is this related to the cycles that have been identified in the various oceans, in this case the IOD? Finally your comment about the TOA energy imbalance. What about it? Would the same TOA energy imbalance have been present during each of the previous interglacial periods? Without such an imbalance, warming would not be possible irrespective of the cause.
  8. johnd said... The misleading part comes about by declaring that these are signs of AGW and dismissing that they may be due to natural warming. I'm sorry but I have to take exception to this. If we look at almost any combination of proxy temp records of the past few thousand to million years I think it becomes obvious that something is extremely different now. I fail to see how such a dramatic change in temp could occur naturally. It "may" be natural but that is an extremely small possibility because if it were natural one would expect that we would be just as dramatically aware of the natural cause. The Siberian Traps have not reasserted themselves last I heard. What we do face is the reality that humans and technology have dramatically changed over the past 150 years.
  9. Johnd. The IPCC WG1 discusses all the anthropogenic gases. N2O is 0.16W/m2 whereas CO2 is 1.66W/m2. Methane is 0.48W/m2. I fully agree that action is required on ALL. The cold current discovery - no I don't see how this is moving energy from ocean to surface. Are you postulating that this current just appeared in recent years? We have good data to 700m and reasonable data to 2000m. To affect surface temperatures, you have to find the energy flow in there. As to TOA - well I guess the important point here is the nature of radiative balance in terms of its cause.
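The relative sizes of the forcings quoted in this comment are easy to check with a quick calculation (a sketch using only the three values cited above; other anthropogenic forcings are deliberately left out):

```python
# Radiative forcings in W/m2, as quoted in the comment above (IPCC AR4 WG1).
forcings = {"CO2": 1.66, "CH4": 0.48, "N2O": 0.16}

total = sum(forcings.values())  # combined forcing of just these three gases
shares = {gas: value / total for gas, value in forcings.items()}

for gas, share in shares.items():
    print(f"{gas}: {share:.1%} of the combined forcing")
```

Among these three gases, CO2 contributes roughly 72%, methane roughly 21% and N2O roughly 7% – consistent with the comment's point that N2O matters but does not dominate.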
  10. scaddenp at 14:31 PM, Phil, nobody is suggesting the undersea current referred to has just appeared, only that it has been identified, mapped and quantified more recently. No doubt there are many more questions that they will be seeking answers to other than those I posed. It is at about 3500m and some of the water displaced from the northern basins returns to the Antarctic waters. Being warmer than the water that displaced it, there will be a transfer of energy. Strong export of Antarctic Bottom Water east of the Kerguelen plateau
  11. johnd - the phrase "we see more heat being trapped by carbon dioxide", I think refers specifically to the fact that we can see in the spectrum of outgoing longwave radiation the spectral signature of CO2 (and other greenhouse gases) changing in a manner that reflects an increase of those specific gases. Hence, we can literally "see" the heat being trapped by these gases. CO2 is more than *just* a driver, it traps a lot of that heat (in addition to driving water vapour). There are good posts on it here, including I think two of scaddenp's refs. And the point of the environmental data (glaciers, cores etc) is to show not only the direction of the change, but frequently that it is an unusual change in the context of millennia of natural change. They don't alone show that it's humans, but other observations (stratospheric cooling, spectral signatures of OLR and downwelling LR, radiative physics, night-time warming etc.) show that it's our greenhouse gases and not a natural cause. And certainly not a coincidental recent ocean current change (yes that might add some energy, but did it cross your mind that it might not be very much energy?). #136 BP: So you are now rejecting Spencer (because his data are inadequate), yet now expecting us to believe your hypothesis (which is still basically Spencer's) with no supporting evidence! That's pretty remarkably bold. You cannot 'prove' theoretically an empirical effect without providing some detailed real-world data to show that this theoretical effect is real. I doubt it has even crossed your mind that you might be wrong, even given the robust independent supporting evidence very strongly indicating that you and Spencer are wrong. I'm all for the consideration of alternative evidence, but here there is no alternative evidence to the multiple observations that the world is warming (I agree with johnd on that point at least).
  12. #147 kdkd at 09:15 AM on 16 June, 2010 Deductive reasoning is the preserve of mathematics, science is the home of induction Incorrect. Induction, along with several other techniques is a heuristic method. It may be useful for finding your path through the bush of alleged facts and to establish some order, but the true test of a scientific theory always relies on deduction. And that's the part where things start to get genuinely scientific. From a small set of basic principles a wealth of sharp propositions can be derived by rigorous deductive reasoning. At the same time results of experiments or observations are translated to the same language of binary logic. If some member of the former set is negated by any member of the latter one, then either there was a problem with the experiment/measurement/observation (the first thing to do is to go back and check it) or some of the premises forming the core of the theory should be abandoned (along with all the propositions that can not be derived without it). The very process of translating measurement results to propositions having the logical form comparable to those derived from theory involves deduction, relying on a smaller set of principles considered firmer than the ones to be tested. See the example above about translating radiance temperatures in narrow infrared bands to atmospheric temperatures using sophisticated models. The whole procedure described above is valid only if no deductive chain contains fuzzy steps. #148 Stephen Baines at 10:15 AM on 16 June, 2010 This argument is semantic red herring. "Robust" existed as a word well before software engineering and is not always used in the way you state. For instance [etc., etc.] Of course it existed. But its specific usage as a terminus technicus comes from informatics. You may notice that without it propositions like "this theory is robust" (i.e. "healthy", "full of strength") do not even make sense. 
These qualities belong to living organisms and no theory has a biological nature. The usage of the term in this context is clearly metaphoric and if in this case you mix up its specific meaning with the vernacular one, you end up with an untestable poetic proposition whose truth value is a matter of taste. On the other hand the robustness of a piece of software/hardware is testable indeed in the sense its overall performance should be preserved even if parts of it would fail. A very desirable property for software and an undesirable one for scientific theories. The more rigid and fragile a theory is the better, provided of course it happens not to be broken. I show you an example of this kind of robust reasoning, from this fine blog. Robust warming of the global upper ocean The figure is from a peer reviewed paper of the same title (Lyman et al. 2010, Nature). You may notice the error bars given for different curves by different teams do not overlap. That means these OHC history reconstructions are inconsistent with each other. Individual curves with error bars can be easily translated into propositions (rather long, complicated and boring ones) and if you join these individual propositions by the logical operation of conjunction, the resulting (even longer) proposition is false. As from a false proposition anything follows, of course the implicit proposition of the authors "if these OHC history reconstructions are correct, then OHC trend for the last sixteen years is +0.64 W/m2 on average over the surface of the Earth" is a true one. It does not make the part after the "then" true. It does not prove its falsehood either. Its truth value is simply independent of what those teams have done, it is indeterminate. In cases like this the proper scientific method is not to look for robustness in the data and extract it at whatever cost, but to send the individual teams back to their respective curves, error bars included and tell them to find the flaw.
The error bars indicate that no more than one of the reconstructions is correct, possibly none. The average value of many incorrect numbers is an incorrect one. Further steps like extracting a common trend can only be taken if correct and bogus curves are told apart. When I was a kid, at high school, robust babbling like this was not tolerated. Sit down, please, F.
  13. Pedantry will get you nowhere BP - "robust" is perfectly acceptable as a term to define a theory, based on observations which are subject to greater or lesser error. For example, the graph you point to is not a measurement of a single property many times over, it is measures of ocean heat content where the measurements are taken in different locations with different instrumentation, each subject to different errors but, due to the variability in sampling locations, would not necessarily record an identical depth-temperature curve anyway. Each one of those curves can be correct within error, yet not overlap - ie not 'flawed' as you suggest. That's the nature of real-world measurements. And the 'if...then' point is absolutely valid as a result, based on those observations. BP, are you going to suggest that the oceans are not warming? When I was a kid at school, I would get an F for producing that kind of conclusion from the data available. I fear you are desperately unaccustomed to dealing with observations from the real world.
  14. skywatcher at 21:22 PM, BP is correct. The graph in question does indeed represent a single property, attempted to be arrived at by a variety of reconstructions each apparently using different measurements and formulas. If the true value of that single property is "X" then "X" should fall within the error range of each reconstruction for each reconstruction to be considered valid. Each of the curves was derived from a combination of real world measurements, assumptions and formulas. If within each reconstruction, the combination and relationship of all the inputs are valid, then each reconstruction has an equal chance of determining the true value of "X", but they all cannot be right, unless "X" falls within the error bars of each reconstruction. If that is not the case, there are two possibilities, either one reconstruction is correct and the others are not, or they are all incorrect. The error bars should be such that they account for the reality of real world measurements.
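The consistency requirement described in this comment can be phrased as a small check: a single true value "X" can only exist if every reconstruction's error interval shares a common overlap. A minimal sketch (the intervals below are hypothetical illustrations, not the actual published error bars):

```python
def common_overlap(intervals):
    """Return the (low, high) region shared by all intervals, or None."""
    low = max(lo for lo, hi in intervals)
    high = min(hi for lo, hi in intervals)
    return (low, high) if low <= high else None

# Hypothetical reconstructions, each reporting (low, high) error bounds.
reconstructions = [
    (3.0, 5.0),   # team A
    (4.5, 6.5),   # team B
    (6.0, 8.0),   # team C
]

# Adjacent pairs overlap, yet no single value lies inside all three ranges,
# so at most one of the three reconstructions can be correct.
print(common_overlap(reconstructions))  # None
```

This is exactly the situation the comment describes: mutual pairwise agreement is not enough; validity requires a region common to every error range.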
  15. skywatcher at 19:39 PM, re "did it cross your mind that it might not be very much energy?" That may or may not turn out to be the case once it has been thoroughly researched, but at the moment they see it as a significant flow, not previously properly allowed for. However the Southern Ocean is considered one of the most important of the world's oceans but perhaps the least understood. Scientists believe that until they understand its circulation they cannot make really confident predictions about future climate change.
  16. Yes, a single property, measured with different equipment at different locations and with different corrections applied. Lyman et al 2010 is all about assessing the reasons for those differences, and therefore establishing what is the most likely "right" answer based on those different measurements. Maybe you should read Trenberth et al 2010 which is freely available, especially the bottom of column 2 and the top of column 3, and not the graph posted above which is before the detailed analysis of the errors and why the discrepancies exist. I would be more concerned if some of the curves showed decreasing OHC, but they don't, all are increasing, and there are good reasons why the measurements don't exactly correspond. Quite clearly, reducing these uncertainties is a key area of research, but it hardly invalidates the previous analyses, and I think Lyman's assessment is a step forward in that regard.
  17. Here we're seeing a sort of master class in skeptic strategy. There are schematically three possibilities: 1) attack one single point regardless of the others, then switch to the next tolerating contradictions; 2) find trivial and sometimes plain wrong math or analysis allowing the claim that AGW or some aspect of it are hoaxes; 3) "invent" new physics throughout. Point 1 can be easily seen following the discussion in this post from comment #10 onward. It all started with population impact on temperature measurements in a very special situation, then UHI in general, GHCN quality, satellites, OHC and who knows which will be the next. Only radiosondes are found correct, ignoring, this time, their well known and documented biases ... A good example of point 2 is Lon Hocker in another post or the infamous PIPS images analysis of ice volume. For point 3 you have an ample choice elsewhere over the internet, here we're relatively safe. All in all, two out of three of the strategies pertinent to the topic of this post are confirmed here. Not bad Mr John Cook, good job :)
  18. Stephen Baines #148 Regardless of the inconsistency of the teams' curves, they roughly follow the same pattern; the problem is the transition from XBT to Argo. The exampled "Upper Ocean Heat Content Chart" shows a huge increase in OHC from roughly a 2 year period 2001 to 2003 in which the OHC rises from the zero axis to about 7E22 Joules or about 700E20 Joules. This is about 350E20 Joules/year heat gain. Dr Trenberth's 0.9W/sq.m TOA energy flux imbalance equalled 145E20 Joules/year. Therefore a rise of 350E20 Joules/year in OHC equals about 2.1W/sq.m TOA imbalance - a seemingly impossible number. BP identified the same issue in the "Robust Warming of the global upper ocean" thread and showed that the year to year satellite TOA flux data showed no change anywhere near 2.1W/sq.m. Coinciding with the start of full deployment of the Argo buoys around 2003-04 this impossibly steep rise in 2001-03 looks like an offset calibration error. In that case, fitting a linear curve from 1993-2009 and calling it a 'robust' 0.64W/sq.m is just nonsense. One might also note that the better the Argo coverage and analysis gets from about 2005 onward - the more the teams' curves converge on a flattening trend - no OHC rise - no TOA imbalance. No TOA imbalance seems to present a problem for CO2GHG theory which requires an ever-present increasing warming imbalance at TOA.
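The unit conversion behind the figures in this comment can be reproduced directly (a sketch; the Earth surface area and seconds-per-year constants are the standard approximate values, not quantities stated in the comment itself):

```python
EARTH_SURFACE_M2 = 5.1e14   # approximate surface area of the Earth in m^2
SECONDS_PER_YEAR = 3.156e7  # approximate number of seconds in a year

def toa_flux_w_per_m2(joules_per_year):
    """Convert a global heat gain in J/yr to an equivalent TOA flux in W/m^2."""
    return joules_per_year / (EARTH_SURFACE_M2 * SECONDS_PER_YEAR)

print(toa_flux_w_per_m2(145e20))  # ~0.9 W/m^2, Trenberth's quoted imbalance
print(toa_flux_w_per_m2(350e20))  # ~2.2 W/m^2, the 2001-03 OHC rise above
```

The two quoted rates do convert to roughly 0.9 and 2.1-2.2 W/sq.m respectively, so the arithmetic in the comment is internally consistent whatever one makes of its interpretation.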
  19. #148 Stephen Baines at 10:15 AM on 16 June, 2010 modern scientific theories are probabilistic Yes, they are. But you should always keep meta-level and subject-level propositions apart. In scientific propositions probabilistic concepts are of course allowed. However, it does not and should not make the truth-value of the proposition itself probabilistic. The proposition "mean and standard deviation of measurement is such-and-such" is not a probabilistic one, but a proposition having definite truth-value about probabilistic phenomena, which is a very different thing. There are some preconditions of the very applicability of probability theory for any subject matter, the first one being the existence of a predetermined event field. Until it is given, it does not even make sense to guess the probability measure. If you try to apply probabilistic reasoning at the meta-level of science, as IPCC AR4 tries
    Where uncertainty in specific outcomes is assessed using expert judgment and statistical analysis of a body of evidence (e.g. observations or model results), then the following likelihood ranges are used to express the assessed probability of occurrence: virtually certain >99%; extremely likely >95%; very likely >90%; likely >66%; more likely than not > 50%; about as likely as not 33% to 66%; unlikely <33%; very unlikely <10%; extremely unlikely <5%; exceptionally unlikely <1%.
    you run into trouble. Based on this scheme they state for example "Overall, it is very likely that the response to anthropogenic forcing contributed to sea level rise during the latter half of the 20th century" (IPCC AR4 WG1 9.5.2) Now, by very likely they mean something with an assessed probability of occurrence between 90% and 95%. OK, we have the probability measure for a specific event. But what is the entire field of events? What kind of events are included in the set with an assessed probability of occurrence between 5% and 10% for which it is not the case that the response to anthropogenic forcing contributed to sea level rise during the latter half of the 20th century? Does this set include counterfactuals like "people went extinct during WWII" or not? What is the assessed probability of occurrence for that event? Does it include worlds where sea level is declining? Or is it rising for all elements of the complement set in the field of events considered, just with a 5-10% assessed probability of occurrence the response to anthropogenic forcing has somehow not contributed to sea level rise at all during the latter half of the 20th century? Does this sentence make sense at all? Without any doubt some message is transmitted by the qualification "very likely" in this case, but it has nothing to do with probabilities as they occur in science. The field of events is not defined and can't be defined, therefore the numbers supplied can't possibly be estimated values of a probability measure, but something else. True, all kind of things happen all the time and we seldom have the luxury to know all possibilities in advance. That's simply human fate. While I was staying in NYC, a crane collapsed at a construction site, crashing through the roof of a nearby hotel and instantly killing a guy who was asleep in the top apartment at high noon. Now, what's the assessed probability of occurrence for that event? Can it be taken into account in any prior risk assessment?
Has the guy considered the probability of a crane coming down on his head before taking a nap? Still, people somehow manage to handle risks in situations where preconditions for applicability of probability theory are lacking. There are empirical studies on this with some weird findings. It is not even easy to construct a conceptual framework where actual human behavior in obscure risky situations can be interpreted as rational. But the fact people have managed to survive so far indicates it can't be too irrational either. BTW, these things have far reaching consequences for e.g. economics. This kind of ability of experts is relied on when assigning "probability" to various propositions being true or false. It has nothing to do with science as such and it is utterly misleading to mix everyday language used in this semi-instinctive risk taking behavior with scientific terms. I don't know what the term "post-modern science" even means. No one knows for sure. But everyone seems to do it.
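For reference, the AR4 likelihood scale quoted earlier in this comment is just a lookup from terms to probability ranges, which can be restated mechanically (a sketch; AR4 gives only one-sided thresholds, so the boundary handling here is a simplification):

```python
# AR4 likelihood terms with (lower, upper) assessed probability bounds,
# transcribed from the quoted passage; boundary inclusivity is simplified.
LIKELIHOOD = [
    ("virtually certain",      0.99, 1.00),
    ("extremely likely",       0.95, 1.00),
    ("very likely",            0.90, 1.00),
    ("likely",                 0.66, 1.00),
    ("more likely than not",   0.50, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely",               0.00, 0.33),
    ("very unlikely",          0.00, 0.10),
    ("extremely unlikely",     0.00, 0.05),
    ("exceptionally unlikely", 0.00, 0.01),
]

def strongest_term(p):
    """Return the narrowest AR4 term whose range contains probability p."""
    matches = [(hi - lo, name) for name, lo, hi in LIKELIHOOD if lo <= p <= hi]
    return min(matches)[1]

print(strongest_term(0.92))  # very likely
```

The table captures how the terms map to numbers; whether those numbers constitute a probability measure in Kolmogorov's sense is exactly the question the comment raises.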
  20. #148 Stephen Baines at 10:15 AM on 16 June, 2010 If you showed any willingness to acknowledge that this situation raises questions about the validity of your method, and that maybe it needs revision or rejection as a consequence, people would be more receptive. As it stands, it appears your idea is the one that is unfalsifiable and subject to confirmation bias. Hereby I do acknowledge that this situation raises questions about the validity of my method. However, the questions raised should be formulated. Having done that answers are to be supplied. So you are welcome, start that falsification job and let others join in. At least give it a try.
  21. BP > it does not and should not make the truth-value of the proposition itself probabilistic. You are quite simply wrong, and yes we are talking about the meta-level of science not within science itself. Any scientific knowledge applied to unobserved events (inductive reasoning) is strictly probabilistic. Did you read the Karl Popper essay? Surely you don't think we can have positive knowledge with 100% certainty? If we can't say anything with certainty, and if we can't say anything probabilistically, what is there left to say? Here's another link that neatly summarizes the topic of scientific "proof". I quote: "Thus, it is important that you shift your frame of reference from one of proof and certainty of knowledge and interpretation of facts to one that is PROBABILISTIC in nature, where our confidence in whether or not we understand something properly is not and never can be absolute."
  22. BP, "So you are welcome, start that falsification job and let others join in. At least give it a try." That's what others have been doing previously. My point at #148 was that you responded to many of their arguments (which were based on apparent consistency among data sets) with an apparent attack on any approach that appealed to such consistency as evidence. I was also correcting your interpretation of KRs use of the term "robust." KR meant it in a different way than you took it. While I didn't actually say (in #148 at least) that science produces provisional and probabilistic statements (e and KR were making that point, and quite well I might add), I agree with the idea. Read the Popper link in e's comment for context. Scientific theories, because they project beyond the realm of experience to make predictions regarding new data, are inherently inferential. Deduction is only possible when the logical loop can be closed. Deduction is useful in specific circumstances, obviously. Ken Lambert. I didn't comment on the OHC data in my post at 148. I think you're addressing someone else?
  23. Riccardo at #167, an excellent assessment, Riccardo. Thanks. Another observation that has been made regarding "skeptical" arguments is their contradictory nature. For example, a little while ago John posted the story "Is the long-term trend in CO2 caused by warming of the oceans?" There, "skeptics" acknowledge that the oceans are warming and indeed use that very fact to try and claim that warmer oceans are driving the increase in atmospheric CO2 (FYI, commentator Ned has just posted an excellent rebuttal to that misguided hypothesis). Yet here we have skeptics arguing (#162 and #168) that the oceans are not warming, or more specifically that the warming trends are not robust. It seems that they choose to ignore Fig. 1 in Trenberth's comment on the Lyman et al. (2010) paper, which clearly shows otherwise. And that introduces another tactic used by "skeptics": cherry-picking incredibly short windows (e.g., 2001-2003) to try and make a case that OHC or global surface temperatures are no longer warming, or to claim that the long-term trends are not robust.
  24. Stephen Baines, I hate to do this but ... "I must apologize for all my mispellings now and in the future." You've misspelled "misspellings". Keep on hammering BP, though, misspellings and all! :)
  25. It's a curse, but one that I hope can bring a little joy to the world ... at my expense of course.
  26. #171 e at 01:02 AM on 17 June, 2010 "Surely you don't think we can have positive knowledge with 100% certainty? If we can't say anything with certainty, and if we can't say anything probabilistically, what is there left to say?" Good question. But that's simply how things are. Probability has a very specific meaning as applied inside science, at least since 1933, when Andrei Nikolaevich Kolmogorov presented his axiom system for probability. It is only applicable if the sample space (field of events) is given. This is often overlooked in mistaken probability calculations: if you cannot precisely define the whole sample space, then the probability of any subset cannot be defined either. If you try to apply this concept of probability to certainty of knowledge, you have to know in advance everything there is to be known. You need to define the set of conceivable propositions, with no truth-value assigned to them at this stage, of course. But this is an impossible quest. Therefore you can't have a probability measure either; there is no proper way to assign probability values to propositions (except under very specific conditions, which are seldom granted). Your usage of the term probability is like that of energy in "Seven Tips for Deriving Energy from Your Relationships." E = m×c² clearly does not apply there. Same word, different concept. That 100% in your rhetorical question above can't be a number; it should be understood in a metaphoric sense. In that sense we can never have absolute certainty indeed; that belongs to someone else. But it does not imply propositions should be inherently fuzzy. They can have perfectly sharp truth values even if our knowledge of them is imperfect. In a sense it is the clash of two belief systems. You seem to believe truth is something to be constructed, while I think it is given; it simply is, irrespective of our state of ignorance. I pursue discovery; your business seems to be invention.
It is a metaphysical difference with far-reaching consequences.
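[For readers unfamiliar with the reference, the 1933 axiom system invoked above can be stated compactly. This is a standard textbook formulation, not a quotation from the thread:]

```latex
% Kolmogorov's axioms: probability is defined only relative to a fixed
% sample space \Omega and a field of events \mathcal{F}.
\begin{align*}
&P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F}, \\
&P(\Omega) = 1, \\
&P\!\left(\bigcup_{i} A_i\right) = \sum_{i} P(A_i)
  \quad \text{for pairwise disjoint } A_i \in \mathcal{F}.
\end{align*}
```

The point being made is that every quantity above presupposes Ω is fixed in advance; with no sample space, none of them is defined.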
  27. BP #162 Nope, you're still incorrect. What you seem to be describing are the conditions required to develop a scientific law. Outside a very small range of disciplines, however, such laws are very rare, and the only things we can reliably develop are scientific theories. In fields where complexity is substantial (such as climate science), the sheer number of variables and the problematic measurement models preclude the development of scientific laws, and we have to rely on induction-driven theory instead. In my experience, people with backgrounds in some parts of physical science and engineering fail to appreciate this, in much the same way that molecular biologists often fail to understand, and discount, the importance of ecology. So my conclusion is that you're showing your bias as someone with a background in a small part of the physical sciences or engineering, and your education has not prepared you to deal properly with the consequences of complexity and uncertainty. I could be wrong, but I don't think so.
  28. #172 Stephen Baines at 01:45 AM on 17 June, 2010 "That [i.e. starting the falsification job] is what others have been doing previously." No, they have not. Everyone started to talk about something else. It is a perfect example of preoccupation with robustness, where the merit of a claim is considered not in itself but in its external connections (networking properties). The same approach, as applied to persons, is a post-modern phenomenon in cultures with European roots, and a sad one. Never ask who the guy actually is; ask about his supporters (this way you don't have to take responsibility for him). It is a moral failure on a mass scale. The quest for all of you in this specific case is to find the error inside the module, not outside of it. A rather simple line of reasoning is presented here as a response to #128 skywatcher at 01:21 AM on 16 June, 2010.
  29. BP's argument here, stripped to essentials: "If land surface temperature trend is reduced..." A big if. I'm not going to say Berényi Péter is misleading, but here we are, once again scrutinizing some very fascinating side issues visible only with a vanishingly small field of view. What a great example of rotating the microscope turret, turning up our magnification until we're effectively blind.
  30. #177 kdkd at 08:59 AM on 17 June, 2010 "In fields where complexity is substantial (such as climate science), the sheer number of variables and the problematic measurement models preclude the development of scientific laws, and we have to rely on induction-driven theory instead" The same problem arises in software engineering. When complexity skyrockets, people simply get lost. However, there is a solution to this problem, and an old one for that matter: modularization. You break the problem down into its constituent parts, treat the sub-problems separately, verify, define standard interfaces, and re-assemble. Do it recursively if necessary. The tricky part, of course, is to find proper module boundaries. BTW, I am still not sure climate is not governed by laws (I mean high-level ones). It is an energetically open system with many internal degrees of freedom. Systems like this tend to be self-organizing and to develop minimax properties.
  31. BP #180 Again there's a problem here. Reductionism will only get you so far, and cannot explain much of the variability of complex natural systems. My own field of research (in the social sciences) suffers from these same problems, and a reductionist approach simply won't work. Same for ecologists. With a single dependent variable, mathematical chaos can result from a three (yes, 3) parameter model, without even including stochasticity. In this kind of situation, a reductionist approach will not help.
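[The three-parameter claim is easy to demonstrate. Here is a minimal sketch of my own, using the classic Lorenz (1963) system, whose three parameters are σ, ρ and β, with a deliberately crude Euler integrator: two starting states differing by one part in a million end up macroscopically far apart.]

```python
def lorenz_step(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.005):
    """One crude Euler step of the Lorenz 1963 system (three parameters)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def evolve(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

# Two starting points differing by one part in a million...
a = evolve((1.0, 1.0, 1.0), 4000)
b = evolve((1.0, 1.0, 1.0 + 1e-6), 4000)

# ...diverge by many orders of magnitude: deterministic, three
# parameters, one step rule, and still chaotic.
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

The sketch makes no climate claim; it only illustrates that stochastic noise is not required for unpredictability.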
  32. #179 doug_bostrom at 09:43 AM on 17 June, 2010 "here we are, once again scrutinizing some very fascinating side issues visible only with a vanishingly small field of view" I would not call it a side issue. Quantifying uncertainty in an absolutely bogus way is a key issue with the Uncertainty Guidance Note for the Fourth Assessment Report.
    Likelihood terminology        Likelihood of the occurrence / outcome
    Virtually certain             > 99% probability
    Extremely likely              > 95% probability
    Very likely                   > 90% probability
    Likely                        > 66% probability
    More likely than not          > 50% probability
    About as likely as not        33 to 66% probability
    Unlikely                      < 33% probability
    Very unlikely                 < 10% probability
    Extremely unlikely            < 5% probability
    Exceptionally unlikely        < 1% probability
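[The scale above is just a lookup from calibrated language to probability bounds. A hypothetical helper (my own sketch; the function and its name are invented, but the thresholds are copied from the guidance note) makes the structure of the scale explicit:]

```python
# AR4 calibrated likelihood terms with their lower probability bounds
# (fractions), ordered from most to least likely.
SCALE = [
    ("virtually certain",      0.99),
    ("extremely likely",       0.95),
    ("very likely",            0.90),
    ("likely",                 0.66),
    ("more likely than not",   0.50),
    ("about as likely as not", 0.33),
    ("unlikely",               0.10),
    ("very unlikely",          0.05),
    ("extremely unlikely",     0.01),
]

def likelihood_term(p):
    """Return the AR4 term for an assessed probability p in [0, 1]."""
    for term, lower in SCALE:
        if p > lower:
            return term
    return "exceptionally unlikely"
```

Note the categories deliberately overlap (anything "extremely likely" is also "very likely"); the helper simply reports the most specific term that applies.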
  33. BP, I forgot to mention that I found your post #176 very thought-provoking, even though I slighted it as a "side issue." It seems to me that overzealous application of Kolmogorov's axiom system could lead us to effective paralysis in a host of fields beyond climate research, including, for instance, medicine and my decision over whether to add earthquake insurance to my homeowners' policy.
  34. Berényi - your post on probability is excellent. It is, however, not the same definition as the probabilistic statements I discussed with regard to inductive proofs. An inductive argument cannot, by its nature, be assigned distinct probabilities. You are generalizing from the specific to the general case, from some set of observations to the 'universe' of possibilities. Since you have not observed all cases in all situations over the entire universe, you don't know the solution space, and can't assign a specific, numeric probability. That is a different domain from an inductive argument. In scientific induction, what you can do is take multiple inductive arguments, evaluate the deductive and probabilistic premises, and decide based on those which of the inductive arguments carries more weight. This is often a deferred judgement - awaiting the predictions of the various inductive arguments to see which has the most predictive or widely applicable generalizations. But it is a judgement call. Initial reviewers of the General Theory of Relativity didn't assign a numeric probability to its correctness - they looked at its consistency with multiple sets of observed data, its parsimony of explanation (no complex system of crystal spheres, no back-bending of the theory to explain certain observations), and its predictive power in ways that differed from competing hypotheses. Even then, when a few unique predictions were confirmed, it took multiple avenues of independent evidence to raise the General Theory to the status of an accepted consensus. Inductive arguments cannot be assigned hard probabilities - that would be a deductive argument based upon complete knowledge, another creature entirely.
An inductive argument can indeed be more probable than alternatives - in the sense of "supported by evidence strong enough to establish presumption but not proof (a probable hypothesis)" - Merriam-Webster, 1st definition, as opposed to the 2nd definition, "establishing a probability (probable evidence)". It's important not to confuse those, which I feel you have in your most recent post. The 10 numeric alternatives you noted for agreement in the Fourth Assessment Report are indeed judgement calls, not deterministic probabilities based upon complete knowledge. Perhaps you would be more satisfied with a range of "wholeheartedly agree" to "ambivalent" to "You must be kidding"? A numeric range at least gives readers some weighting on how strong the agreement is! Inductive arguments cannot be proven; they can be better supported than the alternatives, or, eventually, they can be disproven by contradictory evidence. You have to accept some uncertainty in science, or you will never be able to add to your knowledge by generalizing to cases and combinations you haven't yet seen.
  35. BP > "You seem to believe truth is something to be constructed, while I think it is given; it simply is, irrespective of our state of ignorance. I pursue discovery; your business seems to be invention." The underlying "truth" of reality and our imperfect knowledge of reality are two very distinct entities. I believe truth may be absolute, but our knowledge of that truth certainly is not. We are not born with this knowledge implanted in our minds; we have no choice but to construct that knowledge from our senses and our ability to apply logic. When that application of logic is used to derive general principles from given observations, that logic is by its very nature inductive, and thus can never give us a truly binary answer. Asking whether these conclusions should be fuzzy is irrelevant; we have no choice in the matter. I won't disagree that your proposition has a binary truth value in the underlying fabric of reality (though that point is philosophically debatable); the problem is that, as we lack omniscience, humans are never privy to the "true" nature of reality. The best we can possibly do is weigh inductive conclusions against one another based on our current limited knowledge of the world, and that's exactly what I was doing when I pointed out the improbability of your specific claim. And yes, I do understand you cannot assign hard probabilities to inductive conclusions; that wasn't what I was doing. I was qualitatively judging the likelihood of your claim relative to the competing claim. KR's post above gives a great explanation of the types of probabilistic statements we are making. As for your talk of "modules", this is a general post discussing the relevance of the sum of all current evidence on climate change. In the spirit of this post and the theme of this entire blog, I ask a very relevant question: why should this very speculative hypothesis cast doubt on the current state of climate science and its evidence taken as a whole?
Your obsession with trying to steer the conversation back to a "narrow piece of the puzzle" does a great job of proving John's point, and highlights your stubborn refusal to admit the simple point that lots of evidence is better than a little evidence.
  36. #185 e at 15:04 PM on 17 June, 2010 "lots of evidence is better than a little evidence" And some firm evidence is better than lots of shaky evidence.
  37. BP #186 You're now verging on solipsism, which is yet another technique that so-called climate sceptics use to mislead. This is especially true in that you are demanding reductionist, deductive proof in a field of knowledge where such things are not possible.
  38. BP > "And some firm evidence is better than lots of shaky evidence." Agreed.
  39. BP: "And some firm evidence is better than lots of shaky evidence." That's fine, but we've established there was no evidence for your hypothesis that led us down this fascinating road. So I guess that leaves us with lots of 'shaky' evidence, although I'd hardly call the multiple independent lines of evidence terribly shaky, as they've not been successfully challenged. Each line of evidence is pretty sound; many together are very strong. One line on its own could perhaps be questioned... but when several lines, with different measurement strategies, converge on one answer, that answer is, ah, robust. From what I can guess, you're a software engineer? I think kdkd @177 may have it right that it is your training that is blinding you to the concepts required in environmental science, whether that's the right kind of inductive reasoning or dealing with multiple lines of evidence, none of which may show you exactly what you want to know, but all of which point strongly to some overall conclusion.
  40. #185 e at 15:04 PM on 17 June, 2010 "We are not born with this knowledge implanted in our minds; we have no choice but to construct that knowledge from our senses and our ability to apply logic. When that application of logic is used to derive general principles from given observations, that logic is by its very nature inductive, and thus can never give us a truly binary answer." Except it usually does not happen that way. What we actually do is postulate universal principles very early in the process, based on little observational data. This step can be called inductive if you will, but it goes far beyond what is strictly necessary to explain the set of observations available at the moment. The ancient Greeks postulated circular motion for the Heavenly Bodies this way, because the Circle is the only perfect closed curve (whatever "perfect" means) and the behavior of the Heavens certainly looked somewhat cyclic even at first glance. The theory was extremely successful and had considerable predictive power; Ptolemaic cosmology prevailed for one and a half millennia. As soon as the conceptual framework is given, we can happily rely on deductive reasoning using perfectly binary logic. Observation is still necessary to fine-tune the model to reality (you still need to determine the number, sizes, positions, orientations, orbits and periods of the epicycles), but otherwise all you do is calculate projections of these motions onto the sphere of the Heavens (which needs quite a bit of spherical geometry). Even its demise is enlightening. From retrospective analysis we know that any quasi-periodic motion can be approximated by a sufficient number of epicycles with arbitrary precision. The proof goes something like the one for Fourier series. Therefore there was no way observation could falsify the theory, provided of course the challenge was the accurate description of the kinematic behavior of the projections of the Heavenly Lights onto the Celestial Sphere.
The model could be refined ad infinitum, with an ever increasing number of epicycles. Unfortunately, during this process it became less and less understandable, and that was the real problem with it. With our vast computing power we could do even better at Ptolemaic calculations than medieval thinkers; there would be almost no limit to increasing the number of epicycles recursively. Then came Nicolaus Copernicus, and he failed miserably. His model was much more transparent than Ptolemy's (after all those added epicycles), but he was still sticking to circular motion (this time around the Sun). Initially his theory was rejected not because of the theological objections of the Catholic Church (those came later, preceded by an early expression of distaste by Luther), but because it was all too easy to falsify. The parallax predicted by his theory was unobservable, and on top of that, with simple circular orbits its performance was much inferior to improved Ptolemaic predictions. One could of course add epicycles to planetary orbits around the Sun, but in that case what's the point of the whole exercise? Just to leave poor birdies behind in empty Air as Earth orbits the Sun? It was only after Johannes Kepler discovered elliptic orbits that the system got actually simpler, at least in a conceptual sense, if not computationally. By the way, the first two laws of Kepler were derived from a single case (the Martian orbit), not from some induction over a wide sample of orbits. The pattern is the same even much later. Albert Einstein, in developing his theory "On the Electrodynamics of Moving Bodies," didn't have to make inductive inferences over vast observational databases. He used only a single experiment (Michelson & Morley, 1887, not even cited by name, but just as "unsuccessful attempts to discover any motion of the earth relatively to the 'light medium'") and some symmetry properties of the Maxwell equations discovered earlier by Lorentz.
Compared to this, the inductive step he took was enormous. Ten years later he repeated the performance with his Geometrodynamics, this time using only the Eötvös experiment, the geometrization of Electrodynamics by Minkowski, and some more symmetry speculations. I could go on with this ad nauseam, from QM to String Theory. The general pattern is that very little empirical data is used for huge inductive leaps, and most of the induction is done at a rather high level by introducing some invariance principle, transforming the mathematical form of existing laws or, even better, finding mathematical structures that include the descriptions of several unrelated fields as limit cases. The role of induction here is more like a heuristic principle than a systematic tool working on many instances of observation. The bulk of the work goes into deriving specific cases from general equations obtained in this easy and reckless way on the one hand, and performing experiments to check these consequences on the other. Mathematics seems to play a central role in this process. Galileo already noted that the great Book of Nature is somehow written in the language of Mathematics. It means even induction can be performed mainly on the symbolic level, as with the quantization of certain representations in classical physics that are directly transformed into QM equations. Wigner's fifty-year-old essay, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," still ponders this question. We can never be sure whether these signs give us truth or not. However, we have no choice but to consider them true until proven false (by experiment or observation). With fuzzy truth-values assigned to propositions, even proper falsification becomes impossible. If something is 95% true, it may take quite a lot of counterexamples to get one convinced it must be false after all. Even then only a lower certainty might be claimed, 90% perhaps - still very likely.
If even falsifiability is abandoned, we are left in the outer darkness. Binary logic is not for everything. Spouses, not driven by logic, can perfectly well love and hate each other at the same time, and one still has to deal with situations like this somehow. But right now we are not doing zen, we are doing science, and in this fine tradition you should let your communication be, "Yea, yea; Nay, nay: for whatsoever is more than these cometh of evil."
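[The remark that any (quasi-)periodic path can be matched by enough epicycles is the Fourier series in disguise. A small sketch of my own, with a made-up two-frequency "orbit", shows the approximation error collapsing once enough circles are included; each discrete Fourier coefficient is one epicycle.]

```python
import cmath

def dft_coeffs(samples, n_terms):
    """Complex Fourier coefficients c_k for |k| <= n_terms: one epicycle each."""
    N = len(samples)
    return {k: sum(z * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n, z in enumerate(samples)) / N
            for k in range(-n_terms, n_terms + 1)}

def epicycles(coeffs, t):
    """Position at time t in [0, 1): a sum of circles rotating at integer rates k."""
    return sum(c * cmath.exp(2j * cmath.pi * k * t) for k, c in coeffs.items())

# A made-up closed "orbit": a unit circle plus a retrograde wobble at frequency -3.
N = 256
path = [cmath.exp(2j * cmath.pi * n / N)
        + 0.3 * cmath.exp(2j * cmath.pi * -3 * n / N)
        for n in range(N)]

def max_error(n_terms):
    """Worst-case distance between the true path and its epicycle reconstruction."""
    coeffs = dft_coeffs(path, n_terms)
    return max(abs(epicycles(coeffs, n / N) - z) for n, z in enumerate(path))

# With one epicycle the wobble is missed entirely; with three it is captured.
```

This also illustrates the falsifiability point: with an unbounded budget of epicycles, no kinematic observation of the path alone could ever refute the model.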
  41. #189 skywatcher at 19:32 PM on 17 June, 2010 "we've established there was no evidence for your hypothesis" No, you have not. "dealing with multiple lines of evidence, none of which may show you exactly what you want to know, but all of which point strongly to some overall conclusion" Sounds like the prosecutor's job.
  42. Differences in temperature anomaly (10-year running mean) between nearby stations. Base period is 1971-2000. Sniezka is a mountain station without UHI effect; Wroclaw is a city with a population of ~600,000.
  43. BP #191 This latest comment, and others, really makes it look as though your own training has ill equipped you to understand how to deal properly with uncertainty and poor measurability. Which leads me down the track of thinking that, rather than trying to actively mislead, a lot of your comments show your susceptibility to the Dunning-Kruger effect.
  44. Berényi Péter, while I still cannot seem to arrive at a state of paralysis based on your thoughts and opinions, that was a really nicely written post. Thanks also to 'e' and 'KR.'
  45. Retracing our footsteps back from the interesting and informative and completely necessary conversation on epistemology, I'm still left with hints about climate behavior that my feeble brain can readily analogize at a more prosaic level. My vehicle's engine is making some unusual clattering noises. The oil pressure light is flickering. I've not checked my oil level recently; I can't really remember how many miles ago. I know my car consumes a certain amount of oil, but my notion of exactly how much oil is consumed per mile is hazy. None of these things is a certain indication that my engine is about to burn up. The clattering could be a collapsed lifter; the flickering lamp could be a short. My assumptions about a potentially diminishing quantity of oil are hazy at best. I should add, this is all behavioral information from my actual experience with one of my cars. So none of the indirect information I have about what's going on under the hood is anything like conclusive when I consider each clue in isolation. I've had a clattering lifter before. The engine wiring harness is in poor shape, and I've seen the oil pressure light flicker and even light solidly in the past, only to find a full oil pan. My last measurement of oil level is even more uncertain; I have only the vaguest notion of how much oil ought to be present. Taking all that information together, however, I can form a reasonably useful judgment that my vehicle is about to undergo a drastic change, and I ought to pull off and check my oil level. In all probability I'll find the level to be critically low. For me, that's the model of what I'm seeing with regard to climate. We've got all sorts of signs and portents pointing more or less in the same direction. None are perfectly reliable, some are quite imperfect, but it would actually be unreasonable to ignore the overall message.
  46. doug_bostrom at 07:33 AM, doug, perhaps your oil level may be low, but there is no direct correlation between a flickering oil light and the oil level. They may coincide much of the time, and be plotted on a graph, but the mechanisms that drive each are separate, with only indirect links. PS. Don't trade in the car yet; there is a lot of mileage left in the analogy tank.
  47. #195 doug_bostrom at 07:33 AM on 18 June, 2010 "I ought to pull off and check my oil level" Except your car is the world economy, already on a bumpy road along the river, on a floodplain with swamp on both sides; the water level is rising fast, so you have to reach high ground as soon as possible. Make your choice.
  48. #193 kdkd at 05:53 AM on 18 June, 2010 "how to deal with uncertainty and poor measurability properly" In science the standard practice is to get rid of uncertainty by improvements to your measurement system, postponing your judgment until the job is done. In real life this procedure is not always practicable, because the decision is urgent and resources are lacking. In that case you have to make do with what you have. But do not call that science, please. "your comments are showing your susceptibility to the Dunning-Kruger effect" So are yours :)
  49. John, if you don't mind, I'm going to take some initiative here. BP et al., you have steered us way off topic. As fascinating as the philosophy behind the science is, it should not detract from the fact that "skeptics" show a propensity to distort and mislead when it comes to the science. Perhaps, BP, you are trying to distract from that inconvenient fact? BP, please, at least have the gumption to call foul when "skeptics" mislead, which is actually the topic of this post. Or do you disagree with John's (and others') assertion that "skeptics" mislead?
  50. Thank you, johnd, you extended the life of my analogy by another few miles. Notice how John seized on a single indicator or metric and began to work it as a source of doubt? We've also got the strange clattering from the engine, but let's ignore that aberration and focus instead on how the flickering oil light might not be telling us anything, because it's an indirect diagnostic of the oil level. Later-- after we've wrestled the oil pressure light to the ground-- we'll forget about lubrication quality and instead exclusively quibble over what a mechanical thrashing noise may or may not tell us about the engine. Don't ever consider the symptoms as a whole, because that might lead to a conclusion. This form of abstract mental disintegration will lead to physical engine disintegration. Berényi Péter, the fossil fuel gas tank is rapidly approaching empty and there's no fuel station in sight. The car's shortly going to run out of gas, leaving us stranded regardless of the actual state of the oil pan. The anthropogenic warming thing is one issue; fossil fuels are another. The two are closely related, but fossil fuels are on their own trajectory, quite apart from climate problems.







© Copyright 2020 John Cook