A presentation that goes as straight to the point as this one does will necessarily rest on a fairly extensive ‘back catalogue’ of supporting graphs, figures and clarifications, most of them generated in response to the questions and objections likely to arise along the way as the argument is set up. Addressing these at every turn, and substantiating every choice made at every single step, would bog down and sidetrack the presentation to such an extent that the overall message would ultimately be lost in the noise. They will therefore have to remain in the background for now – kept ready at hand for the next round, so to speak. My point is this: let the argument, as it stands, be presented first, to its completion – and then bring on the critique.
INTRODUCTION – THE THEORY BEHIND
The ‘AGW (CO2) warming hypothesis’ (really just another name for ‘the general idea of an «enhanced greenhouse effect» causing global warming’) says that, as the total content of CO2 in the atmosphere rises over time, so will global temperatures – in short: «Temps should go up». The scientific method demands that any scientific hypothesis be able to make predictions like this, statements or claims about the world that can be tested, thus allowing us to either strengthen or weaken our trust in the explanatory power of our hypothesis. However, if there is to be any point in performing such a test, the prediction being tested needs to be relevant, i.e. it should be more or less unique to our particular hypothesis. So is «Temps should go up» a relevant prediction? No. It’s a prediction, but not a relevant one, because it isn’t specific enough. It isn’t unique to the ‘CO2 warming hypothesis’; it cannot distinguish between one proposed cause and another. For example, ‘more solar heat being absorbed by the Earth system over time’ would be an alternative explanation of multidecadal global warming to the «enhanced-greenhouse-effect» proposition. Both would predict the world to get warmer. So how do you choose one over the other? You home in on an observation that would be unique to your favoured explanation. And now you’ve got yourself a relevant prediction to be tested …!
We, after all, want to find the cause behind the observed effect (‘global warming’), not the effect itself – that has already been found. That’s merely our starting point.
Update (March 24th) at the end of this post – a kind of response from Feldman.
There was much ado recently about a new paper published in ‘Nature’ (“Observational determination of surface radiative forcing by CO2 from 2000 to 2010” by Feldman et al.) claiming to have observed a strengthening in CO2-specific “surface radiative forcing” at two sites in North America from 2000 to the end of 2010 (a period of 11 years) of about 0.2 W/m2 per decade. Through this observation the authors further claim to have shown empirically (allegedly for the first time outside the laboratory) how the rise in atmospheric CO2 concentration directly and positively affects the surface energy balance, by adding more and more energy to it as “back radiation” (“downwelling longwave (infrared) radiation”, DWLWIR), thus – by implication – leading to surface warming.
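For a sense of scale, the paper’s numbers can be set against the conventional simplified expression for CO2 radiative forcing, ΔF = 5.35·ln(C/C0) (Myhre et al. 1998) – bearing in mind that this formula gives forcing at the tropopause, not the surface value Feldman et al. report, so the comparison is order-of-magnitude only. The ~369 ppm baseline for 2000 is my own assumption (roughly the Mauna Loa annual mean); the 22 ppm rise is the paper’s figure. A rough sketch:

```python
import math

# Simplified CO2 forcing expression (Myhre et al. 1998): dF = 5.35 * ln(C / C0).
# NOTE: this is tropopause forcing, not the surface forcing Feldman et al.
# measured, so the comparison below is order-of-magnitude only.
C0 = 369.0   # assumed atmospheric CO2 in 2000, ppm (approx. Mauna Loa annual mean)
dC = 22.0    # rise over 2000-2010 quoted by Feldman et al., ppm

dF = 5.35 * math.log((C0 + dC) / C0)
print(f"Simplified-formula forcing for a {dC:.0f} ppm rise: {dF:.2f} W/m2")
# -> about 0.31 W/m2 at the tropopause, versus the ~0.2 W/m2 per decade
#    clear-sky surface figure the paper reports
```

So the two numbers are at least of comparable magnitude, which is presumably why the result was read as confirmation.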
In other words, Feldman et al. claim to have obtained direct empirical evidence – from the field – of a strengthening of the “greenhouse effect”. This result, it would seem, lends considerable support to the hypothesis that our industrial emissions of CO2 and other similar gaseous substances to the atmosphere have enhanced, and are indeed still enhancing, the Earth’s atmospheric rGHE, thus warming the global surface – the AGW proposition.
From the abstract:
“(…) we present observationally based evidence of clear-sky CO2 surface radiative forcing that is directly attributable to the increase, between 2000 and 2010, of 22 parts per million atmospheric CO2.”
“These results confirm theoretical predictions of the atmospheric greenhouse effect due to anthropogenic emissions, and provide empirical evidence of how rising CO2 levels (…) are affecting the surface energy balance.”
So the question is: Do these results really “confirm theoretical predictions of the atmospheric greenhouse effect due to anthropogenic emissions”?
Of course they don’t. As usual, the warmists refuse to look at the whole picture, insisting rather on staying inside the tightly confined space of their own little bubble model world.
In July I wrote a blog post pointing out a strange and very conspicuous step change in global mean temps relative to the trended AMO (North Atlantic SSTa), occurring across the 8-year period of 1963-70:
As you can clearly see, the two curves generally follow each other in remarkable style all the way from 1860 till today, except for the relatively sudden and substantial global upward shift taking place across the last half of the 60s, firmly established by the end of 1970. After this point, the curves are back to tracking each other as impressively as before the shift, only now with the global curve raised 0.25 degrees above the North Atlantic one.
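The kind of step I’m describing is easy to isolate: subtract the North Atlantic curve from the global one, so the shared multidecadal swings cancel, and compare the mean of the difference series before and after the shift window. A minimal sketch on synthetic data – the series, the dates and the 0.25-degree offset below are fabricated purely to illustrate the method, not taken from the actual records:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2015)

# Synthetic "AMO-like" series, and a "global" series that tracks it
# except for a +0.25 C step established across 1963-70.
amo = 0.3 * np.sin(2 * np.pi * (years - 1900) / 65) + rng.normal(0, 0.05, years.size)
step = np.where(years >= 1970, 0.25,
                np.where(years <= 1963, 0.0,
                         0.25 * (years - 1963) / 7))   # ramp through the shift window
glob = amo + step + rng.normal(0, 0.05, years.size)

diff = glob - amo                      # common variability cancels, the step remains
before = diff[years <= 1963].mean()
after = diff[years >= 1970].mean()
print(f"Mean offset before 1963: {before:+.2f} C, after 1970: {after:+.2f} C")
```

The difference series exposes the step cleanly even though both raw curves are dominated by the same multidecadal wiggles.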
“So, how to sort this out and do a more realistic job of detecting climate change and (…) attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard.”
There is a very simple way of doing this that people at large still seem to be absolutely blind to. To echo the words of ‘Statistician to the Stars!’ William M. Briggs: “Just look at the data!” You have to do it in detail, both temporally and spatially. I have done this already here, here and here + a summary of the first three here. In this post I plan to highlight even more clearly the difference between what an anthropogenic (‘CO2 forcing’) signal would and should look like and what a signal pointing to natural processes looks like.
Curry has many sensible points. She says among other things:
“Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations is compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. (…)
The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.
Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.”
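Curry’s point about phasing can be illustrated with nothing more than a pure sine wave: a zero-mean oscillation contributes essentially no net trend over full cycles, yet over a window shorter than its period it can masquerade as a substantial linear trend. A toy numpy sketch – the 65-yr period, 0.2-degree amplitude and 1950-1980 window are illustrative choices of mine, not fitted to any real index:

```python
import numpy as np

period, amp = 65.0, 0.2                        # illustrative AMO-like oscillation
years = np.arange(1900, 2031)                  # spans two full 65-yr cycles
osc = amp * np.sin(2 * np.pi * (years - 1900) / period)

def trend_per_decade(t, y):
    """Least-squares linear trend, in degrees per decade."""
    return 10 * np.polyfit(t, y, 1)[0]

full = trend_per_decade(years, osc)            # two full cycles: trend near zero
win = (years >= 1950) & (years <= 1980)
short = trend_per_decade(years[win], osc[win]) # rising limb of the cycle only

print(f"Trend over two full cycles: {full:+.3f} C/decade")
print(f"Trend over 1950-1980:       {short:+.3f} C/decade")
# Over full cycles the oscillation really is "a wash"; over the 30-yr window
# it alone produces a sizeable apparent warming trend.
```

Which is exactly why attribution over a sub-cycle window has to account explicitly for where in its phase the oscillation happens to be.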