A presentation as direct as this one necessarily rests on a fairly extensive ‘back catalogue’ of supporting graphs, figures and clarifications, most of them generated in response to the questions and objections likely to arise along the way as the argument is set up. Addressing these at every turn, and substantiating every choice made at every single step, would bog down or sidetrack the presentation to the point where the overall message was lost in the noise. So they will have to remain in the background for now, kept ready at hand for the next round, so to speak. My point is this: let the argument, as it stands, be presented first, to its completion – and then bring on the critique.
INTRODUCTION – THE THEORY BEHIND
The ‘AGW (CO2) warming hypothesis’ (really just another name for ‘the general idea of an «enhanced greenhouse effect» causing global warming’) says that, as the total content of CO2 in the atmosphere rises over time, so will global temperatures – in short: «Temps should go up». The scientific method demands that any scientific hypothesis be able to make predictions like this, statements or claims about the world that can be tested, thus allowing us to either strengthen or weaken our trust in the explanatory power of our hypothesis. However, if there is to be any point in performing such a test, the prediction being tested needs to be relevant, i.e. it should be more or less unique to our particular hypothesis. So is «Temps should go up» a relevant prediction? No. It’s a prediction, but not a relevant one, because it isn’t specific enough. It isn’t unique to the ‘CO2 warming hypothesis’; it cannot distinguish between one proposed cause and another. For example, ‘more solar heat being absorbed by the Earth system over time’ would be an alternative explanation of multidecadal global warming to the «enhanced-greenhouse-effect» proposition. Both would predict the world to get warmer. So how do you choose one over the other? You home in on an observation that would be unique to your favoured explanation. And now you’ve got yourself a relevant prediction to be tested …!
We, after all, want to find the cause behind the observed effect (‘global warming’), not the effect itself – that has already been found. That’s merely our starting point.
Turns out the results from my last blog post were challenged even before I published them. In a 2014 paper, Allan et al., the alii notably including the principal investigator of the CERES team, Dr. Norman Loeb, went about reconstructing the ToA net balance (including the contributing ASR and OLR fluxes) from 1985 onwards, just like I did; in fact, it’s all right there in the title itself: “Changes in global net radiative imbalance 1985–2012”. I missed this paper completely, even while specifically managing to catch and discuss (in the supplementary post, Addendum I) its follow-up (Allan, 2017). The results and conclusions of Allan et al., 2014, regarding the downward (SW) and upward (LW) radiative fluxes at the ToA and how they’ve evolved since 1985, appear to disagree to a significant extent with mine. I was only very recently made aware of this paper’s existence by a commenter on Dr. Roy Spencer’s blog, “Nate”, who was kind enough to notify me (albeit in an ever so slightly hostile manner):
“So, how to sort this out and do a more realistic job of detecting climate change and (…) attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard.”
There is a very simple way of doing this that people at large still seem to be absolutely blind to. To echo the words of ‘Statistician to the Stars!’ William M. Briggs: “Just look at the data!” You have to do it in detail, both temporally and spatially. I have done this already here, here and here, plus a summary of the first three here. In this post I plan to highlight even more clearly the difference between what an anthropogenic (‘CO2 forcing’) signal would and should look like and what a signal pointing to natural processes would look like.
Curry makes many sensible points. Among other things, she says:
“Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations is compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. (…)
The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.
Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.”
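Curry’s point that such oscillations are “NOT a wash” over sub-cycle windows is easy to illustrate numerically. The sketch below is my own toy construction, not anything from Curry or the IPCC: the 65-year period and 0.2 °C amplitude are illustrative assumptions, not fitted AMO/PDO values. A pure oscillation with no secular trend at all still produces a sizeable linear trend when fitted over a 30-year window that happens to sit on its rising limb.

```python
import numpy as np

# Toy illustration: a trendless oscillation with a 65-yr period and
# 0.2 °C amplitude (illustrative assumptions, not fitted AMO/PDO values).
PERIOD_YR = 65.0
AMPLITUDE_C = 0.2

def linear_trend(years, values):
    """Least-squares slope, converted to °C per decade."""
    return np.polyfit(years, values, 1)[0] * 10.0

def oscillation(years):
    # -cos places the rising limb of the cycle right at 1980
    return -AMPLITUDE_C * np.cos(2 * np.pi * (years - 1980) / PERIOD_YR)

t_full = np.arange(1980, 2045)   # one complete 65-yr cycle
t_sub = np.arange(1980, 2010)    # a 30-yr window on the rising limb

trend_full = linear_trend(t_full, oscillation(t_full))
trend_sub = linear_trend(t_sub, oscillation(t_sub))

print(f"full 65-yr cycle:  {trend_full:+.3f} °C/decade")  # essentially zero
print(f"1980-2010 window:  {trend_sub:+.3f} °C/decade")   # distinctly positive
```

Over the full cycle the fitted trend is essentially zero, just as the “wash” argument assumes; over the 1980–2010 sub-window the very same trendless oscillation yields roughly +0.15 °C per decade. That is why the phasing of the AMO and PDO cannot simply be ignored when attributing warming over a period of that length.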