A presentation such as this, going as straight to the point as this one does, will necessarily rely on a fairly extensive ‘back catalogue’ of supporting graphs, figures and clarifications, most of which were generated in response to the likely questions and objections that might arise along the way as the argument is set up. Addressing these at every turn, and substantiating every choice made at each single step, would bog down and sidetrack the presentation to such an extent that the overall message would ultimately be lost in the noise. They will therefore have to remain in the background for now, kept ready at hand for the next round, so to speak. My point is this: let the argument, as it stands, be presented first, to its completion – and then bring on the critique.


The ‘AGW (CO2) warming hypothesis’ (really just another name for ‘the general idea of an «enhanced greenhouse effect» causing global warming’) says that, as the total content of CO2 in the atmosphere rises over time, so will global temperatures – in short: «Temps should go up». The scientific method demands that any scientific hypothesis be able to make predictions like this, statements or claims about the world that can be tested, thus allowing us to either strengthen or weaken our trust in the explanatory power of our hypothesis. However, if there is to be any point in performing such a test, the prediction being tested needs to be relevant, i.e. it should be more or less unique to our particular hypothesis. So is «Temps should go up» a relevant prediction? No. It’s a prediction, but it’s not a relevant one, because it isn’t specific enough. It isn’t unique to the ‘CO2 warming hypothesis’. It cannot distinguish one proposed cause from another. For example, ‘more solar heat being absorbed by the Earth system over time’ would be an alternative explanation of multidecadal global warming to the «enhanced-greenhouse-effect» proposition. Both would predict the world to get warmer. So how do you choose one over the other? You home in on an observation that would be unique to your favoured explanation. And now you’ve got yourself a relevant prediction to be tested …!

We, after all, want to find the cause behind the observed effect (‘global warming’), not the effect itself – that has already been found. That’s merely our starting point.


How the CERES EBAF Ed4 data disconfirms “AGW” in 3 different ways …

And also how – in the process – it shows the new RSSv4 TLT series to be wrong and the UAHv6 TLT series to be right.

For those of you who aren’t entirely up to date with the hypothetical idea of an “(anthropogenically) enhanced GHE” (the “AGW”) and its supposed mechanism for (CO2-driven) global warming, the general principle is fairly neatly summed up here:

Figure 1. From Held and Soden, 2000 (Fig.1, p.447).

Below, I’ve modified this diagram somewhat, so as to clarify even further the concept of “the raised ERL (Effective Radiating Level)” – referred to as Ze in the schematic above – and how it is meant to ‘drive’ warming within the Earth system; simply to bring the message of this fundamental premise of “AGW” thinking across more clearly.
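To make the premise itself concrete, here is a minimal back-of-the-envelope sketch of the raised-ERL mechanism – not taken from Held and Soden, just an illustration assuming a fixed mean lapse rate of 6.5 K/km, a surface temperature of 288 K, and simple blackbody emission from Ze:

```python
# Minimal sketch of the "raised ERL" premise: emission to space comes
# (on average) from the effective radiating level Ze. With a fixed
# lapse rate, raising Ze lowers the emitting temperature, which lowers
# OLR until the surface has warmed enough to restore balance.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
LAPSE = 6.5        # assumed mean tropospheric lapse rate, K/km
T_SURF = 288.0     # assumed global mean surface temperature, K

def olr(z_e_km, t_surf=T_SURF):
    """Blackbody OLR emitted from the ERL at height z_e_km."""
    t_e = t_surf - LAPSE * z_e_km
    return SIGMA * t_e ** 4

# Raising Ze from ~5 km to ~5.2 km (all else held equal) reduces OLR:
olr_before = olr(5.0)
olr_after = olr(5.2)
print(round(olr_before - olr_after, 2), "W/m^2 less OLR")

# The surface temperature needed to restore the original OLR with the
# higher ERL -- i.e. the warming the hypothesis says must follow:
t_e_needed = (olr_before / SIGMA) ** 0.25
t_surf_new = t_e_needed + LAPSE * 5.2
print(round(t_surf_new - T_SURF, 2), "K of surface warming")
```

With a fixed lapse rate, the implied surface warming is simply the lapse rate times the rise in Ze – which is precisely the ‘mechanism’ the schematic is meant to convey.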

Tamino’s radiosonde problem, Part 1

RSS vs. RATPAC

Figure 1. Original found here: https://tamino.wordpress.com/2015/12/11/ted-cruz-just-plain-wrong/

A good month ago, the perennially unsavoury character calling himself Tamino once again tried to hold up the spotty “global” network of radiosondes (weather balloons) as somehow a better gauge of the progression and trend of tropospheric temperature anomalies over the last 37 years than the satellites, by virtue of being essentially – as he would glibly put it – “thermometers in the sky”.

So his simple take on the glaring “drift” between current surface records and the satellites over the last 10-12 years is this: The surface records are right and the satellites are wrong. Why? Because the surface records agree with the radiosondes while the satellites don’t! The radiosondes implicitly – in his world – representing “Troposphere Truth”.

And so, when your starting premise goes like this: the radiosondes = thermometers in the sky = troposphere truth, then you will – by default – interpret any “drift” observed between them and the satellites (as in Fig.1 above) as a problem with the latter.

To repeat Tamino’s fairly simplistic reasoning, then, in the form of some sort of logical-sounding argument: Surface and satellites don’t agree. Radiosondes and satellites don’t agree. But surface and radiosondes do agree. Which means the latter two are right, their agreement robustly verifying the ‘rightness’ of each. (And also, the radiosondes represent “Troposphere Truth”.) Which leaves the satellites out in the cold …

There is, however, a definite issue to be had with this line of argument.

It doesn’t hold up to scrutiny …
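For what it’s worth, the “drift” in Fig.1 is nothing more exotic than the linear trend of the difference between two anomaly series. A minimal sketch of the computation, with made-up numbers standing in for the radiosonde and satellite series (the actual RSS and RATPAC values are not reproduced here):

```python
import numpy as np

# Two hypothetical monthly anomaly series covering the same period.
# (Illustrative values only -- NOT the actual RSS or RATPAC data.)
rng = np.random.default_rng(0)
months = np.arange(120)                              # ten years of months
sondes = 0.010 * months + rng.normal(0, 0.1, 120)    # radiosonde-like
sats = 0.006 * months + rng.normal(0, 0.1, 120)      # satellite-like

# The "drift" is simply the linear trend of the difference series:
diff = sondes - sats
drift_per_month = np.polyfit(months, diff, 1)[0]
print(f"drift: {drift_per_month * 120:.3f} K/decade")
```

Note that the drift itself only tells you that the two series disagree; it says nothing about which of them is wrong – which is precisely the point being argued here.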

Why “GISTEMP LOTI global mean” is wrong and “UAHv6 tlt gl” is right

Ten days ago, Nick Stokes wrote a post on his “moyhu” blog where he – in his regular, guileful manner – tried his best to distract from the pretty obvious fact (pointed out in this recent post of mine) that GISS, poleward of ~55 degrees of latitude, and most notably in the Arctic, basically uses land data only, effectively rendering its “GISTEMP LOTI global mean” product a bogus record of actual global surface temps.

Among other things, he says:

“The SST products OI V2 and ERSST, used by GISS then and now, adopted the somewhat annoying custom of entering the SST under sea ice as -1.8°C. They did this right up to the North Pole. But the N Pole does not have a climate at a steady -1.8°C. GISS treats this -1.8 as NA data and uses alternative, land-based measure. It’s true that the extrapolation required can be over long distances. But there is a basis for it – using -1.8 for climate has none, and is clearly wrong.

So is GISS “deleting data”? Of course not. No-one actually measured -1.8°C there. It is the standard freezing point of sea water. I guess that is data in a way, but it isn’t SST data measured for the Arctic Sea.”

The -1.8°C averaging bit is actually a fair and interesting point in itself, but this is what Stokes does: he finds a peripheral detail somehow related to the actual argument being made and proceeds to misrepresent its significance, in an attempt to divert people’s attention from the real issue at hand. The real issue in this case is, of course, GISS’s (bad) habit of smearing anomaly values from a small collection of land data points all across the vast polar cap regions, down to 55-60 degrees of latitude: over wide tracts of land (where for the most part we don’t have any data), over expansive stretches of ocean (where we do have SST data readily available), AND over complex regions affected by sea ice (where we do indeed have data – SSTs, once again – when and where there isn’t any sea ice cover, but none whatsoever when there is).
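To make the two approaches concrete, here is a minimal sketch – with a made-up band of Arctic grid cells, not actual GISS code or data – of the difference between keeping the SST values we do have in the average and discarding them in favour of a value smeared out from the nearest land station:

```python
import numpy as np

# A hypothetical band of Arctic grid cells. The SST product enters a
# flat -1.8 (freezing point of sea water) under sea ice; open-ocean
# cells carry measured values. (Made-up numbers -- not GISS data.)
FREEZING_SENTINEL = -1.8
sst = np.array([0.2, 0.3, FREEZING_SENTINEL, FREEZING_SENTINEL, 0.1])

# Step 1 (the uncontroversial part): treat the sentinel as missing.
sst_masked = np.where(sst == FREEZING_SENTINEL, np.nan, sst)

# Option A: average over the ocean cells we actually measured.
mean_keep_sst = np.nanmean(sst_masked)

# Option B: discard the SSTs too and smear the nearest land-station
# anomaly across the whole band -- the practice criticised above.
land_anom = 1.5
mean_land_only = land_anom

print(round(mean_keep_sst, 2), mean_land_only)
```

Masking the -1.8 sentinel (Option A, step 1) is reasonable either way; the objection is to Option B, where perfectly good open-water SSTs end up replaced by a single extrapolated land value.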


Happy New Year to everyone!

There is a very good reason why the trend and general progression of tropospheric temp anomalies since 2000, as rendered by the new UAH.v6 dataset, are most likely correct. (Read this post to understand why it was necessary for UAH to update their tlt product from its version 5.6 in the first place.)

The reason is that they both match to near perfection the trends and general progression of incoming and outgoing radiation flux anomalies, as rendered by the CERES EBAF ToA Ed2.8 dataset, over that same period. They’re all flat …:


Figure 1. Incoming radiant heat (ASR, “absorbed solar radiation”) (gold) vs. outgoing radiant heat (OLR, “outgoing longwave radiation”) (red) at the global ToA, from March 2000 to July 2015.
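“Flat” here simply means a linear trend indistinguishable from zero over the period. A minimal sketch of the trend check itself, with illustrative numbers in place of the actual CERES or UAH values:

```python
import numpy as np

# An illustrative monthly anomaly series for March 2000 - July 2015
# (185 months). Made-up numbers, NOT the actual CERES/UAH data.
rng = np.random.default_rng(1)
months = np.arange(185)
anom = rng.normal(0.0, 0.5, 185)      # noise around a zero trend

# Fit a straight line; the slope (converted to per-decade) is the trend.
slope, intercept = np.polyfit(months, anom, 1)
trend_per_decade = slope * 120
print(f"trend: {trend_per_decade:+.3f} W/m^2 per decade")
```

Running the same fit over the ASR, OLR and UAH tlt anomaly series for the same window is all that the ‘they’re all flat’ comparison amounts to.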