Aletho News

ΑΛΗΘΩΣ

New Study: 67% Of Scientific Papers Can Be Said To Reject The AGW Hypothesis…

… when using the same assumption-based methodology to arrive at the conclusion only 0.5% of scientific papers reject AGW.

By Kenneth Richard | No Tricks Zone | November 30, 2023

In a new study, six scientists (Dentelski et al., 2023) effectively eviscerate a methodologically flawed 2021 study (Lynas et al.) that claims 99.53% of 3,000 scientific papers examined (by subjectively classifying papers based only on what is written in the abstracts) support the anthropogenic global warming, or AGW, position.

Image Source: Dentelski et al., 2023

The Lynas et al. authors begin with the assumption that a consensus on the human attribution of global warming not only exists but is ensconced as the unquestioned, prevailing viewpoint in the scientific literature. So their intent was to quantify the strength of this assumed widespread agreement by devising a rating system in which only an explicit rejection of AGW in a paper’s abstract counts as not supporting the presumed “consensus.”

Of the 3,000 papers analyzed in Lynas et al., 282 were deemed not sufficiently “climate-related.” Another 2,104 papers were placed in Category 4, which meant that either the paper’s authors took “no position” or their position on AGW was deemed “uncertain”… in the abstract. So, exploiting the “if you are not against, you are for” classification bias, Lynas and colleagues decided that the authors of these 2,104 scientific papers in Category 4 do indeed agree with AGW, because nothing written in the abstract explicitly states that they do not.

Interestingly, if this classification bias had not been utilized and the thousands of Category 4 (“no position” or “uncertain”) papers were not counted as supporting AGW, only 892 of the 2,718 climate-related papers, or 32%, could be said to have affirmatively stated support for AGW. So, by applying the mirror-image bias (treating every abstract that does not explicitly endorse AGW as rejecting it), it could just as facilely be said that 67% (1,826 of 2,718) of climate-related papers reject AGW.
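To make the arithmetic explicit, here is a minimal sketch of the two counting rules in Python. The totals are the figures quoted above; the count of 13 explicitly rejecting papers is an assumption chosen only so that the categories reconcile with those figures.

    # A minimal sketch of the two counting rules described above. The totals
    # (3,000 / 282 / 892 / 2,718 / 1,826) are the figures quoted in this post;
    # the count of explicitly rejecting papers is an assumption chosen so that
    # the remaining categories reconcile with those figures.

    total_papers = 3000
    not_climate_related = 282
    climate_related = total_papers - not_climate_related        # 2,718

    explicitly_endorse = 892   # abstracts that affirmatively state support for AGW
    explicitly_reject = 13     # ASSUMPTION: the ~0.5% of abstracts said to explicitly reject AGW
    no_position = climate_related - explicitly_endorse - explicitly_reject  # "silent" abstracts

    # Lynas-style rule: "if you are not against, you are for"
    endorse_if_silence_counts = (explicitly_endorse + no_position) / climate_related

    # The mirror-image rule: "if you are not for, you are against"
    reject_if_silence_counts = (explicitly_reject + no_position) / climate_related

    print(f"silence counted as endorsement: {endorse_if_silence_counts:.1%}")  # ~99.5%
    print(f"silence counted as rejection:   {reject_if_silence_counts:.1%}")   # ~67.2%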

Dentelski and colleagues also point out that, by their own analysis, 54% of the papers they examined that Lynas et al. had classified as only “implying” support for AGW (Category 3) or as “no position”/“uncertain” (Category 4) actually described a lack of support for AGW in the body of the paper itself. But since this expressed non-endorsement of AGW was not presented in the abstract, these papers were wrongly classified as supporting AGW anyway.

To fully grasp the subjective nature of the methodology employed by Lynas and colleagues, consider what Dentelski et al. uncover about the internals of the study: 58% of the time, the two independent examiners rating a paper did not agree on its numerical classification (on a scale from 1 to 7). If two people agree just 42% of the time when classifying papers, the rating system cannot be said to be sufficiently objective.
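For a sense of what a 42% raw agreement rate means, here is a generic illustration in Python using invented ratings (not data from either study); Cohen's kappa additionally discounts the agreement two raters would reach by chance alone.

    # Illustrative sketch with hypothetical ratings on a 1-7 scale: raw agreement
    # between two raters, and Cohen's kappa, which corrects for chance agreement.
    from collections import Counter

    rater_a = [4, 4, 3, 4, 2, 4, 3, 4, 4, 1]
    rater_b = [4, 3, 3, 4, 4, 4, 2, 3, 4, 1]

    n = len(rater_a)
    raw_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # agreement expected if the two raters assigned categories independently
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2

    kappa = (raw_agreement - expected) / (1 - expected)
    print(f"raw agreement: {raw_agreement:.0%}, Cohen's kappa: {kappa:.2f}")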

The Lynas et al. paper appears to be little more than an exercise in propaganda.

December 2, 2023 | Science and Pseudo-Science, Timeless or most popular

The conceits of consensus

By Judith Curry | Climate Etc. | August 27, 2015

Critiques, the 3%, and is 47 the new 97?

For background, see my previous post The 97% feud.

Cook et al. critiques

At the heart of the consensus controversy is the paper by Cook et al. (2013), which inferred a 97% consensus by classifying abstracts from published papers. The study was based on a search of broad academic literature using casual English terms like “global warming”, which missed many climate science papers but included lots of non-climate-science papers that mentioned climate change – social science papers, surveys of the general public, surveys of cooking stove use, the economics of a carbon tax, and scientific papers from non-climate science fields that studied impacts and mitigation.

The Cook et al. paper has been refuted in the published literature in an article by Richard Tol:  Quantifying the consensus on anthropogenic global warming in the literature: A re-analysis (behind paywall).  Summary points from the abstract:

A trend in composition is mistaken for a trend in endorsement. Reported results are inconsistent and biased. The sample is not representative and contains many irrelevant papers. Overall, data quality is low. Cook’s validation test shows that the data are invalid. Data disclosure is incomplete so that key results cannot be reproduced or tested.

Social psychologist Jose Duarte has a series of blog posts that document the ludicrousness of the selection and categorization of papers by Cook et al., including citations of specific articles that they categorized as supporting the climate change consensus.

From this analysis, Duarte concludes: ignore climate consensus studies based on random people rating journal article abstracts.  I find it difficult to disagree with him on this.

The 3%

So, does all this leave you wondering what the 3% of papers not included in the consensus had to say?  Well, wonder no more. There is a new paper out, published by Cook and colleagues:

Learning from mistakes

Rasmus Benestad, Dana Nuccitelli, Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, John Cook

Abstract.  Among papers stating a position on anthropogenic global warming (AGW), 97 % endorse AGW. What is happening with the 2 % of papers that reject AGW? We examine a selection of papers rejecting AGW. An analytical tool has been developed to replicate and test the results and methods used in these studies; our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases. Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes. A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data. In many cases, shortcomings are due to insufficient model evaluation, leading to results that are not universally valid but rather are an artifact of a particular experimental setup. Other typical weaknesses include false dichotomies, inappropriate statistical methods, or basing conclusions on misconceived or incomplete physics. We also argue that science is never settled and that both mainstream and contrarian papers must be subject to sustained scrutiny. The merit of replication is highlighted and we discuss how the quality of the scientific literature may benefit from replication.

Published in Theoretical and Applied Climatology [link to full paper].

A look at the Supplementary Material shows that they considered credible skeptical papers (38 in total) – by Humlum, Scafetta, Solheim and others.

The gist of their analysis is that the authors were ‘outsiders’, not fully steeped in consensus lore and not referencing their preferred papers.

RealClimate has an entertaining post on the paper, Let’s learn from mistakes, where we learn that this paper was rejected by five journals before being published by Theoretical and Applied Climatology. I guess the real lesson from this paper is that you can get any kind of twaddle published, if you keep trying and submit it to different journals.

A consensus on what, exactly?

The consensus inferred from the Cook et al. analysis is a vague one indeed; exactly what are these scientists agreeing on? The ‘97% of the world’s climate scientists agree that humans are causing climate change’ is a fairly meaningless statement unless the relative amount (%) of human caused climate change is specified. Roy Spencer’s 2013 Senate testimony included the following statement:

“It should also be noted that the fact that I believe at least some of recent warming is human-caused places me in the 97% of researchers recently claimed to support the global warming consensus (actually, it’s 97% of the published papers, Cook et al., 2013). The 97% statement is therefore rather innocuous, since it probably includes all of the global warming “skeptics” I know of who are actively working in the field. Skeptics generally are skeptical of the view that recent warming is all human-caused, and/or that it is of a sufficient magnitude to warrant immediate action given the cost of energy policies to the poor. They do not claim humans have no impact on climate whatsoever.”

The only credible way to ascertain whether scientists support the consensus on climate change is through surveys of climate scientists. This point is eloquently made in another post by Joe Duarte: The climate science consensus is 78-84%. Now I don’t agree with Duarte’s conclusion on that, but he makes some very salient points:

Tips for being a good science consumer and science writer. When you see an estimate of the climate science consensus:

  • Make sure it’s a direct survey of climate scientists. Climate scientists have full speech faculties and reading comprehension. Anyone wishing to know their views can fruitfully ask them. Also, be alert to the inclusion of people outside of climate science.
  • Make sure that the researchers are actual, qualified professionals. You would think you could take this for granted in a study published in a peer-reviewed journal, but sadly this is simply not the case when it comes to climate consensus research. They’ll publish anything with high estimates.
  • Be wary of researchers who are political activists. Their conflicts of interest will be at least as strong as that of an oil company that had produced a consensus study – moral and ideological identity is incredibly powerful, and is often a larger concern than money.
  • In general, do not trust methods that rest on intermediaries or interpreters, like people reviewing the climate science literature. Thus far, such work has been dominated by untrained amateurs motivated by political agendas.
  • Be mindful of the exact questions asked. The wording of a survey is everything.
  • Be cautious about papers published in climate science journals, or really in any journal that is not a survey research journal. Our experience with the ERL fraud illustrated that climate science journals may not be able to properly review consensus studies, since the methods (surveys or subjective coding of text) are outside their domains of expertise. The risk of junk science is even greater if the journal is run by political interests and is motivated to publish inflated estimates. For example, I would advise strong skepticism of anything published by Environmental Research Letters on the consensus – they’re run by political people like Kammen.

Is 47 the new 97?

The key question is to what extent climate scientists agree with the key consensus statement of the IPCC:

“It is extremely likely {95%+ certainty} that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. ”

Several surveys of climate scientists have used questions that more or less address the issue of whether humans are the dominant cause of recent warming (discussed in the previous post by Duarte and summarized in my post The 97% feud).

The survey that I like the best is:

Verheggen et al. (2014) Scientists’ views about attribution of global warming. Environmental Science & Technology [link]

Recently, a more detailed report on the survey was made available [link]. Fabius Maximus has a fascinating post New study undercuts key IPCC finding (the text below draws liberally from this post). This survey examines agreement with the keynote statement of the IPCC AR5:

“It is extremely likely {95%+ certainty} that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. ”

The survey examines both facets of the attribution statement – how much warming is caused by humans, and what is the confidence in that assessment.

In response to the question “What fraction of global warming since the mid-20th century can be attributed to human-induced increases in atmospheric greenhouse gas concentrations?”, a total of 1,222 of 1,868 respondents (64%) agreed with AR5 that the answer was over 50%. Excluding the 164 (8.8%) “I don’t know” respondents yields 72% agreement with the IPCC.

 

[Figure: distribution of responses to the attribution question]

The second question is: “What confidence level would you ascribe to your estimate that the anthropogenic greenhouse gas warming is more than 50%?” Of the 1,222 respondents who said that the anthropogenic contribution was over 50%, 797 (65%) said it was 95%+ certain (which the IPCC defines as “virtually certain” or “extremely likely”).

[Figure: confidence levels among respondents attributing more than 50% of warming to humans]

The 797 respondents (who are highly confident that more than 50% of the warming is human-caused) are 43% of all 1,868 respondents (47% excluding the “don’t know” group). Hence this survey finds that slightly less than half of the climate scientists surveyed agree with the AR5 keynote statement in terms of confidence in the attribution statement.
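The arithmetic behind these percentages is easy to check; a short sketch using the respondent counts quoted above (the counts come from the post, not from a re-analysis of the survey data):

    # Sketch of the arithmetic behind the figures above, using the respondent
    # counts quoted in this post for the Verheggen et al. survey.

    total_respondents = 1868
    dont_know = 164                 # answered "I don't know"
    attribute_over_half = 1222      # answered that more than 50% of warming is anthropogenic
    highly_confident = 797          # of those, rated it 95%+ certain ("extremely likely")

    share_of_all = highly_confident / total_respondents                            # ~43%
    share_excl_dont_know = highly_confident / (total_respondents - dont_know)      # ~47%
    agree_with_attribution = attribute_over_half / (total_respondents - dont_know) # ~72%

    print(f"{share_of_all:.0%} of all respondents, {share_excl_dont_know:.0%} excluding don't-knows")
    print(f"{agree_with_attribution:.0%} agree that more than half of the warming is anthropogenic")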

Whose opinion ‘counts’?

Surveys of actual climate scientists are a much better way to elicit the actual opinions of scientists on this issue. But surveys raise the question of exactly who the experts are on the issue of attribution of climate change. The Verheggen et al. study was criticized in a published comment by Duarte in terms of the basis for selecting participants to respond to the survey:

“There is a deeper problem. Inclusion of mitigation and impacts papers – even from physical sciences or engineering – creates a structural bias that will inflate estimates of consensus, because these categories have no symmetric disconfirming counterparts. These researchers have simply imported a consensus in global warming. They then proceed to their area of expertise. [These papers] do not carry any data or epistemic information about climate change or its causes, and the authors are unlikely to be experts on the subject, since it is not their field.

Increased public interest in any topic will reliably draw scholars from various fields. However, their endorsement (or rejection) of human-caused warming does not represent knowledge or independent assessments. Their votes are not quanta of consensus, but simply artifacts of career choices, and the changing political climate. Their inclusion will artificially inflate sample sizes, and will likely bias the results.”

Roy Spencer also addresses this issue in his Senate testimony (cited above):

“(R)elatively few researchers in the world – probably not much more than a dozen – have researched how sensitive today’s climate system is based upon actual measurements. This is why popular surveys of climate scientists and their beliefs regarding global warming have little meaning: very few of them have actually worked on the details involved in determining exactly how much warming might result from anthropogenic greenhouse gas emissions.”

The number of real experts on the detection and attribution of climate change is small, only a fraction of the respondents to these surveys. I raised this same issue in the pre-Climate Etc. days in response to the Anderegg et al. paper, in a comment at Collide-a-Scape (referenced by Columbia Journalism Review ):

The scientific litmus test for the paper is the AR4 statement: “anthropogenic greenhouse gases have been responsible for “most” of the “unequivocal” warming of the Earth’s average global temperature over the second half of the 20th century”.

The climate experts with credibility in evaluating this statement are those scientists who are active in the area of detection and attribution. “Climate” scientists whose research area is ecosystems, the carbon cycle, economics, etc. speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those that create datasets, climate dynamicists that interpret the variability, radiative forcing, climate modeling, sensitivity analysis, feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people that create datasets but don’t interpret the datasets, you have less than 20% of the original list.

Apart from Anderegg’s classification of the likes of Freeman Dyson as not a ‘climate expert’ (since he didn’t have 20 peer-reviewed publications that they classed as ‘climate papers’), they also did not include solar-climate experts such as Syun Akasofu (since apparently Akasofu’s solar papers do not count as ‘climate’).

But perhaps the most important point is that of the scientists who are skeptical of the IPCC consensus, a disproportionately large number of these skeptical scientists are experts on climate change detection/attribution. Think Spencer, Christy, Lindzen, etc. etc.

Bottom line: inflating the numbers of ‘climate scientists’ in such surveys attempts to hide that there is a serious scientific debate about the detection and attribution of recent warming, and that scientists who are skeptical of the IPCC consensus conclusion are disproportionately expert in the area of climate change detection and attribution.

Conceits of consensus

And finally, a fascinating article The conceits of ‘consensus’ in Halakhic rhetoric. Read the whole thing; it is superb. A few choice excerpts:

The distinguishing characteristic of these appeals to consensus is that the legitimacy or rejection of an opinion is not determined by intrinsic, objective, qualifiable criteria or its merits, but by its adoption by certain people. The primary premise of such arguments is that unanimity or a plurality of agreement among a given collective is halakhically binding on the Jewish population  and cannot be further contested or subject to review.

Just as the appeal to consensus stresses people over logic, subsequent debate will also focus on the merits of individuals and their worthiness to be included or excluded from the conversation. This situation runs the risk of the ‘No True Scotsman’ fallacy whereby one excludes a contradictory opinion on the grounds that no one who could possibly hold such an opinion is worth consideration.

Debates over inclusion and exclusion for consensus are susceptible to social manipulations as well. Since these determinations imply a hierarchy or rank of some sort, attempts which disturb an existing order may be met with various forms of bullying or intimidation – either in terms of giving too much credit to one opinion or individual or not enough deference to another. Thus any consensus reached on this basis would be based not on genuine agreement, but on fear of reprisals. The consensus of the collective may be similarly manipulated through implicit or overt marketing as a way to artificially besmirch or enhance someone’s reputation.

The next premise to consider is the correlation between consensus and correctness such that if most (or all) people believe something to be true, then by the value of its widespread acceptance and popularity, it must be correct. This is a well known logical fallacy known as argumentum ad populum, sometimes called the ‘bandwagon fallacy’. This should be familiar to anyone who has ever been admonished, “if all your friends would jump off a bridge would you follow?” It should also be obvious that at face value that Jews, especially Orthodox Jews, ought to reject this idea as a matter of principle.

Appeals to consensus are common and relatively simple to assert, but those who rely on consensus rarely if ever acknowledge, address, or defend the assumptions inherent in invoking consensus as a source – if not the determinant – of practical Jewish law. As I will demonstrate, appeals to consensus are laden with problematic logical and halakhic assumptions such that while “consensus” may constitute one factor in determining a specific psak, it is not nearly the definitive halakhic criterion its proponents would like to believe.

August 27, 2015 | Deception, Science and Pseudo-Science

Root Cause Analysis of the Modern Warming

By Matt Skaggs | Climate Etc. | October 23, 2014

For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works.

The concept of attribution is important in descriptive science, and is a key part of engineering. Engineers typically use the term “root cause analysis” rather than attribution. There is nothing particularly clever about root cause methodology, and once someone is introduced to the basics, it all seems fairly obvious. It is really just a system for keeping track of what you know and what you still need to figure out.

I have been performing root cause analysis throughout my entire, long career, generally in an engineering setting. The effort consists of applying well established tools to new problems. This means that in many cases, I am not providing subject matter expertise on the problem itself, although it is always useful to understand the basics. Earlier in my career I also performed laboratory forensic work, but these days I am usually merely a facilitator. I will refer to those that are most knowledgeable about a particular problem as the “subject matter experts” (SMEs).

This essay consists of three basic sections. First I will briefly touch on root cause methodology. Next I will step through how a fault tree would be conducted for a topic such as the recent warming, including showing what the top tiers of the tree might look like. I will conclude with some remarks about the current status of the attribution effort in global warming. As is typical for a technical blog post, I will be covering a lot of ground while barely touching on most topics, but I promise that I will do my best to explain the concepts as clearly and concisely as I can.

Part 1: Established Root Cause Methodology

Definitions and Scope

Formal root cause analysis requires very clear definitions and scope to avoid chaos. It is a tool specifically for situations in which we have detected an effect with no obvious cause, but discerning the cause is valuable in some way. This means that we can only apply our methodology to events that have already occurred, since predicting the future exploits different tools. We will define an effect subject to attribution as a significant excursion from stable output in an otherwise stable system. One reason this is important is that a significant excursion from stable behavior in an otherwise stable system can be assumed to have a single root cause. Full justification of this is beyond the scope of this essay, but consider that if your car suddenly stops progressing forward while you are driving, the failure has a single root cause. After having no trouble for a year, the wheel does not fall off at the exact same instant that the fuel pump seizes. I will define a “stable” system as one in which significant excursions are so rare in time that they can safely be assumed to have a single root cause.

Climate science is currently engaged in an attribution effort pertaining to a recent temperature excursion, which I will refer to as the “modern warming.” For purposes of defining the scope of our attribution effort, we will consider the term “modern warming” to represent the rise in global temperature since 1980. This is sufficiently precise to prevent confusion; we can always go back and tweak this date if the evidence warrants.

Choosing a Tool from the Toolbox 

There are two basic methods to conclusively attribute an effect to a cause. The short route to attribution is to recognize a unique signature in the evidence that can only be explained by a single root cause. This is familiar from daily life; the transformer in front of your house shorted and there is a dead black squirrel hanging up there. The need for a systematic approach such as a fault tree only arises when there is no black squirrel. We will return to the question of a unique signature later, after discussing what an exhaustive effort would look like.

Once we have determined that we cannot simply look at the outcome of an event and see the obvious cause, and we find no unique signature in the data, we must take a more systematic approach. The primary tools in engineering root cause analysis are the fault tree and the cause map. The fault tree is the tool of choice for when things fail (or more generally, execute an excursion), while the cause map is a better tool for when a process breaks down. The fault tree asks “how?,” while the cause map asks “why?” Both tools are forms of logic trees with all logical bifurcations mapped out. Fault trees can be quite complex with various types of logic gates. The key attributes of a fault tree are accuracy, clarity, and comprehensiveness. What does it mean to be comprehensive? The tree must address all plausible root causes, even ones considered highly unlikely, but there is a limit. The limit concept here is euphemistically referred to as “comet strike” by engineers. If you are trying to figure out why a boiler blew up, you are not obligated to put “comet strike” on your fault tree unless there is some evidence of an actual comet.

Since we are looking at an excursion in a data set, we choose the fault tree as our basic tool. The fault tree approach looks like this:

  1. Verify that a significant excursion has occurred.
  2. Collect sufficient data to characterize the excursion.
  3. Assemble the SMEs and brainstorm possible root causes for the excursion.
  4. Build a formal fault tree showing all the plausible causes. If there is any dispute about plausibility, put the prospective cause on the tree anyway.
  5. Apply documented evidence to each cause. This generally consists of direct observations and experimental results. Parse the data as either supporting or refuting a root cause, and modify the fault tree accordingly.
  6. Determine where evidence is lacking, develop a plan to generate the missing evidence. Consider synthetically modeling the behavior when no better evidence is available.
  7. Execute plan to fill all evidence blocks. Continue until all plausible root causes are refuted except one, and verify that the surviving root cause is supported by robust evidence.
  8. Produce report showing all of the above, and concluding that the root cause of the excursion was the surviving cause on the fault tree.

I will be discussing these steps in more detail below.
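As a rough illustration of the bookkeeping in steps 4 through 7, here is a minimal Python sketch; the node names, findings, and structure are placeholders, not the actual tree developed later in this essay.

    # Minimal sketch of the fault-tree bookkeeping in steps 4-7.
    # Node names, findings, and structure are placeholders for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class Cause:
        name: str
        supporting: list = field(default_factory=list)  # numbered findings that support this cause
        refuting: list = field(default_factory=list)    # numbered findings that refute this cause
        children: list = field(default_factory=list)    # sub-causes on lower tiers of the tree

        def is_refuted(self) -> bool:
            # A branch is refuted only when every sub-cause is refuted;
            # a leaf is refuted when at least one finding refutes it.
            if self.children:
                return all(child.is_refuted() for child in self.children)
            return bool(self.refuting)

    def surviving_causes(node: Cause) -> list:
        """Step 7: walk the tree and return the leaf causes not yet refuted."""
        if not node.children:
            return [] if node.is_refuted() else [node]
        return [leaf for child in node.children for leaf in surviving_causes(child)]

    # Placeholder skeleton only; the actual top tiers are developed later in the post.
    tree = Cause("modern warming (excursion)", children=[
        Cause("heat gained increased", children=[Cause("cause A"), Cause("cause B", refuting=[3])]),
        Cause("heat lost decreased", children=[Cause("cause C")]),
        Cause("capacitance discharge", children=[Cause("cause D", supporting=[1])]),
    ])
    print([c.name for c in surviving_causes(tree)])  # root causes still in play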

The Epistemology of Attribution Evidence

As we work through a fault tree, we inevitably must weigh the value of various forms of evidence. Remaining objective here can be a challenge, but we do have some basic guidelines to help us.

The types of evidence used to support or refute a root cause are not all equal. The differences can be expressed in terms of “fidelity.” When we examine a failed part or an excursion in a data set, our direct observations are based upon evidence that has perfect fidelity. The physical evidence corresponds exactly to the effect of the true root cause upon the system of interest. We may misinterpret the evidence, but the evidence is nevertheless a direct result of the true root cause that we seek. That is not true when we devise experiments to simulate the excursion, nor is it true when we create synthetic models.

When we cannot obtain conclusive root cause evidence by direct observation of the characteristics of the excursion, or direct analysis of performance data, the next best approach is to simulate the excursion by performing input/output (I/O) experimentation on the same or an equivalent system. This requires that we make assumptions about the input parameters, and we cannot assume that our assumptions have perfect fidelity to the excursion we are trying to simulate. Once we can analyze the results of the experiment, we find that it either reproduced our excursion of interest, or it did not. Either way, the outcome of the experiment has high fidelity with respect to the input as long as the system used in test has high fidelity to the system of interest. If the experiment based upon our best guess of the pertinent input parameters does not reproduce the directly-observed characteristics of the excursion, we do not discard the direct observations in favor of the experiment results. We may need to go back and double check our interpretation, but if the experiment does not create the same outcome as the actual event, it means we chose the wrong input parameters. The experiment serves to refute our best guess. The outcomes from experimentation obviously sit lower on an evidence hierarchy than direct observations.

The fidelity of synthetic models is limited in exactly the same way with respect to the input parameters that we plug into the model. But models have other fidelity issues as well. When we perform our experiments on the same system that had the excursion (which is ideal if it is available), or on an equivalent system, we take great care to assure that our test system responds the same way to input as the original system that had the excursion of interest. We can sometimes verify this directly. In a synthetic model, however, an algorithm is substituted for the actual system, and there will always be assumptions that go into the algorithm. This adds up to a situation in which we are unsure of the fidelity of our input parameters, and unsure of the fidelity of our algorithm. The compounded effect of this uncertainty is that we do not apply the same level of confidence to model results that we do to observations or experiment results. So in summary, and with everything else being equal, direct observation will always trump experimental results, and experimental results will always trump model output. Of course, there is no way to conduct meaningful experiments on analogous climates, so one of the best tools is not of any use to us.

Similar objective value judgments can be made about the comparison of two data sets. When we look at two curves and they both seem to show an excursion that matches in onset, duration and amplitude, we consider that to be evidence of correlation. If the wiggles also closely match, that is stronger evidence. Two curves that obviously exhibit the same onset, magnitude, and duration prior to statistical analysis will always be considered better evidence than two curves that can be shown to be similar after sophisticated statistical analysis. The less explanation needed to correlate two curves, the stronger the evidence of correlation.
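As a hypothetical illustration of that last point, here is a plain correlation computed on two invented raw curves; if a comparable match only appears after heavy statistical processing, the evidentiary weight of the match is correspondingly weaker.

    # Hypothetical illustration (all numbers invented): a plain correlation between
    # two raw curves that share onset and amplitude. No detrending, smoothing, or
    # lag-fitting is needed for the match to appear.

    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    curve_a = [0.0, 0.1, 0.1, 0.4, 0.8, 1.2, 1.5, 1.6]
    curve_b = [0.0, 0.0, 0.2, 0.3, 0.7, 1.1, 1.4, 1.7]

    print(f"raw correlation: {pearson(curve_a, curve_b):.2f}")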

Sometimes we need to resolve plausible root causes but lack direct evidence and cannot simulate the excursion of interest by I/O testing. Under these circumstances, model output might be considered if it meets certain objective criteria. When attribution of a past event is the goal, engineers shun innovation. In order for model output to be considered in a fault tree effort, the model requires extensive validation, which means the algorithm must be well established. There must be a historical record of input parameters and how changes in those parameters affected the output. Ideally, the model will have already been used successfully to make predictions about system behavior under specific circumstances. Models can be both sophisticated and quite trustworthy, as we see with the model of planetary motion in the solar system. Also, some very clever methods have been developed to substitute for prior knowledge. An example is the Monte Carlo method, which can sometimes tightly constrain an estimation of output without robust data on input. Similarly, if you have good input and output data, we can sometimes develop a useful empirical relationship of the system behavior without really knowing much about how the system works. A simple way to think of this is to consider three types of information, input data, system behavior, and output data. If you know two of the three, you have some options for approximating the third. But if you only have adequate information on one or less of the types of information, your model approach is underspecified. Underspecified model simulations are on the frontier of knowledge and we shun their use on fault trees. To be more precise, simulations from underspecified models are insufficiently trustworthy to adequately refute root causes that are otherwise plausible.
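A toy version of the "two out of three" point and the Monte Carlo remark, with a placeholder linear relation standing in for a known system response (this is not a climate model): given a characterized input and a known response, sampling constrains the output; if the response itself were unknown, the same exercise would constrain nothing.

    # Toy illustration (placeholder numbers, not a climate model): with a known
    # system response and a characterized input, Monte Carlo sampling constrains
    # the output distribution.
    import random

    def system_response(x: float) -> float:
        """Stand-in for a known, validated system relation."""
        return 3.0 * x + 2.0

    random.seed(0)
    inputs = [random.gauss(1.0, 0.2) for _ in range(100_000)]   # characterized input
    outputs = [system_response(x) for x in inputs]

    mean = sum(outputs) / len(outputs)
    std = (sum((y - mean) ** 2 for y in outputs) / len(outputs)) ** 0.5
    print(f"output ~ {mean:.2f} +/- {std:.2f}")
    # If the coefficients 3.0 and 2.0 were themselves unknown (an underspecified model),
    # the output would be no better constrained than our guesses for those coefficients.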

Part 2: Established Attribution Methodology Applied to the Modern Warming

Now that we have briefly covered the basics of objective attribution and how we look at evidence, let’s apply the tools to the modern warming. Recall that attribution can only be applied to events in the past or present, so we are looking at only the modern warming, not the physics of AGW. A hockey stick shape in a data set provides a perfect opportunity, since the blade of the stick represents a significant excursion from the shaft of the stick, while the shaft represents the stable system that we need to start with.

I mentioned at the beginning that it is useful for an attribution facilitator to be familiar with the basics of the science. While I am not a climate scientist, I have put plenty of hours into keeping up with climate science, and I am capable of reading the primary literature as long as it is not theoretical physics or advanced statistics. I am familiar with the IPCC Assessment Report (AR) sections on attribution, and I have read all the posts at RealClimate.org for a number of years. I also keep up with some of the skeptical blogs, including Climate Etc., although I rarely enter the comment fray. I did a little extra reading for this essay, with some help from Dr. Curry. This is plenty of familiarity to act as a facilitator for attribution on a climate topic. Onward to the root cause analysis.

Step 1: Verify that a significant excursion has occurred.

Here we want to evaluate the evidence that the excursion of interest is truly beyond the bounds of the stability region for the system. When we look at mechanical failures, Step 1 is almost never a problem: there is typically indisputable visual evidence that something broke. In electronics, a part will sometimes seem to fail in a circuit but meet all of the manufacturer’s specifications after it is removed. When that happens, we shift our analysis to the circuit, and the component originally suspected of causing the failure becomes a refuted root cause.

In looking at the modern warming, we first ask whether there are similar multi-decadal excursions in the past millennium of unknown cause. We also need to consider the entire Holocene. While most of the available literature states that the modern excursion is indeed unprecedented, this part of the attribution analysis is not a democratic process. We find that there is at least one entirely plausible temperature reconstruction for the last millennium that shows comparable excursions. Holocene reconstructions suggest that the modern warming is not particularly significant. We find no consensus as to the cause of the Younger Dryas, the Minoan, Roman, and Medieval warmings, or the Little Ice Age, all of which may constitute excursions of at least similar magnitude. I am not comfortable with this because we need to understand the mechanisms that made the system stable in the first place before we can meaningfully attribute a single excursion.

When I am confronted with a situation like this in my role as facilitator, I would have a discussion with my customer as to whether they want to expend the funds to continue the root cause effort given the magnitude of uncertainty regarding the question of whether we even have a legitimate attribution target. I have grave doubts that we have survived Step 1 in this process, but let’s assume that the customer wants us to continue.

Step 2. Collect sufficient data to characterize the excursion.

The methodology can get a little messy here. Before we can meaningfully construct a fault tree, we need to carefully define the excursion of interest, which usually means studying both the input and output data. However, we are not really sure of what input data we need since some may be pertinent to the excursion while other data might not. We tend to rely upon common sense and prior knowledge as to what we should gather at this stage, but any omissions will be caught during the brainstorming so we need not get too worried.

The excursion of interest is in temperature data. We find that there is a general consensus that a warming excursion has occurred. The broad general agreement about trends in surface temperature indices is sufficient for our purposes.

The modern warming temperature excursion exists in the output side of the complex process known as “climate.” A fully characterized excursion would also include robust empirical input data, which for climate change would be tracking data for the climate drivers. When we look for input data at this stage, we are looking for empirical records of the climate both prior to and during the modern warming. We do not have a full list yet, but we know that greenhouse gases, aerosols, volcanoes, water vapor, and clouds are all important. Rather than continue on this topic here, I will discuss it in more detail after we construct the fault tree below. That way we can be specific about what input data we need.

Looking for a Unique Signature

Now that we have chosen to consider the excursion as anomalous and sufficiently characterized, this is a good time to look for a unique signature. Has the modern warming created a signature that is so unique that it can only be associated with a single root cause? If so, we want to know now so that we can save our customer the expense of the full fault tree that we would build in Steps 3 and 4.

Do any SMEs interpret some aspect of the temperature data as a unique signature that could not possibly be associated with more than one root cause? It turns out that some interpret the specific spatio-temporal heterogeneity pattern as being evidence that the warming was driven by the radiation absorbed by increased greenhouse gas (GHG) content in the atmosphere. Based upon what I have read, I don’t think there is anyone arguing for a different root cause creating a unique signature in the modern warming. The skeptic arguments seem to all reside under a claim that the signature is not unique, not that it is unique to something other than GHG warming. So let’s see whether we can take our shortcut to a conclusion that an increase in GHG concentration is the sole plausible root cause due to a unique data signature.

Spatial heterogeneity would be occurring up to the present day, and so can be directly measured. I have seen two spatial pattern claims about GHG warming, 1) the troposphere should warm more quickly, and 2) the poles should warm more quickly. Because this is important, I have attempted to track these claims back through time. The references mostly go back to climate modeling papers from the 1970s and 1980s. In the papers, I was unable to find a single instance where any of the feedbacks thought to enhance warming in specific locations were associated solely with CO2. Instead, some are associated with any GHG, while others such as arctic sea ice decrease occur due to any persistent warming. Nevertheless, the attribution chapter in IPCC AR 5 contains a paragraph that seems to imply that enhanced tropospheric warming supports attribution of the modern warming to anthropogenic CO2. I cannot make the dots connect. But here is one point that cannot be overemphasized: the search for a unique signature in the modern warming is the best hope we have for resolving the attribution question.

Step 3. Assemble the SMEs and brainstorm plausible root causes for the excursion.

Without an overwhelmingly strong argument that we have a unique signature situation, we must do the heavy lifting involved with the exhaustive approach. Of course, I am not going to be offered the luxury of a room full of climate SMEs, so I will have to attempt this myself for the purposes of this essay.

Step 4. Build a Formal Fault Tree

An attribution analysis is a form of communication, and the effort is purpose-driven in that we plan to execute a corrective action if that is feasible. As a communication tool, we want our fault tree to be in a form that makes sense to those that will be the most difficult to convince, the SMEs themselves. And when we are done, we want the results to clearly point to actions we may take. With these thoughts in mind, I try to find a format that is consistent with what the SMEs already do. Also, we need to emphasize anthropogenic aspects of causality because those are the only ones we can change. So we will base our fault tree on an energy budget approach similar to a General Circulation Model (GCM), and we will take care to ensure that we separate anthropogenic effects from other effects.

GCMs universally, at least as far as I know, use what engineers call a “control volume” approach to track an energy budget. In a control volume, you can imagine an infinitely thin and weightless membrane surrounding the globe at the top of the atmosphere. Climate scientists even have an acronym for the location “top of the atmosphere,” TOA. Energy that migrates inside the membrane must equal energy that migrates outside the membrane over very long time intervals, otherwise the temperature would ramp until all the rocks melted or everything froze. In the rather unusual situation of a planet in space, the control volume is equivalent to a “control mass” equation in which we would track the energy budget based upon a fixed mass. Our imaginary membrane defines a volume but it also contains all of the earth/atmosphere mass. For simplicity, I will continue with the term “control volume.”

The control volume equation in GCMs is roughly equivalent to:

[heat gained] – [heat lost] = [temperature change]

This is just a conceptual equation because the terms on the left are in units of energy, while the units on the right are in degrees of temperature. The complex function between the two makes temperature an emergent property of the climate system, but we needn’t get too wrapped up in this. Regardless of the complexity hidden behind this simple equation, it is useful to keep in mind that each equation term (and later, each fault tree box) represents a single number that we would like to know.
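Read as bookkeeping, the conceptual equation amounts to something like the following sketch, where each term is a single, so far unquantified, number in joules and the conversion to temperature is deliberately left unimplemented.

    # Sketch of the conceptual budget above. Each term is a single number (joules over
    # the interval of interest) that we would like to know; the mapping from net energy
    # to temperature is left unimplemented because it is an emergent property.

    budget_joules = {
        "heat_gained": None,   # energy entering the control volume over the interval
        "heat_lost": None,     # energy leaving the control volume over the interval
    }

    def net_energy(budget: dict) -> float:
        """[heat gained] - [heat lost]; meaningful only once both terms are quantified."""
        if any(value is None for value in budget.values()):
            raise ValueError("a budget term has not been quantified yet")
        return budget["heat_gained"] - budget["heat_lost"]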

There is a bit of housekeeping we need to do at this point. Recall that we are only considering the modern warming, but we can only be confident about the fidelity of our control volume equation when we consider very long time intervals. To account for the disparity in duration, we need to consider the concept of “capacitance.” A capacitor is a device that will store energy under certain conditions, but then discharge that energy under a different set of conditions. As an instructive example, the argument that the current hiatus in surface temperature rise is being caused by energy storage in the ocean is an invocation of capacitance. So to fit our approach to a discrete time interval, we need the following modification:

[heat gained] + [capacitance discharge] – [heat lost] – [capacitance recharge] = [modern warming]

Note that now we are no longer considering the entire history of the earth, we are only considering the changes in magnitude during the modern warming interval. Our excursion direction is up, so we discard the terms for a downward excursion. Based upon the remaining terms in our control volume equation, the top tier of the tree is this:

[Figure: top tier of the fault tree]

From the control volume standpoint, we have covered heat that enters our imaginary membrane, heat that exits the membrane, and heat that may have been stashed inside the membrane and is only being released now. I should emphasize that this capacitance in the top tier refers to heat stored inside the membrane prior to the modern warming that is subsequently released to create the modern warming.

This top tier contains our first logical bifurcation. The two terms on the left, heat input and heat loss, are based upon a supposition that annual changes in forcing will manifest soon enough that the change in temperature can be considered a direct response. This can involve a lag as long as the lag does not approach the duration of the excursion. The third term, capacitance, accounts for the possibility that the modern warming was not a direct response to a forcing with an onset near the onset of our excursion. An alternative fault tree can be envisioned here with something else in the top tier, but the question of lags must be dealt with near the top of the tree because it constitutes a basic division of what type of data we need.

The next tier could be based upon basic mechanisms rooted in physics, increasing the granularity:

[Figure: second tier of the fault tree]

The heat input leg represents heat entering the control volume, plus the heat generated inside. We have a few oddball prospective causes here that rarely see the light of day. The heat generated by anthropogenic combustion and geothermal heat are a couple of them. In this case, it is my understanding that there is no dispute that any increases above prior natural background combustion (forest fires, etc.) and geothermal releases are trivial. We put these on the tree to show that we have considered them, but we need not waste time here. Under heat loss, we cover all the possibilities with the two basic mechanisms of heat transfer, radiation and conduction. Conduction is another oddball. The conduction of heat to the vacuum of space is relatively low and would be expected to change only slightly in rough accordance with the temperature at TOA. With conduction changes crossed off, a decrease in outward radiation would be due to a decreased albedo, where albedo represents reflection across the entire electromagnetic spectrum. A control volume approach allows us to lump convection in with conduction. The last branch in our third tier is the physical mechanism by which a temperature excursion occurs due to heat being released from a reservoir, which is a form of capacitance discharge.

I normally do not start crossing off boxes until the full tree is built. However, if we cross off the oddballs here, we see that the second tier of the tree decomposes to just three mechanisms, solar irradiance increase, albedo decrease, and heat reservoir release. This comes as no revelation to climate scientists.

This is as far as I am going in terms of building the full tree, because the next tier gets big and I probably would not get it right on my own. Finishing it is an exercise left to the reader! But I will continue down the “albedo decrease” leg until we reach anthropogenic CO2-induced warming, the topic du jour. A disclaimer: I suspect that this tier could be improved by the scrutiny of actual SMEs.

[Figure: albedo-decrease leg of the fault tree, expanded down to anthropogenic CO2-induced warming]

The only leg shown fully expanded is the one related to CO2; the reader is left to envision the entire tree if each leg were to be expanded in a similar manner. The bottom left corner of this tree fragment shows anthropogenic CO2-induced warming in proper context. Note that we could have separated anthropogenic effects at the first tier of the tree, but then we would have two almost identical trees.

Once every leg is completed in this manner, the next phase of adding evidence begins.

Step 5. Apply documented evidence to each cause.

Here we assess the available evidence and decide whether it supports or refutes a root cause. The actual method used is often dictated by how much evidence we are dealing with. One simple way is to make a numbered list of evidence findings. Then when a finding supports a root cause, we can add that number to the fault tree block in green. When the same finding refutes a different root cause, we can add the number to the block in red. All findings must be mapped across the entire tree.

The established approach to attribution looks at the evidence based upon the evidence hierarchy and exploits any reasonable manner of simplification. The entire purpose of a control volume approach is to avoid having to understand the complex relationship that exists between variables within the control volume. For example, if you treat an engine as a control volume, you can put flow meters on the fuel and air intakes, a pressure gauge on the exhaust, and an rpm measurement on the output shaft. With those parameters monitored, and a bit of historical data on them, you can make very good predictions about the trend in rpm of the engine based upon changes in inputs without knowing very much about how the engine translates fuel into motion. This approach does not involve any form of modeling and is, as I mentioned, the rationale for using control volume in the first place.

The first question the fault tree asks of us is captured in the first tier. Was the modern warming caused by a direct response to higher energy input, a direct response to lower energy loss, or as a result of heat stored during an earlier interval being released? If we consider this question in light of our control volume approach (we don’t really care how energy gets converted to surface temperature), we see that we can answer the question with simple data in units of energy, watts or joules. Envision data from, say, 1950 to 1980, in terms of energy. We might find that for the 30-year interval, heat input was x joules, heat loss was y joules, and capacitance release was z joules.   Now we compare that to the same data for the modern warming interval. If any one of the latter numbers is substantially more than the corresponding earlier numbers x, y, or z, we have come a long way already in simplifying our fault tree. A big difference would mean that we can lop off the other legs. If we see big changes in more than one of our energy quantities, we might have to reconsider our assumption that the system is stable.

In order to resolve the lower tiers, we need to take our basic energy change data and break it down by year, so joules/year. If we had reasonably accurate delta joules/year data relating to the various forcings, we could wiggle match between the data and the global temperature curve. If we found a close match, we would have strong evidence that forcings have an important near-term effect, and that (presumably) only one root cause matches the trend. If no forcing has an energy curve that matches the modern warming, we must assume capacitance complicates the picture.

Let’s consider how this would work. Each group of SMEs would produce a simple empirical chart for their fault tree block estimating how much energy was added or lost during a specific year within the modern warming, ideally based upon direct measurement and historical observation. These graphs would then be the primary evidence blocks for the tree. Some curves would presumably vary around zero with no real trend, others might decline, while others might increase. The sums roll up the tree. If the difference between the “heat gained” and “heat lost” legs shows a net positive upward trend in energy gained, we consider that as direct evidence that the modern warming was driven by heat gained rather than capacitance discharge. If those two legs sum to near zero, we can assume that the warming was caused by capacitance discharge. If the capacitance SMEs (those that study El Nino, etc.) estimate that a large discharge likely occurred during the modern warming, we have robust evidence that the warming was a natural cycle.
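A sketch of that roll-up logic follows; every number is a placeholder standing in for an SME-supplied energy estimate, not measured data.

    # Sketch of the roll-up logic described above. Every number is a placeholder
    # standing in for an SME-supplied energy estimate (joules per interval), not data.

    def dominant_legs(baseline: dict, excursion: dict, margin: float = 1.0) -> list:
        """Return the legs whose energy change during the excursion interval is
        substantially larger than during the baseline interval."""
        return [leg for leg in baseline if excursion[leg] - baseline[leg] > margin]

    baseline_1950_1980 = {"heat_gained": 10.0, "heat_lost": 10.0, "capacitance_discharge": 0.2}
    modern_warming     = {"heat_gained": 13.0, "heat_lost": 10.5, "capacitance_discharge": 0.3}

    legs = dominant_legs(baseline_1950_1980, modern_warming)
    if len(legs) == 1:
        print(f"prune the other legs; pursue: {legs[0]}")
    elif not legs:
        print("no leg stands out - the energy data cannot yet decompose the tree")
    else:
        print("more than one leg changed - revisit the assumption of a stable system")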

Step 6. Determine where evidence is lacking…

Once all the known evidence has been mapped, we look for empty blocks. We then develop a plan to fill those blocks as our top priority.

I cannot find the numbers to fill in the blocks in the AR documents. I suspect that the data does not exist for the earlier interval, and perhaps cannot even be well estimated for the modern warming interval.

Step 7. Execute plan to fill all evidence blocks.

Here we collect evidence specifically intended to address the fault tree logic. That consists of energy quantities from both before and during the modern warming. Has every effort been made to collect empirical data about planetary albedo prior to the modern warming? I suspect that this is a hopeless situation, but clever SMEs continually surprise me.

In a typical root cause analysis, we continue until we hopefully have just one unrefuted cause left. The final step is to exhaustively document the entire process. In the case of the modern warming, the final report would carefully lay out the necessary data, the missing data, and the conclusion that until and unless we can obtain the missing data, the root cause analysis will remain unresolved.

Part 3: The AGW Fault Tree, Climate Scientists, and the IPCC: A Sober Assessment of Progress to Date

I will begin this section by stating that I am unable to assess how much progress has been made towards resolving the basic fault tree shown above. That is not for lack of trying; I have read all the pertinent material in the IPCC Assessment Reports (ARs) on a few occasions. When I read these reports, I am bombarded with information concerning the CO2 box buried deep in the middle of the fault tree. But even for that box, I am not seeing a number that I could plug into the equations above. For other legs of the tree, the ARs are even more bewildering. If climate scientists are making steady progress towards being able to estimate the numbers to go in the control volume equations, I cannot see it in the AR documents.

How much evidence is required to produce a robust conclusion about attribution when the answer is not obvious? For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works. Decomposition of a fault tree requires either a unique signature, or sufficient data to support or refute every leg of the tree (not every box on the tree, but every leg). At one end of the spectrum, we would not claim resolution if we had zero information, while at the other end, we would be very comfortable with a conclusion if we knew everything about the variables. The fault tree provides guidance on the sufficiency of the evidence when we are somewhere in between. My customers pay me to reach a conclusion, not muck about with a logic tree. But when we lack the basic data to decompose the fault tree, maintaining my credibility (and that of the SMEs as well) demands that we tell the customer that the fault tree cannot be resolved because we lack sufficient information.

The curve showing CO2 rise and the curve showing the modern global temperature rise do not look the same, and signal processing won’t help with the correlation. Instead, there is hypothesized to be a complex function involving capacitance that explains the primary discrepancy, the recent hiatus. But we still have essentially no idea how much capacitance has contributed to historical excursions. We do not know whether there is a single mode of capacitance that swamps all others, or whether there are multiple capacitance modes that go in and out of phase. Ocean capacitance has recently been invoked as perhaps the most widely endorsed explanation for the recent hiatus in global warming, and there is empirical evidence of warming in the ocean. But invoking capacitance to explain a data wiggle down on the fifth tier of a fault tree, when the general topic of capacitance remains unresolved in the first tier, suggests that climate scientists have simply lost the thread of what they were trying to prove. The sword swung in favor of invoking capacitance to explain the hiatus turns out to have two edges. If the system is capable of exhibiting sufficient capacitance to produce the recent hiatus, there is no valid argument for why it could not also have produced the entire modern warming, unless that can be disproven with empirical data or I/O test results.

Closing Comments

Most of the time when corporations experience a catastrophe such as a chemical plant explosion resulting in fatalities, they look to outside entities to conduct the attribution analysis. This may come as a surprise given the large sums of money at stake and the desire to influence the outcome, but consider the value of a report produced internally by the corporation. If the report exonerates the corporation of all culpability, it will have zero credibility. Sure, they can blame themselves to preserve their credibility, but their only hope of a credible exoneration is if it comes from an independent entity. In the real world, the objectivity of an independent study may still leave something to be desired, given the fact that the contracted investigators get their paycheck from the corporation, but the principle still holds. I can only assume when I read the AR documents that this never occurred to climate scientists.

The science of AGW will not be settled until the fault tree is resolved to the point that we can at least estimate a number for each leg in our fault tree based upon objective evidence. The tools available have thus far not been up to the task. With so much effort put into modelling CO2 warming while other fault tree boxes are nearly devoid of evidence, it is not even clear that the available tools are being applied efficiently.

The terms of reference for the IPCC are murky, but it is clear that it was never set up to address attribution in any established manner. There was no valid reason not to use an established method, facilitated by an entity with expertise in the process, if attribution was the true goal. The AR documents are position papers, not attribution studies, as exemplified by the fact that supporting and refuting arguments cannot be followed in any logical manner and do not roll up into any logical framework. If AGW is really the most important issue that we face, and the science is so robust, why would climate scientists not seek the added credibility that could be gained from an independent and established attribution effort?

October 24, 2014 Posted by | Science and Pseudo-Science | | 2 Comments

AGW has most of the characteristics of an “urban legend”

By Roy W. Spencer, Ph.D. | Watts Up With That? | October 24, 2009

About.com describes an “urban legend” as an apocryphal (of questionable authenticity), secondhand story, told as true and just plausible enough to be believed, about some horrific… series of events…. It’s likely to be framed as a cautionary tale. Whether factual or not, an urban legend is meant to be believed. In lieu of evidence, however, the teller of an urban legend is apt to rely on skillful storytelling and reference to putatively trustworthy sources.

I contend that the belief in human-caused global warming as a dangerous event, either now or in the future, has most of the characteristics of an urban legend. Like other urban legends, it is based upon an element of truth. Carbon dioxide is a greenhouse gas whose concentration in the atmosphere is increasing, and since greenhouse gases warm the lower atmosphere, more CO2 can be expected, at least theoretically, to result in some level of warming.

But skillful storytelling has elevated the danger from a theoretical one to one of near-certainty. The actual scientific basis for the plausible hypothesis that humans could be responsible for most recent warming is contained in the cautious scientific language of many scientific papers. Unfortunately, most of the uncertainties and caveats are then minimized with artfully designed prose contained in the Summary for Policymakers (SPM) portion of the report of the UN’s Intergovernmental Panel on Climate Change (IPCC). This Summary was clearly meant to instill maximum alarm from a minimum amount of direct evidence.

Next, politicians seized upon the SPM, further simplifying and extrapolating its claims to the level of a “climate crisis”. Other politicians embellished the tale even more by claiming they “saw” global warming in Greenland, as if it were a sighting of Sasquatch, or that they felt it when flying in airplanes.

Just as the tales of marauding colonies of alligators living in New York City sewers are based upon some kernel of truth, so too is the science behind anthropogenic global warming. But there is a big difference between reports of people finding pet alligators that have escaped their owners, versus city workers having their limbs torn off by roving colonies of subterranean monsters.

In the case of global warming, the “putatively trustworthy sources” would be the consensus of the world’s scientists. The scientific consensus, after all, says that global warming is… is what? Is happening? Is severe? Is man-made? Is going to burn the Earth up if we do not act? It turns out that those who claim consensus either do not explicitly state what that consensus is about, or they make up something that supports their preconceived notions.

If the consensus is that the presence of humans on Earth has some influence on the climate system, then I would have to even include myself in that consensus. After all, the same thing can be said of the presence of trees on Earth, and hopefully we have at least the same rights as trees do. But too often the consensus is some vague, fill-in-the-blank, implied assumption where the definition of “climate change” includes the phrase “humans are evil”.

It is a peculiar development that scientific truth is now decided through voting. A relatively recent survey of climate scientists who do climate research found that 97.4% agreed that humans have a “significant” effect on climate. But the way the survey question was phrased borders on meaninglessness. To a scientist, “significant” often means non-zero. The survey results would have been quite different if the question was, “Do you believe that natural cycles in the climate system have been sufficiently researched to exclude them as a potential cause of most of our recent warming?”

And it is also a good bet that 100% of those scientists surveyed were funded by the government only after they submitted research proposals which implicitly or explicitly stated they believed in anthropogenic global warming to begin with. If you submit a research proposal to look for alternative explanations for global warming (say, natural climate cycles), it is virtually guaranteed you will not get funded. Is it any wonder that scientists who are required to accept the current scientific orthodoxy in order to receive continued funding, then later agree with that orthodoxy when surveyed? Well, duh.

In my experience, the public has the mistaken impression that a lot of climate research has gone into the search for alternative explanations for warming. They are astounded when I tell them that virtually no research has been performed into the possibility that warming is just part of a natural cycle generated within the climate system itself.

Too often the consensus is implied to be that global warming is so serious that we must do something now in the form of public policy to avert global catastrophe. What? You don’t believe that there are alligators in New York City sewer system? How can you be so unconcerned about the welfare of city workers that have to risk their lives by going down there every day? What are you, some kind of Holocaust-denying, Neanderthal flat-Earther?

It makes complete sense that in this modern era of scientific advances and inventions we would so readily embrace a compelling tale of global catastrophe resulting from our own excesses. It’s not a new genre of storytelling, of course, as there were many B-movies in the 1950s whose horror themes were influenced by scientists’ development of the atomic bomb.

Our modern equivalent is the 2004 movie “The Day After Tomorrow”, in which all kinds of physically impossible climatic events occur in a matter of days. In one scene, super-cold stratospheric air descends to the Earth’s surface, instantly freezing everything in its path. The meteorological truth, however, is just the opposite. If you were to bring stratospheric air down to the surface, heating by compression would make it warmer than the surrounding air, not colder.
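A rough check of that compression-heating point is straightforward. The sketch below applies the standard dry-adiabatic (Poisson) relation; the stratospheric temperature and pressure used are assumed, round values rather than numbers taken from the article.

```python
# Back-of-envelope check of the compression-heating point using the standard
# dry-adiabatic (Poisson) relation: T_sfc = T_strat * (p_sfc / p_strat) ** (R_d / c_p).
# The stratospheric temperature and pressure below are assumed, round values.

R_d = 287.0     # J kg^-1 K^-1, gas constant for dry air
c_p = 1004.0    # J kg^-1 K^-1, specific heat of dry air at constant pressure
kappa = R_d / c_p

T_strat = 220.0   # K, assumed lower-stratospheric temperature
p_strat = 100.0   # hPa, assumed stratospheric pressure level
p_sfc = 1000.0    # hPa, surface pressure

T_sfc = T_strat * (p_sfc / p_strat) ** kappa
print(f"Air starting at {T_strat:.0f} K and {p_strat:.0f} hPa would reach the "
      f"surface at about {T_sfc:.0f} K ({T_sfc - 273.15:.0f} C), far warmer "
      f"than the air around it.")
```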

I’m sure it is just a coincidence that “The Day After Tomorrow” was directed by Roland Emmerich, who also directed the 1996 movie “Independence Day,” in which an alien invasion nearly exterminates humanity. After all, what’s the difference? Aliens purposely killing off humans, or humans accidentally killing off humans? Either way, we all die.

But a global warming catastrophe is so much more believable. After all, climate change does happen, right? So why not claim that ALL climate change is now the result of human activity? And while we are at it, let’s rewrite climate history so that we get rid of the Medieval Warm Period and the Little Ice Age, with an ingenious new hockey-stick-shaped reconstruction of past temperatures that makes it look like climate never changed until the 20th century? How cool would that be?

The IPCC thought it was way cool… until it was debunked, after which it was quietly downgraded in the IPCC reports from the poster child for anthropogenic global warming, to one possible interpretation of past climate.

And let’s even go further and suppose that the climate system is so precariously balanced that our injection of a little bit of that evil plant food, carbon dioxide, pushes our world over the edge, past all kinds of imaginary tipping points, with the Greenland ice sheet melting away, and swarms of earthquakes being the price of our indiscretions.

In December, hundreds of bureaucrats from around the world will once again assemble, this time in Copenhagen, in their attempts to forge a new international agreement to reduce greenhouse gas emissions as a successor to the Kyoto Protocol. And as has been the case with every other UN meeting of its type, the participants simply assume that the urban legend is true. Indeed, these politicians and governmental representatives need it to be true. Their careers and political power now depend upon it. And the fact that they hold their meetings in all of the best tourist destinations in the world, enjoying the finest exotic foods, suggests that they do not expect to ever have to be personally inconvenienced by whatever restrictions they try to impose on the rest of humanity.

If you present these people with evidence that the global warming crisis might well be a false alarm, you are rewarded with hostility and insults, rather than expressions of relief. The same can be said for most lay believers of the urban legend. I say “most” because I once encountered a true believer who said he hoped my research into the possibility that climate change is mostly natural will eventually be proved correct.

Unfortunately, just as we are irresistibly drawn to disasters – either real ones on the evening news, or ones we pay to watch in movie theaters – the urban legend of a climate crisis will persist, being believed by those whose politics and worldviews depend upon it. Only when they finally realize what a new treaty will cost them in loss of freedoms and standard of living will those who oppose our continuing use of carbon-based energy begin to lose their religion.

April 6, 2014 Posted by | Deception, Timeless or most popular | , , , , | Leave a comment

ManBearPig Attacked by Science!


By Khephra | Aletho News | January 28, 2010

Today I’d like to more thoroughly address specific planks of Anthropogenic Global Warming Theory (AGW) that I think deserve further scrutiny. Over the past year AGW rhetoric has reached deafening levels, and advocates have successfully framed the hypothesis as unassailable. Propagandists have yoked AGW to “wise stewardship”, and today it’s common for skeptics of AGW to be derided as ignorant anti-environmentalists. But I don’t think that things are nearly so simple.

Unfortunately, once people become emotionally invested in a position, it can be very difficult to provoke them into changing course. Liberals and progressives hailed the election of Obama as the most wonderful thing since sliced bread. With a battlefield of broken promises behind him, and institutionalized corruption and illegal forced detentions stretching into the foreseeable future, many of those same liberals and progressives have fallen into an exasperated, listless complacency. They became emotionally invested in the “hope” engendered by Obama, and when the reality failed to live up to the myth, they were forced into cognitive dissonance, apathy, or synthesis. If you meet someone who still supports Obama, dig a little and you’ll find the cognitive dissonance – and, I would argue, the same could be said of supporters of AGW.


To get us started, I think we should rehash the essential assumptions of AGW:

• As atmospheric levels of CO2 increase, Earth’s median temperature increases.

• As Earth’s median temperature increases, atmospheric imbalances precipitate increases in the frequency and strength of weather events (e.g., hurricanes, tornadoes, droughts).

• Humans are directly exacerbating this process through the burning of fossil fuels and any activity that yields CO2 as a byproduct.

• Increased median temperatures are melting the polar ice caps and causing glaciers to recede or vanish.

Since AGW has the pleasant benefit of being a bona fide scientific theory, it makes falsifiable claims. If these claims can be shown to be invalid, the theory is in need of reconsideration. On the other hand, if emotional investment and cognitive dissonance are high enough, no amount of contradictory data will matter. Young Earth Creationists make a fine example of this psychopathology. In spite of overwhelming tangible evidence that their theory is invalid, they fall back on dogma or the Bible – and no amount of science will provoke them into reconsidering their position. Thankfully, AGW is far easier to invalidate than dogma from the Bible, because it makes so many suppositions that are easily testable.

Let’s begin with the most crucial component of AGW – CO2. Here’s a graph of historical global CO2 levels and temperatures. According to their analysis:

“Current climate levels of both CO2 and global temperatures are relatively low versus past periods. Throughout time, CO2 and temperatures have been radically different and have gone in different directions. As this graph reveals, there is little, if any, correlation between an increase of CO2 and a resulting increase in temperatures.”
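The quoted claim is one about correlation. As a point of method only, here is a minimal sketch of how such a correlation would be computed from two series aligned on the same time axis; the file name and column names are hypothetical, and the data behind the quoted graph are not reproduced in this post.

```python
import numpy as np

# Sketch only: how a CO2-temperature correlation of the kind discussed above
# would be computed from two aligned series. The file name and column names
# are hypothetical; the data behind the quoted graph are not reproduced here.

data = np.genfromtxt("co2_temp_reconstruction.csv", delimiter=",", names=True)
# expected columns (hypothetical): age_ma, co2_ppm, temp_c

r = np.corrcoef(data["co2_ppm"], data["temp_c"])[0, 1]
print(f"Pearson correlation between CO2 and temperature: r = {r:+.2f}")

# Note: a low r over geologic time and a high r over the 20th century are not
# mutually exclusive, and neither value by itself settles causation.
```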

If we realize that CO2’s correlation with global temperature is not a given, the entire edifice of AGW begins to crumble. Therefore, it’s difficult to get adherents of AGW to accept the implications of this data. Again and again they’ll fall back on the assumption that the correlation between CO2 and global temperatures is incontrovertible, but they must avoid an ever-expanding amount of dissonant data:

MIT professor Richard Lindzen’s peer-reviewed work states: “we now know that the effect of CO2 on temperature is small, we know why it is small, and we know that it is having very little effect on the climate.”

The global surface temperature record, which we update and publish every month, has shown no statistically-significant “global warming” for almost 15 years. Statistically-significant global cooling has now persisted for very nearly eight years. Even a strong el Nino – expected in the coming months – will be unlikely to reverse the cooling trend. More significantly, the ARGO bathythermographs deployed throughout the world’s oceans since 2003 show that the top 400 fathoms of the oceans, where it is agreed between all parties that at least 80% of all heat caused by manmade “global warming” must accumulate, have been cooling over the past six years. That now prolonged ocean cooling is fatal to the “official” theory that “global warming” will happen on anything other than a minute scale. – Science & Public Policy Institute: Monthly CO2 Report: July 2009


“Just how much of the “Greenhouse Effect” is caused by human activity?

It is about 0.28%, if water vapor is taken into account – about 5.53%, if not.

This point is so crucial to the debate over global warming that how water vapor is or isn’t factored into an analysis of Earth’s greenhouse gases makes the difference between describing a significant human contribution to the greenhouse effect, or a negligible one.” – Geocraft
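As for what such percentages represent arithmetically, the sketch below shows how a “human share of the greenhouse effect” figure is typically assembled from per-gas shares and human-attributed fractions. Every number in it is a hypothetical placeholder, not a Geocraft value; the source’s own breakdown is not itemized in this excerpt.

```python
# Illustrative arithmetic only: how a "human share of the greenhouse effect"
# percentage is assembled from per-gas shares. Every number below is a
# hypothetical placeholder, NOT a Geocraft value.

gases = {
    # gas: (share of total greenhouse effect, fraction of that gas attributed to humans)
    "water vapor":  (0.95, 0.001),   # placeholder
    "CO2":          (0.036, 0.03),   # placeholder
    "CH4 + others": (0.014, 0.2),    # placeholder
}

human_share = sum(effect * human_frac for effect, human_frac in gases.values())
print(f"Human share of the greenhouse effect (water vapor included): {human_share:.2%}")

# Excluding water vapor from the denominator changes the answer dramatically,
# which is the point the quotation is making.
non_wv_total = sum(effect for gas, (effect, _) in gases.items() if gas != "water vapor")
human_non_wv = sum(effect * frac for gas, (effect, frac) in gases.items() if gas != "water vapor")
print(f"Human share excluding water vapor: {human_non_wv / non_wv_total:.2%}")
```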


Next, let’s further consider the hypothetical tangential effects of AGW – e.g., rising global temperatures melt icecaps, etc.:

Climatologists Baffled by Global Warming Time-Out: “Global warming appears to have stalled. Climatologists are puzzled as to why average global temperatures have stopped rising over the last 10 years. Some attribute the trend to a lack of sunspots, while others explain it through ocean currents.”

‘AGW – I refute it thus!’: Central England Temperatures 1659 – 2009: “Summary: Unprecedented warming did not occur in central England during the first decade of the 21st century, nor during the last decade of the 20th century. As the CET dataset is considered a decent proxy for Northern Hemisphere temperatures, and since global temperature trends follow a similar pattern to Northern Hemisphere temps, then the same conclusion about recent warming can potentially be inferred globally. Based on the CET dataset, the global warming scare has been totally blown out of proportion by those who can benefit from the fear.”

50 Years of Cooling Predicted: “‘My findings do not agree with the climate models that conventionally thought that greenhouse gases, mainly CO2, are the major culprits for the global warming seen in the late 20th century,’ Lu said. ‘Instead, the observed data show that CFCs conspiring with cosmic rays most likely caused both the Antarctic ozone hole and global warming….’

In his research, Lu discovers that while there was global warming from 1950 to 2000, there has been global cooling since 2002. The cooling trend will continue for the next 50 years, according to his new research observations.”

A comparison of GISS data for the last 111 years shows US cities getting warmer, but rural sites are not increasing in temperature at all. Urban Heat Islands may be the only areas warming.


Rise of sea levels is ‘the greatest lie ever told’:

If there is one scientist who knows more about sea levels than anyone else in the world it is the Swedish geologist and physicist Nils-Axel Mörner, formerly chairman of the INQUA International Commission on Sea Level Change. And the uncompromising verdict of Dr Mörner, who for 35 years has been using every known scientific method to study sea levels all over the globe, is that all this talk about the sea rising is nothing but a colossal scare story.

Despite fluctuations down as well as up, “the sea is not rising,” he says. “It hasn’t risen in 50 years.” If there is any rise this century it will “not be more than 10cm (four inches), with an uncertainty of plus or minus 10cm”. And quite apart from examining the hard evidence, he says, the elementary laws of physics (latent heat needed to melt ice) tell us that the apocalypse conjured up by Al Gore and Co could not possibly come about.

The reason why Dr Mörner, formerly a Stockholm professor, is so certain that these claims about sea level rise are 100 per cent wrong is that they are all based on computer model predictions, whereas his findings are based on “going into the field to observe what is actually happening in the real world”. – Telegraph.co.uk


Since the early Holocene, according to the findings of the six scientists, sea-ice cover in the eastern Chukchi Sea appears to have exhibited a general decreasing trend, in contrast to the eastern Arctic, where sea-ice cover was substantially reduced during the early to mid-Holocene and has increased over the last 3000 years. Superimposed on both of these long-term changes, however, are what they describe as “millennial-scale variations that appear to be quasi-cyclic.” And they write that “it is important to note that the amplitude of these millennial-scale changes in sea-surface conditions far exceed [our italics] those observed at the end of the 20th century.”

Since the change in sea-ice cover observed at the end of the 20th century (which climate alarmists claim to be unnatural) was far exceeded by changes observed multiple times over the past several thousand years of relatively stable atmospheric CO2 concentrations (when values never strayed much below 250 ppm or much above 275 ppm), there is no compelling reason to believe that the increase in the air’s CO2 content that has occurred since the start of the Industrial Revolution has had anything at all to do with the declining sea-ice cover of the recent past; for at a current concentration of 385 ppm, the recent rise in the air’s CO2 content should have led to a decrease in sea-ice cover that far exceeds what has occurred multiple times in the past without any significant change in CO2. – CO2Science.org

See also:

The Global Warming Scandal Heats Up: “The IPCC has been forced to admit that the claim made was actually taken from an article published in 1999. The article was based around a telephone interview with an Indian scientist who has admitted that he was working from pure speculation and his claims were not backed by research.”

The Dam is Cracking: “[The claims of Himalayan glacial melting] turned out to have no basis in scientific fact, even though everything the IPCC produces is meant to be rigorously peer-reviewed, but simply an error recycled by the WWF, which the IPCC swallowed whole.

The truth, as seen by India’s leading expert in glaciers, is that “Himalayan glaciers have not in anyway exhibited, especially in recent years, an abnormal annual retreat.” …

Then at the weekend another howler was exposed. The IPCC 2007 report claimed that global warming was leading to an increase in extreme weather, such as hurricanes and floods. Like its claims about the glaciers, this was also based on an unpublished report which had not been subject to scientific scrutiny — indeed several experts warned the IPCC not to rely on it.”

Arctic Sea Ice Since 2007: “According to the World Meteorological Organization, Arctic sea ice has increased by 19 percent since its minimum in 2007, though they don’t make it very easy to see this in the way that they report the data.”


Now let’s consider some of the agents and institutions that are strong advocates of AGW:

Howard C. Hayden, emeritus professor of physics from the University of Connecticut, told a Pueblo West audience that he was prompted to speak out after a visit to New York where he learned that scaremongering billboards about the long-term effects of global warming were being purchased at a cost of $700,000 a month.

“Someone is willing to spend a huge amount of money to scare us about global warming,” Hayden said. “Big money is behind the global-warming propaganda.”



Lawrence Solomon: Wikipedia’s Climate Doctor:

“Connolley took control of all things climate in the most used information source the world has ever known – Wikipedia. Starting in February 2003, just when opposition to the claims of the band’s members was beginning to gel, Connolley set to work on the Wikipedia site. He rewrote Wikipedia’s articles on global warming, on the greenhouse effect, on the instrumental temperature record, on the urban heat island, on climate models, on global cooling. On Feb. 14, he began to erase the Little Ice Age; on Aug. 11, the Medieval Warm Period. In October, he turned his attention to the hockey stick graph. He rewrote articles on the politics of global warming and on the scientists who were skeptical of the band. Richard Lindzen and Fred Singer, two of the world’s most distinguished climate scientists, were among his early targets, followed by others that the band especially hated, such as Willie Soon and Sallie Baliunas of the Harvard-Smithsonian Center for Astrophysics, authorities on the Medieval Warm Period.

All told, Connolley created or rewrote 5,428 unique Wikipedia articles. His control over Wikipedia was greater still, however, through the role he obtained at Wikipedia as a website administrator, which allowed him to act with virtual impunity. When Connolley didn’t like the subject of a certain article, he removed it — more than 500 articles of various descriptions disappeared at his hand. When he disapproved of the arguments that others were making, he often had them barred — over 2,000 Wikipedia contributors who ran afoul of him found themselves blocked from making further contributions. Acolytes whose writing conformed to Connolley’s global warming views, in contrast, were rewarded with Wikipedia’s blessings. In these ways, Connolley turned Wikipedia into the missionary wing of the global warming movement.” – National Post


The ‘ClimateGate’ scandal that broke a couple of months ago warrants some elaboration, too. For previous posts on this topic, see:

Using ClimateGate to Reason with ManBearPig
ClimateGate Crashes ManBearPig’s Party
ManBearPig Meets the Vikings
ManBearPig on Life Support?

That foundation established, let’s take a closer look at who was involved with ClimateGate:

For a thorough, email-by-email elaboration of exactly what the ‘big deal’ is, see here:

Climategate publicly began on November 19, 2009, when a whistle-blower leaked thousands of emails and documents central to a Freedom of Information request placed with the Climatic Research Unit of the University of East Anglia in the United Kingdom. This institution had played a central role in the “climate change” debate: its scientists, together with their international colleagues, quite literally put the “warming” into Global Warming: they were responsible for analyzing and collating the various measurements of temperature from around the globe and going back into the depths of time, that collectively underpinned the entire scientific argument that mankind’s liberation of “greenhouse” gases—such as carbon dioxide—was leading to a relentless, unprecedented, and ultimately catastrophic warming of the entire planet.

The key phrase here, from a scientific point of view, is that it is “unprecedented” warming.


The Proof Behind the CRU ClimateGate Debacle: Because Computers Do Lie When Humans Tell Them Too: “As you can see, (potentially) valid temperature station readings were taken and skewed to fabricate the results the “scientists” at the CRU wanted to believe, not what actually occurred.”

Unearthed Files Include “Rules” for Mass Mind Control Campaign: “The intruded central computer was not only filled to the brim with obvious and attempted ostracizing of scientists who don’t blindly follow the leader, the files also reveal that the folks of the IPCC made use or considered making use of a disinformation campaign through a ‘communication agency’ called Futerra.

The agency describes itself as ‘the sustainability communications agency’ and serves such global players as Shell, Microsoft, BBC, the UN Environment Programme, the UK government and the list goes on. The co-founder of Futerra, Ed Gillespie explains: ‘For brands to succeed in this new world order, they will have to become eco, ethical and wellness champions.’

The document included within the climategate treasure-chest is called ‘Rules of the Game’ and shows deliberate deception on the part of this agency to ensure that the debate would indeed be perceived as being settled. When facts do not convince, they reasoned, let us appeal to emotions in order to get the job done.”

Climategate goes SERIAL: now the Russians confirm that UK climate scientists manipulated data to exaggerate global warming: “Climategate has already affected Russia. On Tuesday, the Moscow-based Institute of Economic Analysis (IEA) issued a report claiming that the Hadley Center for Climate Change based at the headquarters of the British Meteorological Office in Exeter (Devon, England) had probably tampered with Russian-climate data.

The IEA believes that Russian meteorological-station data did not substantiate the anthropogenic global-warming theory. Analysts say Russian meteorological stations cover most of the country’s territory, and that the Hadley Center had used data submitted by only 25% of such stations in its reports. Over 40% of Russian territory was not included in global-temperature calculations for some other reasons, rather than the lack of meteorological stations and observations.”

ClimateGate Expanding, Including Russian Data and Another Research Center: “Well now some Russian climate officials have come forward stating that the data they handed over to the Hadley Centre in England has been cherry-picked, leaving out as much as 40% of the cooler temperature readings and choosing the hottest readings to make it appear things were warmer than they actually are (regardless of whether the temperature is human-induced or natural).”


Scientists using selective temperature data, sceptics say:

Two American researchers allege that U.S. government scientists have skewed global temperature trends by ignoring readings from thousands of local weather stations around the world, particularly those in colder altitudes and more northerly latitudes, such as Canada.

In the 1970s, nearly 600 Canadian weather stations fed surface temperature readings into a global database assembled by the U.S. National Oceanic and Atmospheric Administration (NOAA). Today, NOAA only collects data from 35 stations across Canada.

Worse, only one station — at Eureka on Ellesmere Island — is now used by NOAA as a temperature gauge for all Canadian territory above the Arctic Circle.

The Canadian government, meanwhile, operates 1,400 surface weather stations across the country, and more than 100 above the Arctic Circle, according to Environment Canada. – Canada.com


The ClimateGate emails were highly damning, and have led to the resignation of Phil Jones (one of the researchers at the centre of the scandal) and an investigation into Michael Mann’s ‘scholarship’. Furthermore, the UN is also ‘investigating’ the ‘scholarship’ underlying the scandal, but if something as incontrovertible as the Goldstone Report can get whitewashed, I have little hope for a meaningful or just analysis in a scandal of this magnitude. In theory, science is self-correcting; in practice it’s “defend your thesis at all costs”.

Nevertheless, each of our original four suppositions is demonstrably ambiguous – if not outright invalid. Therefore, science – and empiricism – invalidates AGW.

Humanity has irrevocably altered – blighted? – the Earth, but CO2 levels are far less relevant than other forms of industrial pollution: mercury-seeping lightbulbs, dioxin pollution, gene drift, cell phone-induced genetic damage, and all manner of other harmful and silly endeavours pose greater unambiguous threats to humanity than CO2. Therefore, if you really want to help clean up the Earth, leave the AGW rhetoric in the dustbin and let’s get on with disempowering the hegemons.

January 28, 2010 Posted by | Aletho News, Deception, Environmentalism, Science and Pseudo-Science | , , , , | 7 Comments