Convictions are stronger if they are self-generated, rather than taught.
— David MacKay
My favorite read in 2019. Being deeply immersed in the finance world, I am exposed daily to climate change skepticism. Last summer, I took a deeper dive into understanding the science of climate change and went through the original literature – including papers of the so-called “skeptics”. What I learned on this journey mirrored the findings in this book. And it is absolutely insane.
While I absolutely recommend reading the whole book, I know many are pressed for time. Still, I think the following quotes convey about 80% of its content. Emphasis is mine, except for italicized text, which is from the original.
See also a shorter version of my notes in this Twitter thread.
(And in case you are wondering, almost every sentence here is supported by citations in the book – so if you are still skeptical about the veracity of this content, go check it out there yourself.)
***
INTRODUCTION
Physics tells us that if the Sun were causing global warming—as some skeptics continue to insist—we’d expect both the troposphere and the stratosphere to warm, as heat comes into the atmosphere from outer space.
Why didn’t Santer’s accusers bother to find out the facts? Why did they continue to repeat charges long after they had been shown to be unfounded? The answer, of course, is that they were not interested in finding facts. They were interested in fighting them.
Among the multitude of documents we found in writing this book was Bad Science: A Resource Book—a how-to handbook for fact fighters, providing example after example of successful strategies for undermining science, along with a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite.
In case after case, Fred Singer, Fred Seitz, and a handful of other scientists joined forces with think tanks and private corporations to challenge scientific evidence on a host of contemporary issues. In the early years, much of the money for this effort came from the tobacco industry; in later years, it came from foundations, think tanks, and the fossil fuel industry. They claimed the link between smoking and cancer remained unproven. They insisted that scientists were mistaken about the risks and limitations of SDI. They argued that acid rain was caused by volcanoes, and so was the ozone hole. They charged that the Environmental Protection Agency had rigged the science surrounding secondhand smoke. Most recently—over the course of nearly two decades and in the face of mounting evidence—they dismissed the reality of global warming. First they claimed there was none, then they claimed it was just natural variation, and then they claimed that even if it was happening and it was our fault, it didn’t matter because we could just adapt to it.
CHAPTER 1 – DOUBT IS OUR PRODUCT [ABOUT BIG TOBACCO]
December 15, 1953, was a fateful day. A few months earlier, researchers at the Sloan-Kettering Institute in New York City had demonstrated that cigarette tar painted on the skin of mice caused fatal cancers. This work had attracted an enormous amount of press attention: the New York Times and Life magazine had both covered it, and Reader’s Digest—the most widely read publication in the world—ran a piece entitled “Cancer by the Carton.” Perhaps the journalists and editors were impressed by the scientific paper’s dramatic concluding sentences: “Such studies, in view of the corollary clinical data relating smoking to various types of cancer, appear urgent. They may not only result in furthering our knowledge of carcinogens, but in promoting some practical aspects of cancer prevention.”
These findings shouldn’t have been a surprise. German scientists had shown in the 1930s that cigarette smoking caused lung cancer, and the Nazi government had run major antismoking campaigns; Adolf Hitler forbade smoking in his presence. However, the German scientific work was tainted by its Nazi associations, and to some extent ignored, if not actually suppressed, after the war; it had taken some time to be rediscovered and independently confirmed.
The industry made its case in part by cherry-picking data and focusing on unexplained or anomalous details. No one in 1954 would have claimed that everything that needed to be known about smoking and cancer was known, and the industry exploited this normal scientific honesty to spin unreasonable doubt.
The industry had realized that you could create the impression of controversy simply by asking questions, even if you actually knew the answers and they didn’t help your case. And so the industry began to transmogrify emerging scientific consensus into raging scientific “debate.”
The appeal to journalistic balance (as well as perhaps the industry’s large advertising budget) evidently resonated with writers and editors, perhaps because of the influence of the Fairness Doctrine. Under this doctrine, established in 1949 (in conjunction with the rise of television), broadcast journalists were required to dedicate airtime to controversial issues of public concern in a balanced manner. (The logic was that broadcast licenses were a scarce resource, and therefore a public trust.) While the doctrine did not formally apply to print journalism, many writers and editors seem to have applied it to the tobacco question, because throughout the 1950s and well into the 1960s, newspapers and magazines presented the smoking issue as a great debate rather than as a scientific problem in which evidence was rapidly accumulating, a clear picture was coming into focus, and the trajectory of knowledge was clearly against tobacco’s safety. Balance was interpreted, it seems, as giving equal weight to both sides, rather than giving accurate weight to both sides.
While the idea of equal time for opposing opinions makes sense in a two-party political system, it does not work for science, because science is not about opinion. It is about evidence. It is about claims that can be, and have been, tested through scientific research—experiments, experience, and observation—research that is then subject to critical review by a jury of scientific peers. Claims that have not gone through that process—or have gone through it and failed—are not scientific, and do not deserve equal time in a scientific debate.
Today, the World Health Organization finds that smoking is the known or probable cause of twenty-five different diseases, that it is responsible for five million deaths worldwide every year, and that half of these deaths occur in middle age.
Doubt-mongering also works because we think science is about facts—cold, hard, definite facts. If someone tells us that things are uncertain, we think that means that the science is muddled. This is a mistake. There are always uncertainties in any live science, because science is a process of discovery.
Doubt is crucial to science—in the version we call curiosity or healthy skepticism, it drives science forward—but it also makes science vulnerable to misrepresentation, because it is easy to take uncertainties out of context and create the impression that everything is unresolved.
CHAPTER 2 – STRATEGIC DEFENSE, PHONY FACTS, AND THE CREATION OF THE GEORGE C. MARSHALL INSTITUTE
The tobacco industry was happy to have a man of Frederick Seitz’s scientific stature on their side, but by the late 1980s, Seitz was aligning himself with men of increasingly extreme views. Often, they were scientists in their twilight years who had turned to fields in which they had no training or experience, such as Walter Elsasser, a geophysicist who argued that biology as a science was a dead end because of “the unfathomable complexity” of organisms, a view that even a sympathetic biographer described as “ignored by most biologists and attacked by some.” Many colleagues thought Elsasser had become irrational, and some began to think the same of Seitz. In August 1989, one tobacco industry executive recommended against soliciting his further advice: “Dr. Seitz is quite elderly and not sufficiently rational to offer advice.”
But Seitz had found other allies, and by the mid-1980s a new cause: rolling back Communism. He did this by joining forces with several fellow physicists—old cold warriors who shared his unalloyed anti-Communism—to support and defend Ronald Reagan’s Strategic Defense Initiative. SDI (Star Wars to most of us) was rejected by most scientists as impractical and destabilizing, but Seitz and his colleagues began to defend it by challenging the scientific evidence that SDI would not work and promoting the idea that the United States could “win” a nuclear war.
However, when the panel found evidence that the Soviets had spent large sums of money on nonacoustic antisubmarine warfare systems, but no evidence that they had ever deployed a nonacoustic system, they did not draw the obvious logical conclusion that those systems simply hadn’t worked. Rather the panel concluded that they had worked, that the Soviets had deployed something, and covered it up. “The absence of a deployed system by this time is difficult to understand,” they wrote. “The implication could be that the Soviets have, in fact, deployed some operational non-acoustic systems and will deploy more in the next few years.” The panel saw evidence that the Soviets had not achieved a particular capability as proof that it had. The writer C. S. Lewis once characterized this style of argument: “The very lack of evidence is thus treated as evidence; the absence of smoke proves that the fire is very carefully hidden.” Such arguments are effectively impossible to refute, as Lewis noted. “A belief in invisible cats cannot be logically disproved,” although it does “tell us a good deal about those who hold it.”
While the tobacco industry had tried to exploit uncertainties where the science was firm, these men insisted on certainties where the evidence was thin or entirely absent. The Soviet Union “is,” they repeatedly wrote, rather than “might be” or “appears to be.” They understood the power of language: you could undermine your opponents’ claims by insisting that theirs were uncertain, while presenting your own as if they were not.
The committee spent the next four years currying media attention via press releases and opinion pieces, helping to push American foreign policy far to the right, often on the basis of “factual” claims with few facts behind them.
Team B, Jastrow, and Moynihan had all overestimated Soviet capabilities, and greatly exaggerated the certainty of their claims. But their alarming arguments had the desired effect, providing “evidence” that the United States needed to act, and fast. It also demonstrated that you could get what you wanted if you argued with enough conviction, even if you didn’t have the facts on your side.
Using publicly available information on the effects of nuclear weapons and computer models of nuclear warfare, the NASA-Ames group investigated how nuclear exchanges of one hundred to five thousand megatons might affect global temperatures. (For comparison, the Mt. St. Helens eruption was equivalent to ten megatons.) Their model suggested that even the smallest nuclear exchange could send the Earth into a deep freeze: surface temperatures might fall below freezing even in summer. Larger exchanges could produce near-total darkness for many months. The nuclear winter hypothesis had been born, but it could equally well have been called nuclear night.
During the Johnson and Nixon administrations, the United States had developed and started deployment of a defensive system, allegedly against Chinese ballistic missiles, although few in the defense business believed that claim (and in fact China did not obtain intercontinental ballistic missiles until 1981). This Sentinel system used two layers of ground-based interceptors: long-range Spartan missiles for area defense, and shorter-range Sprint missiles to destroy warheads that the Spartans missed. Both missiles used nuclear warheads of their own to destroy the incoming warheads.
One of the great heroes of the American right of the late twentieth century was neoliberal economist Milton Friedman. In his most famous work, Capitalism and Freedom, Friedman argued (as its title suggests) that capitalism and freedom go hand in hand—that there can be no freedom without capitalism and no capitalism without freedom. So defense of one was the defense of the other. It was as simple—and as fundamental—as that. These men, committed as they were to freedom—liberty as they understood it, and viewing themselves as the guardians of it—were therefore also committed capitalists. But their scientific colleagues were increasingly finding evidence that capitalism was failing in a crucial respect: it was failing to protect the natural environment upon which all life—free or not—ultimately depends.
Working scientists were finding more and more evidence that industrial emissions were causing widespread damage to human and ecosystem health. The free market was causing problems—unintended consequences—that the free market did not know how to solve. The government had a potential remedy—regulation—but that flew in the face of the capitalist ideal.
CHAPTER 3 – SOWING THE SEEDS OF DOUBT: ACID RAIN
While the debate over strategic defense and nuclear winter was playing out, another rather different issue had come to the fore: acid rain. While the science of nuclear winter was entirely different from that of acid rain, some of the same people would be involved in both debates. And as in the debate over tobacco, opponents of regulating the pollution that caused acid rain would argue that the science was too uncertain to justify action.
Bills like the Clean Air Act reflected a shift in focus from land preservation to pollution prevention through science-based government regulation, and from local to global. These were profound shifts. Silent Spring—Rachel Carson’s alarm bell over the impacts of the pesticide DDT—led Americans to realize that local pollution could have global impacts. Private actions that seemed reasonable—like a farmer spraying his crops to control pests—could have unreasonable public impacts. Pollution was not simply a matter of evil industries dumping toxic sludge in the night: people with good intentions might unintentionally do harm. Economic activity yielded collateral damage. Recognizing this meant acknowledging that the role of the government might need to change in ways that would inevitably affect economic activity.
Collateral damage was what acid rain was all about. Sulfur and nitrogen emissions from electrical utilities, cars, and factories could mix with rain, snow, and clouds in the atmosphere, travel long distances, and affect lakes, rivers, soils, and wildlife far from the source of the pollution.
Chemical analysis showed that most of the acidity was due to dissolved sulfate and the rest mostly to dissolved nitrate, by-products of burning coal and oil. Yet fossil fuels had been burned enthusiastically since the mid-nineteenth century, so why had this problem only arisen of late? The answer was the unintended consequence of the introduction of devices to remove particles from smoke and to reduce local air pollution.
Although the exact magnitude of the acid rain effects was uncertain, their existence and gravity was not, and the Swedes warned against discounting the effects just because they weren’t immediate, or fully documented. Although occurring gradually, the effects were serious, and potentially irreversible. However, the situation was not all bleak, because the cause was known, and so was the remedy. “A reduction in the total emissions both in Sweden and in adjacent countries is required.”
In science, this sort of clear demonstration of a phenomenon should inspire fellow scientists to learn more. It did. Over the next ten years, scientists around the globe worked to document acid rain, understand its dimensions, and communicate its significance.
Were the reasons not entirely clear? It depended on what you meant by entirely. Science is hard—which is why so many kids hate it in school—and nothing is ever entirely clear. There are always more questions to be asked, which is why expert consensus is so significant.
Herbert Bormann, at this point teaching at Yale, thought that ambiguity arose from confusing different types of uncertainty. There was no question that acid rain was real. Rainfall in the northeastern United States was many times more acidic than it used to be. The uncertainty was about the precise nature of its cause: tall smokestacks—dispersing sulfur higher in the atmosphere—or just increased use of fossil fuels overall? Moreover, while the broad picture was emerging, many details were still to be sorted out, some of them quite important. Chief among these was the question: did we know for sure that the sulfur was anthropogenic—made by man—rather than natural? This question would recur in debates over ozone and global warming, so it’s worth understanding how it was answered here.
Bolin and his Swedish colleagues had made “mass balance arguments”: they considered how much sulfur could be supplied by the three largest known sources—pollution, volcanoes, and sea spray—and compared this with how much sulfur was falling as acid rain. Since there are no active volcanoes in northern Europe, and sea spray doesn’t travel very far, they deduced that most of the acid rain in northern Europe had to come from air pollution. Still, this was an indirect argument. To really prove the point, you’d want to show that the actual sulfur in actual acid rain came from a known pollution source. Fortunately there was a way to do this—using isotopes.
Scientists love isotopes—atoms of the same element with different atomic weights, like carbon-12 and carbon-14—because they are exceptionally useful. If they are radioactive and decay over time—like carbon-14—they can be used to determine the age of objects, like fossils and archeological relics. If they are stable, like carbon-13—or sulfur-34—they can be used to figure out where the carbon or sulfur has come from. Different sources of sulfur have different amounts of sulfur-34, so you can use the sulfur isotope content as a “fingerprint” or “signature” of a particular source, either natural or man-made.
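To make the isotope “fingerprint” idea concrete, here is a minimal sketch of the two-source mixing arithmetic such studies rely on. This is my own illustration, not from the book, and all of the signature values are invented.

```python
# Hypothetical two-source isotope mixing calculation (my illustration, not from the book).
# Idea: pollution-derived sulfur and sea-spray sulfur carry different sulfur-34 signatures
# (delta-34S), so the signature measured in rainwater tells you roughly how much of the
# sulfur came from each source. All numbers below are invented for illustration.

def pollution_fraction(delta_rain, delta_pollution, delta_sea_spray):
    """Solve delta_rain = f * delta_pollution + (1 - f) * delta_sea_spray for f."""
    return (delta_rain - delta_sea_spray) / (delta_pollution - delta_sea_spray)

delta_pollution = 3.0   # assumed signature of sulfur from burning coal and oil (per mil)
delta_sea_spray = 21.0  # assumed signature of marine sulfate (per mil)
delta_rain = 6.0        # assumed signature measured in an acid rain sample (per mil)

f = pollution_fraction(delta_rain, delta_pollution, delta_sea_spray)
print(f"Estimated fraction of sulfur from pollution: {f:.0%}")  # ~83% with these made-up numbers
```

With real measurements in place of the made-up numbers, this kind of mixing argument is what lets you attribute the sulfur to a pollution source rather than a natural one.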
In 1979, the United Nations Economic Commission for Europe passed the Convention on Long-range Transboundary Air Pollution. Based on the Declaration of the U.N. Conference on the Human Environment—the one for which Bert Bolin’s report had been prepared—the convention insisted that all nations have responsibility to “ensure that activities within their jurisdiction or control do not cause damage to the environment of other states or of areas beyond the limits of national jurisdiction.”
The panel began by noting a common problem among scientists: the tendency to emphasize uncertainties rather than settled knowledge. Scientists do this because it’s necessary for inquiry—the research frontier can’t be identified by focusing on what you already know—but it’s not very helpful when trying to create public policy.
As Bernabo puts it, for any problem, the degree of scientific certainty demanded is proportional to the cost of doing something about it.
Singer made clear that he shared the view later famously credited to Roger Revelle: that human activities had reached a tipping point. Our actions were no longer trivial; we were capable of changing fundamental processes on a planetary scale. Numerous emerging problems—acid rain, global warming, the effects of DDT—made this clear.
Like most of his colleagues, Singer believed there was a need for more science, but in 1970 he argued that one cannot always wait to act until matters are proven beyond a shadow of a doubt. Singer cited the famous essay “The Tragedy of the Commons,” in which biologist Garrett Hardin argued that individuals acting in their rational self-interest may undermine the common good, and warned against assuming that technology would save us from ourselves. “If we ignore the present warning signs and wait for an ecological disaster to strike, it will probably be too late,” Singer noted. He imagined what it must have been like to be Noah, surrounded by “complacent compatriots,” saying, “‘Don’t worry about the rising waters, Noah; our advanced technology will surely discover a substitute for breathing.’ If it was wisdom that enabled Noah to believe in the ‘never-yet-happened,’ we could use some of that wisdom now,” Singer concluded.
Singer made a similar argument in a book on population control published in 1971, in which he framed the debate about population as a clash between neo-Malthusians, who focused on the limits of resources, and Cornucopians, who believed that resources are created by human ingenuity and are therefore unlimited. In 1971, Singer did not take sides, but stressed that the Cornucopian view hinged on the availability of energy: if population increases and one has to work harder to obtain available resources, then “per capita energy consumption must necessarily increase.”
Energy was key; the other crucial issue was protecting the quality of life. “Environmental quality is not a luxury; it is an absolute necessity of life,” Singer wrote, and so it was “incumbent upon us … to learn how to reduce the environmental impact of population growth: by conservation of resources; by re-use and re-cycling; by a better distribution of people which reduces the extreme concentrations in metropolitan centers; but above all by choosing life styles which permit ‘growth’ of a type that makes a minimum impact on the ecology of the earth’s biosphere.”
Somewhere between 1970 and 1980, however, Singer’s views changed. He began to worry more about the cost of environmental protection, and to feel that it might not be worth the gain. He also adopted the position he previously attributed to Noah’s detractors: that something would happen to save us. That something would be technological innovation fostered in a free market. Singer would come down on the Cornucopian side.
In 1978 Singer developed an argument for cost-benefit analysis as a way to think about environmental problems in a report for the Mitre Corporation—a private group that did extensive consulting to the government on energy and security issues. “In the next decade,” he wrote, “… the nation will spend at least 428 billion dollars to reach and maintain certain legal air and water standards. To know whether these costs are in any sense justified, one must carry out a cost-benefit analysis. This has not been done.”
“The public policy conclusion from our analysis is that where a choice exists, one should always choose a lower national cost, i.e. a conservative approach to air pollution control, which will not inflict as much economic damage on the poorer segment of the population.”
Singer had emphasized the potential cost to those who could afford it least—a point with which many liberals would concur—but if you left off his final phrase, you had a view that many free market conservatives, as well as polluting industries, found very attractive.
Someone on the panel also circulated a document produced by a private consulting firm criticizing earlier National Academy work on acid rain. The consultants’ report asserted that the scientific arguments for adverse effects from acid rain were “speculative” and “oversimplified,” the conclusions “premature” and “unbalanced,” and also added that some crops might benefit from acid rain. While the record doesn’t say who circulated this report to the panel, its complaint that “relative costs and benefits of available options are not considered” certainly resonated with Fred Singer’s views. But economic analysis was neither within the charge nor the expertise of the Academy scientists, so they were being criticized for not doing something they had not been asked to do.
Singer wrote a six-page letter to the committee chair taking issue with Ackermann’s testimony, which he claimed was unsupported by sufficient data. He argued that evidence of damage was lacking, or limited, that a good deal of soil acidification is natural, that only certain kinds of soils were susceptible to acid damage, and that acidification might in some cases be beneficial. Some of Singer’s claims—for example, that some soils are naturally acidic—were true, but irrelevant. Others were misleading, insofar as he was the only member of the committee who held the opinion that the evidence of potential soil damage was “insufficient.” Whether or not the House Committee chairman believed Singer’s claims, his letter certainly would have had at least one effect: to make it appear that the committee was divided and there was real and serious scientific disagreement. The committee was divided, but it was divided 8–1, with the dissenter appointed by the Reagan White House.
Singer was supposed to be writing the final chapter of the report, on the feasibility of estimating the economic benefits of controlling acid pollution. It was to be an investigation of how you might try to place a dollar value on nature—and what would be lost if you didn’t. Somehow, along the way, it turned into the claim that if you did nothing, it cost you nothing. Singer was continuing to equate the value of nature to zero. This was not something the others would accept, so the panel had three choices: keep working until they came to agreement, delete the chapter altogether, or relegate it to an appendix. As the panel neared completion of their report, this issue remained unresolved. When the report finally appeared, the third solution had been chosen. While the rest of the report was jointly authored—the norm for National Academy and other peer review panel reports—Singer’s appendix was all his own. It began with a strange claim: that the benefits as well as the costs of doing nothing were zero. This was patently at odds with the rest of the report, which stressed repeatedly the ecological costs of acid deposition.
Singer also presumed that the costs were mostly accrued in the present, but the benefits in the future, and therefore the latter had to be discounted in order to make them commensurate with the former. (That is to say, a dollar in the future is not worth as much to you as a dollar now, so you “discount” its value in your planning and decision making. How much you discount it depends in part on inflation, but also in part on how much you value the future.) Discounting would later become a huge issue in assessing the costs and benefits of stopping global warming, as long-term risks can be quickly written off with a sufficiently high discount rate.
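As a side note of mine (not from the book), the discounting point is easy to see with a toy present-value calculation; the damage figure and rates below are invented for illustration.

```python
# Minimal present-value sketch (my illustration, not from the book).
# PV = future_cost / (1 + r) ** years: the higher the discount rate r,
# the less a far-future damage "counts" in today's cost-benefit analysis.

def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

damage = 1_000_000_000  # a hypothetical $1 billion of damage occurring 100 years from now
for rate in (0.01, 0.03, 0.07):
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.0%}: present value of the damage today: ${pv:,.0f}")

# With these made-up numbers: at 1% the future damage still weighs in at roughly $370 million
# today, while at 7% it shrinks to about $1 million, which is how a high enough discount rate
# can effectively write off long-term risks.
```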
Yet economists (and ordinary people) know that markets do not always work. Indeed, many economists would say that pollution is a prime example of market failure: its collateral damage is a hidden cost not reflected in the price of a given good or service.
The article didn’t just misrepresent the state of the science, it misrepresented its history, too. “It’s not surprising that there should be sharp disagreements about acid rain. The rain has been studied only for about six years.” (You’d think think tank researchers could do arithmetic: the elapsed time between 1963 and 1984 did not come to six years.) The Wall Street Journal ran a piece on its editorial page by a consultant for Edison Electric named Alan Katzenstein entitled “Acidity is not a major factor,” questioning the scientific evidence and suggesting that the real “villain in the acid-rain story” might be aluminum. One forest ecologist responded in a letter to the editor: “Katzenstein made several assertions about the research findings [and] all of them are incorrect!” Who was Katzenstein? An ecologist? A chemist? A biologist? No, he was a business consultant who previously had worked for the tobacco industry.
“We don’t know what’s causing it” became the official position of the Reagan administration, despite twenty-one years of scientific work that demonstrated otherwise. “We don’t know” was the mantra of the tobacco industry in staving off regulation of tobacco long after scientists had proven its harms, too. But no one seemed to notice this similarity, and the doubt message was picked up by the media, which increasingly covered acid rain as an unsettled question.
Well after acid rain was off the headlines, Gene Likens and his colleagues continued to work at Hubbard Brook. By 1999, they had concluded that the problem had not been solved. “Acid rain still exists,” Likens wrote in the Proceedings of the American Philosophical Society, “and its ecological effects have not gone away.” Indeed, matters had gotten worse, as additional stresses such as global warming were making the forests “even more vulnerable to these anthropogenic inputs of strong acids from the atmosphere.” The net result was that “the forest has stopped growing.”
Over the next ten years, Likens and his colleagues pursued the question of net forest health. In 2009 they spoke out frankly. “Since 1982, the forest has not accumulated biomass. In fact, since 1997, the accumulation … has been significantly negative.” The forest was shrinking, “under siege” from multiple onslaughts of climate change, alien species invasion, disease, mercury and salt pollution, landscape fragmentation, and continued acid rain.
Magical thinking still informs the position of many who oppose environmental regulation. As recently as 2007, the George Marshall Institute continued to insist that the damages associated with acid rain were always “largely hypothetical,” and that “further scientific investigation revealed that most of them were not in fact occurring.” The Institute cited no studies to support this extraordinary claim. Moreover, there is reason to believe that a straight-out command and control approach might have better results than cap and trade in one important respect: research shows that regulation is an effective means to stimulate technological innovation. That is to say, if you want the market to do its magic—if you want businesses to provide the goods and services that people need—the best way to do that, at least in terms of pollution prevention, appears, paradoxically, to be to mandate it.
David Hounshell is one of America’s leading historians of technology. Recently he and his colleagues at Carnegie Mellon University have turned their attention to the question of regulation and technological innovation. In an article published in 2005, “Regulation as the Mother of Innovation,” based on the Ph.D. research of Hounshell’s student, Margaret Taylor, they examined the question of what drives innovation in environmental control technology. It is well established that the lack of immediate financial benefits leads companies to underinvest in R & D, and this general problem is particularly severe when it comes to pollution control. Because pollution prevention is a public good—not well reflected in the market price of goods and services—the incentives for private investment are weak. Competitive forces just don’t provide enough justification for the long-term investment required; there is a lack of driving demand. However, when government establishes a regulation, it creates demand. If companies know they have to meet a firm regulation with a definite deadline, they respond—and innovate. The net result may even be cost savings for the companies, as obsolete technologies are replaced with state-of-the-art ones, yet the companies would not have bothered to make the change had they not been forced to.
This is admittedly speculative. We will never know what would have happened had a different approach been taken. However, one thing we do know for sure is that doubt-mongering about acid rain—like doubt-mongering about tobacco—led to delay, and that was a lesson that many people took to heart. In the years that followed, the same strategy would be applied again, and again, and again—and in several cases by the same people. Only next time around, they would not merely deny the gravity of the problem; they would deny that there was any problem at all. In the future, they wouldn’t just tamper with the peer review process; they would reject the science itself.
CHAPTER 4 – CONSTRUCTING A COUNTERNARRATIVE: THE FIGHT OVER THE OZONE HOLE
At the same time as acid rain was being politicized, another, possibly even more worrisome problem had come to light: the ozone hole. The idea that human activities might be damaging the Earth’s protective ozone layer first entered the public mind in 1970. Awareness began with the American attempt to develop a commercial airliner that could fly faster than the speed of sound. The “supersonic transport,” or SST, would fly inside the stratospheric ozone layer, and scientists worried that its emissions might do damage. While the SST did not turn out to be a serious threat, concern over it led to the realization that chemicals called chlorofluorocarbons were.
Scorer’s main point during his tour was one that would become a common refrain among anti-environmentalists in the years to come. He insisted that human activities were too small to have any impact on the atmosphere, which he called “the most robust and dynamic element in the environment.” He dismissed the idea of ozone destruction as a “scare story” based on little scientific evidence. Even in Los Angeles, struggling with a tremendous smog problem that created widespread respiratory distress during the summer, he insisted that humans were incapable of harming the environment.
The industry’s Committee on Atmospheric Sciences had an idea, perhaps generated by Stolarski and Cicerone’s work-around for their chlorine paper in Kyoto: to blame volcanoes. Magmas contain dissolved chlorine, and when volcanoes erupt, this chlorine can be released into the atmosphere. When volcanoes erupt catastrophically they send ash, dust, and gases into the stratosphere. If there were a lot of volcanic chlorine floating around the stratosphere, then a small amount of additional CFCs might not make much difference. If volcanoes supplied most of the chlorine, and the ozone layer hadn’t been destroyed yet, then chlorine couldn’t be a big deal. Or so the industry argument went.
But volcanoes also erupt a lot of water vapor, and soot-and-dust-laden rain (often black) falls during or just after eruptions, as the water vapor condenses. Chlorine is easily dissolved in water and some of it therefore rains out. This phenomenon was understood qualitatively in the mid-1970s but not quantitatively, so the industry Council on Atmospheric Sciences decided to make a big show of proving that most of the chlorine would reach the stratosphere. They held a press conference in October 1975 to announce their “research” program on an Alaska volcano expected to erupt soon. The volcano erupted at the end of January 1976, but evidently it did not do what they were hoping, as the industry group never announced results, beyond stating they were “inconclusive.” Yet the claim that volcanoes were the source of most stratospheric chlorine was repeated well into the 1990s.
The Ozone Trends Panel included a chemist from the DuPont Corporation, which had also provided financial support for the Antarctic field expeditions. After the panel’s announcement, he convinced his own management that the results had to be taken seriously; they, in turn, approached the corporation’s executives. After three days of intense discussion, DuPont’s executives decided that the panel had demonstrated an appropriate level of harm. On March 18, they decided that DuPont would cease production of CFCs within about ten years.
If environmental regulation should be based on science, then ozone is a success story. It took time to work out the complex science, but scientists, with support from the U.S. government and international scientific organizations, did it. Regulations were put in place based on the science, and adjusted in response to advances in it. But running in parallel to this were persistent efforts to challenge the science. Industry representatives and other skeptics doubted that ozone depletion was real, or argued that if it was real, it was inconsequential, or caused by volcanoes.
One aspect of the effort to cast doubt on ozone depletion was the construction of a counternarrative that depicted ozone depletion as a natural variation that was being cynically exploited by a corrupt, self-interested, and extremist scientific community to get more money for their research. One of the first people to make this argument was a man who had been a fellow at the Heritage Foundation in the early 1980s: Fred Singer.
In “My Adventures in the Ozone Layer,” he cast the scientific community as dominated by self-interest. “It’s not difficult to understand some of the motivations behind the drive to regulate CFCs out of existence,” he wrote. “For scientists: prestige, more grants for research, press conferences, and newspaper stories. Also the feeling that maybe they are saving the world for future generations.” (As if saving the world would be a bad thing!)
Singer was doing just what he had done for acid rain—insisting that any solution would be difficult and expensive, yet providing scant evidence to support the claim. In fact, he was going further, making bold assertions about the nature of technologies that did not yet exist.
In short, Singer’s story had three major themes: the science is incomplete and uncertain; replacing CFCs will be difficult, dangerous, and expensive; and the scientific community is corrupt and motivated by self-interest and political ideology.
From 1988 to 1995, Singer insisted that the ozone research community was misleading the public about even the existence of ozone depletion, let alone its origins. He argued in his 1989 National Review article that researchers were doing this to line their own pockets, and those of their graduate students, by scaring public officials who could fund their research.
Of course, similar charges might be levelled at Singer. While we don’t have access to SEPP’s tax returns for the 1990s, in 2007 it netted $226,443, and had accumulated assets of $1.69 million. His skepticism also gained him a huge amount of attention—far more than most scientists ever get for their research, quietly published in academic journals. So if scientists should be discredited for getting money for their research, or for enjoying the limelight, the same argument would logically apply to Singer.
What was Singer really up to? We suggest that the best answer comes from his own pen. “And then there are probably those with hidden agendas of their own—not just to ‘save the environment’ but to change our economic system,” he wrote in 1989. “Some of these ‘coercive utopians’ are socialists, some are technology-hating Luddites; most have a great desire to regulate—on as large a scale as possible.” In a 1991 piece on global warming, he reiterated the theme that environmental threats—in this case global warming—were being manufactured by environmentalists based on a “hidden political agenda” against “business, the free market, and the capitalistic system.” The true goal of those involved in global warming research was not to stop global warming, but to foster “international action, preferably with lots of treaties and protocols.” The “real” agenda of environmentalists—and the scientists who provided the data on which they relied—was to destroy capitalism and replace it with some sort of worldwide utopian Socialism—or perhaps Communism. That echoed a common right-wing refrain in the early 1990s: that environmental regulation was the slippery slope to Socialism. In 1992, columnist George Will encapsulated this view, saying that environmentalism was a “green tree with red roots.”
CHAPTER 5 – WHAT’S BAD SCIENCE? WHO DECIDES? THE FIGHT OVER SECONDHAND SMOKE
By the mid-1980s, nearly every American knew that smoking caused cancer, but still tobacco industry executives successfully promoted and sustained doubt. Scientists continued to play a crucial role in that effort, as men like Dr. Martin Cline provided powerful “expert” testimony when cases went to court. In 1986, a new panic ripped through the industry, much like the one that tobacco salesmen must have felt in 1953 when those first painted mice developed cancer from cigarette tar, and again in 1964 when the industry read the first Surgeon General’s report. The cause was a new Surgeon General’s report that concluded that secondhand smoke could cause cancer even in otherwise healthy non-smokers. When the EPA took steps to limit indoor smoking, Fred Singer joined forces with the Tobacco Institute to challenge the scientific basis of secondhand smoke’s health risks. But they didn’t just claim that the data were insufficient; they claimed that the EPA was doing “bad science.” To make this claim seem credible, they didn’t just fight EPA on secondhand smoke; they began a smear campaign to discredit the EPA in general and tarnish any scientific results that any industry didn’t like as “junk.”
In the 1970s, industry researchers had found that sidestream smoke contained more toxic chemicals than mainstream smoke—in part because smoldering cigarettes burn at lower temperatures at which more toxic compounds are created. So they got to work trying to produce less harmful sidestream smoke by improving filters, changing cigarette papers, or adding components to make the cigarettes burn at higher temperatures. They also tried to make cigarettes whose sidestream smoke was not less dangerous, but simply less visible.
Takeshi Hirayama was chief epidemiologist at the National Cancer Center Research Institute in Tokyo, Japan. In 1981, he showed that Japanese women whose husbands smoked had much higher death rates from lung cancer than those whose husbands did not. The study was long-term and big—91,540 women in twenty-nine different health care districts studied over fourteen years—and showed a clear dose-response curve: the more the husbands smoked, the more the wives died from lung cancer. Spousal drinking had no effect, and the husbands’ smoking had no impact on diseases like cervical cancer that you wouldn’t expect to be affected by cigarette smoke. The study did exactly what good epidemiology should do: it demonstrated an effect and ruled out other causes. The Japan study also explained a long-standing conundrum: why many women got lung cancer even when they didn’t smoke. Hirayama’s study was a first-rate piece of science; today it is considered a landmark.
The tobacco industry lambasted its findings. They hired consultants to mount a counterstudy and undermine Hirayama’s reputation. One of these consultants was Nathan Mantel, a well-known biostatistician, who claimed that Hirayama had committed a serious statistical error. The Tobacco Institute promoted Mantel’s work, convincing the media to present “both sides” of the story. Leading newspapers played into their hands, running articles with headlines such as SCIENTIST DISPUTES FINDINGS OF CANCER RISK TO NONSMOKERS and NEW STUDY CONTRADICTS NON-SMOKERS’ RISK. Then the industry ran full-page ads in major newspapers highlighting these headlines.
The “new study” was of course funded by the industry. In private, a different story was unfolding, as industry advisors acknowledged that the Hirayama study was correct. “Hirayama [and his defenders] are correct and Mantel and TI [Tobacco Institute] are wrong,” one internal memo acknowledged.
Several of these special projects were run through a law firm to shield these efforts from scrutiny using attorney-client privilege. (We already saw how UCLA scientist Martin Cline hid behind attorney-client privilege when testifying as an expert witness, claiming not to work for the tobacco industry, but for a law firm.)
Project Whitecoat—as its name suggests—enlisted European scientists to “reverse scientific and popular misconception that ETS [environmental tobacco smoke] is harmful.” Once again, the industry was fighting science with science—or at least, scientists.
“Objective #1”—on which all else hinged—was “to maintain the controversy … about tobacco smoke in public and scientific forums.” The budget for maintaining the controversy was $16 million.
The year that followed was crucial for maintaining the controversy, because the battle had now been joined by the U.S. Environmental Protection Agency. The tobacco industry had promoted the use of the phrase “environmental tobacco smoke” in preference to passive smoking or secondhand smoke—perhaps because it seemed less threatening—but this proved a tactical mistake, because it virtually invited EPA scrutiny. If secondhand smoke was “environmental,” then there was no question that it fell under the purview of the Environmental Protection Agency. And this meant the prospect of federal regulation—what the industry most dreaded.
Seitz did not suggest, however, that the industry give up the fight. Rather, he suggested that the best way to fight such a heavy weight of evidence was to challenge the weight-of-evidence approach. The idea was to reject “exhaustive inclusion”—examining all the evidence—and to focus on the “best evidence” instead.
Seitz had a point. Not all scientific studies are created equal, and lumping the good with the bad can cause confusion and error. An epidemiological study with ten thousand people is clearly better than one with ten. But it doesn’t take much imagination to see how easily a “best evidence” approach could be biased, excluding studies you don’t like and including the ones you do. Seitz’s report stressed that inclusion criteria should always be stated up front—such as a preference for studies with “ideal research designs.” But medical studies are never conducted under ideal conditions: you cannot put people in cages and control what they eat, drink, and breathe, 24/7. Animals are by definition models for what a researcher is really interested in—people. At best, animal studies are reliable representations or good first approximations, but they can never be considered ideal; Seitz’s argument was transparently self-serving. The industry was not charmed, and they took up a different banner instead. It was the banner of “sound science.” For this they turned to Fred Singer.
In 1990, Singer had created his Science and Environmental Policy Project to “promote ‘sound science’ in environmental policy.” What did it mean to promote “sound science”? The answer is, at least in part, to defend the tobacco industry. By 1993, he was helping the industry to promote the concept of sound science to support science they liked and to discredit as “junk” any science they didn’t.
Tom Hockaday was an APCO employee, and March 1993 found him working closely with Philip Morris vice president Ellen Merlo to develop scientific articles to defend secondhand smoke and promote the idea that the EPA work was “junk science.”
Why would the EPA “rig” the numbers? Singer’s answer: Controlling smoke would lead toward greater regulation in general. “The litany of questionable crises emanating from the Environmental Protection Agency is by no means confined to these issues. It could just as easily include lead, radon, asbestos, acid rain, global warming, and a host of others.”
Consider a handbook the tobacco industry distributed that same year, which drew on Singer’s work. Bad Science: A Resource Book was a how-to handbook for fact fighters. It contained over two hundred pages of snappy quotes and reprinted editorials, articles, and op-ed pieces that challenged the authority and integrity of science, building to a crescendo in the attack on the EPA’s work on secondhand smoke. It also included a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite.
Bad Science was a virtual self-help book for regulated industries, and it began with a set of emphatic sound-bite-sized “MESSAGES”:
- Too often science is manipulated to fulfill a political agenda.
- Government agencies … betray the public trust by violating principles of good science in a desire to achieve a political goal.
- No agency is more guilty of adjusting science to support preconceived public policy prescriptions than the Environmental Protection Agency.
- Public policy decisions that are based on bad science impose enormous economic costs on all aspects of society.
- Like many studies before it, EPA’s recent report concerning environmental tobacco smoke allows political objectives to guide scientific research.
- Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.
Bad, bad science. You can practically see the fingers wagging. Scientists had been bad boys; it was time for them to behave themselves. The tobacco industry would be the daddy who made sure they did. It wasn’t just money at stake; it was individual liberty. Today, smoking, tomorrow … who knew? By protecting smoking, we protected freedom.
As we saw in chapter 3, science really was manipulated for political purposes in the case of acid rain, but not by the scientists who had done the research. It was Bill Nierenberg who changed the Executive Summary of the Acid Rain Peer Review Panel, not the EPA, which played no role in the Acid Rain Peer Review. Still, if the best defense is a good offense, the tobacco industry now took the offensive. To anyone who understood the science, their actions were pretty darn offensive, indeed.
If the quotable quotes were assertions without evidence, so too were many of the articles, often taken from the Wall Street Journal and Investor’s Business Daily, and written by individuals with long histories of defending risky industrial products. Michael Fumento, for example, a syndicated columnist for Scripps Howard papers and a longtime defender of pesticides, asked, “Are Pesticides Really So Bad?” in Investor’s Business Daily. (Fumento was later fired from Scripps Howard for failing to disclose receiving $60,000 from Monsanto, a chemical corporation whose work he covered in his columns.) “Frontline Perpetuates Pesticide Myth,” “Earth Summit Will Shackle the Planet, Not Save It,” and other articles from the Wall Street Journal variously attacked efforts to control pesticides, stop global warming, and limit the risks of asbestos.
If Bad Science often quoted “experts” who were paid consultants to regulated industries, sometimes it followed a more sophisticated strategy: reminding readers of the fallibility of science. Reprints from respected media outlets provided well-documented examples of scientific error and malfeasance. “The Science Mob,” from the New Republic, recounted the David Baltimore case, where a collaborator of Dr. Baltimore was accused of falsifying experimental results, and the scientific establishment closed ranks to defend Baltimore—a giant in his field—rather than support the whistle-blower who exposed it. Other pieces discussed bias and distortion in medical research caused by industrial financing (the irony of this was unremarked). Several pieces from the New York Times focused on the limits of animal studies, while a special issue of Time, “Science under Siege,” described growing public distrust of science in the face of mistakes like the premature announcement of cold fusion and mismanagement of the Hubble telescope. Collectively, the articles created an impression of science rife with exaggeration, mismanagement, bias, and fraud.
“The EPA report has been widely criticized within the scientific community,” the book proclaimed, but in truth very few scientists had criticized the EPA report, except ones linked to the tobacco industry. This was the Bad Science strategy in a nutshell: plant complaints in op-ed pieces, in letters to the editor, and in articles in mainstream journals to which you’d supplied the “facts,” and then quote them as if they really were facts. Quote, in fact, yourself. A perfect rhetorical circle. A mass media echo chamber of your own construction. The phrases “excessive regulation,” “over-regulation,” and “unnecessary regulation” were liberally sprinkled throughout the book. Many of the quotable quotes came from the Competitive Enterprise Institute (CEI), a think tank promoting “free enterprise and limited government” and dedicated to the conviction that the “best solutions come from people making their own choices in a free marketplace, rather than government intervention.”
In short, Bad Science was a compendium of attacks on science, published in places like the Washington Times, and written by staff of the Competitive Enterprise Institute. The articles weren’t written by scientists and they didn’t appear in peer-reviewed scientific journals. Rather, they appeared in media venues whose readers would be sympathetic to the Competitive Enterprise Institute’s laissez-faire ideology.
And that was precisely the point. The goal wasn’t to correct scientific mistakes and place regulation on a better footing. It was to undermine regulation by challenging the scientific foundation on which it would be built. It was to pretend that you wanted sound science when really you wanted no science at all—or at least no science that got in your way.
Bad Science lambasted the EPA for not “seek[ing] out the nation’s leading scientists [to] conduct a peer-reviewed study” on ETS, but the EPA had sought leading scientists and their work had been peer-reviewed. Had the EPA commissioned a brand new study, the industry would no doubt have attacked them for wasting taxpayer money on superfluous work. But that was precisely the point: to attack the EPA, because it was just about impossible to defend secondhand smoke any other way. At least, this was what Philip Morris had concluded.
Tozzi had been an administrator at the Office of Management and Budget in the Reagan administration, and was well-known among public health officials for his resistance to the scientific evidence that aspirin causes Reye’s syndrome in children. (Critics charged him with perfecting the strategy of “paralysis by analysis”: insisting on more, and more, and more, data in order to avoid doing anything.)
“Without a major, concentrated effort to expose the scientific weaknesses of the EPA case, without an effort to build considerable reasonable doubt … then virtually all other efforts … will be significantly diminished in effectiveness,” ran a memo from Philip Morris communications director, Victor Han, to Ellen Merlo.
The EPA was “an agency that is at least misguided and aggressive, at worst corrupt and controlled by environmental terrorists,” Han asserted. Since few people were sympathetic to secondhand smoke, attacking the EPA offered “one of the few avenues for inroads.” The industry would abandon its defensive posture—defending smokers’ right to smoke—and argue instead that “over-regulation” was leading to “out-of-control expenditures of taxpayer money.” Much of this would be done through a newsletter called EPA Watch—an “asset” created by Philip Morris through the public relations firm APCO.
No one in 1993 would have argued that the EPA was a perfect agency, or that there weren’t some regulations that needed to be revamped; even its supporters had said as much. But the tobacco industry didn’t want to make the EPA work better and more sensibly; they wanted to bring it down. “The credibility of EPA is defeatable,” Victor Han concluded, “but not on the basis of ETS alone. It must be part of a larger mosaic that concentrates all of the EPA’s enemies against it at one time.” That mosaic would soon be created.
“Junk science” quickly became the tag line of Steven J. Milloy and a group called TASSC—The Advancement of Sound Science Coalition—whose strategy was not to advance science, but to discredit it. Milloy—who later became a commentator for Fox News—was affiliated with the Cato Institute and had previously been a lobbyist at Multinational Business Services (MBS)—a firm hired by Philip Morris in the early 1990s to assist in the defense of secondhand smoke. (Milloy’s supervisor at MBS had been James Tozzi.)
Scientific advisors to TASSC included Fred Singer, Fred Seitz, and Michael Fumento—names familiar from both Bad Science and from earlier arguments over tobacco, acid rain, and ozone. Richard Lindzen, a distinguished meteorologist at MIT who was a major global warming skeptic and industry expert witness, was also invited to join.
Scientists are confident they know bad science when they see it. It’s science that is obviously fraudulent—when data have been invented, fudged, or manipulated. Bad science is where data have been cherry-picked—when some data have been deliberately left out—or it’s impossible for the reader to understand the steps that were taken to produce or analyze the data. It is a set of claims that can’t be tested, claims that are based on samples that are too small, and claims that don’t follow from the evidence provided. And science is bad—or at least weak—when proponents of a position jump to conclusions on insufficient or inconsistent data.
But while these scientific criteria may be clear in principle, knowing when they apply in practice is a judgment call. For this scientists rely on peer review. Peer review is a topic that is impossible to make sexy, but it’s crucial to understand, because it is what makes science science—and not just a form of opinion.
The idea is simple: no scientific claim can be considered legitimate until it has undergone critical scrutiny by other experts. At minimum, peer reviewers look for obvious mistakes in data gathering, analysis, and interpretation. Usually they go further, addressing the quality and quantity of data, the reasoning linking the evidence to its interpretation, the mathematical formulae or computer simulations used to analyze and interpret the data, and even the prior reputation of the claimant. (If the person is thought to do sloppy work, or has previously been involved in spurious claims, he or she can expect to attract tougher scrutiny.)
How did the EPA defend itself against these attacks? In normal scientific practice, the mere fact of withstanding peer review is the first line of defense, but Singer and Jeffreys had misrepresented the peer review process, claiming that the EPA report had been widely criticized in the scientific community, ignoring that the report had not only been unanimously endorsed by the independent experts, but that those experts had encouraged EPA to make it stronger.
One answer has already emerged in our discussion of acid rain and ozone depletion: these scientists, and the think tanks that helped to promote their views, were implacably hostile to regulation. Regulation was the road to Socialism—the very thing the Cold War was fought to defeat. This hostility to regulation was part of a larger political ideology, stated explicitly in a document developed by a British organization called FOREST—Freedom Organisation for the Right to Enjoy Smoking Tobacco. And that was the ideology of the free market. It was free market fundamentalism.
But as the philosopher Isaiah Berlin sagely pointed out, liberty for wolves means death to lambs. Our society has always understood that freedoms are never absolute. This is what we mean by the rule of law. No one gets to do just whatever he feels like doing, whenever he feels like doing it. I don’t have the right to yell fire in a crowded theater; your right to throw a punch ends at my nose. All freedoms have their limits, and none more obviously than the freedom to kill other people, either directly with guns and knives, or indirectly with dangerous goods.
The biggest hazard of them all—one that could truly affect the entire planet—was just at that moment coming to public attention: global warming. Global warming would become the mother of all environmental issues, because it struck at the very root of economic activity: the use of energy. So perhaps not surprisingly, the same people who had questioned acid rain, doubted the ozone hole, and defended tobacco now attacked the scientific evidence of global warming.
CHAPTER 6 – THE DENIAL OF GLOBAL WARMING
Many Americans have the impression that global warming is something that scientists have only recently realized was important.
As early as 1995, the leading international organization on climate, the Intergovernmental Panel on Climate Change (IPCC), had concluded that human activities were affecting global climate. By 2001, IPCC’s Third Assessment Report stated that the evidence was strong and getting stronger, and in 2007, the Fourth Assessment called global warming “unequivocal.” Major scientific organizations and prominent scientists around the globe have repeatedly ratified the IPCC conclusion. Today, all but a tiny handful of climate scientists are convinced that Earth’s climate is heating up, and that human activities are the dominant cause. Yet many Americans remain skeptical.
The doubts and confusion of the American people are particularly peculiar when put into historical perspective, for scientific research on carbon dioxide and climate has been going on for 150 years. In the mid-nineteenth century, Irish experimentalist John Tyndall first established that CO2 is a greenhouse gas—meaning that it traps heat and keeps it from escaping to outer space. He understood this as a fact about our planet, with no particular social or political implications. This changed in the early twentieth century, when Swedish geochemist Svante Arrhenius realized that CO2 released to the atmosphere by burning fossil fuels could alter the Earth’s climate, and British engineer Guy Callendar compiled the first empirical evidence that the “greenhouse effect” might already be detectable. In the 1960s, American scientists started to warn our political leaders that this could be a real problem, and at least some of them—including Lyndon Johnson—heard the message. Yet they failed to act on it.
There are many reasons why the United States has failed to act on global warming, but at least one is the confusion raised by Bill Nierenberg, Fred Seitz, and Fred Singer.
Yet, while CO2 didn’t get much attention in the 1970s, climate did, as drought-related famines in Africa and Asia drew attention to the vulnerability of world food supplies. The Soviet Union had a series of crop failures that forced the humiliated nation to buy grain on the world market, and six African nations in the Sahel (the semi-arid region south of the Sahara) suffered a devastating drought that continued through much of the 1970s. These famines didn’t just hurt poor Africans and Asians; they also caused skyrocketing food prices worldwide.
One of the founders of modern numerical atmospheric modeling, and perhaps the most revered meteorologist in America, Charney assembled a panel of eight other scientists at the Academy’s summer study facility in Woods Hole, Massachusetts. Charney also decided to go a bit beyond reviewing what the Jasons had done, inviting two leading climate modelers—Syukuro Manabe from the Geophysical Fluid Dynamics Laboratory and James E. Hansen at the Goddard Institute for Space Studies—to present the results of their new three-dimensional climate models. These were the state of the art—with a lot more detail and complexity than the Jason model—yet their results were basically the same. The key question in climate modeling is “sensitivity”—how sensitive the climate is to changing levels of CO2. If you double, triple, or even quadruple CO2, what average global temperature change would you expect? The state-of-the-art answer, for the convenient case of doubling CO2, was “near 3°C with a probable error of ±1.5°C.” That meant that total warming might be as little as 1.5°C or as much as 4.5°C, but either way, there was warming, and the most likely value was about 3°C.
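(A quick aside, not from the book: the panel’s sensitivity figure extends to the “triple, or even quadruple” cases in the passage above if one assumes the standard approximation that warming grows with the logarithm of CO2 concentration. The sketch below is a minimal illustration under that assumption, using the panel’s central estimate as input.)

```python
# Minimal sketch: equilibrium warming under an assumed logarithmic CO2 forcing.
# S is the warming for doubled CO2, taken as the Charney panel's central estimate.
import math

S = 3.0  # degrees C per doubling of CO2 (central estimate; stated range 1.5-4.5)

for multiple in (2, 3, 4):
    warming = S * math.log2(multiple)
    print(f"{multiple}x CO2 -> ~{warming:.1f} C at equilibrium")
# 2x -> 3.0 C, 3x -> ~4.8 C, 4x -> 6.0 C (illustrative numbers only)
```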
There were, however, natural processes that might act as a brake on warming. The panel spent some time thinking about such “negative feedbacks”, but concluded they wouldn’t prevent a substantial warming. “We have examined with care all known negative feedback mechanisms, such as increase in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusions that there will be appreciable warming.” The devil was not in the details. It was in the main story. CO2 was a greenhouse gas. It trapped heat. So if you increased CO2, the Earth would warm up. It wasn’t quite that simple—clouds, winds, and ocean circulation did complicate matters—but those complications were “second-order effects”—things that make a difference in the second decimal place, but not the first. The report concluded, “If carbon dioxide continues to increase, the study group finds no reason to doubt that climate changes will result and no reason to believe that these changes will be negligible.”
Scientists use the word “sink” to describe processes that remove components from natural systems; the oceans are almost literally a heat sink, as heat in effect sinks to the bottom of the sea. The available evidence suggested that ocean mixing was sufficient to delay the Earth’s atmospheric warming for several decades. Greenhouse gases would start to alter the atmosphere immediately—they already had—but it would take decades before the effects would be pronounced enough for people to really see and feel. This had very serious consequences: it meant that you might not be able to prove that warming was under way, even though it really was, and by the time you could prove it, it would be too late to stop it.
Most National Academy reports are written collectively, reviewed by all the committee members, and then reviewed again by outside reviewers. Changes are made by the authors of the various sections and by the chairperson, and the report is accepted and signed by all the authors. An Executive Summary, or synthesis, sometimes written by the chairperson, sometimes by Academy staff, is also reviewed to ensure that it accurately reflects the contents of the study. That didn’t happen here. The Carbon Dioxide Assessment Committee—chaired by Bill Nierenberg—could not agree on an integrated assessment, so they settled for chapters that were individually authored and signed. The result, Changing Climate: Report of the Carbon Dioxide Assessment Committee, was really two reports—five chapters detailing the likelihood of anthropogenic climate change written by natural scientists, and two chapters on emissions and climate impacts by economists—which presented very different impressions of the problem. The synthesis sided with the economists, not the natural scientists.
The physical scientists allowed that many details were unclear—more research was needed—but they broadly agreed that the issue was very serious. When the chapters were boiled down to their essence, the overall conclusion was the same as before: CO2 had increased due to human activities, CO2 will continue to increase unless changes are made, and these increases will affect weather, agriculture, and ecosystems. None of the physical scientists suggested that accumulating CO2 was not a problem, or that we should simply wait and see.
So Nierenberg’s committee had produced a report with two quite different views: the physical scientists viewed accumulating CO2 as a serious problem; the economists argued that it wasn’t. And the latter view framed the report—providing its first and last chapters. A fair synthesis might have laid out the conflicting views and tried to reconcile them or at least account for the differences. But this synthesis didn’t. It followed the position advocated by Nordhaus and Schelling. It did not disagree with the scientific facts as laid out by Charney, the Jasons, and all the other physical scientists who had looked at the question, but it rejected the interpretation of those facts as a problem. “Viewed in terms of energy, global pollution, and worldwide environmental damage, the ‘CO2 problem’ appears intractable,” the synthesis explained, but “viewed as a problem of changes in local environmental factors—rainfall, river flow, sea level—the myriad of individual incremental problems take their place among the other stresses to which nations and individuals adapt.”
The fact is, historical mass migrations had been accompanied by massive suffering, and typically people moved under duress and threat of violence. So Nierenberg’s cavalier tone, and suggestion that these migrations were essentially benign, flew in the face of historical evidence. At least one reviewer recognized this. Alvin Weinberg, a physicist who had led the Oak Ridge National Laboratory for nearly twenty years, wrote a scathing eight-page critique. Weinberg was one of the first physicists to recognize the potential severity of global warming, arguing in 1974 that climate impacts might limit our use of fossil fuels before they were even close to running out. This perspective meshed with his advocacy of nuclear power, which he believed was the only energy source that could enable better living conditions for all humanity, an opinion he and Nierenberg shared. But Weinberg was outraged by what he read in Nierenberg’s report.
The report was “so seriously flawed in its underlying analysis and in its conclusions,” Weinberg wrote, that he hardly knew where to begin. The report flew in the face of virtually every other scientific analysis of the issue, yet presented almost no evidence to support its radical recommendation to do nothing.
Weinberg wasn’t alone in realizing that the claims made in the synthesis were not supported by the analysis presented in the body of the report. Two other reviewers made the same point, although with less passion. Yet these reviewers were also ignored. How was it possible for the reviewers’ comments to be ignored, and for a report to be issued in which the synthesis was at odds with the report it claimed to synthesize and in which major claims were unsupported by evidence? One senior scientist many years later answered this way: “Academy review was much more lax in those days.” But why didn’t anyone object after the report was released? This same scientist: “We knew it was garbage so we just ignored it.”
But the Nierenberg report didn’t go out with the morning trash. It was used by the White House to counter scientific work being done by the Environmental Protection Agency. The EPA prepared two reports of its own, both of which concluded that global warming would be serious, and that the nation should take immediate action to reduce coal use. When the EPA reports came out, White House Science Advisor George Keyworth used Nierenberg’s report to refute them. In his monthly report for October prepared for Ed Meese, Keyworth wrote, “The Science Advisor has discredited the EPA reports … and cited the NAS report as the best current assessment of the CO2 issue. The press seems to have discounted the EPA alarmism and has taken the conservative NAS position as the wisest.”
Keyworth was right. The press would indeed take the “conservative” position. A New York Times reporter put it this way: “The Academy found that since there is no politically or economically realistic way of heading off the greenhouse effect, strategies must be prepared to adapt to a ‘high temperature world.’ ” But the Academy hadn’t found that; the committee had asserted it. And it wasn’t the Academy; it was Bill Nierenberg and a handful of economists.
Nierenberg’s CO2 and climate report pioneered all the major themes behind later efforts to block greenhouse gas regulation, save one. Nierenberg didn’t deny the legitimacy of climate science. He simply ignored it in favor of the claims made by economists: that treating symptoms rather than causes would be less expensive, that new technology would solve the problems that might appear so long as government didn’t interfere, and that if technology couldn’t solve all the problems, we could just migrate. In the two decades to come, these claims would be heard again and again.
Bert Bolin, the man who had first warned about acid rain in Europe, thought that Hansen’s temperature data hadn’t been “scrutinized well enough,” and accepted the task. He divided the panel into three working groups. The first would produce a report reflecting the state of climate science. The second would assess the potential environmental and socioeconomic impacts. The third would formulate a set of possible responses. The scientists set themselves a deadline of 1990 for their first assessment: a very short time given their intent to involve more than three hundred scientists from twenty-five nations.
What Hansen and his group had done was to explore the role of various “forcings”—the different causes of climate change. One was greenhouse gases, a second was volcanoes, and the third was the Sun. Hansen’s team had done what scientists are supposed to do—objectively considered all the known possible causes. Then they asked, What cause or combination of causes best explains the observations? The answer was all of the above. “CO2 + volcanoes + Sun” fit the observational record best. The Sun did make a difference, but greenhouse gases did, too.

Fig. 5. Global temperature trend obtained from climate model with sensitivity 2.8°C for doubled CO2. The results in (a) are based on a 100-m mixed-layer ocean for heat capacity; those in (b) include diffusion of heat into the thermocline to 1000 m. The forcings by CO2, volcanoes, and the Sun are based on Broecker (25), Lamb (27), and Hoyt (48). Mean ΔT is zero for observations and model.
There was an even larger problem with the Marshall analysis that climate modeler Stephen Schneider pointed out. If Jastrow and company were right that the climate was extremely sensitive to small changes in solar output, then it meant that the climate would also be extremely sensitive to small changes in greenhouse gases. Schneider argued:
If only a few tenths of a percent change in solar energy were responsible for the [observed] 0.5°C long-term trend in climate over the past century, then this would suggest a planet that is relatively sensitive to small energy inputs. The Marshall Institute simply can’t have it both ways: they can’t argue on the one hand that small changes in solar energy output can cause large temperature changes, but that comparable changes in the energy input from greenhouse gases will not also produce comparably large signals. Either the system is sensitive to large-scale radiative forcing or it is not.
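(Schneider’s “can’t have it both ways” point can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative assumptions, not figures from the book: the solar forcing value is a hypothetical stand-in for “a few tenths of a percent” change, and the ~3.7 W/m² figure for doubled CO2 is a standard estimate.)

```python
# Minimal sketch of the symmetry argument: whatever sensitivity is needed for the
# Sun to explain the observed warming also applies to greenhouse-gas forcing.

observed_warming_C = 0.5          # rough twentieth-century trend cited in the text
assumed_solar_forcing_Wm2 = 0.3   # hypothetical small solar forcing (assumption)
ghg_forcing_2xCO2_Wm2 = 3.7       # standard estimate for doubled CO2 (assumption)

# Linear sensitivity implied by attributing the observed warming to the Sun alone
sensitivity = observed_warming_C / assumed_solar_forcing_Wm2   # degrees C per W/m^2

# The same sensitivity applied to greenhouse forcing
implied_ghg_warming = sensitivity * ghg_forcing_2xCO2_Wm2
print(f"Implied warming for doubled CO2: {implied_ghg_warming:.1f} C")
# ~6 C with these illustrative numbers: a climate sensitive enough for the Sun to
# do the work cannot simultaneously be insensitive to greenhouse gases.
```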
Meanwhile, the Cato Institute distributed an uncorrected version of the graph printed in the original Marshall Institute white paper—the one that showed only the top part of Hansen’s graph. Given all the efforts the climate scientists had made to set the record straight, it’s not plausible that this was simply a mistake.
Moreover, they were proud of the results. In a February 1991 letter to the vice president of the American Petroleum Institute, Robert Jastrow crowed, “It is generally considered in the scientific community that the Marshall report was responsible for the Administration’s opposition to carbon taxes and restrictions on fossil fuel consumption.” Quoting New Scientist magazine, he reported that the Marshall Institute “is still the controlling influence in the White House.”
While Singer was trying to get Revelle to review the drafts, he published an article on his own in the journal Environmental Science and Technology, with essentially the same title, “What To Do about Greenhouse Warming.” Singer echoed the Marshall Institute’s arguments, implying that scientists just didn’t know what had caused the warming of the twentieth century. “There is major uncertainty and disagreement about whether this increase [in CO2] has caused a change in the climate during the past 100 years; observations simply don’t fit the theory,” he insisted. Of course there was disagreement—the Marshall Institute had generated it—but not among climate scientists. The IPCC had clearly stated that unrestricted fossil fuel use would produce a “rate of increase of global mean temperature during the next century of about 0.3°C per decade; this is greater than that seen over the past 10,000 years.” Singer rejected this, asserting instead that “the scientific base for [greenhouse warming] includes some facts, lots of uncertainty, and just plain ignorance.” He concluded emphatically, “The scientific base for a greenhouse warming is too uncertain to justify drastic action at this time.” This, of course, was precisely what he had said about acid rain. And ozone depletion. It was easy to see why many working scientists didn’t like Fred Singer. He routinely rejected their conclusions, suggesting that he knew better than they did.
In February 1991, Singer visited Scripps. In one multihour meeting, Singer and Revelle went over the paper, which was already set in galleys. There was at least one point of contention between the two, and it was a big one: what was the climate sensitivity to carbon dioxide? The galleys that Singer gave to Revelle to review asserted, “Assume what we regard as the most likely outcome: A modest average warming in the next century of less than one degree Celsius, well below the normal year to year variation.”
This was completely inconsistent with what the Jasons had said, what Charney’s panel had said, and what the IPCC had said. No one in the climate community was asserting that the climate change from increased greenhouse gases would be no different from normal year-to-year variation. In fact, the IPCC had said just the opposite. Revelle apparently crossed out “less than one degree” and wrote in the margin next to it: “one to three degrees.”
This might not seem like a big difference, but it was. One to three degrees fell within the mainstream view, and clearly outside the range of the natural climate variability of the past few hundred years. This was the key point: would warming lead us into a new man-made climate regime, unlike anything we had seen before? Revelle (and thousands of climate scientists) said yes; Singer said no.
Singer finessed the disagreement by dropping numbers altogether. The sentence as published read, “Assume what we regard as the most likely outcome: A modest average warming in the next century well below the normal year to year variation.” The paper contradicted what Revelle had written in the margin, and asserted that there was no likelihood of significant warming. What little change would occur would be not noticeably different from natural variation. Singer had prevailed, and it looked as if Revelle had agreed.
The paper was published later that year in Cosmos.
Lancaster later recalled that Revelle was embarrassed when the Cosmos paper was published. But Cosmos wasn’t a scientific journal—it wasn’t peer reviewed—and it didn’t have a very high circulation. Few scientists would have seen the article, much less paid much attention to it, so even had he been in good health, Revelle might well have just let it drop. Perhaps he would have thought it was “garbage” and just ignored it.
Lancaster and his thesis advisor, Dave Keeling, wrote a letter to the New Republic challenging the Easterbrook article, but it was never published. For a second time, scientists close to Revelle were attempting to refute the misrepresentation, but their attempts to set the record straight were rejected by the journals that had published the misrepresentation in the first place.
As Lancaster continued to publicly dispute Revelle’s coauthorship of the paper, Singer filed a libel lawsuit against him. Lancaster had little money and fewer resources, but he tried to fight Singer, insisting that the facts were on his side. The only other person who could corroborate Lancaster’s account, Revelle’s secretary, Christa Beran, did. It wasn’t enough. Singer’s pockets were deeper than Lancaster’s, and in 1994, Lancaster accepted a settlement that forced him to retract his claim that Revelle hadn’t really been a coauthor, put him under a ten-year gag order, and sealed all the court documents.
Despite the best efforts of Jastrow, Seitz, Nierenberg, and Singer to create doubt, the scientific debate over the detection of global warming was reaching closure. By 1992, Hansen’s 1988 claim that warming was detectable no longer seemed bold. It seemed prescient. The only remaining issue really was whether we could prove that the warming was caused by human activities. As scientists had acknowledged many times, there are many causes of climate change, so the key question was how to sort out these various causes. Now that warming had been detected, could it be definitively attributed to humans?
“Detection and attribution studies” work by considering how warming caused by greenhouse gases might be different from warming caused by the Sun—or other natural forces. They use statistical tests to compare climate model output with real-life data. These studies were the most threatening to the so-called skeptics because they spoke directly to the issue of causality: to the social question of whether or not humans were to blame, and to the regulatory question of whether or not greenhouse gases need to be controlled. As these studies began to appear in the peer-reviewed literature, it’s not surprising that Singer and his colleagues tried to undermine them. Having taken on the patriarch of climate change research, they went after one of its rising young stars: Benjamin Santer of the Program for Climate Model Diagnosis and Intercomparison at the Lawrence Livermore National Laboratory.
Santer presented the findings in chapter 8 on November 27, 1995, the first day of the plenary session (and the same day Nierenberg proclaimed the issue politically dead in his letter to Seitz). The chapter was immediately opposed by the Saudi Arabian and Kuwaiti delegates. In the words of the New York Times’s reporter, these oil-rich states “made common cause with American industry lobbyists to try to weaken the conclusions emerging from Chapter 8.” The lone Kenyan delegate, Santer remembers, “thought there should not be a detection and attribution chapter at all.” Then the chairman of a fossil fuel industry group, the Global Climate Coalition, and automobile industry representatives monopolized the rest of the afternoon. Finally the IPCC chairman, Britain’s Sir John Houghton, closed the discussion and appointed an ad hoc drafting group to work out the disagreements and to address all of the late government comments. The working group included the lead authors, and delegates from the United States, Britain, Australia, Canada, New Zealand, the Netherlands, Saudi Arabia, Kuwait, and the lone Kenyan.
If warming were caused by the Sun, then you’d expect the whole atmosphere to warm up. If warming were caused by greenhouse gases, however, the effect on the atmosphere would be different, and distinctive. Greenhouse gases trap heat in the lower atmosphere (so it warms up), while the reduced heat flow into the upper atmosphere causes it to cool.
A portion of the ad hoc group hammered out an acceptable language. Steve Schneider convinced the Kenyan that there really was a scientific basis for the chapter’s central conclusion that anthropogenic climate change had been detected. But the Saudis never sent a representative to the ad hoc sessions, and when Santer presented the revised draft, the Saudi head delegate protested all over again. A bit of a shouting match ensued, and Houghton had to intervene, effectively tabling the issue while the working group finished negotiating the Summary for Policymakers. There the entire issue boiled down to a single sentence, in fact a single adjective, drawn from Santer’s chapter: “The balance of evidence suggests that there is a [blank] human influence on global climate.”
What should the adjective be? Santer and Wigley wanted “appreciable.” This was unacceptable to the Saudi delegate, but it was too strong for Bert Bolin, too. One participant recalls the group trying about twenty-eight different words before Bolin suggested “discernible.” That clicked, and the outcome of the Madrid meeting was this sentence: “The balance of evidence suggests that there is a discernible human influence on global climate.” This line would be quoted repeatedly in the years to come.
Moreover, Singer was again creating a straw man. “Singer refers to the [Summary for Policymakers] as saying that global warming is ‘the greatest global challenge facing mankind,’” Wigley and his coauthors wrote. “We do not know the origin of this statement—it does not appear in any of the IPCC documents. Further, it is the sort of extreme statement that most involved with the IPCC would not support.”
Wigley was right. The IPCC had not described global warming as the “greatest global challenge facing mankind.” The words Singer attributed to the IPCC don’t appear in either the Working Group I Report or in its Summary for Policymakers. Singer was putting words into other people’s mouths—and then using those words to discredit them.
The IPCC had in fact bent over backward not to use alarmist terms.
We know how the [Wall Street] Journal edited the letters because Seitz’s attack and the Journal’s weakening of the response so offended the officials of the American Meteorological Society and of the University Corporation for Atmospheric Research that their boards agreed to publish an “Open Letter to Ben Santer” in the Bulletin of the American Meteorological Society, where they republished the letters in their entirety, showing how the Journal had edited them. They voiced their support of Santer and the effort it had taken all the authors to put the report together, and categorically rejected Seitz’s attack as having “no place in the scientific debate about issues related to global change.” They began, finally, to realize what they were up against.
[There] appear[ed] to be a concerted and systematic effort by some individuals to undermine and discredit the scientific process that has led many scientists working on understanding climate to conclude that there is a very real possibility that humans are modifying Earth’s climate on a global scale. Rather than carrying out a legitimate scientific debate through the peer-reviewed literature, they are waging in the public media a vocal campaign against scientific results with which they disagree.
In her 1999 analysis, Myanna Lahsen pinned Singer’s efforts to “envelop the IPCC in an aura of secrecy and unaccountability” to a common American conservative rhetoric of political suppression. As we have seen in previous chapters, if anyone was meddling in the scientific assessment and peer review process, it was the political right wing, not the left. It wasn’t the Sierra Club that tried to pressure the National Academy of Sciences over the 1983 Carbon Dioxide Assessment; it was officials from the Department of Energy under Ronald Reagan. It wasn’t Environmental Defense that worked with Bill Nierenberg to alter the Executive Summary of the 1983 Acid Rain Peer Review Panel; it was the White House Office of Science and Technology Policy. And it was the Wall Street Journal spreading the attack on Santer and the IPCC, not Mother Jones.
We take it for granted that great individuals—Gandhi, Kennedy, Martin Luther King—can have great positive impacts on the world. But we are loath to believe the same about negative impacts—unless the individuals are obvious monsters like Hitler or Stalin. But small numbers of people can have large, negative impacts, especially if they are organized, determined, and have access to power.
Seitz, Jastrow, Nierenberg, and Singer had access to power—all the way to the White House—by virtue of their positions as physicists who had won the Cold War. They used this power to support their political agenda, even though it meant attacking science and their fellow scientists, evidently believing that their larger end justified their means.
Whatever the reasons and justifications of our protagonists, there’s another crucial element to our story. It’s how the mass media became complicit, as a wide spectrum of the media—not just obviously right-wing newspapers like the Washington Times, but mainstream outlets, too—felt obligated to treat these issues as scientific controversies. Journalists were constantly pressured to grant the professional deniers equal status—and equal time and newsprint space—and they did. Eugene Linden, once an environment reporter for Time magazine, commented in his book Winds of Change that “members of the media found themselves hounded by experts who conflated scientific diffidence with scientific uncertainty, and who wrote outraged letters to the editor when a report didn’t include their dissent.” Editors evidently succumbed to this pressure, and reporting on climate in the United States became biased toward the skeptics and deniers because of it.
We’ve noted how the notion of balance was enshrined in the Fairness Doctrine, and it may make sense for political news in a two-party system (although not in a multiparty system). But it doesn’t reflect the way science works. In an active scientific debate, there can be many sides. But once a scientific issue is closed, there’s only one “side.” Imagine providing “balance” to the issue of whether the Earth orbits the Sun, whether continents move, or whether DNA carries genetic information. These matters were long ago settled in scientists’ minds. Nobody can publish an article in a scientific journal claiming the Sun orbits the Earth, and for the same reason, you can’t publish an article in a peer-reviewed journal claiming there’s no global warming. Well-informed professional science journalists probably wouldn’t publish it either. But ordinary journalists repeatedly did.
In 2004, one of us showed that scientists had a consensus about the reality of global warming and its human causes—and had since the mid-1990s. Yet throughout this time period, the mass media presented global warming and its cause as a major debate. By coincidence, another study also published in 2004 analyzed media stories about global warming from 1988 to 2002. Max and Jules Boykoff found that “balanced” articles—ones that gave equal time to the majority view among climate scientists as well as to deniers of global warming—represented nearly 53 percent of media stories. Another 35 percent of articles presented the correct majority position among climate scientists, while still giving space to the deniers. The authors conclude that this “balanced” coverage is a form of “informational bias,” that the ideal of balance leads journalists to give minority views more credence than they deserve.
This divergence between the state of the science and how it was presented in the major media helped make it easy for our government to do nothing about global warming. Gus Speth had thought in 1988 that there was real momentum toward taking action. By the mid-1990s, that policy momentum had not just fizzled; it had evaporated. In July 1997, three months before the Kyoto Protocol was finalized, U.S. senators Robert Byrd and Charles Hagel introduced a resolution blocking its adoption. Byrd-Hagel passed the Senate by a vote of 97–0. Scientifically, global warming was an established fact. Politically, global warming was dead.
CHAPTER 7 – DENIAL RIDES AGAIN: THE REVISIONIST ATTACK ON RACHEL CARSON
Rachel Carson is an American hero—the courageous woman who in the early 1960s called our attention to the harms of indiscriminate pesticide use. In Silent Spring, a beautiful book about a dreadful topic, Carson explained how pesticides were accumulating in the food chain, damaging the natural environment, and threatening even the symbol of American freedom: the bald eagle. Although the pesticide industry tried to paint her as a hysterical female, her work was affirmed by the President’s Science Advisory Committee, and in 1972, the EPA concluded that the scientific evidence was sufficient to warrant the banning of the pesticide DDT in America.
Most historians, we included, consider this a success story. A serious problem was brought to public attention by an articulate spokesperson, and, acting on the advice of acknowledged experts, our government took appropriate action. Moreover, the banning of DDT, which took place under a Republican administration, had widespread public and bipartisan political support. The policy allowed for exceptions, including the sale of DDT to the World Health Organization for use in countries with endemic malaria, and for public health emergencies here at home. It was sensible policy, based on solid science.
Fast-forward to 2007. The Internet is flooded with the assertion that Carson was a mass murderer, worse than Hitler. Carson killed more people than the Nazis. She had blood on her hands, posthumously. Why? Because Silent Spring led to the banning of DDT, without which millions of Africans died of malaria. The Competitive Enterprise Institute—whom we encountered in previous chapters defending tobacco and doubting the reality of global warming—now tells us that “Rachel was wrong.” “Millions of people around the world suffer the painful and often deadly effects of malaria because one person sounded a false alarm,” their site asserts. “That person is Rachel Carson.”
Other conservative and Libertarian think tanks sound a similar cry. The American Enterprise Institute argues that DDT was “probably the single most valuable chemical ever synthesized to prevent disease,” but was unnecessarily banned because of hysteria generated by Carson’s influence. The Cato Institute tells us that DDT is making a comeback. And the Heartland Institute posts an article defending DDT by Bonner Cohen, the man who created EPA Watch for Philip Morris back in the mid-1990s. (Heartland also has extensive, continuing programs to challenge climate science.)
The stories we’ve told so far in this book involve the creation of doubt and the spread of disinformation by individuals and groups attempting to prevent regulation of tobacco, CFCs, pollution from coal-fired power plants, and greenhouse gases. They involve fighting facts that demonstrate the harms that these products and pollutants induce in order to stave off regulation. At first, the Carson case seems slightly different from these earlier ones, because by 2007 DDT had been banned in the United States for more than thirty years. This horse was long out of the barn, so why try to reopen a thirty-year-old debate?
Sometimes reopening an old debate can serve present purposes. In the 1950s, the tobacco industry realized that they could protect their product by casting doubt on the science and insisting the dangers of smoking were unproven. In the 1990s, they realized that if you could convince people that science in general was unreliable, then you didn’t have to argue the merits of any particular case, particularly one—like the defense of secondhand smoke—that had no scientific merit. In the demonizing of Rachel Carson, free marketeers realized that if you could convince people that an example of successful government regulation wasn’t, in fact, successful—that it was actually a mistake—you could strengthen the argument against regulation in general.
We’ve seen how some people have fought the facts about the hazards of tobacco, acid rain, ozone depletion, secondhand smoke, and global warming. Their denials seemed plausible, at least to some, because they involved matters that were still under scientific investigation, where many of the details were uncertain even if the big picture was becoming clear. But the construction of a revisionist history of DDT gives the game away, because it came so long after the science was settled, far too long to argue that scientists had not come to agreement, that there was still a real scientific debate. The game here, as before, was to defend an extreme free market ideology. But in this case, they didn’t just deny the facts of science. They denied the facts of history.
So Sri Lanka didn’t stop using DDT because of what the United States did, or for any other reason. DDT stopped working, but they kept using it anyway. We can surmise why: since DDT had appeared to work at first, officials were reluctant to give it up, even as malaria became resurgent. It took a long time for people to admit defeat—to accept that tiny mosquitoes were in their own way stronger than us. As a WHO committee concluded in 1976, “It is finally becoming acknowledged that resistance is probably the biggest single obstacle in the struggle against vector-borne disease and is mainly responsible for preventing successful malaria eradication in many countries.”
Resistance is never mentioned in Ray’s account, an especially notable omission given that she was a zoologist. In a particularly egregious example of the pot calling the kettle black, Ray accused both environmentalists and William Ruckelshaus of giving credibility to pseudoscience, by creating “an atmosphere in which scientific evidence can be pushed aside by emotion, hysteria, and political pressure.” But it was she, not Ruckelshaus, who was spreading hysteria.
Milloy’s current project is junkscience.com, but, as we saw in chapter 5, “junk science” was a term invented by the tobacco industry to discredit science it didn’t like. Junkscience.com was originally established in a partnership with the Cato Institute, which, after Milloy’s continued tobacco funding came to light, severed its ties.
The Competitive Enterprise Institute shares philosophical ground with the American Enterprise Institute, which promoted the work of the late fiction writer Michael Crichton. His 2004 novel, State of Fear, portrayed global warming as a liberal hoax meant to bring down Western capitalism. Crichton also took on the DDT issue, as one character in the novel insists, “Banning DDT killed more people than Hitler … It was so safe you could eat it.”
The “Rachel was wrong” chorus is echoed particularly loudly at the Heartland Institute, a group dedicated to “free-market solutions to social and economic problems.” Their Web site insists that “some one million African, Asian, and Latin American lives could be saved annually” had DDT not been banned by the U.S. Environmental Protection Agency.
The Heartland Institute is known among climate scientists for persistent questioning of climate science, for its promotion of “experts” who have done little, if any, peer-reviewed climate research, and for its sponsorship of a conference in New York City in 2008 alleging that the scientific community’s work on global warming is a fake. But Heartland’s activities are far more extensive, and reach back into the 1990s when they, too, were working with Philip Morris.
In 1997, Philip Morris paid $50,000 to the Heartland Institute to support its activities, but this was just the tip of the iceberg of a network of support to supposedly independent and nonpartisan think tanks. The stunning extent of Philip Morris’s reach is encapsulated in a ten-page document from 1997 listing policy payments that were made to various organizations. Besides the $50,000 to the Heartland Institute, there was $200,000 for TASSC, $125,000 for the Competitive Enterprise Institute, $100,000 for the American Enterprise Institute, and scores more. Payments were for as little as $1,000 or as much as $300,000, and many went to groups with no evident interest in the tobacco issue, such as the Ludwig von Mises Institute or Americans for Affordable Electricity. Numerous other documents attest to activities designed to undermine the Clinton health care reform plan. Often financial contributions were referred to in company documents as “philanthropy,” and because these organizations were all nonprofit and nonpartisan, the donations were all tax deductible.
The following image is the first page of this ten-page document listing the “policy” organizations to which the Philip Morris Corporation contributed. Note how nearly all of these were described as having a focus in either “Individual Liberties,” “Regulatory Issues,” or both, and how the Cato Institute, the American Enterprise Institute, and the Competitive Enterprise Institute—all of whom have questioned the scientific evidence of global warming—each received six-figure contributions.

Source: BN: 2078848138, Legacy Tobacco Documents Library
A recent academic study found that of the fifty-six “environmentally skeptical” books published in the 1990s, 92 percent were linked to these right-wing foundations (only thirteen were published in the 1980s, and 100 percent were linked to the foundations). Scientists have faced an ongoing misrepresentation of scientific evidence and historical facts that brands them as public enemies—even mass murderers—on the basis of phony facts.
There is a deep irony here. One of the great heroes of the anti-Communist political right wing—indeed one of the clearest, most reasoned voices against the risks of oppressive government, in general—was George Orwell, whose famous 1984 portrayed a government that manufactured fake histories to support its political program. Orwell coined the term “memory hole” to denote a system that destroyed inconvenient facts, and “Newspeak” for a language designed to constrain thought within politically acceptable bounds.
All of us who were children in the Cold War learned in school how the Soviet Union routinely engaged in historical cleansing, erasing real events and real people from their official histories and even official photographs. The right-wing defenders of American liberty have now done the same. The painstaking work of scientists, the reasoned deliberations of the President’s Science Advisory Committee, and the bipartisan American agreement to ban DDT have been flushed down the memory hole, along with the well-documented and easily found (but extremely inconvenient) fact that the most important reason that DDT failed to eliminate malaria was because insects evolved. That is the truth—a truth that those with blind faith in free markets and blind trust in technology simply refuse to see.
The rhetoric of “sound science” is similarly Orwellian. Real science—done by scientists and published in scientific journals—is dismissed as “junk,” while misrepresentations and inventions are offered in its place. Orwell’s Newspeak contained no science at all, as the very concept of science had been erased from his dystopia. And not surprisingly, for if science is about studying the world as it actually is—rather than as we wish it to be—then science will always have the potential to unsettle the status quo. As an independent source of authority and knowledge, science has always had the capacity to challenge ruling powers’ ability to control people by controlling their beliefs. Indeed, it has the power to challenge anyone who wishes to preserve, protect, or defend the status quo.
Lately science has shown us that contemporary industrial civilization is not sustainable. Maintaining our standard of living will require finding new ways to produce our energy and less ecologically damaging ways to produce our food. Science has shown us that Rachel Carson was not wrong.
To acknowledge this was to acknowledge the soft underbelly of free market capitalism: that free enterprise can bring real costs—profound costs—that the free market does not reflect. Economists have a term for these costs—a less reassuring one than Friedman’s “neighborhood effects.” They are “negative externalities”: negative because they aren’t beneficial and external because they fall outside the market system. Those who find this hard to accept attack the messenger, which is science.
Accepting that by-products of industrial civilization were irreparably damaging the global environment was to accept the reality of market failure. It was to acknowledge the limits of free market capitalism.
Orwell understood that those in power will always seek to control history, because whoever controls the past controls the present.
Why did this group of Cold Warriors turn against the very science to which they had previously dedicated their lives? Because they felt—as did Lt. General Daniel O. Graham (one of the original members of Team B and chief advocate of weapons in space) when he invoked the preamble to the U.S. Constitution—they were working to “secure the blessings of liberty.” If science was being used against those blessings—in ways that challenged the freedom of free enterprise—then they would fight it as they would fight any enemy. For indeed, science was starting to show that certain kinds of liberties are not sustainable—like the liberty to pollute. Science was showing that Isaiah Berlin was right: liberty for wolves does indeed mean death to lambs.
CONCLUSION – OF FREE SPEECH AND FREE MARKETS
Our Founding Fathers placed freedom of the press in the first amendment of the U.S. Constitution, because democracy requires it. Citizens need information to make decisions, and a free press is crucial to its flow. Two centuries later the Fairness Doctrine was established in law, and although the legal doctrine was dismantled in the Reagan years, the notion of “equal time” remains enshrined in Americans’ sense of justice and fair play.
But not every “side” is right or true; opinions sometimes express ill-informed beliefs, not reliable knowledge. As we’ve seen throughout this book, some “sides” represent deliberate disinformation spread by well-organized and well-funded vested interests, or ideologically driven denial of the facts. Even honest people with good intentions may be confused or mistaken about an issue. When every voice is given equal time—and equal weight—the result does not necessarily serve us well. Writing in Democracy in America long ago, Alexis de Tocqueville lamented the cacophony that passed for serious debate in the young republic: “A confused clamor rises on every side, and a thousand voices are heard at once.”
That was two hundred years ago; today the problem is much worse. With the rise of radio, television, and now the Internet, it sometimes seems that anyone can have their opinion heard, quoted, and repeated, whether it is true or false, sensible or ridiculous, fair-minded or malicious. The Internet has created an information hall of mirrors, where any claim, no matter how preposterous, can be multiplied indefinitely. And on the Internet, disinformation never dies. “Electronic barbarism” one commentator has called it—an environment that is all sail and no anchor. Pluralism run amok.
Many journalists we have spoken with have been surprised at our revelations, and in some cases even skeptical, until we showed them the documents. The degree of research we have done for this book cannot be done in time for a daily or weekly deadline, so it is understandable that most journalists would not know what we have discovered in five years of research. But the pressures on contemporary journalism cannot be the whole story, because we have seen how, at least in the early stages of this story, media leaders were openly courted by the tobacco industry. Arthur Hays Sulzberger, Edward R. Murrow, and William Randolph Hearst Jr. were hardly unsophisticated people, yet they evidently accepted the argument that the tobacco industry’s view of the harms tobacco generates merited the same consideration as the scientific community’s view. That is rather hard to explain, except to suppose that journalists, like the rest of us, are reluctant to accept information we’d rather was not true. Edward R. Murrow no doubt hoped that tobacco smoking wouldn’t kill him. And who among us wouldn’t prefer a world where acid rain was no big deal, the ozone hole didn’t exist, and global warming didn’t matter? Such a world would be far more comforting than the one we actually live in. Faced with challenging situations, we welcome reassurance that everything is going to be all right. We may even prefer comforting lies to sobering facts. And the facts denied by our protagonists were more than sobering. They were downright dreadful.
Fred Seitz circulated information soliciting signatures on a petition “refuting” global warming. He did this in concert with a chemist named Arthur Robinson, who composed a lengthy piece challenging mainstream climate science, formatted to look like a reprint from the Proceedings of the National Academy of Sciences. The “article”—never published in a scientific journal, but summarized in the Wall Street Journal—repeated a wide range of debunked claims, including the assertion that there was no warming at all. It was mailed to thousands of American scientists, with a cover letter signed by Seitz inviting the recipients to sign a petition against the Kyoto Protocol.
Seitz’s letter emphasized his connection with the National Academy of Sciences, giving the impression that the whole thing—the letter, the article, and the petition—was sanctioned by the Academy. Between his mail-in card and a Web site, he gained about fifteen thousand signatures, although since there was no verification process there was no way to determine if these signatures were real, or if real, whether they were actually from scientists. In a highly unusual move, the National Academy held a press conference to disclaim the mailing and distance itself from its former president. Still, many media outlets reported on the petition as if it were evidence of genuine disagreement in the scientific community, reinforced, perhaps, by Fred Singer’s celebration of it in the Washington Times the very same day the Academy rejected it.
The “Petition Project” continues today. Fred Seitz is dead, but his letter is alive and well on the Internet, and the project’s Web site claims that its signatories have reached thirty thousand.
Many skeptical claims about global warming have been published in the Journal of American Physicians and Surgeons, which is associated with the Oregon Institute of Science and Medicine, which sponsored the anti–global warming petition.
The link that unites the tobacco industry, conservative think tanks, and the scientists in our story is the defense of the free market.
When the Cold War ended, these men looked for a new great threat. They found it in environmentalism. Environmentalists, they implied, were “watermelons”: green on the outside, red on the inside. Each of the environmental threats we’ve discussed in this book was a market failure, a domain in which the free market had created serious “neighborhood” effects. But despite the friendly sound of this term, these effects were potentially deadly—and global in reach. To address them, governments would have to step in with regulations, in some cases very significant ones, to remedy the market failure. And this was precisely what these men most feared and loathed, for they viewed regulation as the slippery slope to Socialism, a form of creeping Communism.
Moreover, the idea that free markets produce optimum allocation of resources depends on participants having perfect information. But one of several ironies of our story is that our protagonists did everything in their power to ensure that the American people did not have good (much less perfect) information on crucial issues.
Many honest people who actually run businesses welcome reasonable government regulation with rules that prevent bad behavior—like unfair business practices or polluting the environment—so long as the rules are clear and fair, and create a stable, level playing field.
Global warming became the most charged of all environmental debates, because it is global, and it implicates everything and everyone.
Nicholas Stern, formerly chief economist and senior vice president of the World Bank from 2000 to 2003, and principal author of the Stern Review of the Economics of Climate Change (commissioned by U.K. prime minister Gordon Brown), has called climate change “the greatest and widest-ranging market failure ever seen.” No wonder the defenders of free market capitalism are worried.
Which leads to the second great irony of our story. Men like Bill Nierenberg were proud of the role they had played in defending liberty during the Cold War and understood their latter-day activities as an extension of that role. They feared that overreaction to environmental problems would provide the justification for heavy-handed government intervention in the marketplace and intrusion in our personal lives. That was not an unreasonable anxiety, but by denying the scientific evidence—and contributing to a strategy of delay—these men helped to create the very situation they most dreaded.
In Denmark, a struggle erupted over the book, and charges of scientific dishonesty were leveled against Lomborg. Ultimately, the Danish Ministry of Science, Technology, and Innovation ruled that Lomborg couldn’t be guilty of scientific dishonesty, because it had not been shown that The Skeptical Environmentalist was a work of science!
The first problem is their presumption that these advances will necessarily continue. If we have indeed reached a tipping point, as many leading scientists fear, then the past may not be a guide to the future. Past environmental changes were mostly local and reversible. Today, human activities have a global reach. We are changing our planet in radical ways, and we may not have the wherewithal to respond to the challenges ahead, at least not without enduring a good deal of discomfort and dislocation. Moreover, some of these changes—like sea level rise and the melting of Arctic ice—are almost certainly irreversible.
The second problem with Cornucopianism is its assertion that past advances have been the result—and could only have been the result—of free market systems. This assertion is demonstrably false.
Why do they hold this belief when history shows it to be untrue? Again we turn to Milton Friedman’s Capitalism and Freedom, where he claimed that “the great advances of civilization, in industry or agriculture, have never come from centralized government.” To historians of technology, this would be laughable had it not been written (five years after Sputnik) by one of the most influential economists of the second half of the twentieth century.
Markets spread the technology of machine tools throughout the world, but markets did not create it. Centralized government, in the form of the U.S. Army, was the inventor of the modern machine age.
EPILOGUE – A NEW VIEW OF SCIENCE
Imagine a gigantic banquet. Hundreds of millions of people come to eat. They eat and drink to their hearts’ content—eating food that is better and more abundant than at the finest tables in ancient Athens or Rome, or even in the palaces of medieval Europe. Then, one day, a man arrives, wearing a white dinner jacket. He says he is holding the bill. Not surprisingly, the diners are in shock. Some begin to deny that this is their bill. Others deny that there even is a bill. Still others deny that they partook of the meal. One diner suggests that the man is not really a waiter, but is only trying to get attention for himself or to raise money for his own projects. Finally, the group concludes that if they simply ignore the waiter, he will go away.
This is where we stand today on the subject of global warming. For the past 150 years, industrial civilization has been dining on the energy stored in fossil fuels, and the bill has come due. Yet, we have sat around the dinner table denying that it is our bill, and doubting the credibility of the man who delivered it. Economists have often noted that “There is no such thing as a free lunch.” They are right. We have experienced prosperity unmatched in human history. We have feasted to our hearts’ content. But the lunch was not free.
Uncertainty favors the status quo. As Giere and his colleagues put it, “Is it any wonder that those who benefit the most from continuing to do nothing emphasize the controversy among scientists and the need for continued research?”
For many of us, the word “science” does not actually conjure visions of science; it conjures visions of scientists. We think of the great men of science—Galileo, Newton, Darwin, Einstein—and imagine them as heroic individuals, often misunderstood, who had to fight against conventional wisdom or institutions to gain appreciation for their radical new ideas. To be sure, brilliant individuals are an important part of the history of science; men like Newton and Darwin deserve the place in history that they hold. But if you asked a historian of science, “When did modern science begin?” she would not cite the birth of Galileo or Copernicus. Most likely, she would discuss the origins of scientific institutions.
C. P. Snow once argued that foolish faith in authority is the enemy of truth. But so is a foolish cynicism.
In writing this book, we have plowed through hundreds of thousands of pages of documents. As historians, over the course of our careers, we have plowed through millions more. Often we find that, in the end, it is best to let the witnesses to events speak for themselves. So we close with the comments of S. J. Green, director of research for British American Tobacco, who decided, finally, that what his industry had done was wrong, not just morally, but also intellectually: “A demand for scientific proof is always a formula for inaction and delay, and usually the first reaction of the guilty. The proper basis for such decisions is, of course, quite simply that which is reasonable in the circumstances.”
Or as Bill Nierenberg put it in a candid moment, “You just know in your heart that you can’t throw 25 million tons a year of sulfates into the Northeast and not expect some … consequences.”
We agree.