The System of All Things?

For a video promo advertising new online courses offered in the History Department at McMaster University, I referred to my own class, HIST 2EE3: Science and Technology in World History, as “Grand Central Station.”

Below, a brief reflection on what I meant and a brief pitch for the course, which remains the flagship for my offerings at the intersections of the histories of science, technology, and environment at McMaster University.

Science, Conspiracy, & Journalism: A Cold War Anecdote

I’m currently teaching a third-year course on the history of truth. The course examines the historical mechanisms that contributed to the social production and consumption of knowledge over time. It concerns itself with the construction of “matters of fact,” and with how scientific praxis emerged as the primary mode of knowledge authority in the modern world. It explores who could practice science, how the scientific method came to be ingrained as a means of forging consensus among scientists, and how scientists’ findings came to be adopted as truths by a more general public. More significantly, the course proposes to examine how these activities changed or evolved over time.

We read Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump and talked about Boyle’s literary technology and virtual witnessing as pillars of the new experimental science. Recently, I lectured on Robert Kohler’s Lords of the Fly as a corollary investigation of the experimental life, and I stressed Kohler’s discussion of the moral economy. Collaboration, trustworthiness, fraud, failure, and metaphors in science have featured throughout lectures and discussions. But I have had little opportunity to share anecdotes. Anecdotes can be fun.

Next week, I will be running a small module on science journalism in the twentieth century. I’m especially interested in themes surrounding science literacy and the media’s role as broker in communicating scientific information—translating it for a lay audience. In his classic essay, “Roots of the New Conservation Movement,” in Perspectives in American History 6 (1972), Donald Fleming talked about politico-scientists—scientists who were politically engaged (Barry Commoner, for one)—as part of a specialized fifth estate intent on informing the public. All this during a politically tense period in American history.

As a topic, it reminded me of a story Barry Commoner relayed to me during the oral histories I conducted with him. Let me start with the report written by William Laurence (the Pulitzer Prize-winning journalist—and one of our in-class subjects), which appeared in The New York Times on December 29, 1954.

Headline: “Scientist Decries Curb on Condon.”

In 1954, E. U. Condon was an elder statesman of American physics, a notable quantum physicist from the 1920s, and the outgoing President of the American Association for the Advancement of Science. After World War II, he had also suffered serious scrutiny from a subcommittee of the House Un-American Activities Committee. Condon had been particularly critical of imposed secrecy in science, and strongly advocated continued international scientific cooperation. On 1 March 1948, the subcommittee described Condon—at the time, the director of the National Bureau of Standards—as “one of the weakest links in our atomic security.” Condon was by no means a radical thinker, but he did believe that science only functioned properly in an open society. His AAAS election (in 1951) had been somewhat controversial, and by 1954 the label of “Communist” or “security risk” constituted a black mark. But turn your attention to the final paragraph: “Dr. Condon received an ovation as he rose to address his colleagues.”

Warren Weaver was a strong supporter of Condon’s (as his remarks above might attest). The young Barry Commoner was as well. The story that Commoner told me involved that evening and the standing ovation as Condon retired from his role as President. At the conference, Commoner—who knew Laurence—invited Laurence to join him and others for dinner and drinks before the evening lecture. Because the conference was in California, the time difference was such that Laurence needed to file his story before dinner so that it could appear in the following day’s paper. He hadn’t filed his story yet, and asked Commoner how the membership would respond to Condon’s term. Could vocal support be interpreted as political subversion in Cold War America? The ovation he would report was hardly a certainty. Commoner assured his friend that there would be a standing ovation: File the story and come for a drink. Which Laurence did. The ovation was reported (if not printed) before it happened. Returning to the conference hall for the evening proceedings, Commoner walked Laurence to the front row of the auditorium to sit down. After Weaver spoke and introduced Condon, Commoner told me (almost 50 years later), he pulled Laurence by the shoulder and gruffly said: “Bill, stand up!” At which point the two led the standing ovation—giving credence to the story Laurence had already filed.

It’s a fun little anecdote, and Commoner told it to me at least twice. But I was reminded of it this week while preparing to discuss and have students research the relationship between science, journalism, and the public.

Uncertainty: Mercury & the Politics of the Reference Dose

I keep coming back to the idea of uncertainty. It’s an omnipresent feature of the mercury project. Uncertainty, I think, is also at the heart of how toxic fear manifests itself. We’re afraid of what we don’t know—or don’t understand. And, yet, chemical pollution demands that we act quickly, and sometimes with incomplete information about the nature of the contaminant’s threat. So when uncertainty prevails, how do you develop baseline regulation? In the aftermath of the mercury poisoning epidemic at Minamata, national and global health agencies raced to identify acceptable exposure limits for mercury. These efforts were complicated by mercury’s ubiquity in industry and—scientists discovered—throughout the environment. As various organizations introduced reference dose recommendations that erred on the side of caution to accommodate unknowns in the available data (such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes), it became glaringly apparent that these preliminary numbers were not nearly conservative enough.

My focal point is the politics of establishing a reference dose for mercury and the manner in which uncertainty rests at the heart of this problem. The reference dose builds in a standard uncertainty factor to represent unknowns in the available data—such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes. The crux of the problem is establishing a regulatory line between safe and unsafe levels of mercury in human bodies—and doing that without relying on a trial-and-error approach.

I want to argue that mercury has a distinctive place in the ecosystem of quantifying chemical hazards, due in no small measure to the manner in which it impressed itself through a series of acute poisoning epidemics during the latter half of the twentieth century. But also in terms of how it was measured. The weak mortar that holds this presentation together is the contradiction between the competing uses of toxicological research: where the scientific endeavour seeks to identify acceptable parameters for chemical risk, legislative demands put scientific findings in conversation with competing economic and political imperatives.

To illustrate, consider the anecdotal story related by Nils-Erik Landell, reflecting on the Swedish mercury case of the 1960s. Sweden was the first developed country to locate widespread industrial mercury pollution in its water systems (this, of course, discounting the acute mercury poisoning case in Minamata, Japan). Landell recalls:

I was working at the Public Health Institute to get money for my education as a medical doctor … and my chief had written a toxicological evaluation of the maximum limit of mercury in fish. I saw it on his table, and he had written [the safe limit of mercury content in fish] 0.5 milligrams per kilogram of wet weight. The next day, the paper was still there on the table, but now I saw that he had rubbed it out and it was now 1.0 milligrams per kilogram. And I asked him why … and he said in Lake Vänern, the biggest lake in Sweden, the fishermen had pointed out that the fish had a concentration of 0.7, so he had to raise it to 1.0. And I understood that the evaluation of toxicology was not so sharp as it should be, but it was illustrative of the pressure from different companies and economic interests on the scientists.

As a reference point, the current EPA reference dose for methylmercury is 0.1 µg per kilogram of body weight per day (there’s an interesting side-story here—maybe a post for another day).
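The arithmetic behind such a reference dose is straightforward to sketch. The body weight and fish mercury concentration below are illustrative assumptions of my own, not figures reported in the sources; I use the 0.5 mg/kg limit from Landell’s anecdote simply because it is close at hand.

```python
# Back-of-envelope sketch of what a reference dose of 0.1 ug/kg/day implies
# for daily fish consumption. Body weight and fish mercury concentration are
# illustrative assumptions, not reported figures.

RFD_UG_PER_KG_DAY = 0.1   # EPA reference dose (micrograms per kg body weight per day)
BODY_WEIGHT_KG = 70.0     # assumed adult body weight
FISH_HG_UG_PER_G = 0.5    # assumed mercury in fish: 0.5 mg/kg wet weight = 0.5 ug/g

# Total mercury a 70-kg adult could ingest daily while staying at the RfD
allowable_daily_intake_ug = RFD_UG_PER_KG_DAY * BODY_WEIGHT_KG

# Grams of such fish per day that deliver exactly that much mercury
allowable_fish_g_per_day = allowable_daily_intake_ug / FISH_HG_UG_PER_G

print(round(allowable_fish_g_per_day, 1))  # 14.0
```

Fourteen grams is a bite or two of fish, which gives some sense of how conservative the modern reference dose is relative to the Swedish limits debated in the 1960s.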

To start, allow me to move away from mercury to discuss the broader history of the reference dose. Measuring the safety factor of chemicals is a feature of post-World War II environmental praxis. Starting in the United States, efforts to identify safe levels for new additives in foods in the mid-1950s prompted interest in articulating safe levels of acute and chronic exposure to harmful chemicals. The first recommendations came from two scientists at the US Food and Drug Administration. In 1954, Arnold Lehman and O. Garth Fitzhugh posited that animal toxicity tests could be extrapolated qualitatively to predict responses in humans, but that quantitative predictions were more problematic. To articulate safe levels of a given toxin, they proposed that the reference dose be evaluated by the following formula:

Reference Dose (RfD) = NOAEL (No Observed Adverse Effect Level) / Uncertainty Factor

Lehman and Fitzhugh set their uncertainty factor at a 100-fold margin. That is to say that exposure levels to harmful chemicals should be set a hundred times lower than the point at which no adverse effects had been observed in the laboratory. The justification for the 100-fold safety factor was traditionally interpreted as the product of two separate values, expressing default values to a magnitude of 10. The protocol worked on the assumption, first, that human beings were 10 times more sensitive than the test animal, and, second, that the variability of sensitivity within the human population could be managed within a 10-fold frame.
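The protocol above can be sketched in a few lines. The NOAEL value here is hypothetical, chosen only to show the arithmetic of the 100-fold margin, not a datum from any actual study.

```python
# Minimal sketch of the Lehman-Fitzhugh reference-dose protocol.
# The NOAEL passed in below is hypothetical, for illustration only.

INTERSPECIES_FACTOR = 10   # humans assumed 10x more sensitive than the test animal
INTRASPECIES_FACTOR = 10   # 10x spread in sensitivity across the human population

def reference_dose(noael_mg_per_kg_day: float) -> float:
    """RfD = NOAEL / uncertainty factor, with the classic 100-fold margin."""
    return noael_mg_per_kg_day / (INTERSPECIES_FACTOR * INTRASPECIES_FACTOR)

# A hypothetical animal study observing no adverse effects at 5.0 mg/kg/day
# yields a reference dose one hundred times lower:
print(reference_dose(5.0))  # 0.05
```

The point of expressing it this way is that the two 10-fold factors are defaults, not measurements: the formula quantifies a judgment about ignorance rather than a known dose-response relationship.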

The fundamental premise of the reference dose, as Lehman and Fitzhugh conceived it, was that it was designed to address the untidiness of extrapolating animal data and applying them to human populations outside the lab. In effect, the initial 100-fold reference point was arbitrary, without any real quantitative basis for or against it. It’s a principle that has stood up to more recent scientific scrutiny, and variants of it remain in practice sixty years later.

To mercury. Though mercury’s entry into the toxic century occurred at Minamata, it is the Swedish case study that galvanized growing interest in establishing a reference dose for mercury exposure. The Minamata case was the result of very specific mercury emissions into the bay, and there was some delay in identifying mercury as the source. A combination of not looking further for mercury in the environment and broader disinterest in international circles meant that much of the Japanese research was not revisited until the 1970s, when mercury was accepted as a ubiquitous environmental contaminant with universal reach. In the mid-1960s, Swedes found mercury prevalent in wild birds—a product of mercury-coated seed grain (fungicidal properties)—and, subsequently, throughout their water systems—through a variety of industrial uses. Swedish concerns over an appropriate reference dose for mercury rested on the hypothetical. They had discovered mercury, but had not experienced any cases of mercury poisoning. So what was the threshold? Their analyses debated the merits of measuring mercury content in dry or wet weight of fish, measuring potential threats to the fishing industry, and determining social and individual risks associated with mercury exposure.

But if the reference dose studies in Sweden were based on conjecture, mercury’s neurotoxic potential was realized in Iraq in 1972. Widespread poisoning resulted after a mishandled supply of mercury-coated Wonder Wheat arrived too late from Mexico to be planted. Desperate, hungry farmers started making homemade bread from the seed grain. The seeds had been dyed pink to warn that they had been treated with hazardous chemicals, but farmers assumed that washing off the dye also removed the mercury. Numbers on the severity of the mercury epidemic vary drastically. Official Ba’athist counts suggest 4,500 victims; more recent, independent observers estimate at least ten times that number.

Amidst the chaos and calamity, the Iraqi case provided a critical opportunity to measure mercury exposures on human subjects. Note that whereas the Swedes were preoccupied with measuring mercury content in fish, the new evaluations could be rendered more precise by disregarding the first 10-fold protocol, effectively by eliminating interspecies uncertainty factors—getting rid of the middle-fish. Put another way, where Lehman and Fitzhugh were addressing uncertainty factors as part of a qualitative analysis of potential risk, data derived from Iraq could engage a more quantitative approach. As a result, numerous national and international agencies—the World Health Organization and the US Food and Drug Administration foremost among them—collected data from mercury victims in the provinces around Baghdad. These studies subsequently served as the cornerstone for numerous national and international recommendations for acceptable mercury exposure for the next 25 years.

During the 1980s, however, researchers in Europe and in the United States raised concerns about the validity of the data. The measurements taken in Iraq stemmed from acute mercury poisoning—the rapid consumption of dangerously high levels of mercury. Were these findings—and the limits they proposed—consistent with the much more common chronic, low-level exposure? If mercury-contaminated fish was part of a regular diet over a longer period of time, how would mercury behave and what would be the epidemiological effects?

The first project, undertaken by an international team based at Harvard, assessed possible impairment of brain function in children due to prenatal exposure to mercury when the mothers’ diet was high in seafood. They selected as their case study the small communities of the Faroe Islands in order to examine a traditional population that ate some fish and occasionally feasted on mercury-contaminated whales. I’ll leave out the specifics of the study, but the authors found that high levels of mercury passed from mother to child in utero produced irreversible impairment to specific brain functions in the children. By age 7, the 614 children with the most complete mercury-exposure data had lower scores in 8 of 16 tests of language, memory, and attention, suggesting that low-level mercury exposure caused neurological problems.

At roughly the same time, a team of researchers at the University of Rochester Medical Center carried out mental and motor tests on 9-year-old children born on the Seychelles Islands. The study, begun in 1989, looked for an association between mercury exposure and behavior, motor skills, memory, and thinking in 779 children born to mothers who averaged a dozen fish meals a week. Around age 9, higher mercury exposure was associated with two test results. Boys, but not girls, were slower at one movement test, but only when using their less capable hand. Boys and girls exposed to more mercury were rated as less hyperactive by their teachers. The authors concluded, “These data do not support the hypothesis that there is a neurodevelopmental risk from prenatal methylmercury exposure resulting solely from ocean fish consumption.” So while the Faroes study indicated cause for concern about low-level mercury exposure through ocean fish consumption, the Seychelles study exonerated mercury. To complicate matters, a third study in New Zealand, which followed the Seychelles methodology, identified mercury risk more consistent with the Faroes study.

By way of exit strategy, let me conclude by situating talk of reference doses in its larger context. Interest in and analysis of mercury pollution and its acceptable limits constitute part of the transformation of global environmentalism after World War II. Put very roughly, prior to 1945 concern for the environment consisted of protecting nature from the onslaught of civilization; after 1945 this concern—in actions and in rhetoric—shifted to protecting civilization from itself. The environmental lexicon supports this notion. New vocabulary—bioaccumulation, biomagnification, environmental risk, chemical hazard—became prevalent, transforming our environmental engagement. Similar transformations took place within toxicological vocabularies. Terms such as environmental toxicology, toxicokinetics, and toxicodynamics suggest that specialized and nonspecialized forms of language use evolved during the second half of the twentieth century. None of this should come as a surprise, but it adds a layer of complexity to the traditional, post-materialist arguments that have typically explained the post-war environmental transformation.

The struggle for precision comes at another price, however. This bodily turn in environmental thinking has understandably shifted the gaze of environmental monitoring from the ecosystem to the body. What happens “out there,” ironically, matters less than what happens “in here.” And that fear over public health risks has galvanized a more pressing need for scientific knowledge and political action, the interaction between the two breeding a landscape of new, reactionary or crisis disciplines to make sense of environmental hazards. That policy moves faster than science, and thereby shapes the practice of knowledge gathering and its place in policymaking, has historically constituted one of the primary obstacles in the struggle for epistemic clarity when articulating threshold levels for mercury exposure. In somewhat related news, I received a copy of Frederick Rowe Davis’s book, Banned: A History of Pesticides and the Science of Toxicology, the other day. I have yet to get beyond the first chapter, but I look forward to seeing how he treats the messy politics of environmental toxicology—and especially the relationship between science and policy.

Lest this discussion seem more at home in the histories of science and policy, let me assert a place for it in environmental history as well. Mercury is a naturally occurring feature of the physical environment, but human activities have increased the amount of mercury in circulation beyond any quantities that could ever be considered normal. Atmospheric levels are seven times higher and ocean-surface levels are almost six times higher than they were in 2000 BC. Half of that increase has occurred since 1950, during the toxic century. In effect, human-industrial practices provoked and set in motion the need for establishing a reference dose for mercury. But this is also a story grounded in place—or, rather, places. While the preliminary history of mercury’s reference dose took place in laboratories, it was prompted by the discovery that mercury was present in significant quantities in various specific places. Similarly, with the advent of the acute poisoning cases in Iraq in the early 1970s, reference dose studies left the lab to attend to mercury in the field, thereby transforming the nature and parameters of knowledge construction. In so doing, they invite re-readings of how we might tell stories about nature and the numbers we use to make sense of them.

Historicizing a Scientific Interdiscipline: Swedish Mercury Science in the 1960s

It’s been a while since I’ve discussed the history of mercury pollution on the blog. It remains one of my main research interests, although it has taken a backseat to other projects of late. I attach below a poster I presented at the American Society for Environmental History annual meeting in 2010. While the original poster is collecting dust in a corner of my study, it seemed to me that a “visual” form of research should be seen, so I share it here. As posters go, this isn’t terribly good. Too much text and fine detail. It might work better as a webpage, but not as a poster. I would discourage students from adopting this as a model for their own work.

The blue sidebar contains much of the intellectual framework for the project, which is designed as a global study of knowing and regulating mercury pollution since Minamata. The poster outlines all of this, but I was especially interested in the political activism of a young subset of Swedish scientists who became concerned about mercury in the Swedish landscape and engaged not just their scientific expertise, but also nascent methods of science communication to share their concerns with the public. Frustrated by the relative inertia exhibited by policy makers and convinced that a solution was urgently needed, these younger scientists—who had left their independent research and turned their attention to mercury problems—entered the mainstream debate and argued vociferously for more radical responses to mercury pollution. Coming from disparate backgrounds, they came to refer to themselves (not at all self-consciously) as the “Mercury Group,” as they fostered working relationships and pushed their findings into the mainstream media.

In August 2009, I traveled to Stockholm and met with four members of the mercury group at the home of Göran Löfroth.  It was the first time that Löfroth, Hans Ackefors, Carl-Gustav Rosen, and Nils-Erik Landell had sat down together in almost forty years (though they had stayed in touch).  Over the course of a meal and a couple of hours of discussion, they reminisced on their collective efforts to effect a policy response to the mercury problem as it emerged in the 1960s. There was something very moving about the session, as these now elderly men shared their memories and reconnected after many years. In many respects, it is one of the most rewarding professional experiences I have enjoyed. The main text includes excerpts from an oral history I conducted with these four protagonists of the Swedish mercury case.



Why Barry Commoner Matters

Here is a rough draft of a paper I published in Organization & Environment, outlining Barry Commoner’s social and historical significance. It overlaps with (and is drawn from) a talk at the American Sociological Association I posted recently, but it goes deeper into Commoner’s contributions to science, democracy, and the environment.

Why Barry Commoner Matters

It would be very difficult to properly understand the last fifty years of American environmentalism without recognizing the biologist Barry Commoner’s important contributions to its method and practice.  I make this claim with some vested interest (Egan 2007), but historical analysis of environmental activism since World War II points to a number of significant changes in American environmentalism, many of which find Commoner at their source.  Commoner’s place in the history of American environmentalism is based in large part on the breadth of his activism.  Commoner participated in scientific and activist campaigns to bring an end to aboveground nuclear weapons testing, to raise awareness about toxic chemicals in the city, on the farm, and in the home, to identify the harmful production practices of the petrochemical industry, to address economic and energy sustainability, and to create a more peaceful and equitable world.  More specifically, Commoner was centrally involved in efforts surrounding synthetic pesticides, detergents, and fertilizers; mercury, lead, and several other heavy metals; photochemical smog; population; sustainable energy; urban waste disposal and recycling; dioxin; and, more recently, a return to genetic theory.

But this essay sets out to argue that the depth and influence of Commoner’s activism is of even greater historical significance.  It sets out to provide historical context for Commoner’s career to allow for further investigation of his influence across a broad swath of American scientific, democratic, and environmental principles, and proposes to argue that Commoner saw these three pillars of his activity not as independent aspects of his political sensibilities but as part of a single, intrinsic whole.  That science, democracy, and environment should be so related is indicative of Commoner’s deep-seated conviction that human societies, their politics and economies, and their physical environments functioned in larger, holistic systems.  Indeed, Commoner’s great contribution to environmental activism might be articulated as his capacity to identify the root causes of American environmental decline in the post-World War II era.  This is important—indeed, a better and popular understanding of Commoner’s activism is important—because the intersections between science, society, and the environment that serve as the cornerstone of Commoner’s career and work are not simply historical points of interest, but remain vitally relevant to contemporary debates and struggles to address toxic contaminants, energy production crises, and global climate change.


While Commoner is typically remembered as a social and political activist, it is important to stress that he came to this activism from his professional training in science.  From a very early point, Commoner was devoted to the notion that scientific research should be directed toward the public good.  His training in the 1930s and his early career at Washington University in St. Louis coincided with significant structural changes in the academy and unprecedented technological growth throughout American society.  Of that period, Commoner remembered, “I began my career as a scientist and at the same time … learned that I was intimately concerned with politics.”  That realization helped him to develop a social perspective that he applied to all his activities, and before he had completed his undergraduate studies at Columbia University, he was deeply committed to participating in “activities that properly integrated science into public life” (Commoner 2001).

During World War II, Commoner served in the U.S. Navy, and it was during his wartime service that he discovered firsthand that scientific innovations often possessed unanticipated and undesirable side effects.  In 1942, Commoner headed a team working to devise an apparatus that would allow torpedo bombers to spray DDT on beachheads to reduce the incidence of insect-borne disease among soldiers.  The new device was tested in Panama and at an experimental rocket station off the New Jersey coastline that was infested with flies.  The apparatus worked well, and the DDT was tremendously effective in killing the flies.  Within days, however, new swarms of flies were congregating at the rocket station, attracted by the tons of decaying fish—accidental victims of the DDT spraying—that had washed up on the beach (Strong 1988).  As the flies fed upon the dead fish, Commoner witnessed an eerie foreshadowing of how new technologies often brought with them environmental problems that their inventors had not anticipated.  Commoner (1971) would later apply this notion to his four laws of ecology, recognizing that there is no such thing as a free lunch.

Such environmental decline—a product of unforeseen consequences associated with many of the new technological products of the petrochemical industry—created a context in which an increasing gulf emerged between what was known and what it was desirable to know (Douglas & Wildavsky 1982, 3), and thereby changed the shape of American science.  Nuclear fallout, the incidental effects of DDT and other synthetic pesticides, the build-up of new detergents and fertilizers in water systems, the introduction of photochemical smog from automobile emissions, and the fact that these new petrochemical products did not break down in nature were the result of a kind of artificial reductionism, which was itself the product of this new science.  In fabricating these new products, innovators directed their attention to the benefits their use might provide, and failed to conceive of what costs these introductions might have on human health and the physical environment.  While atomic bombs, pesticides, detergents, fertilizers, automobiles, plastics, and the other creations of new science and industry were very good at doing what they set out to do, each came with a host of unanticipated environmental problems in large part because their design and implementation was encouraged by sources outside of science.  The economist Thorstein Veblen, for example, asserted that knowledge reflected the material circumstances of its conception; the questions science asked and the technologies it produced were driven by external interests.  Similarly, in a more recent study, Chandra Mukerji (1990) reads a complex interdependence between science and state, wherein scientists tended to assume the role of highly skilled experts retained to provide legitimacy to government policies.  This artificial reductionism—the exercise of focusing on only a part of the larger equation—posed serious harm to both science and society, Commoner warned.
In an unpublished paper titled “The Scientist and Political Power” (1962), Commoner insisted that the integrity of science was “the sole instrument we have for understanding the proper means of controlling the enormously destructive forces that science has placed at the hands of man” (4).  Should that integrity be eroded—and this kind of artificial reductionism was a distinct threat—Commoner worried that “science will become not merely a poor instrument for social progress, but an active danger to man” (2-3).  Commoner’s was not by any stretch a novel observation, but his greater significance in the larger discussion surrounding distrust in science and technology stems from his articulation of the hazards inherent in a “disinterested” science being dictated by outside interests.  Too often, environmental problems arise from the disconnect between nature and scientific evidence on the one hand and state fantasies and directives on the other.

As Shapin has observed (1996, 164), “good order and certainty in science have been produced at the price of disorder and uncertainty elsewhere in culture.”  By way of example, Commoner found that the acceptance of synthetic detergents, which were the product of good order and certainty in science—they were, after all, rather effective in cleaning clothes—produced disorder and uncertainty when foam came from household faucets and other drinking sources because the detergents did not break down in nature and effectively choked water system bacteria.  McGucken (1991) noted the paradox that “achieving human cleanliness entailed fouling the environment.”  This paradox was not lost on Commoner, who observed that synthetic detergents “were put on the market before their impact on the intricate web of plants, animals, and microorganisms that makes up the living environment was understood” (1966a, 7).

Commoner’s concern was a fairly logical one: discoveries in the chemical and physical sciences failed to take into account the biological consequences of their introduction into the marketplace and into nature.  As he noted in Science and Survival (1966b, 25), “Since the scientific revolution which generated modern technology took place in physics, it is natural that modern science should provide better technological control over inanimate matter than over living things.”  Whereas ecology endorsed a more holistic understanding of the environment, industrial science worked in a more reductionist manner.  In “The Integrity of Science” (1965a), Commoner illustrated the dangers of this kind of reductionist approach, noting that the Soap and Detergent Association had admitted that no biological field tests had been conducted to determine how the new detergents would interact with the local ecosystem.  “The separation of the laws of nature among the different sciences is a human conceit,” Commoner concluded elsewhere.  “Nature itself is an integrated whole” (1966b, 25).

The disparity between the physicochemical sciences and the biological sciences was a direct consequence of the American science policy that followed World War II, as government funding supported nuclear physics and industry supported developments in petrochemical experimentation.  This was an important development.  Whereas the ethos of science lauded the wider discipline’s democratic principles and critical peer review, knowledge increasingly came to reflect the material circumstances of its conception.  During and after World War II, those material circumstances were increasingly shaped by an omnipresent military influence that dominated scientific research agendas across the country.  In 1939, the federal government had allotted $50 million per year to science research, 18 percent of all private and public spending on research and development.  By the end of the war, the federal investment was $500 million, and constituted 83 percent of all funding.  In 1955, the annual research and development budget was $3.1 billion.  By the early 1960s, that budget had climbed above $10 billion, and to $17 billion by 1969.  Moreover, since 1940, the federal budget had multiplied by a factor of 11; the budget for research and development had increased some 200 times.  While that money was a significant boon to scientific research, it also suggested that the American research agenda was integrally connected to political interests.  After World War II, that meant military development and, eventually, the space race (see Egan 2007, 25).  As bombs, rockets, and synthetic products emerged as the fruits of this new research—very much a reflection of the material circumstances of their conception—more and more environmental problems emerged.  In sum, science was very good at finding what it was looking for, but little else.


As previously noted, Commoner’s science was also deeply imbued with a strong social responsibility.  Shortly after World War II, at the height of Cold War tensions, American scientists found that their intellectual freedoms were being somewhat curtailed by national security interests and that their primary duty was to serve what President Eisenhower, as he left the White House, would famously call the military-industrial complex.  Cold War priorities seemed in conflict with what Robert K. Merton (1957) called the “ethos of science,” which protected and preserved the scientific community’s standards and ensured a climate in which good basic research could be conducted.  Commoner saw a contradiction between the sabre-rattling of the Cold War and the intellectual freedom that drove scientific progress.  During the 1950s, he emerged as one of the more prominent socially engaged scientists, who saw their duty residing in creating a better democratic society, not a dominant one.  The historian Donald Fleming (1972) has called these activist scientists “politico-scientists,” an apt term that is representative of Commoner’s career as a whole.

As a scientist, Commoner worked on the conviction that he had an obligation to serve the society that made his work possible.  In a paper titled “The Scholar’s Obligation to Dissent,” Commoner wrote:

The scholar has an obligation—which he owes to the society that supports him—toward … open discourse.  And when, under some constraint, scholars are called upon to support a single view, then the obligation to discourse necessarily becomes an obligation to dissent.  In a situation of conformity, dissent is the scholar’s duty to society (Commoner, 1967, 7).

Commoner had a particular expertise, and it was his social responsibility to identify and speak out on problems that would otherwise be left unaddressed.  And the Cold War was a period of intense (and, frequently, enforced) conformity.  In expressing his obligation to dissent, Commoner was bucking a national social trend in science and in society at large.

The existence of Cold War conformity posed a particular challenge to the politico-scientist, however.  “Conformity is often a sensible course of action. … One reason we conform is that we often lack much information of our own” (Sunstein 2003, 5).  As a means of challenging Cold War conformity and of deflecting charges that he was subverting American values, Commoner invented the science information movement.  The reason few people objected to nuclear fallout or DDT or dioxin was that they lacked the technical information to understand the dimensions of the problem.  As a scientist—with a particular kind of expertise and responsibility to the society that supported him—Commoner felt a special duty to provide an accessible and vernacular body of scientific information on the environmental crisis.

The most celebrated example of the science information movement is the Baby Tooth Survey, which collected teeth to demonstrate the hazards of strontium-90, a particularly dangerous component of nuclear fallout.  Strontium-90 was chemically similar to calcium, and followed a similar path through the food chain, falling on grass, being consumed by cattle, and appearing—in place of calcium—in milk, consumed by people, and especially children.  The Greater St. Louis Committee for Nuclear Information, of which Commoner was a founding member, responded to growing public concerns that fallout from nuclear weapons testing could have a negative health impact on citizens, and especially children.  The Atomic Energy Commission had long defended aboveground nuclear weapons testing by downplaying any inherent health risk.  But by 1953, uncertainty had grown as nuclear radiation was being detected in much higher than anticipated quantities.  Here again, scientific hubris defied the ethos and integrity of science.  More immediately, however, Americans wanted to better understand the hazard.  In a campaign begun in late 1958, the Committee for Nuclear Information put out a call for baby teeth from the greater St. Louis area.

The Committee was inspired by an article that the biochemist Herman M. Kalckar had published in Nature in August 1958.  Titled “An International Milk Teeth Radiation Census,” the essay proposed a scientific study of baby teeth as a means of determining the extent to which fallout was being absorbed into human bodies.  “If a continued general trend toward a rise in radioactivity in children’s teeth were attained,” Kalckar wrote, “it might well have important bearings on national and international policy” (283).  In a press statement in December 1958, the Committee for Nuclear Information announced its plans to collect 50,000 baby teeth a year to monitor for strontium-90.  Because strontium-90 had begun to fall to earth roughly ten years earlier, the children who were currently losing their deciduous teeth were providing perfect samples, since these teeth had been formed from the minerals present in food eaten by mothers and infants at the nascent stages of the fallout era.

The response to the Committee for Nuclear Information’s call for teeth was considerable.  By the spring of 1960, the survey had received 17,000 teeth.  In late April 1960, St. Louis Mayor Raymond Tucker declared Tooth Survey Week to initiate the Committee’s spring tooth drive.  Support from the mayor, the St. Louis Dental Society, and the St. Louis Pharmaceutical Association provided publicity for the campaign and developed widespread grassroots support; 10,000 teeth were collected in the next month alone.  In November 1961, the Committee published the Baby Tooth Survey’s preliminary findings in Science, presenting strontium-90 absorption levels in St. Louis between 1951 and 1954, and arguing for the validity of their approach.  By that time, 67,500 teeth had been cataloged and 1,335 had been used in the initial study, which confirmed widespread fears that strontium-90 was increasingly present in children’s bones.  The amount of strontium-90 began increasing after 1952, the year the first hydrogen bomb was detonated.  Whereas teeth from 1951 to 1952 contained roughly 0.2 micromicrocuries of strontium-90 per gram, that number had doubled by the end of 1953, and tripled and quadrupled in 1954 (Reiss 1961).

The Baby Tooth Survey officially continued its work until 1968, but from a public information standpoint, the call for baby teeth was an instant and inspired success and contributed to a sea-change in the American response to nuclear weapons testing and radioactive fallout.  Whereas Democratic presidential candidate Adlai Stevenson had barely caused a ripple among American voters in 1956 when he proposed a test ban, a more public debate over the costs and benefits of nuclear testing was front and center within a half-decade, and a Nuclear Test Ban Treaty was signed in 1963.  In an October 1964 speech, President Lyndon Johnson noted the connection between health and nuclear fallout, referring specifically to the hazards noted by Commoner and the Committee for Nuclear Information:

The deadly products of atomic explosions were poisoning our soil and our food and the milk our children drank and the air we all breathe.  Radioactive deposits were being formed in increasing quantity in the teeth and bones of young Americans.  Radioactive poisons were beginning to threaten the safety of people throughout the world.  They were a growing menace to the health of every unborn child (cited in Commoner 1966b, 14-15).

The Baby Tooth Survey is historically significant on a number of counts.  It constitutes an early example of biomonitoring as a component of environmental activism, a practice that has since become a fundamental aspect of environmental health campaigns (Corburn 2005; Daemmrich 2007; Roberts 2005).  While biomonitoring—the practice of using biological organisms to track fluctuations in the exposure to chemicals or contaminants—was a product of Progressive-era occupational health efforts to trace the impact of lead, arsenic, and other chemicals in workers (see Clark 1997, for example), the Baby Tooth Survey was a very early instance of those practices being applied to a more generic population to monitor and track the exposure of environmental pollutants at large (Egan 2007, 66-72, 75).

As a form of environmental activism, it also had the particular advantage of requiring public participation, which, in turn, provided a ready audience for the results and ensured the development of a grassroots movement.  Concerned parents sent in teeth and waited anxiously to learn the results.  Were their children being poisoned?  The Committee for Nuclear Information also found ways to include children, setting up an Operation Tooth Club.  Children who submitted teeth became members and received a certificate and a pin that read: “I gave my tooth to science.”  As young adults, this generation of children would come to witness the most emboldened and successful environmental legislation in American history and would participate—centrally—in the first Earth Day (1970).  In many respects, the participation required for the success of the Baby Tooth Survey fostered the growth of American environmental awareness by providing the public with the tools necessary for their own empowerment.

But in order to guarantee the success of the Baby Tooth Survey, Commoner and his colleagues needed to carefully translate their technical findings into a more vernacular or accessible language so that their non-scientific audience could understand and act upon their findings.  And this was a critical feature of Commoner’s science information movement: rather than telling people what to do, Commoner developed a rhetorical method of presenting accessible scientific information to the public, empowering them to participate in political decision-making.  Rather than simply sharing the results of the study, Commoner shared the hypotheses, experiments, and observations, leaving the public to participate in the interpretation of the results.  There was little question that nuclear fallout posed some risk to human health.  But how much?  And, more to the point, how much was too much?  These were social questions, not scientific questions, and Commoner saw his role as providing the public with information so that they could properly evaluate the risk and determine their collective threshold, not based on actuarial calculations made by policymakers, but within their own communities.  This re-conception of the scientist in practice—intentionally expanding the traditional peer review in order to include and communicate with a public audience—is likely the most significant development in the history of science since World War II.  This kind of risk analysis, Commoner fervently argued, was a social conversation, not a scientific one; scientists had no special moral authority to make decisions over what constituted acceptable exposure to fallout or DDT or dioxin.  He warned: “The notion that … scientists have a special competence in public affairs is … profoundly destructive of the democratic process.  If we are guided by this view, science will not only create [problems] but also shield them from the customary process of administrative decision-making and public judgment” (1966b, 108).  Commoner challenged the American faith in monitoring the environment and “leaving it to the experts.”  Determining the nature of environmental hazards was a scientific exercise, but deciding how a society should address those environmental hazards was a political one.  It warrants noting that this practice of social empowerment has become the cornerstone of environmental justice activism.

This exercise remained, however, highly controversial as it bucked conformist trends.  In order to dodge the hazards of Cold War conformity, Commoner established a mechanism in which information that criticized the existing social and political order could be presented as bolstering democratic virtues.  For instance, as early as 1958, Commoner insisted that scientific information be presented without conclusion or evaluation.  If the data were sufficiently accessible, the public would be able to draw their own conclusions.  This kind of activity promoted democracy, reinforced science’s role in democracy, and fed the emergence of a new kind of environmentalism after World War II.


Commoner regularly admitted that his work on fallout had made him an environmentalist.  Whereas the Atomic Energy Commission often limited their studies of fallout to direct exposure, Commoner demanded that they also consider radioactive exposure through the food chain.  People did not live in isolation, but rather as part of a larger ecological community.  The hazards imposed by nuclear fallout or, indeed, by the new products of the petrochemical industry were not simply direct threats to human health, but indirect ones as well, proliferating throughout the environment.  For Commoner, then, the science of fallout was not at all far removed from the contamination of air and water.  This was brought home even more concretely: shortly after the Committee for Nuclear Information began its campaign against aboveground nuclear weapons testing, Rachel Carson breathed new life into the American environmental movement with the 1962 publication of Silent Spring.  The book was remarkably well received by a public audience, already primed by alarming discoveries surrounding radioactive fallout (see Lutts 1985).  Like the Committee for Nuclear Information, Carson also exhibited an astute knack for presenting complicated, technical information in an accessible and persuasive manner.

Prompted by the resounding success of Silent Spring and the emergence of a charismatic generation of environmental scientists—among them Commoner, Paul Ehrlich, LaMont Cole, and Kenneth Watt—the environmental movement gained widespread credibility by relying on scientific expertise of its own.  This rise of popular ecology and the scientific leadership of 1960s environmentalism marks another historically important development.  After World War II the environmental movement was led “not by poets or artists, as in the past, but by individuals within the scientific community.  So accustomed are we to assume that scientists are generally partisans of the entire ideology of progress,” the historian Donald Worster (1994, 22) has observed, “that the ecology movement has created a vast shock wave of reassessment of the scientist’s place in society.”  For more than fifty years, Barry Commoner was at the vanguard of that scientists’ movement.

Commoner’s primary contribution here stems from his resistance to reductionist science and environmental thought.  Building on his earlier discussion of risk and public participation, he pointed to the limitations of science and expertise when it came to environmental problems.  To illustrate these problems, Commoner devoted a chapter of The Closing Circle (1971), his classic treatise on the environmental crisis, to the air pollution problem in Los Angeles; he began by claiming that “for teaching us a good deal of what we now know about modern air pollution, the world owes a great debt to the city of Los Angeles. …  There are few cities in the world with climates so richly endowed by nature and now so disastrously polluted by man” (Commoner 1971, 66).

Los Angeles has suffered a host of air pollutants; one of the earliest, during the Second World War, was dust from industrial smokestacks and incinerators.  By 1943, residents of Los Angeles started noticing a whitish haze, tinged with yellow-brown, that bothered many people’s eyes.  They eventually started referring to this new pollutant as smog, after the term coined in England to describe that country’s thick, smoke-laden fogs; one such episode would kill some 4,000 Londoners in 1952.  The dangerous component in London smog was sulfur dioxide, which had increased in Los Angeles with wartime industrialization; burning coal and fuel oil that contain sulfur produces sulfur dioxide.  By 1947, fuel changes and controls began to reduce the amount of sulfur dioxide in the air, and Los Angeles reached prewar levels by 1960.  But instead of getting better, the smog got worse.  Later research determined that the problem in Los Angeles began with nitrogen oxides, which caused photochemical smog.  Nitrogen oxide is produced whenever air becomes hot enough to cause its natural nitrogen and oxygen to interact.  The primary culprit seemed to be high-temperature power plants, and authorities imposed rigid controls on open venting of the numerous oil fields and refineries that surrounded the city.  With this new information in hand, Los Angeles authorities sought methods to control and reduce the levels of photochemical smog.  But still the smog got worse, until scientists discovered that cars and trucks were emitting more hydrocarbons and creating more nitrogen oxide than was the petroleum industry.  Detroit introduced engine modifications that reduced hydrocarbon emissions but increased nitrogen oxides through the 1960s.  Los Angeles had effectively traded one pollutant for another, and the step-by-step process pursued by smog researchers and the one-dimensional response from the auto industry proved myopic in addressing the air pollution problem in Los Angeles.

The Los Angeles case also highlights problems inherent in scientific method as we understand it today.  As Commoner noted in The Closing Circle in reference to air pollution in Los Angeles, “it is extremely difficult to blame any single air pollutant for a particular health effect.  Nevertheless, ‘scientific method’ is, at present closely bound to the notion of a singular cause and effect, and most studies of the health effects of air pollution make strong efforts to find them” (Commoner 1971, 78).  This is the great flaw in reductionist science; it explains why it is so difficult to prove that any single air pollutant is the specific cause of a particular disease, and why threats such as tobacco and lead have been so difficult to regulate.  When we are forced into a reductionist rubric, it becomes nearly impossible to pin responsibility on any individual pollutant.  At the same time, we are simply missing the bigger picture.  By concentrating things down to their smallest elements, we reduce our scientific peripheral vision, limiting our capacity to consider—never mind recognize—the potential for multiple causes and effects.  If ecology has taught us nothing else, Commoner repeatedly argued, it has amply demonstrated that living systems are subject to a multiplicity of intricate relationships, on macro and micro scales, that defy definitive specialized explanations.

Commoner combated this reductionism on a variety of levels.  The most famous expression of this resistance is, perhaps, his articulation of ecology’s four laws:

  1. Everything is connected to everything else
  2. Everything must go somewhere
  3. Nature knows best
  4. There is no such thing as a free lunch

These four laws have been regularly cited and repeated in popular and scholarly arenas, but they deserve some comment here, as their importance to Commoner’s environmental thinking is frequently understated (Egan 2002).  With the benefit of almost forty years’ hindsight, we might treat Commoner’s four laws as a larger expression of social and environmental interaction and recognize that the connections, changes, knowledges, and free lunches are not merely ecological phenomena, but socioeconomic ones, too.  Industrial pollution, the source of the postwar environmental crisis, was generally considered the cost of postwar affluence; it represented jobs, productivity, and reduced prices of consumer goods and services.  Because the petrochemical industry could manufacture synthetic fertilizers in huge quantities—which lowered production costs—synthetic fertilizers quickly came to dominate the market.  Pollution controls, sustainable energy consumption, and greater efforts to ensure workplace safety and health were frequently marginalized because they reduced the scale of profits enjoyed by such high-polluting industries.  Pollution, inefficient energy use, and the trivialization of worker safety became popularly accepted as the price of progress, but in reality they cumulatively constituted a false prosperity.

The real costs of pollution, Commoner argued, were not appearing on the balance sheet.  While private industries belched carcinogens into the environment, the public suffered rising cancer rates.  In The Closing Circle, Commoner stressed the significance of externalities: the infliction of involuntary, non-beneficial, or indeed, detrimental repercussions on another industry or the environment or the public.  “Mercury benefits the chloralkali producer but harms the commercial fisherman,” he observed (Commoner 1971, 253).  With its pollution and unanticipated costs, the technological revolution that followed World War II introduced a series of “external diseconomies,” the external or third-party effects of commerce.  As early as 1966, Commoner saw this disconnect between the apparent and real costs of new technologies.  “Many of our new technologies and their resultant industries have been developed without taking into account their cost in damage to the environment or the real value of the essential materials bestowed by environmental life processes. … While these costs often remain hidden in the past, now they have become blatantly obvious in the smog which blankets our cities and the pollutants which poison our water supplies.  If we choose to allow this huge and growing debt to accumulate further, the environment may eventually be damaged beyond its capacity for self-repair” (Commoner 1966a, 13).

Not only did these externalities hide the true damage of the environmental crisis, they were also an expression of reductionist thinking.  “Environmental degradation represents a crucial, potentially fatal, hidden factor in the operation of the economic system,” Commoner argued in The Closing Circle (273).  Coal-burning power companies were among the greatest polluters of air, but the disparity between their rising profits, as demand for electricity increased, and the growing social and environmental costs suggested a paradox.  Stressing the nature of external diseconomies, Commoner observed that “if power companies were required to show on electric bills the true cost of power to the consumer, they would have to include the extra laundry bills resulting from soot [from burning coal], the extra doctor bills resulting from emphysema, the extra maintenance bills due to erosion of buildings [from acid rain].”  These were hidden expenses.  “The true account books are not in balance,” Commoner continued, “and the deficit is being paid by the lives of the present population and the safety of future generations” (Commoner 1970, 5-6).  As a result of these kinds of externalities, Commoner insisted that “the costs of environmental degradation are chiefly borne not by the producer, but by society as a whole.”  In noting these external diseconomies, Commoner identified the social impact of environmental decline.  “A business enterprise that pollutes the environment is … being subsidized by society” (Commoner 1971, 268).

Commoner also emphasized the hazards of reductionist science, introducing a kind of systems thinking to environmental activism.  Systems thinking works on the premise that the component parts of a system will act differently when isolated.  As a concept, we might recognize the relationship between systems thinking and holistic interpretations; in each case the sum is greater than its parts.  With respect to Commoner’s career, science, democracy, and the environment might be taken as the three key systems that drove the post-World War II world, and Commoner identified how they were intrinsically linked.  Commoner’s historical significance is the product of his capacity to recognize that “everything is connected to everything else” and then to explain that insight in accessible and persuasive language.  Identifying the relationship between biodiversity, occupational health, social equality, and peace transformed the landscape of environmental thinking during the 1960s and 1970s.  What’s important here is the fact that Commoner drew persuasive connections between the myriad social problems that emerged after World War II.  The discovery of pollutants like dioxin rarely altered production choices, in large part because expertise demanded a more reductionist examination of the problem.  Instead, management of those risks became a more prominent feature of the technological landscape.  (This is a variant on the old prevention-versus-cure routine.)  Irrespective of which pollutants are particularly harmful, we can conclusively insist that polluted air makes people sicker than they would otherwise be.  In a discussion of public environmental risk, Commoner argued there was something inherently wrong with existing methods of measuring harmful elements in the environment when the burden of proof rested on the defenders of human health.

Identifying the nature of these burdens was also critical.  While Commoner noted that society as a whole shared in the costs of environmental degradation, it rarely did so equally.  The unequal distribution of environmental risks also posed a deeper social problem insofar as environmental pollutants inhibited human health, which, in turn, inhibited social progress.  A vicious circle emerged: poor and minority communities were more exposed to environmental hazards, suffered greater health problems, and were prevented from achieving significant social progress.  This prompted Commoner to charge that “there is a functional link between racism, poverty, and powerlessness, and the chemical industry’s assault on the environment” (Russell 1989, 25).  In observing, in work dating back to the 1960s, that poor and minority communities faced greater environmental threats by dint of their geographic location and limited political power, Commoner effectively anticipated the environmental justice movement.


On 17 February 1965, at the 4th Mellon Lecture at the University of Pittsburgh’s School of Medicine, Commoner gave a paper entitled “Is Biology a Molecular Science?” He criticized molecular biology and the new cult of DNA, which promised to unlock the secret of life, and concluded his remarks with the assertion: “If we would know life, we must cherish it—in our laboratories and in the world” (Commoner 1965b, 40).  It was a simple statement, but one that would resonate through almost all of his activism and take on especially poignant significance as we move into the twenty-first century.  Knowing and cherishing life applied to Commoner’s integration of science, democracy, and the environment insofar as it challenges us to think about poverty, health, inequality, racism, sexism, war, means and modes of production, scientific method and practice, and our exploitation of natural resources.  Commoner’s facility for grasping the larger picture puts these disparate themes into harmonious conversation with each other.

Commoner worried about the reductionism that accompanied startling advances in chemistry, physics, and biology.  He appreciated the urgent need for the greater study of living things, not just as a scientific endeavor, but also as a social and environmental imperative.  And as an environmental necessity, this approach demands greater public participation and interaction in addition to more scientific recognition.  For fifty years, Commoner’s criticisms of the petrochemical industry focused on the manner in which its products barged unwelcome into the chemistry of living things and polluted people, animals, and ecosystems.  While most of the chemicals manufactured or released as waste by the petrochemical industry resembled the structure of chemical components found in nature, they were sufficiently different to be hazardous to life.  To Commoner, the connection to twenty-first century genetic engineering was clear: we were in the process of committing the same tragic error, but this time with the secret of life.

But the message was the same.  Environmental risks were being disseminated throughout the environment without the public’s approval or participation; they were distributed unevenly, and the public was frequently unaware of the inherent hazards.  This larger phenomenon constitutes a central feature of American environmental history since World War II, and the public response—for which Commoner was a key catalyst—is a pivotal component of the history of American environmentalism.  Barry Commoner matters—or deserves scholarly and political attention—because of the method and practice of a career spent building a social mechanism for developing and disseminating information, bringing science and the environment into the mainstream, and challenging scientists, the public, and policy-makers to examine the world in more holistic frames.  Combined, these portions of Commoner’s career offer a historically significant account of the past half-century of American environmentalism, but they also offer a poignant and positive prescription for the future.  Amid journalistic criticisms eulogizing the death of environmentalism (Nordhaus & Shellenberger 2007), Commoner, almost forty years ago, provided a template that resonates as clearly in the twenty-first century:

In our progress-minded society, anyone who presumes to explain a serious problem is expected to offer to solve it as well.  But none of us—singly or sitting in committee—can possibly blueprint a specific “plan” for resolving the environmental crisis.  To pretend otherwise is only to evade the real meaning of the environmental crisis: that the world is being carried to the brink of ecological disaster not by a singular fault, which some clever scheme can correct, but by the phalanx of powerful economic, political, and social forces that constitute the march of history.  Anyone who proposes to cure the environmental crisis undertakes thereby to change the course of history.

But this is a competence reserved to history itself, for sweeping social change can be designed only in the workshop of rational, informed, collective social action.  That we must act is clear.  The question which we face is how (Commoner 1971, 300).

Barry Commoner

I just received the following e-mail from Barry Commoner’s longtime associate David Kriebel, informing me that Commoner died in his sleep today. He was 95. I’m still gathering my own thoughts on this, but I am very grateful for the time he made available to me over the past decade. Very quickly, he shifted from research subject to friend, and I am so glad I had a chance to spend some time with him last month in New York. More to follow.

Barry Commoner died today. His wife Lisa called this evening to say he died peacefully in the hospital with her by his side. They’d had a lovely conversation just last night, and he died in his sleep.

Barry was an optimist. He said that since it was human economic development that had messed up the planet, it was entirely feasible for humans to fix it.

He was also a deep systems thinker, who had no time for the academic jargon of systems. He never used diagrams in his books because he said that if his ideas were going to have impact, they ought to be understandable in plain English.

Barry believed in giving ordinary people the information about the ecologic impacts of technology and he trusted that they would make the right decisions. He thought scientists should serve the public in this way, and he was very skeptical of putting experts in charge of making decisions for the public.

Barry said that good political strategy should be based on good science; trying to force the facts to fit a position would fail because sooner or later the truth would come out, and you would lose the confidence of the public.

In our progress-minded society, anyone who presumes to explain a serious problem is expected to offer to solve it as well.  But none of us – singly or sitting in committee – can possibly blueprint a specific “plan” for resolving the environmental crisis.  To pretend otherwise is only to evade the real meaning of the environmental crisis: that the world is being carried to the brink of ecological disaster not by a singular fault, which some clever scheme can correct, but by the phalanx of powerful economic, political, and social forces that constitute the march of history.  Anyone who proposes to cure the environmental crisis undertakes thereby to change the course of history.  But this is a competence reserved to history itself, for sweeping social change can be designed only in the workshop of rational, informed, collective social action.  That we must act now is clear.  The question which we face is how.

– Barry Commoner, The Closing Circle, p. 300


Newton’s Apple

Yesterday I met my second-year science and technology in world history survey course for the first time. It’s a group of 160 students that will meet three times a week through the semester for lectures, plus an hour of tutorial per week. This is especially exciting for a few reasons. First, the course draws students from across campus, which means the tutorials are hopping with interdisciplinary discussion. Second, for the first time the tutorials will all be run by my own graduate students. While I have had some exceptional teaching assistants in the past, there’s something special about working with my own students. To wit: in my introductory lecture I challenged the notion of “Eureka” moments in science and the idea that science occurs in isolation, at a remove from its social context. In so doing, I pooh-poohed (largely in jest) the significance of Newton’s apple as the singular defining moment of his work on gravity, and even went so far as to question whether the type of apple (which I didn’t know) played some kind of role. I received from one of my teaching assistants the following:

The variety of apple that supposedly fell on Newton’s head is called ‘Flower of Kent,’ a small green cooking apple that originated in France.  The tree is now cordoned off at Woolsthorpe Manor, since the volume of visitors was destroying its root system.  (However, it’s not the original tree, but one that was propagated in its place after it died in the 19th century.)  It sounds like the Flower of Kent probably would have gone the way of many, many other apple varieties if it wasn’t for its role in Newton’s ‘eureka moment.’   There aren’t many of them left, but a Flower of Kent tree has been planted at York University.  And, last but not least, a piece of the original tree was sent up with the space shuttle Atlantis.

At least, that’s what these articles tell me:

All in fun, but now I know. I shall tread carefully from here on, as I will likely be challenged at every turn. But this is good stuff—learning is more fun when the good work can be complemented with cheerful and genuine curiosity…