I’m currently teaching a third-year course on the history of truth. The course examines the historical mechanisms that contributed to the social production and consumption of knowledge over time. It focuses on the construction of “matters of fact” and on how scientific praxis emerged as the primary mode of knowledge authority in the modern world. It explores who could practice science, how the scientific method came to be ingrained as a means of forging consensus among scientists, and how scientists’ findings came to be adopted as truths by a more general public. More significantly, the course proposes to examine how these activities changed or evolved over time.
We read Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump and talked about Boyle’s literary technology and virtual witnessing as pillars of the new experimental science. Recently, I lectured on Robert Kohler’s Lords of the Fly as a corollary investigation of the experimental life, and I stressed Kohler’s discussion of the moral economy. Collaboration, trustworthiness, fraud, failure, and metaphors in science have featured throughout lectures and discussions. But I have had little opportunity to share anecdotes. Anecdotes can be fun.
Next week, I will be running a small module on science journalism in the twentieth century. I’m especially interested in themes surrounding science literacy and the media’s role as broker in communicating scientific information—translating it for a lay audience. In his classic essay, “Roots of the New Conservation Movement,” in Perspectives in American History 6 (1972), Donald Fleming wrote about politico-scientists—scientists who were politically engaged (Barry Commoner, for one)—as being part of a specialized fifth estate intent on informing the public. This during a politically tense period in American history.
As a topic, it reminded me of a story Barry Commoner relayed to me during the oral histories I conducted with him. Let me start with the report written by William Laurence (the Pulitzer Prize-winning journalist—and one of our in-class subjects), which appeared in The New York Times on December 29, 1954.
In 1954, E. U. Condon was an elder statesman of American physics, a notable quantum physicist from the 1920s, and the outgoing President of the American Association for the Advancement of Science. After World War II, he had also suffered serious scrutiny from a subcommittee of the House Un-American Activities Committee. Condon had been particularly critical of imposed secrecy in science, and strongly advocated continued international scientific cooperation. On 1 March 1948, the subcommittee described Condon—at the time, the director of the National Bureau of Standards—as “one of the weakest links in our atomic security.” Condon was by no means a radical thinker, but he did believe that science only functioned properly in an open society. His AAAS election (in 1951) had been somewhat controversial, and by 1954 the label of “Communist” or “security risk” constituted a black mark. But turn your attention to the final paragraph of Laurence’s report: “Dr. Condon received an ovation as he rose to address his colleagues.”
Warren Weaver was a strong supporter of Condon’s (as his remarks above might attest). The young Barry Commoner as well. The story that Commoner told me involved this evening and the standing ovation as Condon retired from his role as President. At the conference, Commoner—who knew Laurence—invited Laurence to join him and others for dinner and drinks before the evening lecture. Because the conference was in California, the time difference was such that Laurence needed to file his story before dinner so that it could appear in the following day’s paper. He hadn’t filed his story yet, and asked Commoner how the membership would respond to Condon’s term. Could vocal support be interpreted as political subversion in Cold War America? The ovation (reported) was hardly a certainty. Commoner assured his friend that there would be a standing ovation: File the story and come for a drink. Which Laurence did. The ovation was reported (if not printed) before it happened. Returning to the conference hall for the evening proceedings, Commoner walked Laurence to the front row of the auditorium to sit down. After Weaver spoke and introduced Condon, Commoner told me (almost 50 years later), Commoner pulled Laurence by the shoulder and gruffly said: “Bill, stand up!” At which point the two led the standing ovation—giving credence to the story Laurence had already filed.
It’s a fun little anecdote, and Commoner told it to me at least twice. But I was reminded of it this week while preparing to discuss and have students research the relationship between science, journalism, and the public.
The tragic mercury poisoning epidemic at Minamata, Japan, serves as one of the critical first chapters in the history of the Toxic Century; the mercury spill in Minamata Bay in the 1950s was among the first expressions of the new landscapes that typify it. From 1932 to 1967, the Chisso chemical plant dumped mercury into the bay, whose waters supplied the fish-heavy diet on which local villagers subsisted. By the early 1950s, a growing number of animals, and then residents, were afflicted with a mysterious disease that flummoxed medical experts. Most typically, the symptoms involved debilitating damage to the nervous system. While researchers at Kumamoto University were able to identify heavy metal poisoning, it took some time before they could point to methyl mercury with confidence. (Minamata disease symptoms were first observed in humans in 1953; in 1959, studies definitively concluded that methyl mercury was the source.)
Uncertainty ruled the early response. Hospitals quarantined sick patients, concerned that their ailment was contagious. “Whenever a new patient was identified,” Akio Mishima reported in Bitter Sea, “white-coated public health inspectors hurried to his or her house to disinfect every nook and cranny.” And still the fishing community ate the fish from the bay. Kibyo—strange illness—the locals said, when another neighbour showed symptoms. In historical circles, we resist talking about passive victims, but the hapless not-knowingness of the early stages of the Minamata outbreak can be framed in a manner that would impress Alfred Hitchcock.
Fear: the delay in discovering that acute mercury poisoning was the source of kibyo provoked fear of an ailment whose cause was unknown. Subsequent victims also expressed fears about dying. Another form of fear manifested itself in the cultural response to victimhood. As science pointed toward the bay and the fish therein as the source of Minamata disease, divisions arose within the community between the afflicted and the fishermen who depended upon the bay for their livelihood. Patients’ families seeking compensation suffered discrimination from their neighbours. This ostracism also stimulated new forms of fear.
I thought I’d written this post already. For more than a year I have been organizing my research agenda around the Toxic Century—a period, post-World War II, in which a host of toxic chemicals proliferated in the physical environment and created a series of health concerns. Here is my introductory summary from a grant proposal submitted last year:
We live in a toxic century. Each of us is a walking, breathing artifact of humanity’s toxic trespasses into nature. Unwittingly or not, we are all carrying a chemical cocktail in our blood, our bones, and our tissue, which constitutes the problematic legacy of persistent organic pollutants. This project is a history of that century from within, where “within” refers to the fact that we are still living in the toxic century—it begins after World War II—but also that this is an embodied history, which explores the history of the toxins we carry around inside us.
Persistent organic pollutants, such as synthetic pesticides, plastics, and PCBs, defy environmental degradation. As a result they pose considerable risks to human and environmental health insofar as they are able to move great distances from their points of origin and because they tend to magnify up the food chain and accumulate in human and animal tissue. They are a by-product of the chemical revolution that began at the end of the 19th century and proliferated in the marketplace in the years immediately following World War II. As carcinogens and endocrine disruptors, persistent organic pollutants have become the ominous centrepiece of the global toxic story that continues to haunt us.
The toxic century refers to the contamination of the entire planet. The synthetic chemicals defining this century have become a ubiquitous feature of the human footprint on the global landscape. More than 350 of them have been identified in the Great Lakes, where they would persist, even if their emission were halted tomorrow. They also have demonstrated a distinct capacity to travel over great distances in waterways, in the atmosphere, in our mobile bodies. Multiple chlorinated chemical by-products have been located in measurable quantities in the Canadian Arctic and over the Atlantic Ocean, for example, thousands of kilometers from their point of manufacture.
As a history of persistent organic pollutants and their science in a global context, this project first explores the manufacture and proliferation of toxic chemicals before concentrating on the post-World War II environmental science that raised alarms about their threats to human health and ecological integrity. In this manner, the project merges environmental politics with public health and toxicology to uncover the scale and scope of our toxic crisis, putting special emphasis on the emergence of environmental toxicology as a hybrid discipline designed to confront the uncertainty that has driven so much of the recent history of chemical harm. And it helps readers understand that, since World War II, a variety of military and industrial practices have introduced new chemicals into the environment and into our bodies, many of which pose serious health risks and have wrought damage to the physical environment, the extent of which we do not even know. This project aims to ensure that even if the damage remains uncertain, our understanding of the history that produced these problems—and the history of efforts to repair them—should not.
Over the past year, I moved away from the idea of drafting a project on the Toxic Century writ large. Instead, my interest in toxic fear is an avenue of inquiry within this framework. Further, the idea of telling “history from within” provides a context for linking the Toxic Century to my other interests in the history of the future. Another angle I mean to pursue involves investigating the history of disaster science, which explicitly links toxics and the future around ideas of planning and anticipating environmental contamination.
I keep coming back to the idea of uncertainty. It’s an omnipresent feature of the mercury project. Uncertainty, I think, is also at the heart of how toxic fear manifests itself. We’re afraid of what we don’t know—or don’t understand. And, yet, chemical pollution demands that we act quickly, and sometimes with incomplete information about the nature of the contaminant’s threat. So when uncertainty prevails, how do you develop baseline regulation? In the aftermath of the mercury poisoning epidemic at Minamata, national and global health agencies raced to identify acceptable exposure limits for mercury. These efforts were complicated by mercury’s ubiquity in industry and—scientists discovered—throughout the environment. As various organizations introduced reference dose recommendations that erred on the side of caution to accommodate unknowns in the available data (such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes), it became glaringly apparent that these preliminary numbers were not nearly conservative enough.
My focal point is the politics of establishing a reference dose for mercury and the manner in which uncertainty rests at the heart of this problem. The reference dose builds in a standard uncertainty factor to represent unknowns in the available data—such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes. The crux of the problem is establishing a regulatory line between safe and unsafe levels of mercury in human bodies—and doing that without relying on a trial-and-error approach.
I want to argue that mercury has a distinctive place in the ecosystem of quantifying chemical hazards, due in no small measure to the manner in which it impressed itself through a series of acute poisoning epidemics during the latter half of the twentieth century. But also in terms of how it was measured. The weak mortar that holds this presentation together is the tension between the competing uses of toxicological research. Where the scientific endeavour seeks to identify acceptable parameters for chemical risk, legislative demands put scientific findings in conversation with competing economic and political imperatives.
To illustrate, consider the anecdote related by Nils-Erik Landell, reflecting on the Swedish mercury case of the 1960s. Sweden was the first developed country to locate widespread industrial mercury pollution in its water systems (this, of course, discounting the acute mercury poisoning case in Minamata, Japan). Landell recalls:
I was working at the Public Health Institute to get money for my education as a medical doctor … and my chief had written a toxicological evaluation of the maximum limit of mercury in fish. I saw it on his table, and he had written [the safe limit of mercury content in fish] 0.5 milligrams per kilogram of wet weight. The next day, the paper was still there on the table, but now I saw that he had rubbed it out and it was now 1.0 milligrams per kilogram. And I asked him why … and he said in Lake Vänern, the biggest lake in Sweden, the fishermen had pointed out that the fish had a concentration of 0.7, so he had to raise it to 1.0. And I understood that the evaluation of toxicology was not so sharp as it should be, but it was illustrative of the pressure from different companies and economic interests on the scientists.
As a reference point, the current EPA reference dose for mercury in fish is 0.1 µg/kg/day (there’s an interesting side-story here—maybe a post for another day).
To start, allow me to move away from mercury to discuss the broader history of the reference dose. Measuring the safety factor of chemicals is a feature of post-World War II environmental praxis. Starting in the United States, efforts to identify safe levels for new additives in foods in the mid-1950s prompted interest in articulating safe levels of acute and chronic exposure to harmful chemicals. The first recommendations came from two scientists at the US Food and Drug Administration. In 1954, Arnold Lehman and O. Garth Fitzhugh posited that animal toxicity tests could be extrapolated qualitatively to predict responses in humans, but that quantitative predictions were more problematic. To articulate safe levels of a given toxin, they proposed that the reference dose be evaluated by the following formula:
Reference Dose (RfD) = NOAEL / Uncertainty Factor, where NOAEL is the no-observed-adverse-effect level
Lehman and Fitzhugh set their uncertainty factor at a 100-fold margin. That is to say that exposure levels to harmful chemicals should be set a hundred times lower than the point at which no adverse effects had been observed in the laboratory. The justification for the 100-fold safety factor was traditionally interpreted as the product of two separate values, expressing default values to a magnitude of 10. The protocol worked on the assumption, first, that human beings were 10 times more sensitive than the test animal, and, second, that the variability of sensitivity within the human population could be managed within a 10-fold frame.
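As a back-of-the-envelope illustration, the Lehman–Fitzhugh arithmetic can be sketched in a few lines of Python (the NOAEL value below is hypothetical, chosen only to show the calculation, not drawn from any actual study):

```python
# Sketch of the Lehman-Fitzhugh reference dose calculation.
# The NOAEL value used at the bottom is hypothetical, for illustration only.

def reference_dose(noael, interspecies=10, intraspecies=10):
    """RfD = NOAEL / uncertainty factor.

    The default 100-fold factor is the product of two 10-fold assumptions:
    humans may be 10x more sensitive than the test animal, and sensitivity
    within the human population may vary by another factor of 10.
    """
    return noael / (interspecies * intraspecies)

# Suppose animal tests showed no adverse effects at 1.0 mg/kg/day:
print(reference_dose(1.0))  # 0.01 mg/kg/day
```

Additional uncertainty factors for further unknowns simply multiply into the denominator, which is why later, more cautious recommendations drive the acceptable dose lower still.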
The fundamental premise of the reference dose, as Lehman and Fitzhugh conceived it, was that it was designed to address the untidiness of extrapolating animal data and applying them to human populations outside the lab. In effect, the initial 100-fold reference point was arbitrary, without any real quantitative basis for or against it. It is nonetheless a principle that has stood up to more recent scientific scrutiny, and variants of it remain in practice sixty years later.
To mercury. Though mercury’s entry into the toxic century occurred at Minamata, it is the Swedish case study that galvanized growing interest in establishing a reference dose for mercury exposure. The Minamata case was the result of very specific mercury emissions into the bay. A combination of not looking further for mercury in the environment and broader disinterest in international circles meant that much of the Japanese research was not revisited until the 1970s, when mercury was accepted as a ubiquitous environmental contaminant with universal reach. There was also some delay in identifying mercury as the source. In the mid-1960s, Swedes found mercury prevalent in wild birds—a product of seed grain coated with mercury for its fungicidal properties—and, subsequently, throughout their water systems, through a variety of industrial uses. Swedish concerns over an appropriate reference dose for mercury rested on the hypothetical. They had discovered mercury, but had not experienced any cases of mercury poisoning. So what was the threshold? Their analyses debated the merits of measuring mercury content in dry or wet weight of fish, measuring potential threats to the fishing industry, and determining social and individual risks associated with mercury exposure.
But if the reference dose studies in Sweden were based on conjecture, mercury’s neurotoxic potential was realized in Iraq in 1972. Widespread poisoning resulted after a mishandled supply of mercury-coated Wonder Wheat arrived too late from Mexico to be planted. Desperate, hungry farmers started making homemade bread from the seed grain. The seeds had been dyed pink to warn that they had been treated with hazardous chemicals, but farmers assumed that washing off the dye also removed the mercury. Numbers on the severity of the mercury epidemic vary drastically. Official, Ba’athist counts suggest 4500 victims; more recent, independent observers estimate at least ten times that number.
Amidst the chaos and calamity, the Iraqi case provided a critical opportunity to measure mercury exposures in human subjects. Note that whereas the Swedes were preoccupied with measuring mercury content in fish, the new evaluations could be rendered more precise by disregarding the first 10-fold protocol—effectively eliminating the interspecies uncertainty factor and getting rid of the middle-fish. Put another way, where Lehman and Fitzhugh were addressing uncertainty factors as part of a qualitative analysis of potential risk, data derived from Iraq could engage a more quantitative approach. As a result, numerous national and international agencies—the World Health Organization and the US Food and Drug Administration foremost among them—collected data from mercury victims in the provinces around Baghdad. These studies subsequently served as the cornerstone for numerous national and international recommendations for acceptable mercury exposure over the next 25 years.
During the 1980s, however, researchers in Europe and in the United States raised concerns about the validity of the data. The measurements taken in Iraq stemmed from acute mercury poisoning—the rapid consumption of dangerously high levels of mercury. Were these findings—and the limits they proposed—consistent with the much more common chronic, low-level exposure? If mercury-contaminated fish was part of a regular diet over a longer period of time, how would mercury behave and what would be the epidemiological effects?
The first project, conducted by an international team based at Harvard, undertook an assessment of possible brain-function impairment in children due to prenatal exposure to mercury when the mothers’ diet was high in seafood. They selected as their case study the small communities of the Faroe Islands in order to examine a traditional population that ate some fish and occasionally feasted on mercury-contaminated whales. I’ll leave out the specifics of the study, but the authors found that high levels of mercury passed from mother to child in utero produced irreversible impairment to specific brain functions in the children. By age 7, the 614 children with the most complete mercury-exposure data had lower scores in 8 of 16 tests of language, memory, and attention, suggesting that low-level mercury exposure caused neurological problems.
At roughly the same time, a team of researchers at the University of Rochester Medical Center carried out mental and motor tests on 9-year-old children born on the Seychelles Islands. The study, begun in 1989, looked for an association between mercury exposure and behavior, motor skills, memory, and thinking in 779 children born to mothers who averaged a dozen fish meals a week. Around age 9, higher mercury exposure was associated with two test results. Boys, but not girls, were slower at one movement test, but only when using their less capable hand. Boys and girls exposed to more mercury were rated as less hyperactive by their teachers. The authors concluded, “These data do not support the hypothesis that there is a neurodevelopmental risk from prenatal methylmercury exposure resulting solely from ocean fish consumption.” So while the Faroes study indicated cause for concern about low-level mercury exposure through ocean fish consumption, the Seychelles study exonerated mercury. To complicate matters, a third study in New Zealand, which followed the Seychelles methodology, identified mercury risk more consistent with the Faroes study.
By way of exit strategy, let me conclude by situating talk of reference doses in its larger context. Interest in and analysis of mercury pollution and its acceptable limits constitute part of the transformation of global environmentalism after World War II. Put very roughly, prior to 1945 concern for the environment consisted of protecting nature from the onslaught of civilization; after 1945 this concern—in actions and in rhetoric—shifted to protecting civilization from itself. The environmental lexicon supports this notion. New vocabulary—bioaccumulation, biomagnification, environmental risk, chemical hazard—became prevalent, transforming our environmental engagement. Similar transformations took place within toxicological vocabularies. Terms such as environmental toxicology, toxicokinetics, and toxicodynamics suggest that specialized and nonspecialized forms of language use evolved during the second half of the twentieth century. None of this should come as a surprise, but it adds a layer of complexity to the traditional, post-materialist arguments that have typically explained the post-war environmental transformation.
The struggle for precision comes at another price, however. This bodily turn in environmental thinking has understandably shifted the gaze of environmental monitoring from the ecosystem to the body. What happens “out there,” ironically, matters less than what happens “in here.” And that fear over public health risks has galvanized a more pressing need for scientific knowledge and political action—the interaction between the two breeding a landscape of new, reactionary or crisis disciplines to make sense of environmental hazards. The fact that policy moves faster than science—and thereby shapes the practice of knowledge gathering and its place in policymaking—has historically constituted one of the primary obstacles in the struggle for epistemic clarity when articulating threshold levels for mercury exposure. In somewhat related news, I received a copy of Frederick Rowe Davis’s book, Banned: A History of Pesticides and the Science of Toxicology, the other day. I have yet to get beyond the first chapter, but I look forward to seeing how he treats the messy politics of environmental toxicology—and especially the relationship between science and policy.
Lest this discussion seem more at home in the histories of science and policy, let me assert a place for it in environmental history as well. Mercury is a naturally occurring feature of the physical environment, but human activities have increased the amount of mercury in circulation beyond any quantities that could ever be considered normal. Atmospheric levels are seven times higher and ocean-surface levels are almost six times higher than they were in 2000 BC. Half of that increase has occurred since 1950, during the toxic century. In effect, human-industrial practices provoked and set in motion the need for establishing a reference dose for mercury. But this is also a story grounded in place—or, rather, places. While the preliminary history of mercury’s reference dose took place in laboratories, it was prompted by the discovery that mercury was present in significant quantities in various specific places. Similarly, with the advent of the acute poisoning cases in Iraq in the early 1970s, reference dose studies left the lab to attend to mercury in the field, thereby transforming the nature and parameters of knowledge construction. In so doing, they invite re-readings of how we might tell stories about nature and the numbers we use to make sense of them.
A dirty secret to start: course preparation is never as smooth as one would like. Behind in my work, I needed a big body of text to run through data visualization tools, so I turned to my dissertation, which I still had on my computer as a PDF. The work consisted of roughly 100,000 words—10,644 unique words. Modest for big-data analysis, but sufficient for sharing with students in order to show them how digital tools can be used in historical analysis. Here’s a word cloud of the dissertation as a whole:
At a quick glance, this looks like a decent rendition of the work and its points of emphasis. But word clouds are simply snapshots in time and don’t provide any kind of chronological information. A good starting point, but limited. From here, I took the same text to voyant-tools.org to show my students how we could get under the hood a little more. The results surprised me a little. Not a lot, after I thought about it, but Voyant revealed some interesting evolutions within the text. Compare the relative and raw frequencies of my use of the words “science” and “environmental” throughout the dissertation in the images below.
About halfway through the dissertation, there is a pretty clean transition from the history of science to environmental history. This is consistent with the dissertation’s structure. The first two chapters engage Commoner’s participation in a number of scientific debates and his emergence as a scientist-activist. These chapters place heavy emphasis on scientists and their social responsibility, and investigate concerns over nuclear fallout (an issue that Commoner would later recall made him an environmentalist). The third chapter considers the Age of Ecology and scientists as public intellectuals in the developing environmental movement. This is the point where the blue line starts to climb and the green line drops off. Eventually, I start to focus on the environmental movement as a whole and on Commoner as an intellectual leader within that movement rather than as a scientist.
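Under the hood, Voyant’s trend lines reduce to counting tokens within equal slices of the text. A minimal sketch of the idea, with a crude tokenizer of my own devising (this is an illustration, not Voyant’s actual implementation):

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a crude stand-in for a real tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

def segment_frequencies(text, term, segments=10):
    """Relative frequency of `term` in each of `segments` equal slices."""
    words = tokenize(text)
    size = max(1, len(words) // segments)
    freqs = []
    for i in range(segments):
        chunk = words[i * size:(i + 1) * size]
        counts = Counter(chunk)
        freqs.append(counts[term] / len(chunk) if chunk else 0.0)
    return freqs

# Toy text standing in for the dissertation:
text = "science science environmental science environmental environmental"
print(segment_frequencies(text, "science", segments=2))
# frequency of "science" falls from the first half to the second
```

Plotting `segment_frequencies` for “science” and “environmental” against the segment index is essentially what the Voyant trend graphs show.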
On a lazy morning—and buoyed by having played with some similar searches recently—I thought I could quickly pull Commoner references in The New York Times to see if I could draw any comparisons between my work and the primary source hits. Again: this is hardly a comprehensive or satisfactory methodology, but I think it provides sufficient material for working with undergraduate students as a means of showing them how historians might visualize and analyze bigger chunks of information.
“Barry Commoner” AND (science OR environment)
My search turned up 252 articles. I elected not to use TV or radio guide references, and a quick eye-test of article titles eliminated a number of non-relevant pieces, reducing the total to 151. Too small to be a worthwhile dataset on its own, but the articles totalled roughly 200,000 words, twice the length of my dissertation.
Here is the chronological distribution of the original search.
Not surprisingly, Commoner’s role as an environmental leader and outspoken activist reaches its apogee in the 1970s. His continuing work, his return to New York, and his presidential campaign likely contributed to his ongoing presence in the 1980s, even if he had technically “retired.”
Breaking up the newspaper findings into three sections—1950-1969, the 1970s, and the 1980s—the resulting clouds offer a story that is somewhat consistent with the Voyant trajectory shown above.
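The period clouds amount to bucketing the articles by era and counting terms within each bucket. A rough sketch of that bookkeeping (the article snippets below are invented placeholders, not the actual Times data):

```python
from collections import Counter, defaultdict

# (year, text) pairs stand in for the downloaded articles.
articles = [
    (1958, "virus protein dna science"),
    (1971, "environment energy pollution"),
    (1980, "carter reagan campaign queens"),
]

def period(year):
    """Bucket a year into the three periods used for the clouds."""
    if year < 1970:
        return "1950-1969"
    return "1970s" if year < 1980 else "1980s"

# One term counter per period; each counter feeds one word cloud.
clouds = defaultdict(Counter)
for year, text in articles:
    clouds[period(year)].update(text.split())

for label, counts in sorted(clouds.items()):
    print(label, counts.most_common(3))
```

Feeding each period’s counter to a word-cloud generator reproduces the three clouds discussed below.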
The first cloud reflects Commoner’s work in the 1950s and 1960s as a biologist working on the Tobacco Mosaic Virus (for which he won the Newcomb Cleveland Award from the AAAS). This work put Commoner within a ring of biologists informed about the developing events around heredity and the Watson-Crick discovery of DNA’s double helix. (I should write about Commoner’s response to molecular biology at some point.) “DNA,” “protein,” and “virus” suggest this emphasis in the newspaper literature (“life,” too).
Another running theme in the newspaper articles and in the early stages of my dissertation is the treatment of the social aspects of science. So, too, is Commoner’s outspoken opposition to funding for space travel, which he saw as a disconcerting expression of the military-industrial complex and the Cold War arms race.
This first cloud also shows the beginning of environmental issues with “water” and some others. What else? This analysis is roughly consistent with the narrative I presented in the first three chapters of my dissertation/book (phew!).
Moving to the 1970s:
This second cloud shows a marked decline in “science,” “scientist,” and “university,” which suggests Commoner’s ascendance in environmental circles and his standing as a public intellectual.
In the third cloud, note the emphasis on “Carter” and “Reagan.” Perhaps the Reagan reference is not so surprising, but note that a goodly number of the Commoner references in the 1980s came from 1980 during Commoner’s presidential candidacy on the Citizens’ Party ticket (Harris refers to Commoner’s vice-presidential candidate, LaDonna Harris). The “Queens” reference is also indicative of Commoner’s retirement from Washington University in St. Louis and his move to CUNY Queens College (a return to his native New York City). Given my recent post, it’s also interesting to see “toxic” (in the bottom right corner) present in the 1980s.
One might also identify a change in environmental themes: “Atomic Energy Commission,” “atomic,” and “radiation” in the 1950s and 1960s; “energy” in the 1970s; “recycling” and “waste” in the 1980s. “Environment”/“environmental” grow steadily in each word cloud. Clearly I traced this evolution in my dissertation and book—the benefit of looking backwards. And more. Again: limited as they are, I think clouds like these provide students with an interesting departure point for looking at large amounts of information, thinking about what might be present, and asking questions that will shape subsequent research. Play along: in the comments below, what evolving trends can we infer from the three newspaper clouds? What isn’t present, or surprisingly underrepresented?
As I let this blog slide over the past several months, I realize I also failed to report on the “History for a Sustainable Future” book series, which published its first titles in 2014. Editing the series is a new experience, but I have been especially grateful for the support and behind-the-scenes work of friends and colleagues Peter Alagona, Benjamin Cohen, and Adam Sowards, who make up the series’ editorial board. Acquisitions editor Clay Morgan retired from the MIT Press in January, but he was instrumental in getting the series off the ground, and he has left us in Beth Clevenger’s very capable hands. We look forward to growing the series, and remain open to inquiries and book proposals.
The first book, by Derek Wall, was published in March. Wall is an English politician and member of the Green Party of England and Wales. He is also an Associate Lecturer in the Department of Politics at Goldsmiths College, University of London. Among his books are The No-Nonsense Guide to Green Politics and The Rise of the Green Left.
According to the MIT Press site’s overview:
The history of the commons—jointly owned land or other resources such as fisheries or forests set aside for public use—provides a useful context for current debates over sustainability and how we can act as “good ancestors.” In this book, Derek Wall considers the commons from antiquity to the present day, as an idea, an ecological space, an economic abstraction, and a management practice. He argues that the commons should be viewed neither as a “tragedy” of mismanagement (as the biologist Garrett Hardin wrote in 1968) nor as a panacea for solving environmental problems. Instead, Wall sees the commons as a particular form of property ownership, arguing that property rights are essential to understanding sustainability. How we use the land and its resources offers insights into how we value the environment.
After defining the commons and describing the arguments of Hardin’s influential article and Elinor Ostrom’s more recent work on the commons, Wall offers historical case studies from the United States, England, India, and Mongolia. He examines the power of cultural norms to maintain the commons; political conflicts over the commons; and how commons have protected, or failed to protect, ecosystems. Combining intellectual and material histories with an eye on contemporary debates, Wall offers an applied history that will interest academics, activists, and policy makers.
The second book is from Frank Uekötter, a reader in Environmental Humanities at the University of Birmingham. It followed hard on the heels of Wall’s book, and appeared in May. In addition to the book in our series, Uekötter is the author of The Green and the Brown: A History of Conservation in Nazi Germany and The Age of Smoke: Environmental Policy in Germany and the United States, 1880–1970.
Again, from MIT Press:
Germany enjoys an enviably green reputation. Environmentalists in other countries applaud its strict environmental laws, its world-class green technology firms, its phase-out of nuclear power, and its influential Green Party. Germans are proud of these achievements, and environmentalism has become part of the German national identity. In The Greenest Nation? Frank Uekötter offers an overview of the evolution of German environmentalism since the late nineteenth century. He discusses, among other things, early efforts at nature protection and urban sanitation, the Nazi experience, and civic mobilization in the postwar years. He shows that much of Germany’s green reputation rests on accomplishments of the 1980s, and emphasizes the mutually supportive roles of environmental nongovernmental organizations, corporations, and the state.
Uekötter looks at environmentalism in terms of civic activism, government policy, and culture and life, eschewing the usual focus on politics, prophets, and NGOs. He also views German environmentalism in an international context, tracing transnational networks of environmental issues and actions and discussing German achievements in relation to global trends. Bringing his discussion up to the present, he shows the influence of the past on today’s environmental decisions. As environmentalism is wrestling with the challenges of the twenty-first century, Germany could provide a laboratory for the rest of the world.
And there’s more to come. A few titles are in the pipeline and some stimulating conversations with prospective authors promise more in the near future. On a personal note, I am finding nice satisfaction from indirectly contributing to my field by playing a (very) small part in bringing these works to press. And I look forward to announcing more new titles soon (and more promptly).
For more on Derek Wall’s history of the commons, see the MIT Press link.
Similarly, for Frank Uekötter’s history of German environmentalism, link here.
This post draws on two lines of work. This fall, I have been introducing students to some (very) basic digital visualization techniques as a means of training them to ask historical questions. I have also been thinking further about the history of toxic fear—and whether toxic chemicals produced a distinct kind of fear during the Toxic Century. In A New Species of Trouble, Kai Erikson argues that the new, silent toxins of the post-World War II period “scare human beings in new and special ways, … [and] … elicit an uncanny fear in us” (144). I use Erikson as a departure point, and propose that it is time to examine toxic fear through an historical lens.
These two lines of work came together this week in my first year course on the Toxic Century (HIST 1EE3: The Historical Roots of Contemporary Issues). Working in groups of four or five, students have been tasked with identifying an appropriate keyword search, collecting ~500 newspaper articles in digital form, compiling them into a single, text-searchable file, and running them through some web-based reading and analysis tools. Groups were assigned a specific newspaper (for ease, we limited searches to 1950-1980 in The New York Times, The Washington Post, The Wall Street Journal, and The Globe and Mail, all readily available through McMaster University’s subscription to “Proquest Historical Newspapers”). ~500 articles constitutes a fairly small data set, but I am more interested in teaching the method and process than expecting very specific or accurate results.
Their assignment involves developing a series of word clouds to chart change and continuity in the Toxic Century’s vocabulary, and to see whether their analysis can identify trends in that vocabulary over time. Each group will create a chronological suite of word clouds (1950s, 1960s, 1970s, for example) in order to “see” the articles they have collected. I recommended that students play with Wordle for simple word cloud generation: I find it easy to use and it seems to generate some of the more aesthetically pleasing clusters. To better contextualize and quantify their results, I urged that they run the same material through Voyant Tools, which offers some more sophisticated options. Building on this collaborative investigation and analysis, students will co-write a short paper on their findings. I should stress that these papers do not mean to offer anything but a bird’s-eye view of a singular primary source. The exercise is less about acquiring any conclusive historical understanding of a particular time or event; instead, I introduce this process as a method of starting inquiry into a new topic (my third-year “Social History of Truth” class is doing something similar, but with scientific journals).
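Under the hood, tools like Wordle and Voyant begin from a simple term-frequency count over the corpus. For the curious, here is a minimal sketch of that first step in Python; the stopword list and sample sentence are my own placeholders, not part of the assignment or of either tool:

```python
import re
from collections import Counter

# A few common stopwords; real tools use much longer lists.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was", "for"}

def term_frequencies(text, top_n=10):
    """Count word frequencies, ignoring case, punctuation, and stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

sample = "Toxic waste and toxic chemicals: the fear of toxic pollution grew."
print(term_frequencies(sample, top_n=3))
```

A word cloud is then just these counts rendered with font size proportional to frequency; the counting itself is the analytically interesting part.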
Which brings me back to toxic fear. To provide a mock example and walk the class through the assignment, I conducted a search for New York Times articles published between 1950 and 1990 that adhered to the following criteria:
[toxic AND (fear OR anxiety) AND (chemical OR pollution)]
The search parameters were far from perfect, and I had to “weed” out some articles that debated marijuana use. But a cursory scan of article titles suggested there was not too much noise—non-relevant results that would interfere with the data visualization. I added the 1980s, since we had covered Bhopal and Chernobyl already in lecture, and I thought it would be interesting to see if we could “see” American coverage of international crises. And it’s probably just as well that I did. Of the 729 articles that came back, 504 (69%) were from the 1980s. Another 39 were from 1990. Remove “fear OR anxiety” from the search:
[toxic AND (chemical OR pollution)]
and The New York Times yields 5657 articles (of which a still surprisingly high 3535—62%—are from the 1980s). I hadn’t done any real analysis yet, but already I was surprised. While some literature engages the Reagan administration’s deregulation as a catalyst for swelling registration in environmental organizations in the 1980s, I had typically associated fear of toxic chemicals with the Age of Ecology writ large. Yes: Rachel Carson’s Silent Spring featured in the 1960s findings—and maybe my search parameters were skewed to leave out issues surrounding radioactive fallout. However, if The New York Times is at all representative of American print media, it would seem as though the 1980s was the decade of environmental fear (and toxic issues in general). Casting a wider net would be worthwhile. But even the more conservative Wall Street Journal, which returned only 168 hits for the first search including fear, had 122 of them (73%) come from the 1980s. Remove “fear OR anxiety” and you get 858 of 1240 articles (69%) from the 1980s.
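The percentages above are simple shares of total hits, easily reproduced; the counts below are the ones reported in the text:

```python
def decade_share(hits_1980s, total):
    """Percentage of total hits falling in the 1980s, rounded to the nearest whole percent."""
    return round(100 * hits_1980s / total)

# NYT, with "fear OR anxiety": 504 of 729 hits
print(decade_share(504, 729))    # 69
# NYT, without: 3535 of 5657
print(decade_share(3535, 5657))  # 62
# WSJ, with: 122 of 168; without: 858 of 1240
print(decade_share(122, 168))    # 73
print(decade_share(858, 1240))   # 69
```

The point of the arithmetic is the consistency: across both papers and both queries, roughly two-thirds or more of the hits cluster in a single decade.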
Maybe this is simply media hype and marks a lexical transition in print journalism, but I’m not so sure. I have written about the rise and fall of the environmental jeremiad during the 1960s and 1970s, and argued that the effectiveness of alarmist rhetoric subsided during the 1970s. So it would seem out of place for media hyperbole on environmental fear to crescendo so dramatically in the 1980s. Something to investigate, though. On the one hand, perhaps this is just a sign of mainstream media catching up with a slow-burning fire in American political thought. But it’s also possible that these results are not wholly surprising, even if the historical literature’s interest in the Age of Ecology tapers off somewhat after the energy crisis. We talk about the environmental crisis as a post-World War II phenomenon, best articulated in Barry Commoner’s social activism, in Rachel Carson’s influence, and in the emergence of a number of public health concerns in the 1960s. And we typically regard the 1970s as a period of expansive environmental regulation in the United States—and, globally, as a key moment in the rise of contemporary global environmental governance. That’s the environmental crisis and its socio-political response. But we also know that the late 1970s and 1980s were punctuated by a series of intense environmental crises: Love Canal, Three Mile Island, Times Beach, Bhopal, Chernobyl. And perhaps these events prompted a more palpable recognition of interest and fear and anxiety surrounding toxins in the environment. Maybe I shouldn’t have been quite so surprised by the abundance of 1980s hits in my search.
Nevertheless, I spent yesterday afternoon focusing my efforts on the 1980s. This was quick and lazy work, and I only gave the files the most cursory of scrubs (and not satisfactorily: eliminating “New York Times” from the word clouds, for example, had the unhappy effect of also stripping the “Times” from “Times Beach”). And I made the mistake of clearing “the” out of the text before I put it into Voyant, which left “there” and “their” prominent in some of the clouds (Wordle automatically leaves out smaller words). As it happened, I had roughly 100 articles for every two years. I’ll spare you the detailed, more quantified analysis rendered in Voyant Tools. Here’s what each Wordle-generated word cloud looked like.
It’s entirely likely that I’m working with too small a data set and too narrow a timeframe. But even here, I think there are opportunities for students to interpret and inquire. Dioxin features in 1982-1983 as a result of the Times Beach crisis; Bhopal and Union Carbide are (tragically) prominent in the 1984-1985 cloud. More useful for the undergraduate classroom is the opportunity to compare general topics, such as waste, water, and air. One can do a little of that with a preliminary eye- or smell-test with the clouds above. But this is where Voyant becomes a much more effective tool. It is possible to quantify and contextualize reference terms and compare their chronologies through the text. For instance, my New York Times articles for 1982-1983 contained almost 120,000 words (16,439 unique words). “Toxic” was present 270 times, “waste” 250 times, “health” 196 times, and “chemical” 180 times. All well and good.
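To make raw counts like these comparable across corpora of different sizes, Voyant reports relative frequencies; the same normalization is easy to do by hand. A quick sketch, using the 1982-1983 figures above (the function name is mine):

```python
def per_ten_thousand(count, corpus_size):
    """Relative frequency: occurrences of a term per 10,000 words of corpus."""
    return round(10_000 * count / corpus_size, 1)

CORPUS = 120_000  # approximate word count of the 1982-1983 NYT sample
for term, count in [("toxic", 270), ("waste", 250), ("health", 196), ("chemical", 180)]:
    print(term, per_ten_thousand(count, CORPUS))
# toxic 22.5, waste 20.8, health 16.3, chemical 15.0
```

Those per-10,000 rates are what would allow a fair comparison between, say, this two-year NYT sample and a much larger Wall Street Journal corpus.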
But we can use Voyant to dig deeper and examine trends in the usage. For example, “dioxin” occurred 174 times. According to Google Books’ ngram generator, interest in dioxin increased through the 1980s and 1990s:
Back to Voyant, the chronology of dioxin references in my text from 1982-1983 looks like this:
The two spikes correlate with the discovery of dioxin and then the town’s evacuation. Which is to say that dioxin’s featuring in the 1982-1983 word cloud has a lot to do with Times Beach emerging as a national story. I’m learning with my students to become more proficient with Voyant, but it’s neat to play with. Voyant makes it possible to fiddle with the number of segments and analyze relative (rather than raw) frequencies. It is also possible to compare trends in terms:
That example is probably not instructive: since “chemical” was one of my search terms—and seems to experience mild spikes along with “dioxin”—I’m not sure what I’m learning here. And neither method shown here organizes the newspaper articles into an accurate chronology. The chronology is dictated by the raw number of articles and not divided into month-by-month sections, which might yield a different perspective. To wit:
Breaking the trend analysis into 25 segments (roughly one point for each month), it’s apparent that dioxin features too early (the story broke in December 1982). So dumping large amounts of data into a reader does not necessarily return complete information for the historian. I could conduct a raw count of articles by month, of course, to determine the extent to which Times Beach dominated other issues during this two-year sample (it did). The Wordle cloud also hints at some of those issues—Bhopal, above in 1984-1985, for example—but it does not indicate whether “dioxin” or “Bhopal” was used repeatedly within a small subset of articles or whether their prevalence is the result of a larger number of articles (or both).
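For readers unfamiliar with how Voyant builds those trend graphs: it chops the corpus into equal segments and counts the term in each, which is why segment boundaries follow the flow of text rather than the calendar. A rough sketch of that segmentation (the toy corpus and function name are mine):

```python
def trend(words, term, segments=25):
    """Split a word list into equal segments and count `term` in each.

    Any remainder words at the end of the list are dropped, as this is
    only a sketch of how segment-based trend counts work.
    """
    size = max(1, len(words) // segments)
    return [words[i:i + size].count(term)
            for i in range(0, size * segments, size)]

# Toy corpus: "dioxin" appears only in the second half of the text stream.
corpus = ["waste"] * 50 + ["dioxin"] * 50
print(trend(corpus, "dioxin", segments=4))  # [0, 0, 25, 25]
```

This also makes the distortion I noticed concrete: if the articles are concatenated out of strict chronological order, or if article lengths vary widely, a term can “feature too early” in the segment sequence even though it arrived late in calendar time.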
But, still: too small and narrow a data set (though, arguably, this is a pragmatic start for in-class use at the undergraduate level). The work above could be bolstered with a range of newspapers that cover the United States. Having eliminated “New” and “York,” there are no references to city and state, though “Jersey” is present, and suggests regional coverage/emphasis of toxic issues. Perhaps midwestern newspapers such as The St. Louis Post-Dispatch or Chicago Tribune would return a greater number of relative hits (and emphasis) on Times Beach, Missouri, for example. And while adding to the raw data would be interesting, separating it geographically might also turn up some interesting variations in emphasis. Could we compare west coast reporting against east coast reporting, and what differences might be present? Of course, none of this precludes actually reading stuff! But it’s an interesting departure point that generates new and different questions. My less period-specific reading indicates that toxic fear exists and that it is galvanized by uncertainty and/or a lack of information. If that holds true under further and deeper scrutiny, what does that tell us about the 1980s if fear and anxiety increased? One knee-jerk reaction is to suggest that mass deregulation in the Reagan 1980s prompted less understanding and control over environmental problems. But Love Canal and Three Mile Island definitely fit into this story and they predate the Reagan administration. Perhaps this is a Superfund story—and the very idea of Superfund was enough to generate more toxic fear? Or, simply, the proliferation of crises prompted a distinct wave of environmental angst and fear.
Takeaway conclusions: we need to do more work that investigates environmental history in the 1980s. As I note ever more grey on my chin in the mornings, I’m reminded that the 1980s are receding in the rearview mirror, and it’s time we put that decade under the microscope. In American and global contexts, we know the basic story, but that narrative needs to be picked apart and complicated. Some good literature exists in environmental justice scholarship—and we should continue to expand on that—but beyond it we have little to work with. The 1980s constitute a fascinating decade for environmental regulation agencies the world over. After the growth and (relative) successes of the 1970s, what happened in the 1980s? There’s also a distinct dearth of historical work on dioxin (Agent Orange and Vietnam notwithstanding).
I should emphasize that the above discussion of data visualization is (1) a teaching experiment, and (2) not a quantum shift in historical research. So far, I like the assignment and am drawn to the possibilities associated with coaxing first-year students into collaborative research and discovery (which can be tricky in a big survey course). But I don’t yet know what the results will be. Moreover, I do not mean to suggest that digital scholarship will supplant traditional archival research. But I do think visualization has helped me to shift my focus from a broader timeframe to a more concentrated examination of the 1980s—and to ask questions about how and why fear and anxiety proliferated during that decade.
Edit: On further analysis, I suspect the problem above is that “toxic” is the limiting term. A non-discriminatory search for “fear” in The New York Times finds only a modest increase in the word’s frequency:
Compare with “toxic”:
Could “toxic” be the problem? According to the ngram (which doesn’t relate to the NYT searches in any tangible way), “toxic” increased steadily through the 1970s:
Removing “toxic” from the search parameters opens up some interesting perspectives, though. Searching for “pollution” AND “fear” changes the frequency of newspaper articles quite markedly.
But try again with “fear” AND “chemical,” and the trend indicates growth into the 1980s:
Does this make us less scared of pollution and more frightened by toxic chemicals? Or is this simply a shift in language? Or do our responses to environmental problems concentrate more specifically around toxic chemicals by the 1980s? And does this constitute some kind of evolution worth exploring in greater depth?