Renaming CBNS

I’ve written a bit about Commoner’s Center for the Biology of Natural Systems—and how it emerged in response to the popular realization that the 1960s environmental crisis defied or transcended traditional scientific disciplines. The Center’s goal was to think more broadly about what has become known as the science of the total environment.

But I raise this more as a place-marker. Last week I received an invitation to visit CBNS for its renaming. The new name will be the Barry Commoner Center for Health and the Environment, which sounds in line with Ralph Nader’s call for an Institute for Thought and Action in Commoner’s name.

Science, Conspiracy, & Journalism: A Cold War Anecdote

I’m currently teaching a third-year course on the history of truth. The course examines the historical mechanisms that contributed to the social production and consumption of knowledge over time. It interests itself in the construction of “matters of fact,” and in how scientific praxis emerged as the primary mode of knowledge authority in the modern world. It explores the cultural question of who could practice science, how the scientific method became ingrained as a means of forging consensus among scientists, and how scientific findings came to be adopted as truths by a more general public. More significantly, the course proposes to examine how these activities changed or evolved over time.

We read Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump and talked about Boyle’s literary technology and virtual witnessing as pillars of the new experimental science. Recently, I lectured on Robert Kohler’s Lords of the Fly as a corollary investigation of the experimental life, and I stressed Kohler’s discussion of the moral economy. Collaboration, trustworthiness, fraud, failure, and metaphors in science have featured throughout lectures and discussions. But I have had little opportunity to share anecdotes. Anecdotes can be fun.

Next week, I will be running a small module on science journalism in the twentieth century. I’m especially interested in themes surrounding science literacy and the media’s role as broker in communicating scientific information—translating it for a lay audience. In his classic essay, “Roots of the New Conservation Movement,” in Perspectives in American History 6 (1972), Donald Fleming talked about politico-scientists—scientists who were politically engaged (Barry Commoner, for one)—as part of a specialized fifth estate intent on informing the public. All this during a politically tense period in American history.

As a topic, it reminded me of a story Barry Commoner relayed to me during the oral histories I conducted with him. Let me start with the report written by William Laurence (the Pulitzer Prize-winning journalist—and one of our in-class subjects), which appeared in The New York Times on December 29, 1954.

Headline: “Scientist Decries Curb on Condon.”

In 1954, E. U. Condon was an elder statesman of American physics, a notable quantum physicist from the 1920s, and the outgoing President of the American Association for the Advancement of Science. After World War II, he had also suffered serious scrutiny from a subcommittee of the House Un-American Activities Committee. Condon had been particularly critical of imposed secrecy in science, and strongly advocated continued international scientific cooperation. On 1 March 1948, the subcommittee described Condon—at the time, the director of the National Bureau of Standards—as “one of the weakest links in our atomic security.” Condon was by no means a radical thinker, but he did believe that science only functioned properly in an open society. His AAAS election (in 1951) had been somewhat controversial, and by 1954 the label of “Communist” or “security risk” constituted a black mark. But turn your attention to the final paragraph: “Dr. Condon received an ovation as he rose to address his colleagues.”

Warren Weaver was a strong supporter of Condon’s (as his remarks above might attest). The young Barry Commoner as well. The story that Commoner told me involved this evening and the standing ovation as Condon retired from his role as President. At the conference, Commoner—who knew Laurence—invited Laurence to join him and others for dinner and drinks before the evening lecture. Because the conference was in California, the time difference was such that Laurence needed to file his story before dinner so that it could appear in the following day’s paper. He hadn’t filed his story yet, and asked Commoner how the membership would respond to the close of Condon’s term. Could vocal support be interpreted as political subversion in Cold War America? The ovation (reported) was hardly a certainty. Commoner assured his friend that there would be a standing ovation: file the story and come for a drink. Which Laurence did. The ovation was reported (if not printed) before it happened. Returning to the conference hall for the evening proceedings, Commoner walked Laurence to the front row of the auditorium to sit down. After Weaver spoke and introduced Condon, Commoner told me (almost 50 years later), he pulled Laurence by the shoulder and gruffly said: “Bill, stand up!” At which point the two led the standing ovation—giving credence to the story Laurence had already filed.

It’s a fun little anecdote, and Commoner told it to me at least twice. But I was reminded of it this week while preparing to discuss and have students research the relationship between science, journalism, and the public.

Uncertainty, Fear, & Mercury at Minamata: A Brief Overview

The tragic mercury poisoning epidemic at Minamata, Japan, serves as one of the critical first chapters in the history of the Toxic Century; the mercury spill in Minamata Bay in the 1950s constitutes one of the first expressions of the new landscapes that typify the era. From 1932 to 1967, the Chisso Chemical Plant dumped mercury into the bay, and local villagers subsisted on a fish-heavy diet drawn from its waters. By the early 1950s, a growing number of animals and then residents were afflicted with a mysterious disease that flummoxed medical experts. Most typically, the symptoms involved debilitating damage to the nervous system. While researchers at Kumamoto University were able to identify heavy metal poisoning, it took some time before they could point to methyl mercury with confidence. (Minamata disease symptoms were first observed in humans in 1953; in 1959, studies definitively concluded that methyl mercury was the source.)

Uncertainty ruled the early response. Hospitals quarantined sick patients, concerned that their ailment was contagious. “Whenever a new patient was identified,” Akio Mishima reported in Bitter Sea, “white-coated public health inspectors hurried to his or her house to disinfect every nook and cranny.” And still the fishing community ate the fish from the bay. Kibyo—strange illness—the locals said, when another neighbour showed symptoms. In historical circles, we resist talking about passive victims, but the hapless not-knowingness of the early stages of the Minamata outbreak can be framed in a manner that would impress Alfred Hitchcock.

Fear: the delay in discovering that acute mercury poisoning was the source of Kibyo left residents afraid of an ailment whose origin they did not know. Subsequent victims also expressed fears about dying. Another form of fear manifested itself in the cultural response to victimhood. As science pointed toward the bay and the fish therein as the source of Minamata disease, divisions arose within the community between the afflicted and the fishermen who depended upon the bay for their livelihood. Patients’ families seeking compensation suffered discrimination from their neighbours. This ostracism also stimulated new forms of fear.

The Toxic Century: An Organizing Principle

I thought I’d written this post already. For more than a year I have been organizing my research agenda around the Toxic Century—a period, post-World War II, in which a host of toxic chemicals proliferated throughout the physical environment and created a series of health concerns. My introductory summary in a grant proposal, submitted last year:

We live in a toxic century. Each of us is a walking, breathing artifact of humanity’s toxic trespasses into nature. Unwittingly or not, we are all carrying a chemical cocktail in our blood, our bones, and our tissue, which constitutes the problematic legacy of persistent organic pollutants. This project is a history of that century from within, where “within” refers to the fact that we are still living in the toxic century—it begins after World War II—but also that this is an embodied history, which explores the history of the toxins we carry around inside us.

Persistent organic pollutants, such as synthetic pesticides, plastics, and PCBs, defy environmental degradation. As a result they pose considerable risks to human and environmental health: they travel great distances from their points of origin, magnify up the food chain, and accumulate in human and animal tissue. They are a by-product of the chemical revolution that began at the end of the 19th century and proliferated in the marketplace in the years immediately following World War II. As carcinogens and endocrine disruptors, persistent organic pollutants have become the ominous centrepiece of the global toxic story that continues to haunt us.

The toxic century refers to the contamination of the entire planet. The synthetic chemicals defining this century have become a ubiquitous feature of the human footprint on the global landscape. More than 350 of them have been identified in the Great Lakes, where they would persist, even if their emission were halted tomorrow. They also have demonstrated a distinct capacity to travel over great distances in waterways, in the atmosphere, in our mobile bodies. Multiple chlorinated chemical by-products have been located in measurable quantities in the Canadian Arctic and over the Atlantic Ocean, for example, thousands of kilometers from their point of manufacture.

As a history of persistent organic pollutants and their science in a global context, this project first explores the manufacture and proliferation of toxic chemicals before concentrating on the post-World War II environmental science that raised alarms about their threats to human health and ecological integrity. In this manner, the project merges environmental politics with public health and toxicology to uncover the scale and scope of our toxic crisis, putting special emphasis on the emergence of environmental toxicology as a hybrid discipline designed to confront the uncertainty that has driven so much of the recent history of chemical harm. And it helps readers understand that, since World War II, a variety of military and industrial practices have introduced new chemicals into the environment and into our bodies, many of which pose serious health risks and have wrought damage to the physical environment, the extent of which we do not even know. This project aims to ensure that even if the damage remains uncertain, our understanding of the history that produced these problems—and the history of efforts to repair them—should not.

Over the past year, I moved away from the idea of drafting a project on the Toxic Century writ large. Instead, my interest in toxic fear is an avenue of inquiry within this framework. Further, the idea of telling “history from within” provides a context for linking the Toxic Century to my other interests in the history of the future. Another angle I mean to pursue involves investigating the history of disaster science, which explicitly links toxics and the future around ideas of planning and anticipating environmental contamination.

Uncertainty: Mercury & the Politics of the Reference Dose

I keep coming back to the idea of uncertainty. It’s an omnipresent feature of the mercury project. Uncertainty, I think, is also at the heart of how toxic fear manifests itself. We’re afraid of what we don’t know—or don’t understand. And, yet, chemical pollution demands that we act quickly, and sometimes with incomplete information about the nature of the contaminant’s threat. So when uncertainty prevails, how do you develop baseline regulation? In the aftermath of the mercury poisoning epidemic at Minamata, national and global health agencies raced to identify acceptable exposure limits for mercury. These efforts were complicated by mercury’s ubiquity in industry and—scientists discovered—throughout the environment. As various organizations introduced reference dose recommendations that erred on the side of caution to accommodate unknowns in the available data (such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes), it became glaringly apparent that these preliminary numbers were not nearly conservative enough.

My focal point is the politics of establishing a reference dose for mercury and the manner in which uncertainty rests at the heart of this problem. The reference dose incorporates a standard uncertainty factor, built in to represent unknowns in the available data—such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes. The crux of the problem is establishing a regulatory line between safe and unsafe levels of mercury in human bodies—and doing that without relying on a trial-and-error approach.

I want to argue that mercury has a distinctive place in the ecosystem of quantifying chemical hazards, due in no small measure to the manner in which it impressed itself through a series of acute poisoning epidemics during the latter half of the twentieth century. But also in terms of how it was measured. The weak mortar that holds this presentation together is the tension between the different uses to which toxicological research is put. Where the scientific endeavour seeks to identify acceptable parameters for chemical risk, legislative demands put scientific findings in conversation with competing economic and political imperatives.

To illustrate, consider an anecdote related by Nils-Erik Landell, reflecting on the Swedish mercury case of the 1960s. Sweden was the first developed country to locate widespread industrial mercury pollution in its water systems (this, of course, discounting the acute mercury poisoning case in Minamata, Japan). Landell recalls:

I was working at the Public Health Institute to get money for my education as a medical doctor … and my chief had written a toxicological evaluation of the maximum limit of mercury in fish. I saw it on his table, and he had written [the safe limit of mercury content in fish] 0.5 milligrams per kilogram of wet weight. The next day, the paper was still there on the table, but now I saw that he had rubbed it out and it was now 1.0 milligrams per kilogram. And I asked him why … and he said in Lake Vänern, the biggest lake in Sweden, the fishermen had pointed out that the fish had a concentration of 0.7, so he had to raise it to 1.0. And I understood that the evaluation of toxicology was not so sharp as it should be, but it was illustrative of the pressure from different companies and economic interests on the scientists.

As a reference point, the current EPA reference dose for methylmercury is 0.1 µg per kilogram of body weight per day (there’s an interesting side-story here—maybe a post for another day).

To start, allow me to move away from mercury to discuss the broader history of the reference dose. Measuring the safety factor of chemicals is a feature of post-World War II environmental praxis. Starting in the United States, efforts to identify safe levels for new additives in foods in the mid-1950s prompted interest in articulating safe levels of acute and chronic exposure to harmful chemicals. The first recommendations came from two scientists at the US Food and Drug Administration. In 1954, Arnold Lehman and O. Garth Fitzhugh posited that animal toxicity tests could be extrapolated qualitatively to predict responses in humans, but that quantitative predictions were more problematic. To articulate safe levels of a given toxin, they proposed that the reference dose be evaluated by the following formula:

Reference Dose (RfD) = NOAEL/Uncertainty Factor

Lehman and Fitzhugh set their uncertainty factor at a 100-fold margin. That is to say that exposure levels to harmful chemicals should be set a hundred times lower than the point at which no adverse effects had been observed in the laboratory. The justification for the 100-fold safety factor was traditionally interpreted as the product of two separate 10-fold default values. The protocol worked on the assumption, first, that human beings were 10 times more sensitive than the test animal, and, second, that the variability of sensitivity within the human population could be managed within a 10-fold frame.
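For readers who like to see the arithmetic laid out, the calculation can be sketched in a few lines of Python. The NOAEL value below is hypothetical, chosen purely for illustration:

```python
# Lehman and Fitzhugh's 1954 safety-factor calculation, sketched in Python.
# The NOAEL figure used here is hypothetical, for illustration only.

def reference_dose(noael, interspecies=10, intraspecies=10):
    """RfD = NOAEL / uncertainty factor, where the classic 100-fold
    factor is the product of two 10-fold default values."""
    return noael / (interspecies * intraspecies)

# Hypothetical animal study: no adverse effects observed at 5.0 mg/kg/day.
print(reference_dose(5.0))  # 0.05 mg/kg/day -- one hundred times below the NOAEL
```

The structure makes the logic of the safety margin explicit: shrink or expand either default value and the permitted exposure moves with it, which is exactly where the politics enters.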

The fundamental premise of the reference dose, as Lehman and Fitzhugh conceived it, was that it was designed to address the untidiness of extrapolating animal data and applying them to human populations outside the lab. In effect, the initial 100-fold reference point was arbitrary, without any real quantitative basis for or against it. It’s a principle that has stood up to more recent scientific scrutiny, and variants of it remain in practice sixty years later.

To mercury. Though mercury’s entry into the toxic century occurred at Minamata, it is the Swedish case study that galvanized growing interest in establishing a reference dose for mercury exposure. The Minamata case was the result of very specific mercury emissions into the bay. A combination of not looking further for mercury in the environment and broader disinterest in international circles meant that much of the Japanese research was not revisited until the 1970s, when mercury was accepted as a ubiquitous environmental contaminant with universal reach. There was also some delay in identifying mercury as the source. In the mid-1960s, Swedes found mercury prevalent in wild birds—a product of mercury-coated seed grain (fungicidal properties)—and, subsequently, throughout their water systems—through a variety of industrial uses. Swedish concerns over an appropriate reference dose for mercury rested on the hypothetical. They had discovered mercury, but had not experienced any cases of mercury poisoning. So what was the threshold? Their analyses debated the merits of measuring mercury content in dry or wet weight of fish, measuring potential threats to the fishing industry, and determining social and individual risks associated with mercury exposure.

But if the reference dose studies in Sweden were based on conjecture, mercury’s neurotoxic potential was realized in Iraq in 1972. Widespread poisoning resulted after a mishandled supply of mercury-coated Wonder Wheat arrived too late from Mexico to be planted. Desperate, hungry farmers started making homemade bread from the seed grain. The seeds had been dyed pink to warn that they had been treated with hazardous chemicals, but farmers assumed that washing off the dye also removed the mercury. Numbers on the severity of the mercury epidemic vary drastically. Official, Ba’athist counts suggest 4500 victims; more recent, independent observers estimate at least ten times that number.

Amidst the chaos and calamity, the Iraqi case provided a critical opportunity to measure mercury exposures in human subjects. Note that whereas the Swedes were occupied with measuring mercury content in fish, the new evaluations could be rendered more precise by disregarding the first 10-fold protocol, effectively by eliminating interspecies uncertainty factors—getting rid of the middle-fish. Put another way, where Lehman and Fitzhugh were addressing uncertainty factors as part of a qualitative analysis of potential risk, data derived from Iraq could engage a more quantitative approach. As a result, numerous national and international agencies—the World Health Organization and the US Food and Drug Administration foremost among them—collected data from mercury victims in the provinces around Baghdad. These studies subsequently served as the cornerstone for numerous national and international recommendations for acceptable mercury exposure for the next 25 years.

During the 1980s, however, researchers in Europe and in the United States raised concerns about the validity of the data. The measurements taken in Iraq stemmed from acute mercury poisoning—the rapid consumption of dangerously high levels of mercury. Were these findings—and the limits they proposed—consistent with the much more common chronic, low-level exposure? If mercury-contaminated fish was part of a regular diet over a longer period of time, how would mercury behave and what would be the epidemiological effects?

The first project, conducted by an international team based at Harvard, undertook an assessment of possible brain function impairment in adolescent children due to prenatal exposure to mercury when the mothers’ diet was high in seafood. They selected as their case study the small communities of the Faroe Islands to examine a traditional population that ate some fish, and occasionally feasted on mercury-contaminated whales. I’ll leave out the specifics of the study, but the authors found that high levels of mercury passed from mother to child in utero produced irreversible impairment to specific brain functions in the children. By age 7, the 614 children with the most complete mercury-exposure data had lower scores in 8 of 16 tests of language, memory, and attention, suggesting that low-level mercury exposure caused neurological problems.

At roughly the same time, a team of researchers at the University of Rochester Medical Center carried out mental and motor tests on 9-year-old children born on the Seychelles Islands. The study, begun in 1989, looked for an association between mercury exposure and behavior, motor skills, memory, and thinking in 779 children born to mothers who averaged a dozen fish meals a week. Around age 9, higher mercury exposure was associated with two test results. Boys, but not girls, were slower at one movement test, but only when using their less capable hand. Boys and girls exposed to more mercury were rated as less hyperactive by their teachers. The authors concluded, “These data do not support the hypothesis that there is a neurodevelopmental risk from prenatal methylmercury exposure resulting solely from ocean fish consumption.” So while the Faroes study indicated cause for concern in low-level mercury exposure through ocean fish consumption, the Seychelles study exonerated mercury. To complicate matters, a third study in New Zealand, which followed the Seychelles methodology, identified mercury risk more consistent with the Faroes study.

By way of exit strategy, let me conclude by situating talk of reference doses in its larger context. Interest in and analysis of mercury pollution and its acceptable limits constitute part of the transformation of global environmentalism after World War II. Put very roughly, prior to 1945 concern for the environment consisted of protecting nature from the onslaught of civilization; after 1945 this concern—in actions and in rhetoric—shifted to protecting civilization from itself. The environmental lexicon supports this notion. New vocabulary—bioaccumulation, biomagnification, environmental risk, chemical hazard—became prevalent, transforming our environmental engagement. Similar transformations take place within toxicological vocabularies. Environmental toxicology, toxicokinetics, toxicodynamics, suggest that specialized and nonspecialized forms of language use evolved during the second half of the twentieth century. None of this should come as a surprise, but it adds a layer of complexity to the traditional, post-materialist arguments that have typically explained the post-war environmental transformation.

The struggle for precision comes at another price, however. This bodily turn in environmental thinking has understandably shifted the gaze of environmental monitoring from the ecosystem to the body. What happens “out there,” ironically, matters less than what happens “in here.” And that fear over public health risks has galvanized a more pressing need for scientific knowledge and political action—the interaction between the two breeding a landscape of new, reactionary or crisis disciplines to make sense of environmental hazards. That policy moves faster than science and thereby shapes the practice of knowledge gathering and its place in policymaking has historically constituted one of the primary obstacles in the struggle for epistemic clarity when articulating threshold levels for mercury exposure. In somewhat related news, I received a copy of Frederick Rowe Davis’s book, Banned: A History of Pesticides and the Science of Toxicology, the other day. I have yet to get beyond the first chapter, but I look forward to seeing how he treats the messy politics of environmental toxicology—and especially the relationship between science and policy.

Lest this discussion seem more at home in the histories of science and policy, let me assert a place for it in environmental history as well. Mercury is a naturally occurring feature of the physical environment, but human activities have increased the amount of mercury in circulation beyond any quantities that could ever be considered normal. Atmospheric levels are seven times higher and ocean-surface levels are almost six times higher than they were in 2000 BC. Half of that increase has occurred since 1950, during the toxic century. In effect, human-industrial practices provoked and set in motion the need for establishing a reference dose for mercury. But this is also a story grounded in place—or, rather, places. While the preliminary history of mercury’s reference dose took place in laboratories, it was prompted by the discovery that mercury was present in significant quantities in various specific places. Similarly, with the advent of the acute poisoning cases in Iraq in the early 1970s, reference dose studies left the lab to attend to mercury in the field, thereby transforming the nature and parameters of knowledge construction. In so doing, they invite re-readings of how we might tell stories about nature and the numbers we use to make sense of them.

Barry Commoner Revisited & Revisualized

A dirty secret to start: course preparation is never as smooth as one would like. Behind in my work, I needed a big body of text to run through data visualization tools, so I turned to my dissertation, which I still had on my computer in .pdf. The work consisted of roughly 100,000 words—10,644 unique words. Modest for big data analysis like this, but sufficient for sharing with students in order to show them how digital tools can be used in historical analysis. Here’s a word cloud of the dissertation as a whole:

“Barry Commoner and the Science of Survival.” My dissertation from 2004. Word cloud generated by
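For students curious about what sits behind a word cloud, the counting is simple enough to sketch in a few lines of Python. The sample sentence below is illustrative only; a real run would feed in the full dissertation text:

```python
# The counting behind a word cloud: tokenize, tally, take the top terms.
# The sample sentence is illustrative only.
import re
from collections import Counter

def top_terms(text, n=5):
    """Lowercase the text, extract words, and return the n most common
    with their counts (roughly what a word-cloud tool does before sizing)."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

def vocabulary_size(text):
    """Number of unique words in a text (the 10,644 figure above)."""
    return len(set(re.findall(r"[a-z']+", text.lower())))

sample = "Commoner argued that science and the science of survival mattered."
print(top_terms(sample, 3))   # "science" tops this toy list with a count of 2
print(vocabulary_size(sample))
```

A word-cloud generator does little more than this before scaling each term’s font size to its count, which is why the cloud captures emphasis but not chronology.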

At a quick glance, this looks like a decent rendition of the work and its points of emphasis. But word clouds are simply snapshots in time and don’t provide any kind of chronological information. A good starting point, but limited. From here, I took the same text to Voyant to show my students how we could get under the hood a little more. The results surprised me a little. Not a lot, after I thought about it, but Voyant revealed some interesting evolutions within the text. Compare the relative and raw frequencies of my use of the words “science” and “environmental” throughout the dissertation in the images below.

Relative frequencies of “environmental” (716) and “science” (445) in my dissertation. Chart produced on
Raw frequencies of “environmental” (716) and “science” (445) in my dissertation. Chart produced on
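The trend lines Voyant draws can be approximated with a short script that splits a text into equal segments and counts target words in each. A rough sketch, with a toy word list standing in for the dissertation:

```python
# A minimal, Voyant-style trend sketch: split a text into equal segments
# and report raw counts and relative frequencies of target words.
# The toy text below is illustrative only.
from collections import Counter

def term_trends(words, targets, segments=10):
    """For each segment, return {target: (raw count, relative frequency)}."""
    size = max(1, len(words) // segments)
    trends = []
    for i in range(0, len(words), size):
        chunk = words[i:i + size]
        counts = Counter(chunk)
        trends.append({t: (counts[t], counts[t] / len(chunk)) for t in targets})
    return trends

# Toy text: "science" dominates early, "environmental" takes over later.
text = ("science science science environmental "
        "science environmental environmental environmental").split()
print(term_trends(text, ["science", "environmental"], segments=2))
```

Plot the relative frequencies per segment and you get the crossing lines described below: one term climbing as the other falls away.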

About halfway through the dissertation, there seems to be a pretty clean transition from the history of science to environmental history. This is pretty consistent with the dissertation. The first two chapters engage Commoner’s participation in a number of scientific debates and his emergence as a scientist-activist. Heavy emphasis through these chapters considers scientists and their social responsibility, and investigates concerns over nuclear fallout (an issue that Commoner would later recall is what made him an environmentalist). The third chapter considers the Age of Ecology and scientists as public intellectuals in the developing environmental movement. This is the point where the blue line starts to climb, just before the green line drops off. Eventually, I start to focus on the environmental movement as a whole and Commoner as an intellectual leader within that movement rather than as a scientist.

On a lazy morning—and buoyed by having played with some similar searches recently—I thought I could quickly pull Commoner references in The New York Times to see if I could draw any comparisons between my work and the primary source hits. Again: this is hardly a comprehensive or satisfactory methodology, but I think it provides sufficient material for working with undergraduate students as a means of showing them how historians might visualize and analyze bigger chunks of information.

“Barry Commoner” AND (science OR environment)

My search turned up 252 articles. I elected not to use TV or radio guide references, and a quick eye-test of article titles eliminated a number of non-relevant articles, so the total number of articles was reduced to 151. Too small to be a worthwhile dataset on its own, but the articles totalled roughly 200,000 words, twice the number in my dissertation.

Here is the chronological distribution of the original search.

NYT references to Barry Commoner AND (science OR environment) by decade (1950-1989). 252 total responses.

Not surprisingly, Commoner’s role as an environmental leader and outspoken activist reaches its apogee in the 1970s. His continuing work, his return to New York, and his presidential campaign likely contributed to his ongoing presence in the 1980s, even if he had technically “retired.”

Breaking up the newspaper findings into three sections—1950-1969, the 1970s, and the 1980s—the resulting clouds offer a story that is somewhat consistent with the Voyant trajectory shown above.

NYT references to Barry Commoner between 1950 & 1969.

In the 1950s and 1960s, Commoner worked as a biologist on the Tobacco Mosaic Virus (work for which he won the Newcomb Cleveland Award from the AAAS). This put Commoner within a ring of biologists informed about the developing events around heredity and the Watson-Crick discovery of DNA’s double helix. I should write about Commoner’s response to molecular biology at some point. But DNA, protein, and virus suggest this emphasis in the newspaper literature (life, too).

Another running theme in the newspaper articles and in the early stages of my dissertation is the treatment of social aspects of science. Too: Commoner’s outspoken opposition to funding for space travel, which he saw as a disconcerting expression of the military-industrial complex and the Cold War arms race.

This first cloud also shows the beginning of environmental issues with “water” and some others. What else? This analysis is roughly consistent with the narrative I presented in the first three chapters of my dissertation/book (phew!).

Moving to the 1970s:

NYT references to Barry Commoner in the 1970s.

This second cloud shows a marked decline in “science,” “scientist,” and “university,” which suggests Commoner’s ascendance in environmental circles and his standing as a public intellectual.

NYT references to Barry Commoner in the 1980s.

In the third cloud, note the emphasis on “Carter” and “Reagan.” Perhaps the Reagan reference is not so surprising, but note that a goodly number of the Commoner references in the 1980s came from 1980, during Commoner’s presidential candidacy on the Citizens’ Party ticket (“Harris” refers to his running mate, LaDonna Harris). The “Queens” reference is also indicative of Commoner’s retirement from Washington University in St. Louis and his move to CUNY Queens College (a return to his native New York City). Given my recent post, it’s also interesting to see “toxic” (in the bottom right corner) present in the 1980s.

One might also identify a change in environmental themes: “Atomic Energy Commission,” “atomic,” and “radiation” in the 1950s and 1960s; “energy” in the 1970s; “recycling” and “waste” in the 1980s. “Environment”/“environmental” grow steadily across the word clouds. Clearly I traced this evolution in my dissertation and book, the benefit of looking backwards. Again: limited as they are, I think clouds like these provide students with an interesting departure point for looking at large amounts of information, thinking about what might be present, and asking questions that will shape subsequent research. Play along: in the comments below, what evolving trends can we infer from the three newspaper clouds? What isn’t present, or surprisingly underrepresented?
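For students curious about what sits underneath a tool like Voyant, the clouds boil down to simple term-frequency counting per period. Here is a minimal sketch in Python using only the standard library; the sample articles below are invented for illustration (the real corpus came from the NYT search described above), and the stopword list is deliberately tiny.

```python
from collections import Counter
import re

# Hypothetical sample: (year, text) pairs standing in for the 151 NYT
# articles; a real run would load the exported article text instead.
articles = [
    (1958, "Commoner warns of radiation and atomic fallout from tests"),
    (1966, "The scientist and society: science, DNA, protein and the virus"),
    (1971, "The environment and the energy crisis, says environmental leader"),
    (1976, "Energy policy and the environment in the 1970s"),
    (1980, "Citizens Party candidate Commoner campaigns against Carter and Reagan"),
    (1985, "Queens College center studies toxic waste and recycling"),
]

# A toy stopword list; Voyant ships a much larger default one.
STOPWORDS = {"the", "and", "of", "in", "a", "from", "says", "against"}

def period(year):
    """Bucket a year into the three sections used for the word clouds."""
    if year < 1970:
        return "1950-1969"
    return "1970s" if year < 1980 else "1980s"

def top_terms(articles, n=5):
    """Return the n most frequent non-stopword terms for each period."""
    counts = {}
    for year, text in articles:
        words = re.findall(r"[a-z]+", text.lower())
        bucket = counts.setdefault(period(year), Counter())
        bucket.update(w for w in words if w not in STOPWORDS)
    return {p: [w for w, _ in c.most_common(n)] for p, c in counts.items()}

print(top_terms(articles))
```

A word cloud simply renders these per-period frequencies with font size proportional to count, so the same counting step is the real analytical work; the visualization is a convenience on top of it.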

Book Series Update

As I let this blog slide over the past several months, I realize I also failed to report on the “History for a Sustainable Future” book series, which published its first titles in 2014. Editing the series is a new experience, but I have been especially grateful for the support and behind-the-scenes work of friends and colleagues Peter Alagona, Benjamin Cohen, and Adam Sowards, who make up the series’ editorial board. Acquisitions editor Clay Morgan retired from the MIT Press in January, but he was instrumental in getting the series off the ground, and he has left us in Beth Clevenger’s very capable hands. We look forward to growing the series, and remain open to inquiries and book proposals.

The first book, by Derek Wall, was published in March. Wall is an English politician and member of the Green Party of England and Wales. He is also an Associate Lecturer in the Department of Politics at Goldsmiths College, University of London. Among his books are The No-Nonsense Guide to Green Politics and The Rise of the Green Left.


According to the MIT Press site’s overview:

The history of the commons—jointly owned land or other resources such as fisheries or forests set aside for public use—provides a useful context for current debates over sustainability and how we can act as “good ancestors.” In this book, Derek Wall considers the commons from antiquity to the present day, as an idea, an ecological space, an economic abstraction, and a management practice. He argues that the commons should be viewed neither as a “tragedy” of mismanagement (as the biologist Garrett Hardin wrote in 1968) nor as a panacea for solving environmental problems. Instead, Wall sees the commons as a particular form of property ownership, arguing that property rights are essential to understanding sustainability. How we use the land and its resources offers insights into how we value the environment.

After defining the commons and describing the arguments of Hardin’s influential article and Elinor Ostrom’s more recent work on the commons, Wall offers historical case studies from the United States, England, India, and Mongolia. He examines the power of cultural norms to maintain the commons; political conflicts over the commons; and how commons have protected, or failed to protect, ecosystems. Combining intellectual and material histories with an eye on contemporary debates, Wall offers an applied history that will interest academics, activists, and policy makers.

The second book is from Frank Uekötter, a reader in Environmental Humanities at the University of Birmingham. It followed hard on the heels of Wall’s book, and appeared in May. In addition to the book in our series, Uekötter is the author of The Green and the Brown: A History of Conservation in Nazi Germany and The Age of Smoke: Environmental Policy in Germany and the United States, 1880–1970.


Again, from MIT Press:

Germany enjoys an enviably green reputation. Environmentalists in other countries applaud its strict environmental laws, its world-class green technology firms, its phase-out of nuclear power, and its influential Green Party. Germans are proud of these achievements, and environmentalism has become part of the German national identity. In The Greenest Nation? Frank Uekötter offers an overview of the evolution of German environmentalism since the late nineteenth century. He discusses, among other things, early efforts at nature protection and urban sanitation, the Nazi experience, and civic mobilization in the postwar years. He shows that much of Germany’s green reputation rests on accomplishments of the 1980s, and emphasizes the mutually supportive roles of environmental nongovernmental organizations, corporations, and the state.

Uekötter looks at environmentalism in terms of civic activism, government policy, and culture and life, eschewing the usual focus on politics, prophets, and NGOs. He also views German environmentalism in an international context, tracing transnational networks of environmental issues and actions and discussing German achievements in relation to global trends. Bringing his discussion up to the present, he shows the influence of the past on today’s environmental decisions. As environmentalism is wrestling with the challenges of the twenty-first century, Germany could provide a laboratory for the rest of the world.

And there’s more to come. A few titles are in the pipeline, and some stimulating conversations with prospective authors promise more in the near future. On a personal note, I am finding real satisfaction in contributing, however indirectly, to my field by playing a (very) small part in bringing these works to press. And I look forward to announcing more new titles soon (and more promptly).

For more on Derek Wall’s history of the commons, see the MIT Press link.

Similarly, for Frank Uekötter’s history of German environmentalism, link here.