Why Barry Commoner Matters

Here is a rough draft of a paper I published in Organization & Environment, outlining Barry Commoner’s social and historical significance. It overlaps with (and is drawn from) a talk at the American Sociological Association I posted recently, but it goes deeper into Commoner’s contributions to science, democracy, and the environment.

Why Barry Commoner Matters

It would be very difficult to properly understand the last fifty years of American environmentalism without recognizing the biologist Barry Commoner’s important contributions to its method and practice.  I make this claim with some vested interest (Egan 2007), but historical analysis of environmental activism since World War II points to a number of significant changes in American environmentalism, many of which find Commoner at their source.  Commoner’s place in the history of American environmentalism is based in large part on the breadth of his activism.  Commoner participated in scientific and activist campaigns to bring an end to aboveground nuclear weapons testing, to raise awareness about toxic chemicals in the city, on the farm, and in the home, to identify the harmful production practices of the petrochemical industry, to address economic and energy sustainability, and to create a more peaceful and equitable world.  More specifically, Commoner was centrally involved in efforts surrounding synthetic pesticides, detergents, and fertilizers; mercury, lead, and several other heavy metals; photochemical smog; population; sustainable energy; urban waste disposal and recycling; dioxin; and, more recently, a return to genetic theory.

But this essay argues that the depth and influence of Commoner’s activism are of even greater historical significance.  It sets out to provide historical context for Commoner’s career, to allow for further investigation of his influence across a broad swath of American scientific, democratic, and environmental principles, and to show that Commoner saw these three pillars of his activity not as independent aspects of his political sensibilities but as part of a single, intrinsic whole.  That science, democracy, and environment should be so related is indicative of Commoner’s deep-seated conviction that human societies, their politics and economies, and their physical environments functioned in larger, holistic systems.  Indeed, Commoner’s great contribution to environmental activism might be articulated as his capacity to identify the root causes of American environmental decline in the post-World War II era.  This is important—indeed, a better and popular understanding of Commoner’s activism is important—because the intersections between science, society, and the environment that serve as the cornerstone of Commoner’s career and work are not simply historical points of interest, but remain vitally relevant to contemporary debates and struggles to address toxic contaminants, energy production crises, and global climate change.

Science

While Commoner is typically remembered as a social and political activist, it is important to stress that he came to this activism from his professional training in science.  From a very early point, Commoner was devoted to the notion that scientific research should be directed toward the public good.  His training in the 1930s and his early career at Washington University in St. Louis coincided with significant structural changes in the academy and unprecedented technological growth throughout American society.  Of that period, Commoner remembered, “I began my career as a scientist and at the same time … learned that I was intimately concerned with politics.”  That early engagement helped him to develop a social perspective that he applied to all his activities, and before he had completed his undergraduate studies at Columbia University, he was deeply committed to participating in “activities that properly integrated science into public life” (Commoner 2001).

During World War II, Commoner served in the U.S. Navy, and it was during his wartime service that he discovered firsthand that scientific innovations often possessed unanticipated and undesirable side effects.  In 1942, Commoner headed a team working to devise an apparatus that would allow torpedo bombers to spray DDT on beachheads to reduce the incidence of insect-borne disease among soldiers.  The new device was tested in Panama and at an experimental rocket station off the New Jersey coastline that was infested with flies.  The apparatus worked well, and the DDT was tremendously effective in killing the flies.  Within days, however, new swarms of flies were congregating at the rocket station, attracted by the tons of decaying fish—accidental victims of the DDT spraying—that had washed up on the beach (Strong 1988).  As the flies fed upon the dead fish, Commoner witnessed an eerie foreshadowing of how new technologies often brought with them environmental problems that their inventors had not anticipated.  Commoner (1971) would later apply this notion to his four laws of ecology, recognizing that there is no such thing as a free lunch.

Such environmental decline—a product of unforeseen consequences associated with many of the new technological products of the petrochemical industry—created a context in which an increasing gulf emerged between what was known and what it was desirable to know (Douglas & Wildavsky 1982, 3), and thereby changed the shape of American science.  Nuclear fallout, the incidental effects of DDT and other synthetic pesticides, the build-up of new detergents and fertilizers in water systems, the introduction of photochemical smog from automobile emissions, and the fact that these new petrochemical products did not break down in nature were the result of a kind of artificial reductionism, which was itself the product of this new science.  In fabricating these new products, innovators directed their attention to the benefits their use might provide, and failed to conceive of the costs these introductions might impose on human health and the physical environment.  While atomic bombs, pesticides, detergents, fertilizers, automobiles, plastics, and the other creations of new science and industry were very good at doing what they set out to do, each came with a host of unanticipated environmental problems, in large part because their design and implementation were encouraged by sources outside of science.  The economist Thorstein Veblen, for example, asserted that knowledge reflected the material circumstances of its conception; the questions science asked and the technologies it produced were driven by external interests.  Similarly, in a more recent study, Chandra Mukerji (1990) describes a complex interdependence between science and state, wherein scientists tended to assume the role of highly skilled experts retained to provide legitimacy to government policies.  This artificial reductionism—the exercise of focusing on only a part of the larger equation—posed serious harm to both science and society, Commoner warned.  In an unpublished paper titled “The Scientist and Political Power” (1962), Commoner insisted that the integrity of science was “the sole instrument we have for understanding the proper means of controlling the enormously destructive forces that science has placed at the hands of man” (4).  Should that integrity be eroded—and this kind of artificial reductionism was a distinct threat—Commoner worried that “science will become not merely a poor instrument for social progress, but an active danger to man” (2-3).  Commoner’s was not by any stretch a novel observation, but his greater significance in the larger discussion surrounding distrust in science and technology stems from his articulation of the hazards inherent in a “disinterested” science being dictated by outside interests.  Too often, environmental problems arise from the disconnect between nature and scientific evidence on the one hand and state fantasies and directives on the other.

As Shapin has observed (1996, 164), “good order and certainty in science have been produced at the price of disorder and uncertainty elsewhere in culture.”  By way of example, Commoner found that the acceptance of synthetic detergents, which were the product of good order and certainty in science—they were, after all, rather effective in cleaning clothes—produced disorder and uncertainty when foam flowed from household faucets and other drinking-water sources because the detergents did not break down in nature and effectively choked water system bacteria.  McGucken (1991) noted the paradox that “achieving human cleanliness entailed fouling the environment.”  This paradox was not lost on Commoner, who observed that synthetic detergents “were put on the market before their impact on the intricate web of plants, animals, and microorganisms that makes up the living environment was understood” (1966a, 7).

Commoner’s concern was a fairly logical one: discoveries in the chemical and physical sciences failed to take into account the biological consequences of their introduction into the marketplace and into nature.  As he noted in Science and Survival (1966b, 25), “Since the scientific revolution which generated modern technology took place in physics, it is natural that modern science should provide better technological control over inanimate matter than over living things.”  Whereas ecology endorsed a more holistic understanding of the environment, industrial science worked in a more reductionist manner.  In “The Integrity of Science” (1965a), Commoner illustrated the dangers of this kind of reductionist approach, noting that the Soap and Detergent Association had admitted that no biological field tests had been conducted to determine how the new detergents would interact with the local ecosystem.  “The separation of the laws of nature among the different sciences is a human conceit,” Commoner concluded elsewhere.  “Nature itself is an integrated whole” (1966b, 25).

The disparity between the physicochemical sciences and the biological sciences was a direct consequence of the American science policy that followed World War II, as government funding supported nuclear physics and industry supported developments in petrochemical experimentation.  This was an important development.  Whereas the ethos of science lauded the wider discipline’s democratic principles and critical peer review, knowledge increasingly came to reflect the material circumstances of its conception.  During and after World War II, those material circumstances were increasingly shaped by an omnipresent military influence that dominated scientific research agendas across the country.  In 1939, the federal government had allotted $50 million per year to science research, 18 percent of all private and public spending on research and development.  By the end of the war, the federal investment was $500 million, and constituted 83 percent of all funding.  In 1955, the annual research and development budget was $3.1 billion.  By the early 1960s, that budget had climbed above $10 billion, and to $17 billion by 1969.  Moreover, since 1940, the federal budget had multiplied by a factor of 11; the budget for research and development had increased some 200 times.  While that money was a significant boon to scientific research, it also suggested that the American research agenda was integrally connected to political interests.  After World War II, that meant military development and, eventually, the space race (see Egan 2007, 25).  As bombs, rockets, and synthetic products emerged as the fruits of this new research—very much a reflection of the material circumstances of their conception—more and more environmental problems emerged.  In sum, science was very good at finding what it was looking for, but little else.

Democracy

As previously noted, Commoner’s science was also deeply imbued with a strong social responsibility.  Shortly after World War II, at the height of Cold War tensions, American scientists found that their intellectual freedoms were being somewhat curtailed by national security interests and that their primary duty was to what President Eisenhower would famously call the military-industrial complex as he left the White House.  Cold War priorities seemed in conflict with what Robert K. Merton (1957) called the “ethos of science,” which protected and preserved the scientific community’s standards, and ensured a climate in which good basic research could be conducted.  Commoner saw a contradiction between the sabre-rattling of the Cold War and the intellectual freedom that drove scientific progress.  During the 1950s, he emerged as one of the more prominent socially engaged scientists who saw their duty as the creation of a better democratic society, not a dominant one.  The historian Donald Fleming (1972) has called these activist scientists “politico-scientists,” an apt term that is representative of Commoner’s career as a whole.

As a scientist, Commoner operated on the conviction that he had an obligation to serve the society that made his work possible.  In a paper titled “The Scholar’s Obligation to Dissent,” Commoner wrote:

The scholar has an obligation—which he owes to the society that supports him—toward … open discourse.  And when, under some constraint, scholars are called upon to support a single view, then the obligation to discourse necessarily becomes an obligation to dissent.  In a situation of conformity, dissent is the scholar’s duty to society (Commoner, 1967, 7).

Commoner had a particular expertise, and it was his social responsibility to identify and speak out on problems that would otherwise be left unaddressed.  And the Cold War was a period of intense (and, frequently, enforced) conformity.  In expressing his obligation to dissent, Commoner was bucking a national social trend in science and in society at large.

The existence of Cold War conformity posed a particular challenge to the politico-scientist, however.  “Conformity is often a sensible course of action. … One reason we conform is that we often lack much information of our own” (Sunstein 2003, 5).  As a means of challenging Cold War conformity and deflecting charges that he was subverting American values, Commoner invented the science information movement.  The reason few people objected to nuclear fallout or DDT or dioxin was that they lacked the technical information to understand the dimensions of the problem.  As a scientist—with a particular kind of expertise and responsibility to the society that supported him—Commoner felt a special duty to provide an accessible and vernacular body of scientific information on the environmental crisis.

The most celebrated example of the science information movement is the Baby Tooth Survey, which collected teeth to demonstrate the hazards of strontium-90, a particularly dangerous component of nuclear fallout.  Strontium-90 was chemically similar to calcium, and followed a similar path through the food chain, falling on grass, being consumed by cattle, and appearing—in place of calcium—in milk, consumed by people, and especially children.  The Greater St. Louis Committee for Nuclear Information, of which Commoner was a founding member, responded to growing public concerns that fallout from nuclear weapons testing could have a negative health impact on citizens, and especially children.  The Atomic Energy Commission had long defended aboveground nuclear weapons testing by downplaying any inherent health risk.  But by 1953, uncertainty had grown as nuclear radiation was being detected in much higher than anticipated quantities.  Here, again, scientific hubris defied the ethos and integrity of science.  More immediately, however, Americans wanted to better understand the hazard.  In a campaign begun in late 1958, the Committee for Nuclear Information put out a call for baby teeth from the greater St. Louis area.

The Committee was inspired by an article that the biochemist Herman M. Kalckar had published in Nature in August 1958.  Titled “An International Milk Teeth Radiation Census,” the essay proposed a scientific study of baby teeth as a means of determining the extent to which fallout was being absorbed into human bodies.  “If a continued general trend toward a rise in radioactivity in children’s teeth were attained,” Kalckar wrote, “it might well have important bearings on national and international policy” (283).  In a press statement in December 1958, the Committee for Nuclear Information announced its plans to collect 50,000 baby teeth a year to monitor for strontium-90.  Because strontium-90 had begun to fall to earth roughly ten years earlier, the children who were currently losing their deciduous teeth were providing perfect samples, since these teeth had been formed from the minerals present in food eaten by mothers and infants at the nascent stages of the fallout era.

The response to the Committee for Nuclear Information’s call for teeth was considerable.  By the spring of 1960, the survey had received 17,000 teeth.  In late April 1960, St. Louis Mayor Raymond Tucker declared Tooth Survey Week to initiate the Committee’s spring tooth drive.  Support from the mayor, the St. Louis Dental Society, and the St. Louis Pharmaceutical Association provided publicity for the campaign and developed widespread grassroots support; 10,000 teeth were collected in the next month alone.  In November 1961, the Committee published the Baby Tooth Survey’s preliminary findings in Science, presenting strontium-90 absorption levels in St. Louis between 1951 and 1954, and arguing for the validity of its approach.  By that time, 67,500 teeth had been cataloged and 1,335 had been used in the initial study, which confirmed widespread fears that strontium-90 was increasingly present in children’s bones.  The amount of strontium-90 began increasing after 1952, the year the first hydrogen bomb was detonated.  Whereas teeth formed in 1951 and 1952 contained roughly 0.2 micromicrocuries of strontium-90 per gram, that number had doubled by the end of 1953, and tripled and quadrupled in 1954 (Reiss 1961).

The Baby Tooth Survey officially continued its work until 1968, but from a public information standpoint, the call for baby teeth was an instant and inspired success and contributed to a sea-change in the American response to nuclear weapons testing and radioactive fallout.  Whereas Democratic presidential candidate Adlai Stevenson had barely caused a ripple among American voters in 1956 when he proposed a test ban, a more public debate over the costs and benefits of nuclear testing was front and center within a half-decade, and a Nuclear Test Ban Treaty was signed in 1963.  In an October 1964 speech, President Lyndon Johnson noted the connection between health and nuclear fallout, referring specifically to the hazards noted by Commoner and the Committee for Nuclear Information:

The deadly products of atomic explosions were poisoning our soil and our food and the milk our children drank and the air we all breathe.  Radioactive deposits were being formed in increasing quantity in the teeth and bones of young Americans.  Radioactive poisons were beginning to threaten the safety of people throughout the world.  They were a growing menace to the health of every unborn child (cited in Commoner 1966b, 14-15).

The Baby Tooth Survey is historically significant on a number of counts.  It constitutes an early example of biomonitoring as a component of environmental activism, a practice that has since become a fundamental aspect of environmental health campaigns (Corburn 2005; Daemmrich 2007; Roberts 2005).  While biomonitoring—the practice of using biological organisms to track fluctuations in exposure to chemicals or contaminants—was a product of Progressive-era occupational health efforts to trace the impact of lead, arsenic, and other chemicals in workers (see Clark 1997, for example), the Baby Tooth Survey was a very early instance of those practices being applied to the general population to monitor and track exposure to environmental pollutants at large (Egan 2007, 66-72, 75).

As a form of environmental activism, it also had the particular advantage of requiring public participation, which, in turn, provided a ready audience for the results and ensured the development of a grassroots movement.  Concerned parents sent in teeth and waited anxiously to learn the results.  Were their children being poisoned?  The Committee for Nuclear Information also found ways to include children, setting up an Operation Tooth Club.  Children who submitted teeth became members and received a certificate and a pin that read: “I gave my tooth to science.”  As young adults, this generation of children would come to witness the boldest and most successful environmental legislation in American history and would participate—centrally—in the first Earth Day (1970).  In many respects, the participation required for the success of the Baby Tooth Survey fostered the growth of American environmental awareness by providing the public with the tools necessary for their own empowerment.

But in order to guarantee the success of the Baby Tooth Survey, Commoner and his colleagues needed to carefully translate their technical findings into a more vernacular or accessible language so that their non-scientific audience could understand and act upon their findings.  And this was a critical feature of Commoner’s science information movement: rather than telling people what to do, Commoner developed a rhetorical method of presenting accessible scientific information to the public, empowering them to participate in political decision-making.  Rather than simply sharing the results of the study, Commoner shared the hypotheses, experiments, and observations, leaving the public to participate in the interpretation of the results.  There was little question that nuclear fallout posed some risk to human health.  But how much?  And, more to the point, how much was too much?  These were social questions, not scientific questions, and Commoner saw his role as providing the public with information so that they could properly evaluate the risk and determine their collective threshold, not based on actuarial calculations made by policymakers, but within their own communities.  This re-conception of the scientist in practice—intentionally expanding the traditional process of peer review to include and communicate with a public audience—is likely the most significant development in the history of science since World War II.  This kind of risk analysis, Commoner fervently argued, was a social conversation, not a scientific one; scientists had no special moral authority to make decisions over what constituted acceptable exposure to fallout or DDT or dioxin.  He warned: “The notion that … scientists have a special competence in public affairs is … profoundly destructive of the democratic process.  If we are guided by this view, science will not only create [problems] but also shield them from the customary process of administrative decision-making and public judgment” (1966b, 108).  Commoner challenged the American faith in monitoring the environment and “leaving it to the experts.”  Determining the nature of environmental hazards was a scientific exercise, but deciding how a society should address those environmental hazards was a political one.  It warrants noting that this practice of social empowerment has become the cornerstone of environmental justice activism.

This exercise remained, however, highly controversial as it bucked conformist trends.  In order to dodge the hazards of Cold War conformity, Commoner established a mechanism in which information that criticized the existing social and political order could be presented as bolstering democratic virtues.  For instance, as early as 1958, Commoner insisted that the scientific information be presented without conclusion or evaluation.  If the data were sufficiently accessible, the public would be able to draw their own conclusions.  This kind of activity promoted democracy and science’s role within it, and it helped shape the emergence of a new kind of environmentalism after World War II.

Environment

Commoner regularly admitted that his work on fallout had made him an environmentalist.  Whereas the Atomic Energy Commission often limited its studies of fallout to direct exposure, Commoner demanded that it also consider radioactive exposure through the food chain.  People did not live in isolation, but rather as part of a larger ecological community.  The hazards imposed by nuclear fallout or, indeed, the new products of the petrochemical industry were not simply direct threats to human health, but also indirect ones, proliferating throughout the environment.  For Commoner, then, the science of fallout was not at all far removed from the contamination of air and water.  This was brought home even more concretely; shortly after the Committee for Nuclear Information began its campaign against aboveground nuclear weapons testing, Rachel Carson breathed new life into the American environmental movement with the 1962 publication of Silent Spring.  The book was remarkably well received by a public audience, already primed by alarming discoveries surrounding radioactive fallout (see Lutts 1985).  Like the Committee for Nuclear Information, Carson exhibited an astute knack for presenting complicated, technical information in an accessible and persuasive manner.

Prompted by the resounding success of Silent Spring and the emergence of a charismatic generation of environmental scientists—among them Commoner, Paul Ehrlich, LaMont Cole, and Kenneth Watt—the environmental movement gained widespread credibility by relying on scientific expertise of its own.  This rise of popular ecology and the scientific leadership of 1960s environmentalism marks another historically important development.  After World War II the environmental movement was led “not by poets or artists, as in the past, but by individuals within the scientific community.  So accustomed are we to assume that scientists are generally partisans of the entire ideology of progress,” the historian Donald Worster (1994, 22) has observed, “that the ecology movement has created a vast shock wave of reassessment of the scientist’s place in society.”  For more than fifty years, Barry Commoner was at the vanguard of that scientists’ movement.

Commoner’s primary contribution here stems from his resistance to reductionist science and environmental thought.  Building on his earlier discussion of risk and public participation, he pointed to the limitations of science and expertise when it came to environmental problems.  To illustrate these problems, Commoner devoted a chapter of The Closing Circle (1971), his classic treatise on the environmental crisis, to the air pollution problem in Los Angeles; he began by claiming that “for teaching us a good deal of what we now know about modern air pollution, the world owes a great debt to the city of Los Angeles. …  There are few cities in the world with climates so richly endowed by nature and now so disastrously polluted by man” (Commoner 1971, 66).

Los Angeles has suffered a host of air pollutants; one of the earliest, during the Second World War, was dust from industrial smokestacks and incinerators.  By 1943, residents of Los Angeles started noticing a whitish haze, tinged with yellow-brown, that bothered many people’s eyes.  They eventually started referring to this new pollutant as smog, after the term invented in England to describe the thick clouds that, in 1952, killed some 4,000 Londoners.  The dangerous component in London smog was sulfur dioxide, which had increased in Los Angeles with wartime industrialization; the burning of sulfur-containing coal and fuel oil produced sulfur dioxide.  By 1947 fuel changes and controls began to reduce the amount of sulfur dioxide in the air, and Los Angeles reached prewar levels by 1960.  But instead of getting better, the smog got worse.  Later research determined that the problem in Los Angeles began with nitrogen oxides, which caused photochemical smog.  Nitrogen oxide is produced whenever air becomes hot enough to cause its natural nitrogen and oxygen to interact.  The primary culprit seemed to be high-temperature power plants, and authorities imposed rigid controls on open venting at the numerous oil fields and refineries that surrounded the city.  With this new information in hand, Los Angeles authorities sought methods to control and reduce the levels of photochemical smog.  But still the smog got worse, until scientists stumbled across the notion that cars and trucks were emitting more hydrocarbons and creating more nitrogen oxide than was the petroleum industry.  Detroit introduced engine modifications that reduced hydrocarbon emissions but, at the same time, increased nitrogen oxides through the 1960s.  Los Angeles had effectively traded one pollutant for another, and the step-by-step process pursued by researchers of smog and the one-dimensional response from the auto industry proved myopic in addressing the air pollution problem in Los Angeles.

The Los Angeles case also highlights problems inherent in scientific method as we understand it today.  As Commoner noted in The Closing Circle in reference to air pollution in Los Angeles, “it is extremely difficult to blame any single air pollutant for a particular health effect.  Nevertheless, ‘scientific method’ is, at present, closely bound to the notion of a singular cause and effect, and most studies of the health effects of air pollution make strong efforts to find them” (Commoner 1971, 78).  This is the great flaw in reductionist science; it explains why it is so difficult to prove that any single air pollutant is the specific cause of a particular disease, and why threats such as tobacco and lead have been so difficult to regulate.  When we are forced into a reductionist rubric, it becomes nearly impossible to indict an individual pollutant.  At the same time, we are simply missing the bigger picture.  By concentrating things down to their smallest elements, we reduce our scientific peripheral vision, limiting our capacity to consider—never mind recognize—the potential for multiple causes and effects.  If ecology has taught us nothing else, Commoner repeatedly argued, it has amply demonstrated that living systems are subject to a multiplicity of intricate relationships, on macro and micro scales, that defy definitive specialized explanations.

Commoner combated this reductionism on a variety of levels.  The most famous expression of this resistance is, perhaps, his articulation of ecology’s four laws:

  1. Everything is connected to everything else
  2. Everything must go somewhere
  3. Nature knows best
  4. There is no such thing as a free lunch

These four laws have been regularly cited and repeated in popular and scholarly arenas, but they deserve some comment here, as their importance to Commoner’s environmental thinking is frequently understated (Egan 2002).  With the benefit of almost forty years’ hindsight, we might treat Commoner’s four laws as a larger expression of social and environmental interaction and recognize that the connections, changes, knowledges, and free lunches are not merely ecological processes, but socioeconomic ones, too.  Industrial pollution, the source of the postwar environmental crisis, was generally considered the cost of postwar affluence; it represented jobs, productivity, and reduced prices of consumer goods and services.  Because the petrochemical industry could manufacture synthetic fertilizers in huge quantities—which lowered production costs—synthetic fertilizers quickly came to dominate the market.  Pollution controls, sustainable energy consumption, and greater efforts to ensure workplace safety and health were frequently marginalized because they reduced the scale of profits enjoyed by such high-polluting industries.  Pollution, inefficient energy use, and the trivialization of worker safety became popularly accepted as the price of progress, but in reality they cumulatively constituted a false prosperity.

The real costs of pollution, Commoner argued, were not appearing on the balance sheet.  While private industries belched carcinogens into the environment, the public suffered rising cancer rates.  In The Closing Circle, Commoner stressed the significance of externalities: the infliction of involuntary, non-beneficial, or, indeed, detrimental repercussions on another industry, the environment, or the public.  “Mercury benefits the chloralkali producer but harms the commercial fisherman,” he observed (Commoner 1971, 253).  With its pollution and unanticipated costs, the technological revolution that followed World War II introduced a series of “external diseconomies,” the external or third-party effects of commerce.  As early as 1966, Commoner saw this disconnect between the apparent and real costs of new technologies.  “Many of our new technologies and their resultant industries have been developed without taking into account their cost in damage to the environment or the real value of the essential materials bestowed by environmental life processes. … While these costs often remained hidden in the past, now they have become blatantly obvious in the smog which blankets our cities and the pollutants which poison our water supplies.  If we choose to allow this huge and growing debt to accumulate further, the environment may eventually be damaged beyond its capacity for self-repair” (Commoner 1966a, 13).

Not only did these externalities hide the true damage of the environmental crisis, they were also an expression of reductionist thinking.  “Environmental degradation represents a crucial, potentially fatal, hidden factor in the operation of the economic system,” Commoner argued in The Closing Circle (273).  Coal-burning power companies were among the greatest polluters of air, but the disparity between their rising profits as demand for electricity increased and the growing social and environmental costs suggested a paradox.  Stressing the nature of external diseconomies, Commoner observed that “if power companies were required to show on electric bills the true cost of power to the consumer, they would have to include the extra laundry bills resulting from soot [from burning coal], the extra doctor bills resulting from emphysema, the extra maintenance bills due to erosion of buildings [from acid rain].”  These were hidden expenses.  “The true account books are not in balance,” Commoner continued, “and the deficit is being paid by the lives of the present population and the safety of future generations” (Commoner 1970, 5-6).  As a result of these kinds of externalities, Commoner insisted that “the costs of environmental degradation are chiefly borne not by the producer, but by society as a whole.”  In noting these external diseconomies, Commoner identified the social impact of environmental decline.  “A business enterprise that pollutes the environment is … being subsidized by society” (Commoner 1971, 268).

Commoner also emphasized the hazards of reductionist science, introducing a kind of systems thinking to environmental activism.  Systems thinking works on the premise that the component parts of a system will act differently when isolated.  As a concept, we might recognize the relationship between systems thinking and holistic interpretations; in each case the whole is greater than the sum of its parts.  With respect to Commoner’s career, science, democracy, and the environment might be taken as the three key systems that drove the post-World War II world, and Commoner identified how they were intrinsically linked.  Commoner’s historical significance is the product of his capacity to recognize that “everything is connected to everything else” and then to explain that in accessible and persuasive language.  Identifying the relationship between biodiversity, occupational health, social equality, and peace transformed the landscape of environmental thinking during the 1960s and 1970s.  What’s important here is the fact that Commoner drew persuasive connections between the myriad social problems that emerged after World War II.  The discovery of pollutants like dioxin rarely altered production choices, in large part because expertise demanded a more reductionist examination of the problem.  Instead, management of those risks became a more prominent feature of the technological landscape.  (This is a variant on the old prevention vs. cure routine.)  Irrespective of which pollutants are particularly harmful, we can conclusively insist that polluted air makes people sicker than they would otherwise be.  In a discussion of public environmental risk, Commoner argued there was something inherently wrong with existing methods of measuring harmful elements in the environment when the burden of proof rested on the side of human health rather than on the polluter.

Identifying the nature of these burdens was also critical.  While Commoner noted that society as a whole shared in the costs of environmental degradation, its members rarely did so equally.  The unequal distribution of environmental risks also posed a deeper social problem insofar as environmental pollutants inhibited human health, which, in turn, inhibited social progress.  The result was a vicious circle: poor and minority communities were more exposed to environmental hazards, suffered greater health problems, and were prevented from achieving significant social progress.  This prompted Commoner to charge that “there is a functional link between racism, poverty, and powerlessness, and the chemical industry’s assault on the environment” (Russell 1989, 25).  In observing, in work dating back to the 1960s, that poor and minority communities faced greater environmental threats by dint of their geographic location and limited political power, Commoner effectively anticipated the environmental justice movement.

Conclusion

On 17 February 1965, at the 4th Mellon Lecture at the University of Pittsburgh’s School of Medicine, Commoner gave a paper entitled “Is Biology a Molecular Science?”  He criticized molecular biology and the new cult of DNA, which promised to unlock the secret of life, and concluded his remarks with the assertion: “If we would know life, we must cherish it—in our laboratories and in the world” (Commoner 1965b, 40).  It was a simple statement, but one that would resonate through almost all of his activism and take on especially poignant significance as we move into the twenty-first century.  Knowing and cherishing life applied to Commoner’s integration of science, democracy, and the environment insofar as it challenges us to think about poverty, health, inequality, racism, sexism, war, means and modes of production, scientific method and practice, and our exploitation of natural resources.  Commoner’s facility for grasping the larger picture puts these disparate themes into harmonious conversation with each other.

Commoner worried about the reductionism that accompanied startling advances in chemistry, physics, and biology.  He appreciated the urgent need for the greater study of living things, not just as a scientific endeavor, but also as a social and environmental imperative.  And as an environmental necessity, this approach demands greater public participation and interaction in addition to more scientific recognition.  For fifty years, Commoner’s criticisms of the petrochemical industry focused on the manner in which its products barged unwelcome into the chemistry of living things and polluted people, animals, and ecosystems.  While most of the chemicals manufactured or released as waste by the petrochemical industry resembled the structure of chemical components found in nature, they were sufficiently different to be hazardous to life.  To Commoner, the connection to twenty-first-century genetic engineering was clear: we were in the process of committing the same tragic error, but this time with the secret of life.

But the message was the same.  Environmental risks were being distributed unevenly throughout the environment, without the public’s approval or participation, and the public was frequently unaware of the inherent hazards.  This larger phenomenon constitutes a central feature of American environmental history since World War II, and the public response—for which Commoner was a key catalyst—is a pivotal component of the history of American environmentalism.  Barry Commoner matters—or deserves scholarly and political attention—because of the method and practice of a career spent building a social mechanism for developing and disseminating information, bringing science and the environment into the mainstream, and challenging scientists, the public, and policy-makers to examine the world in more holistic frames.  Combined, these portions of Commoner’s career offer a historically significant account of the past half-century of American environmentalism, but they also offer a poignant and positive prescription for the future.  Amid journalistic criticisms eulogizing the death of environmentalism (Nordhaus & Shellenberger 2007), Commoner, almost forty years ago, provided a template that resonates as clearly in the twenty-first century:

In our progress-minded society, anyone who presumes to explain a serious problem is expected to offer to solve it as well.  But none of us—singly or sitting in committee—can possibly blueprint a specific “plan” for resolving the environmental crisis.  To pretend otherwise is only to evade the real meaning of the environmental crisis: that the world is being carried to the brink of ecological disaster not by a singular fault, which some clever scheme can correct, but by the phalanx of powerful economic, political, and social forces that constitute the march of history.  Anyone who proposes to cure the environmental crisis undertakes thereby to change the course of history.

But this is a competence reserved to history itself, for sweeping social change can be designed only in the workshop of rational, informed, collective social action.  That we must act is clear.  The question which we face is how (Commoner 1971, 300).

Barry Commoner

I just received the following e-mail from Barry Commoner’s longtime associate David Kriebel, informing me that Commoner died in his sleep today. He was 95. I’m still gathering my own thoughts on this, but I am very grateful for the time he made available to me over the past decade. Very quickly, he shifted from research subject to friend, and I am so glad I had a chance to spend some time with him last month in New York. More to follow.

Barry Commoner died today. His wife Lisa called this evening to say he died peacefully in the hospital with her by his side. They’d had a lovely conversation just last night, and he died in his sleep.

Barry was an optimist. He said that since it was human economic development that had messed up the planet, it was entirely feasible for humans to fix it.

He was also a deep systems thinker, who had no time for the academic jargon of systems. He never used diagrams in his books because he said that if his ideas were going to have impact, they ought to be understandable in plain English.

Barry believed in giving ordinary people the information about the ecologic impacts of technology and he trusted that they would make the right decisions. He thought scientists should serve the public in this way, and he was very skeptical of putting experts in charge of making decisions for the public.

Barry said that good political strategy should be based on good science; trying to force the facts to fit a position would fail because sooner or later the truth would come out, and you would lose the confidence of the public.

In our progress-minded society, anyone who presumes to explain a serious problem is expected to offer to solve it as well.  But none of us – singly or sitting in committee – can possibly blueprint a specific “plan” for resolving the environmental crisis.  To pretend otherwise is only to evade the real meaning of the environmental crisis: that the world is being carried to the brink of ecological disaster not by a singular fault, which some clever scheme can correct, but by the phalanx of powerful economic, political, and social forces that constitute the march of history.  Anyone who proposes to cure the environmental crisis undertakes thereby to change the course of history.  But this is a competence reserved to history itself, for sweeping social change can be designed only in the workshop of rational, informed, collective social action.  That we must act is clear.  The question which we face is how.

– Barry Commoner, The Closing Circle, p. 300


Thinking Food, Agriculture, Local, Corporate-type Things…

A new Starbucks recently opened in Dundas, which is distressing to me insofar as Dundas was, until now, a strip of small businesses. One of them is a terrific bike-themed café with the best espresso I’ve ever had. The coffee is roasted locally in Concord, Ontario, with one eye on environmental sustainability, which raises a series of interesting questions about sustainable and fair trade practices, priorities, and processes. I’m less concerned about the longevity of my local café, which I think has identified a niche that should allow it to thrive, and more about the aesthetics of the downtown shops.

But that’s not what I want to talk about (today). Just a couple of doors away from the new Starbucks is a wonderful butcher that draws its meat from its own local farms and a family-owned (since 1915) grocer’s. Across the street is a cheese shop with a variety of local and international cheeses. My brother is in town this term, teaching a course on animals and technology. He’s a vegetarian and experimenting with veganism at the moment.

All of which has had me thinking about food, where we get it from, and the relationship we have with what we eat. I wish I could say that my family does all its shopping in the various shops in Dundas, but our budget sadly doesn’t afford that luxury. But between my brother’s course and Starbucks’ arrival, I’ve been ruminating more about this. Add to that a very promising doctoral student interested in the history of cheese (more on that some other time).

Almost a decade ago, I wrote this as part of a larger conversation about foods, their history, and how they influence our social and environmental pasts and futures. It was subsequently published on Common Dreams (complete with pretty pictures) and in a variety of other places, too. José Bové is a pretty interesting character. He’s not really an environmentalist, but you can see how he might be considered one. The piece serves as a reminder that environmentalism increasingly extends well beyond nature and into a number of social contexts.

The Poverty of Power: Energies, Economies, and the Relevance of Environmental History

Below is a paper I delivered at the American Historical Association annual meeting in Seattle in 2005. It’s drawn predominantly from work I did on Barry Commoner, but it hints at some directions I’d like to revisit. One topic that interests me especially is the American relationship between energy technologies and economic recession. It seems as though rhetoric and enthusiasm for alternative energy technologies increase during periods of economic decline. Witness the 1930s and 1970s, for example. As the economy recovers, optimism for newer energy technologies subsides. This is an issue that deserves more consideration, but here’s an early point of departure for my interest in the idea. The title of the post is the one that appeared in the program.

My paper plays with the double entendre in our session’s title, “Environmental Matters.”  On the one hand, my subject material is of an intrinsically environmental persuasion.  In keeping with this panel’s focus on the 1970s, I turn my attention to the oil embargo, the energy crisis, the related economic crisis and their cumulative cultural impact on American environmental understanding.  On the other hand, I propose to use my brief account of the energy crisis to demonstrate environmental history’s significance not just within the larger existing historiography, but also outside the academy.  While this second assertion is hardly novel, it warrants our continued attention, particularly with respect to the study of the more recent past like the 1970s.  More specifically, the current debate over American energy production remains mired in the same polluting and wasteful bind that propelled the energy crisis thirty years ago.  Just as environmental history argues that nature is an actor in the human drama—not just the backdrop—I want to present this paper as a series of different scenes that, combined, provide us with a deeper understanding of our energies, economies, and history.  So this is a paper of many acts, which, I fear, may conclude as more of a sermon.

[Act I]: To many, the 1970s was a period most aptly interpreted by Doonesbury’s characters, who, at decade’s end, toasted “a kidney stone of a decade.”[1]  During the 1970s, the euphoria that followed World War II dissipated into tension, angst, and crisis, punctuated by the Watergate scandal, defeat in Vietnam, the oil embargo, and severe economic depression.  Noting the popular response to such unsettling events, Tom Wolfe proclaimed the 1970s the “Me Decade,” characterized by self-exploration, fragmentation, and separation; Christopher Lasch called it a “culture of narcissism,” which involved living in the moment and for self, not predecessors or posterity.[2]  “After the political turmoil of the sixties,” wrote Lasch, “Americans … retreated to purely personal preoccupations.”[3]  A sort of spiritual hedonism swept American culture and helped to insulate Americans from the crises that pervaded public and political life.  In a strange sense, it was a perfect and yet impossible condition for the burgeoning environmental consciousness that had progressively become an integral feature of the American mainstream through the 1960s.  On the one hand, the narcissist demanded a clean and beautiful environment; on the other, there existed a disconnect between present, past, and future that rendered almost hopeless efforts for effective, long-term environmental protection.  The perceived immediacy of the crises that struck the 1970s belied their historical origins, and public and policymakers alike exhibited little vision in scrambling for short-term fixes to bigger problems.  Nowhere was this more apparent than in the popular response to the energy crisis and the continuing demand for cheap energy, pervasive since the economic boom after the Second World War.

The decade began, of course, with Earth Day and the widespread acceptance of an environmental crisis that demanded public and policy attention.  After the unprecedented success of the first Earth Day, Denis Hayes and other Earth Day organizers targeted “the Dirty Dozen,” the twelve Congressmen with the worst environmental records; during the fall 1970 elections, seven of the twelve lost their seats and the environmental movement presented itself as a legitimate and powerful new force in Washington, D.C.  This victory was followed by strong legislation to clean air and water, control pesticides and pesticide use, and protect endangered species.[4]  The energy crisis in 1973 gave further credence to the importance of the conservation of natural resources as oil shortages caused mass hysteria in the press and at the gas pump, but it also muted the broader environmental agenda and left Americans clamoring for cheap fuel and electricity, not responsible energy use and conservation.  By the middle of the decade, America found itself consumed in a dire economic crisis, and the environmental momentum gained by successes early in the decade was dead.  [End of Act I.  In the interest of time, we’ll forego intermissions between acts and jump straight into Act II, which examines the energy crisis from an economic and socio-political perspective.]

Since the end of World War II, the American economy had been buoyed by unparalleled, rampant prosperity.  Indeed, such was the boom and sense of confidence that, during the 1960s, President Lyndon Johnson tried to fight the Vietnam War without raising taxes.  For a time, it seemed as though the bullish economy would sustain Johnson’s efforts, but by the time he left office in 1969, his defiance of economic logic posed difficult problems for the Nixon administration.[5]  After more than two decades of economic growth and prosperity, the bottom fell out in the 1970s and the economy entered an acute state of crisis, which precipitated the onset of stagflation, manifested in a series of related factors: productivity was in decline; unemployment was on the rise, and so were interest rates and inflation, in part a result of Johnson’s tax-free war; and trade deficits, unbalanced budgets, and a growing national debt were stalling the national economy.  Sagging productivity, galloping inflation, and stifling unemployment—especially among minorities and the millions of baby boomers now entering the workforce—constituted a difficult challenge for the new Nixon administration, which quickly proved that it was not up to the task.

The socio-economic hazards inherent in the inefficient consumption of energy hit home to Americans with the onset of the 1973 energy crisis.  According to Walter Rosenbaum, “on the eve of the ‘energy crisis’ of 1973, per capita American energy use exceeded the rest of the globe’s per capita consumption by seven times and remained twice the average of that in European nations with comparable living standards.”[6]  In October 1973, Americans experienced a “crude awakening.”  Angered by Nixon’s devaluation of the American dollar—which had already resulted in raised oil prices and contributed to worldwide inflation—and the American support for Israel during the Yom Kippur War, Arab leaders of the Organization of Petroleum Exporting Countries (OPEC) imposed an embargo on shipments of oil to the United States.  In December 1973, OPEC raised the price of oil to $11.65 a barrel, almost four times the cost prior to the Yom Kippur War.  The oil embargo lasted five months—from 16 October 1973 to 18 March 1974—as Americans contended with what Nixon called “a very stark fact: We are heading toward the most acute shortage of energy since World War II.”[7]

Nixon’s statement belied a dire miscalculation of the global economic climate on the part of his administration.  American policy assumed that Arab oil exporters needed American capital and technology more than Americans needed their oil.  After all, an attempted embargo in 1967 had supported this point of view; amid regional instability and embargoes, the oil still got through.[8]  What had changed by 1973?  In short: domestic oil production peaked in 1970.[9]  The strength of domestic wells had allowed the United States to maintain a surplus capacity of about 4 million barrels of oil a day between 1957 and 1963.  By the 1970s, that surplus had dropped to 1 million barrels a day, and the United States was forced to become a major oil importer.  American demand for oil, extraction at full capacity at home, and growing dependence on an unstable and volatile part of the world for its lifeblood prompted former Commerce Secretary Peter Peterson to wryly claim: “Popeye is running out of spinach.”[10]  It certainly seemed the case.  In 1967, 19% of oil for American consumption came from overseas; by 1972, that figure had risen to 30%, and to 38% two years later.[11]  Oil imports more than doubled between 1967 and 1973—from 2.2 million to 6 million barrels a day—and the increasing importation of Arab oil, not to mention the enormous quantities of dollars held by Arab oil countries, contributed markedly to the devaluation of the dollar in 1971 and again in 1973.[12]

As a result, the Nixon administration’s position dramatically underestimated the American dependence on foreign oil.[13]  According to Bruce Schulman, “the world’s great superpower seemed suddenly toothless, helpless, literally and metaphorically out of gas.”[14]  The oil embargo precipitated a series of events that demonstrated the centrality of oil to the American economic system.  The price of gasoline, heating oil, and propane climbed markedly, as did many petrochemicals like fertilizers and pesticides that were made from petroleum products.  Gasoline prices, combined with the shortage of gasoline, depressed car sales, and the automotive industry experienced a serious decline.  According to a 1975 issue of Survey of Current Business, a Department of Commerce publication, within a year of the embargo, the $5.3 billion decline in gross auto product during the fourth quarter of 1974 accounted for more than 25% of the decline in real Gross National Product.  In simpler terms, the battered auto industry was pinched by the oil embargo and contributed to the spreading economic alarm by laying off over 100,000 autoworkers.[15]  Increased fuel prices raised transportation costs and the price of agricultural chemicals, both of which contributed to inflated grocery bills.  Costs for heating went through the roof.[16]  Energy problems became problems of inflation and unemployment; energy had become firmly enmeshed in the deepening economic crisis.[17]  The energy crisis brought the country to its knees.  As Secretary of State Henry Kissinger told Business Week late in 1974—ironically a year after receiving the Nobel Prize for Peace—forceful action against Middle Eastern countries withholding oil might be justifiable in preventing “some actual strangulation of the industrialized world.”[18]  A few weeks earlier, Newsweek had quoted a “top U.S. official” as saying that “if the oil-producing nations drive the world into depression in their greed, the West might be forced into a desperate military adventure.”[19] [Fade to black: End of Act II]

On a recent flight across the country, I took with me Wallace Stegner’s biography of Bernard DeVoto.  On one level, I hoped that both Stegner and DeVoto would rub off on the conference paper I still had to finish (it didn’t work).  It made for a thoroughly enjoyable flight, however, and in preparing my paper, I was reminded of DeVoto’s lesson—one anticipated and subsequently reiterated by countless others—that the past is not something historians actually recover.  “We are chained and pinioned in our moment,” DeVoto instructed.  “What we recover from the past is an image of ourselves, and very likely our search sets out to find nothing other than just that.”[20]  This is a declaration that environmental historians have taken to heart.  It’s become a mantra of sorts, no doubt inspired by the persistence of the conservation and environmental issues they study.  Within a couple of decades of the energy crisis that gripped the American consciousness during the mid-1970s, for example, American drivers had pushed gas and oil consumption to per capita levels higher than those prior to the crisis.  While the average fuel rate (miles per gallon) for vehicles on American roads has increased substantially since 1973 (from 11.9 miles per gallon in 1973 to 16.9 miles per gallon in 2000), there has recently been a distressing move to larger vehicles.  The ratio of cars to total vehicles declined from 80% in 1977 to 64% in 1995.[21]

[Act IV]: Environmental history is engaged in charting what Worster has called a “path across the levee.”  It’s a multi-faceted levee between nature and society, mind and matter, and, from an academic perspective, the humanities and the sciences.[22]  This is a vital exercise, in which environmental historians find themselves especially well situated to contemplate a critical social question: how did we come to find ourselves immersed in a global environmental crisis?  As we consider this question, we learn that it is a complex one.  Our societies and our economies are driven by our natural resources.  Their misuse, overuse, or depletion therefore takes on grave socio-political implications.  The extraction of, use of, and dependence on fossil fuels and the contemporary concerns about the future of energy production mirror the intellectual and political dilemmas of the 1970s.  This leads us to another path across the levee: between the past and the present.  How can an environmental history of the energy crisis inform our contemporary questions?

Can historians tell the story of the 1970s energy crisis without delving into the environmental perspective?  Yes, but we do so at our own peril.  In addition to increased fuel consumption and the abandonment of serious conservation measures, our chosen methods of energy production are particularly polluting.  Coal, for example, constitutes a significant percentage of our national power production, while its pollutants—among them mercury vapors—poison our air and contaminate our waters.  Distinct regions, communities, and peoples bear the brunt of our polluting legacy.  The health and economic well-being—and the two are intimately linked—of the inhabitants of industrial areas, like the provocatively nicknamed “Cancer Alley” in Louisiana, are indicative of this.  Our landscapes represent not just our interactions with nature, but also our interactions with race, class, gender, and disability.  This is more than simply an environmental story; it is one that offers a peculiar lens into the origins of the social angst that has been such a pervasive feature of American history.

This is a story we need to learn and one that historians need to teach.  I began this paper by invoking the title of our session; let me conclude by invoking my own title, “The Poverty of Power,” which I have borrowed from Barry Commoner, a tireless opponent of American energy policy during the 1970s.  Just prior to the energy crisis, Commoner warned: “We are living in a false prosperity.  Our burgeoning industry and agriculture has produced lots of food, many cars, huge amounts of power, and fancy new chemical substitutes.  But for all these goods we have been paying a hidden price.”  That price, Commoner argued, was the destruction of the ecological system that supported not only human existence, but also—ironically—the very industries that threatened it.  “What this tells us,” he surmised, “is that our system of productivity is at the heart of the environmental problem.”[23]  By the 1970s, Commoner was a veteran of social and environmental activism, and he recognized that government and business needed to have environmental destruction explained to them in economic terms before they would be swayed by the gravity of the situation.  Even before Earth Day, he understood this, and in a 1969 address at the 11th annual meeting of the National Association of Business Economists, he translated the environmental crisis into economic terms:

“The environment makes up a huge, enormously complex living machine—the ecosphere—and on the integrity and proper functioning of that machine depends every human activity, including technology.  Without the photosynthetic activity of green plants there would be no oxygen for our smelters and furnaces, let alone to support human and animal life.  Without the action of plants and animals in aquatic systems, we can have no pure water to supply agriculture, industry, and the cities.  Without the biological processes that have gone on in the soil for thousands of years, we would have neither food, crops, oil, nor coal.  This machine is our biological capital, the basic apparatus on which our total productivity depends.  If we destroy it, our most advanced technology will come to naught and any economic and political system which depends on it will founder.  Yet the major threat to the integrity of this biological capital is technology itself.”[24]

The message was ecological, but it was also unmistakably and profoundly economic.  And it was a damning indictment of the industrial forces behind the technological revolution.  Commoner summarized these ideas in The Closing Circle.  “Environmental problems seem to have an uncanny way of penetrating to the core of those issues that most gravely burden the modern world,” he told his readers.  “There are powerful links between the environmental crisis and the troublesome, conflicting demands on the earth’s resources and on the wealth created from them by society.”[25]  In this context, environmental history does not directly solve the problem at hand, but it ensures that we ask the right questions.


[1] G. B. Trudeau, The People’s Doonesbury: Notes from Underfoot, 1978-1980 (New York: Henry Holt, 1981).

[2] Tom Wolfe, The Purple Decades (New York: Berkley Books, 1983), 265-296; and Christopher Lasch, The Culture of Narcissism: American Life in an Age of Diminishing Expectations (New York: W. W. Norton & Co., 1979).  For Wolfe’s famous essay, see also Wolfe, “The ‘Me’ Decade and the Third Great Awakening,” New York (23 August 1976), 26-40. For overviews of the 1970s, see Bruce J. Schulman, The Seventies: The Great Shift in American Culture, Society, and Politics (New York: The Free Press, 2001); Peter N. Carroll, It Seemed Like Nothing Happened: The Tragedy and Promise of America in the 1970s (New York: Holt, Rinehart, & Winston, 1982); Arlene S. Skolnick, Embattled Paradise: The American Family in an Age of Uncertainty (New York: Basic Books, 1991); Jim Hougan, Decadence: Radical Nostalgia, Narcissism, and Decline in the Seventies (New York: Morrow, 1975); and James T. Patterson, Grand Expectations: The United States, 1945-1974 (New York: Oxford University Press, 1996).

[3] Lasch, The Culture of Narcissism, 4.

[4] Among the more prominent pieces of legislation were the National Environmental Policy Act (1970), the Clean Air Act (1970), the Clean Water Act (1972), the Federal Insecticide, Fungicide, and Rodenticide Act (1972), and the Endangered Species Act (1973).

[5] Robert Hargreaves notes: “even at its worst, [the war in Vietnam] never directly accounted for more than 3.5% of the gross national product.  But by dissembling about the true costs of the military involvement and attempting to pay for it out of deficit spending, Johnson and [Robert] McNamara had unleashed forces that would sooner or later—but inevitably—bring America to the reckoning.”  Robert Hargreaves, Superpower: A Portrait of America in the 1970s (New York: St. Martin’s Press, 1973), 111.

[6] Rosenbaum, The Politics of Environmental Concern, 38.

[7] Richard Nixon’s speech was published in the New York Times, 8 November 1973, 32.  Also cited in Carroll, It Seemed Like Nothing Happened, 118.  For a good overview of the American energy crisis in relation to the embargo, see Martin V. Melosi, Coping with Abundance: Energy and Environment in Industrial America (Philadelphia: Temple University Press, 1985), 277-293.

[8] For the oil crisis of 1967, see Yergin, The Prize, 554-558.

[9] In the spring of 1971, the San Francisco Chronicle printed a cryptic one-sentence announcement: “The Texas Railroad Commission announced a 100 percent allowable for next month.”  [Cited in Kenneth S. Deffeyes, Hubbert’s Peak: The Impending World Oil Shortage (Princeton, NJ: Princeton University Press, 2001), 4].  The Texas Railroad Commission was effectively a government-sanctioned cartel that matched domestic oil production to demand.  In 1971, Texas wells began pumping oil at full capacity, and domestic oil fields could no longer keep up with American demand.  In 1960, Americans consumed 9,700,000 barrels of oil a day; by 1970, that number had grown to 14,400,000, and it climbed to 16,200,000 in 1974.  [Carroll, It Seemed Like Nothing Happened, 119].  Said Byron Tunnell, chairman of the Texas Railroad Commission, after it reached the decision to pump at full capacity: “We feel this to be an historic occasion.  Damned historic, and a sad one.  Texas oil fields have been like a reliable old warrior that could rise to the task, when needed.  That old warrior can’t rise anymore.”  [Hargreaves, Superpower, 176.  Also cited in Yergin, The Prize, 567].

[10] Carroll, It Seemed Like Nothing Happened, 121.  Also cited in Hargreaves, Superpower, 176.

[11] Patterson, Grand Expectations, 785; and Carroll, It Seemed Like Nothing Happened, 119.  The energy crunch also extended beyond oil to natural gas.  Indeed, to make matters worse, Martin Melosi notes that “in 1968, for the first time in U.S. history, more natural gas was sold than was discovered.”  Melosi, Coping with Abundance, 282.

[12] Hargreaves, Superpower, 176.  For oil importation numbers, see Yergin, The Prize, 567.  In addition to the limitations of domestic oil, Yergin also stresses the importance of OPEC’s growing strength and its ability to dictate oil prices on the global market as contributing to the severity of the 1973 oil embargo.  See Yergin, The Prize, 554-612.

[13] See Carroll, It Seemed Like Nothing Happened, 117-118.

[14] Schulman, The Seventies, 125.

[15] United States Department of Commerce, Survey of Current Business 55 (February 1975), 2.

[16] Between January 1973 and January 1974, the average monthly residential bill for #2 fuel oil, for example, increased by between 59% and 90%.  Gas heating prices rose by as much as 25% and electricity prices by as much as 63% over the same period.  See Foster Associates, Energy Prices, 1960-1973 (Cambridge, MA: Ballinger Publishing Company, 1974), 5-7.

[17] Commoner, The Poverty of Power, 34.

[18] Cited in Commoner, The Poverty of Power, 265.

[19] “Thinking the Unthinkable,” Newsweek (7 October 1974), 50-51.  Quotation is from page 51.  In addition to such dire language, the article was accompanied by sketches of an airborne attack on oil fields.

[20] Bernard DeVoto, “What’s the Matter with History?,” Harper’s 179 (June 1939), 109, 110.

[21] See George Martin, “Grounding Social Ecology: Landscape, Settlement, and Right of Way,” Capitalism Nature Socialism 13 (March 2002), 3-30.  Bigger vehicles invariably mean greater fuel consumption, so while cars have continued to become more fuel-efficient, the average fuel rate for all vehicles has only fluctuated mildly since 1991.  See the Department of Energy website statistics: http://www.eia.doe.gov/emeu/aer/txt/ptb0209.html.  Further, while per vehicle fuel consumption is lower than the pre-oil crisis rate, it has climbed back up to the 1980 rate, a disturbing rise associated with the increasing popularity of sport utility vehicles.  See John Cloud & Amanda Bower, “Why SUV is all the Rage,” Time 161 (24 February 2003).  See also Keith Bradsher, High and Mighty: The Dangerous Rise of the SUV (New York: Public Affairs, 2003).

[22] See Donald Worster, “Paths Across the Levee,” in The Wealth of Nature: Environmental History and the Ecological Imagination (New York: Oxford University Press, 1993), 16-29.

[23] Barry Commoner, “Untitled Talk,” Harvard University, 21 April 1970 (Barry Commoner Papers, LoC, Box 36), 5.  In 1973, E. F. Schumacher reiterated this general premise in his surprisingly successful book, Small is Beautiful, an economic tract that defied the maxims of growth and bigness, both perceived as integral to the free market.  E. F. Schumacher, Small is Beautiful: Economics as if People Mattered (New York: Harper & Row, 1973).

[24] Commoner, “The Social Significance of Environmental Pollution,” 11th annual meeting of the National Association of Business Economists, Chicago, 26 September 1969 (Barry Commoner Papers, LoC, Box 130), 4.

[25] Commoner, The Closing Circle (New York: Alfred A. Knopf, 1971), 250.

The History of a Sustainable Future?

A bit of background on the book series. In 2007, a group of young(ish) scholars across North America created the Sustainable Future History Project, a loosely organized cabal that provided opportunities for networking and collaborative projects. We also shared a strong sense that history’s contemporary relevance—especially with respect to environmental issues (our shared specialization)—was frequently overlooked and that history could provide important context in planning for a more sustainable future. From the Sustainable Future History Project’s website:

It’s a bit of a funny name and a peculiar concept (looking backward to look forward), but the Sustainable Future History Project is predicated on the idea that in order to fully understand the social, political, economic, and ecological extent of our contemporary environmental crisis we need to be conscious of its historical context.  Moreover, resolving our global environmental problems requires careful thought and planning; future success is dependent upon a deeper appreciation of the past.  This is the point: historicizing sustainable futures is based less on the notion that we should learn from past mistakes than on the premise that solving the environmental crisis will demand the most and best information available, and history provides valuable insight into the creation and proliferation of the environmental ills we hope to curb.

Lots of interesting conversations and ideas sprang out of the group’s various informal chats and meetings at conferences. The most substantial development thus far has been the creation of the MIT Press book series, which launched in 2009. The real tenor of the series is to try something different. The books are short, maybe half the length of a standard academic monograph. The idea is to produce a series of short, smart, and accessible books (complete with the traditional academic apparatus: notes, bibliography, etc.) on the history of topics that have pressing environmental resonance. The point is to produce books that appeal not only to our peers, but also to undergraduate classrooms, policy makers, and activists. At the time of writing, we have received considerable interest from a wide variety of scholars, and are looking forward to receiving the first manuscript submissions later in 2012. Stay tuned.

Histories of Science, Technology, and the Environment

When I was hired at McMaster (in 2005), the position called for an historian of science & technology. Which I sort of was. A little, anyway. I had completed my PhD in environmental history, but my dissertation considered the biologist Barry Commoner’s career as a social and environmental activist. As a result, I was rather interested in questions pertaining to science and society and the scientist’s social responsibility. For Commoner, the post-World War II environmental crisis was a product of poor technological decisions and short-sighted modes of production. I spent the year after completing my PhD as a fellow at the Chemical Heritage Foundation in Philadelphia, where I began work on my current mercury project. Much of that year was spent transforming myself into something approximating an historian of science (I’d had no previous formal training in this field). So: sort of, kind of, an historian of science and technology.

Caveat: the histories of science and technology writ large are two very distinct fields with very distinct disciplinary and professional backgrounds and markedly different historiographies. I made a point of stressing this during my interview, but then concentrated on the ways they could be brought together. After all, as much as they are distinct fields or subdisciplines of history, there are some explicit overlaps, especially in the context of twentieth-century history. And those overlaps, especially when they intersected with environmental issues, were at the heart of my own research agenda. Environmental historians are an accepting bunch, but the kinds of work I do have long been outside the realm of “real” history of science. Which is odd, and a shame. At conferences, blue-blood history of science colleagues would disparage my efforts to teach a global history of science and technology as a second-year survey course. All this to say that I’ve long thought of myself as working between three fields. Or four. The other natural intersection here is with STS, and I’ve found myself an eclectic reader in that field over the past several years.

Things are starting to change, though. Rather than thinking of myself as lacking a singular intellectual home, I see a growing trend among a younger generation of scholars intent on working in much the same kinds of interstices as I am. In August 2010, I attended a workshop in Trondheim, Norway, that sought to bring STS and environmental history into more explicit conversation with each other, and the warm reception to the MIT series suggests there is strong interest in seeing how these subdisciplines talk to each other. Earlier this week, I came across this bio of my friend and colleague, Ben Cohen (he of anti-ant fame). It’s a nice write-up (and his office looks much nicer than mine), but midway through the article, Ben says:

The history of science, technology, and the environment reveals a world where people made decisions based on particular conditions in particular places about how to live in nature.

Nice. But what struck me was Ben’s singular use of “history.” Where I’ve been trying to juggle three things, Ben is doing one (and he does it very well, by the way; check out his book, Notes from the Ground). Maybe this is all semantics. Maybe it’s silly academic territoriality that really doesn’t mean much of anything. But it suggests an appealing intellectual starting point for what I do. I may have to start thinking about my field in a singular manner…