Uncertainty, Fear, & Mercury at Minamata: A Brief Overview

The tragic mercury poisoning epidemic at Minamata, Japan, forms one of the critical first chapters in the history of the Toxic Century, and the contamination of Minamata Bay in the 1950s constitutes one of the first expressions of the new landscapes that typify it. From 1932 to 1968, the Chisso Chemical Plant dumped mercury into the bay, on whose fish local villagers depended for a fish-heavy diet. By the early 1950s, a growing number of animals and then residents were afflicted with a mysterious disease that flummoxed medical experts. Most typically, the symptoms involved debilitating damage to the nervous system. While researchers at Kumamoto University were able to identify heavy metal poisoning, it took some time before they could point to methyl mercury with confidence. (Minamata disease symptoms were first observed in humans in 1953; in 1959, studies definitively concluded that methyl mercury was the source.)

Uncertainty ruled the early response. Hospitals quarantined sick patients, concerned that their ailment was contagious. “Whenever a new patient was identified,” Akio Mishima reported in Bitter Sea, “white-coated public health inspectors hurried to his or her house to disinfect every nook and cranny.” And still the fishing community ate the fish from the bay. Kibyo—strange illness—the locals said, when another neighbour showed symptoms. In historical circles, we resist talking about passive victims, but the hapless not-knowingness of the early stages of the Minamata outbreak can be framed in a manner that would impress Alfred Hitchcock.

Fear: the delay in discovering that acute mercury poisoning was the source of Kibyo provoked fear around not knowing the cause of the ailment. Subsequent victims also expressed fears about dying. Another form of fear manifested itself in the cultural response to victimhood. As science pointed toward the bay and the fish therein as the source of Minamata disease, divisions arose within the community between the afflicted and the fishermen who depended upon the bay for their livelihood. Patients’ families seeking compensation suffered discrimination from their neighbours. This ostracism stimulated new forms of fear.

Uncertainty: Mercury & the Politics of the Reference Dose

I keep coming back to the idea of uncertainty. It’s an omnipresent feature of the mercury project. Uncertainty, I think, is also at the heart of how toxic fear manifests itself. We’re afraid of what we don’t know, or don’t understand. And, yet, chemical pollution demands that we act quickly, and sometimes with incomplete information about the nature of the contaminant’s threat. So when uncertainty prevails, how do you develop baseline regulation? In the aftermath of the mercury poisoning epidemic at Minamata, national and global health agencies raced to identify acceptable exposure limits for mercury. These efforts were complicated by mercury’s ubiquity in industry and, scientists discovered, throughout the environment. As various organizations introduced reference dose recommendations that erred on the side of caution to accommodate unknowns in the available data (such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes), it became glaringly apparent that these preliminary numbers were not nearly conservative enough.

My focal point is the politics of establishing a reference dose for mercury and the manner in which uncertainty rests at the heart of this problem. The reference dose incorporates a standard uncertainty factor, built in to represent unknowns in the available data, such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes. The crux of the problem is establishing a regulatory line between safe and unsafe levels of mercury in human bodies, and doing so without relying on a trial-and-error approach.

I want to argue that mercury has a distinctive place in the ecosystem of quantifying chemical hazards, due in no small measure to the manner in which it impressed itself through a series of acute poisoning epidemics during the latter half of the twentieth century, but also in terms of how it was measured. The weak mortar that holds this presentation together is the contradiction between the scientific and legislative uses of toxicological research. Where the scientific endeavour seeks to identify acceptable parameters for chemical risk, legislative demands put scientific findings in conversation with competing economic and political imperatives.

To illustrate, consider the anecdotal story related by Nils-Erik Landell, reflecting on the Swedish mercury case of the 1960s. Sweden was the first developed country to locate widespread industrial mercury pollution in its water systems (this, of course, discounting the acute mercury poisoning case in Minamata, Japan). Landell recalls:

I was working at the Public Health Institute to get money for my education as a medical doctor … and my chief had written a toxicological evaluation of the maximum limit of mercury in fish. I saw it on his table, and he had written [the safe limit of mercury content in fish] 0.5 milligrams per kilogram of wet weight. The next day, the paper was still there on the table, but now I saw that he had rubbed it out and it was now 1.0 milligrams per kilogram. And I asked him why … and he said in Lake Vänern, the biggest lake in Sweden, the fishermen had pointed out that the fish had a concentration of 0.7, so he had to raise it to 1.0. And I understood that the evaluation of toxicology was not so sharp as it should be, but it was illustrative of the pressure from different companies and economic interests on the scientists.

As a reference point, the current EPA reference dose for mercury in fish is 0.1 µg/kg/day (there’s an interesting side-story here—maybe a post for another day).
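To get a feel for what a number like that implies, here is a rough back-of-the-envelope sketch in Python. The 60 kg body weight is a hypothetical assumption of mine, as is the simplification of spending the entire reference dose budget on fish; actual regulatory consumption advisories are more elaborate.

```python
# Rough sketch: what a reference dose (RfD) implies for daily fish intake.
# Assumptions (mine, for illustration only): a 60 kg adult, and that the
# entire RfD budget is spent on fish of a given mercury concentration.

RFD_UG_PER_KG_DAY = 0.1   # EPA reference dose: micrograms Hg per kg body weight per day
BODY_WEIGHT_KG = 60.0     # hypothetical adult

def daily_fish_allowance_g(fish_hg_mg_per_kg):
    """Grams of fish per day that keep intake at or below the RfD."""
    daily_budget_ug = RFD_UG_PER_KG_DAY * BODY_WEIGHT_KG  # micrograms Hg per day
    fish_hg_ug_per_g = fish_hg_mg_per_kg                  # mg/kg is the same as ug/g
    return daily_budget_ug / fish_hg_ug_per_g

# The two limits from the Swedish anecdote:
print(daily_fish_allowance_g(0.5))  # 12.0 g of fish per day at 0.5 mg/kg
print(daily_fish_allowance_g(1.0))  # 6.0 g of fish per day at 1.0 mg/kg
```

Doubling the permissible concentration in fish, as in Landell’s anecdote, halves the amount of fish that fits under the same exposure budget.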

To start, allow me to move away from mercury to discuss the broader history of the reference dose. Measuring the safety of chemicals is a feature of post-World War II environmental praxis. Starting in the United States, efforts to identify safe levels for new additives in foods in the mid-1950s prompted interest in articulating safe levels of acute and chronic exposure to harmful chemicals. The first recommendations came from two scientists at the US Food and Drug Administration. In 1954, Arnold Lehman and O. Garth Fitzhugh posited that animal toxicity tests could be extrapolated qualitatively to predict responses in humans, but that quantitative predictions were more problematic. To articulate safe levels of a given toxin, they proposed that the reference dose be evaluated by the following formula:

Reference Dose (RfD) = NOAEL/Uncertainty Factor

Lehman and Fitzhugh set their uncertainty factor at a 100-fold margin. That is to say, exposure levels to harmful chemicals should be set a hundred times lower than the point at which no adverse effects had been observed in the laboratory (the NOAEL, or no-observed-adverse-effect level). The 100-fold safety factor was traditionally interpreted as the product of two separate 10-fold default values. The protocol worked on the assumption, first, that human beings were 10 times more sensitive than the test animal, and, second, that the variability of sensitivity within the human population could be managed within a 10-fold frame.
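The arithmetic is simple enough to sketch in a few lines of Python. The NOAEL value below is a made-up placeholder, not data from any actual study:

```python
# Sketch of the Lehman-Fitzhugh reference dose calculation.
# The NOAEL used here is a placeholder value, not real study data.

UF_INTERSPECIES = 10   # assume humans are 10x more sensitive than the test animal
UF_INTRASPECIES = 10   # assume a 10x spread of sensitivity across the human population

def reference_dose(noael_mg_per_kg_day, extra_factors=()):
    """RfD = NOAEL / (product of uncertainty factors)."""
    uf = UF_INTERSPECIES * UF_INTRASPECIES
    for f in extra_factors:  # e.g. a further 10x for gaps in the database
        uf *= f
    return noael_mg_per_kg_day / uf

# A hypothetical NOAEL of 5.0 mg/kg/day yields an RfD of 0.05 mg/kg/day:
print(reference_dose(5.0))         # 0.05
print(reference_dose(5.0, (10,)))  # 0.005 with one additional 10x factor
```

Later regulatory practice layered on additional factors (for database gaps, for extrapolating from short studies to lifetime exposure, and so on), which is what the optional extra_factors argument gestures at.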

The fundamental premise of the reference dose, as Lehman and Fitzhugh conceived it, was to address the untidiness of extrapolating animal data and applying them to human populations outside the lab. In effect, the initial 100-fold safety margin was arbitrary, without any real quantitative basis for or against it. Nevertheless, it is a principle that has stood up to more recent scientific scrutiny, and variants of it remain in practice sixty years later.

To mercury. Though mercury’s entry into the toxic century occurred at Minamata, it is the Swedish case study that galvanized growing interest in establishing a reference dose for mercury exposure. The Minamata case was the result of very specific mercury emissions into the bay, and there was considerable delay in identifying mercury as the source. A combination of not looking further for mercury in the environment and broader disinterest in international circles meant that much of the Japanese research was not revisited until the 1970s, when mercury was accepted as a ubiquitous environmental contaminant with universal reach. In the mid-1960s, Swedes found mercury prevalent in wild birds (a product of mercury-coated seed grain, valued for its fungicidal properties) and, subsequently, throughout their water systems, through a variety of industrial uses. Swedish concerns over an appropriate reference dose for mercury rested on the hypothetical: they had discovered mercury, but had not experienced any cases of mercury poisoning. So what was the threshold? Their analyses debated the merits of measuring mercury content in dry or wet weight of fish, measuring potential threats to the fishing industry, and determining social and individual risks associated with mercury exposure.

But if the reference dose studies in Sweden were based on conjecture, mercury’s neurotoxic potential was realized in Iraq in 1972. Widespread poisoning resulted after a mishandled supply of mercury-coated Wonder Wheat arrived too late from Mexico to be planted. Desperate, hungry farmers started making homemade bread from the seed grain. The seeds had been dyed pink to warn that they had been treated with hazardous chemicals, but farmers assumed that washing off the dye also removed the mercury. Numbers on the severity of the mercury epidemic vary drastically. Official, Ba’athist counts suggest 4500 victims; more recent, independent observers estimate at least ten times that number.

Amidst the chaos and calamity, the Iraqi case provided a critical opportunity to measure mercury exposure in human subjects. Whereas the Swedes had been occupied with measuring mercury content in fish, the new evaluations could be rendered more precise by disregarding the first 10-fold protocol, effectively eliminating the interspecies uncertainty factor (getting rid of the middle-fish). Put another way, where Lehman and Fitzhugh were addressing uncertainty factors as part of a qualitative analysis of potential risk, data derived from Iraq could support a more quantitative approach. As a result, numerous national and international agencies (the World Health Organization and the US Food and Drug Administration foremost among them) collected data from mercury victims in the provinces around Baghdad. These studies subsequently served as the cornerstone for numerous national and international recommendations on acceptable mercury exposure for the next twenty-five years.

During the 1980s, however, researchers in Europe and in the United States raised concerns about the validity of the data. The measurements taken in Iraq stemmed from acute mercury poisoning—the rapid consumption of dangerously high levels of mercury. Were these findings—and the limits they proposed—consistent with the much more common chronic, low-level exposure? If mercury-contaminated fish was part of a regular diet over a longer period of time, how would mercury behave and what would be the epidemiological effects?

The first project to take up these questions was composed of an international team based at Harvard, which undertook an assessment of possible brain function impairment in children due to prenatal exposure to mercury when the mothers’ diet was high in seafood. They selected as their case study the small communities of the Faroe Islands, a traditional population that ate some fish and occasionally feasted on mercury-contaminated whales. I’ll leave out the specifics of the study, but the authors found that high levels of mercury passed from mother to child in utero produced irreversible impairment to specific brain functions in the children. By age 7, the 614 children with the most complete mercury-exposure data had lower scores in 8 of 16 tests of language, memory, and attention, suggesting that low-level mercury exposure caused neurological problems.

At roughly the same time, a team of researchers at the University of Rochester Medical Center carried out mental and motor tests on children born on the Seychelles Islands. The study, begun in 1989, looked for an association between mercury exposure and behaviour, motor skills, memory, and thinking in 779 children born to mothers who averaged a dozen fish meals a week. Around age 9, higher mercury exposure was associated with only two test results. Boys, but not girls, were slower at one movement test, but only when using their less capable hand; and boys and girls exposed to more mercury were rated as less hyperactive by their teachers. The authors concluded, “These data do not support the hypothesis that there is a neurodevelopmental risk from prenatal methylmercury exposure resulting solely from ocean fish consumption.” So while the Faroes study indicated cause for concern in low-level mercury exposure through ocean fish consumption, the Seychelles study exonerated mercury. To complicate matters, a third study in New Zealand, which followed the Seychelles methodology, identified mercury risk more consistent with the Faroes findings.

By way of exit strategy, let me conclude by situating talk of reference doses in its larger context. Interest in and analysis of mercury pollution and its acceptable limits constitute part of the transformation of global environmentalism after World War II. Put very roughly, prior to 1945 concern for the environment consisted of protecting nature from the onslaught of civilization; after 1945 this concern—in actions and in rhetoric—shifted to protecting civilization from itself. The environmental lexicon supports this notion. New vocabulary—bioaccumulation, biomagnification, environmental risk, chemical hazard—became prevalent, transforming our environmental engagement. Similar transformations took place within toxicological vocabularies: environmental toxicology, toxicokinetics, and toxicodynamics all suggest that specialized and nonspecialized forms of language evolved during the second half of the twentieth century. None of this should come as a surprise, but it adds a layer of complexity to the traditional, post-materialist arguments that have typically explained the post-war environmental transformation.

The struggle for precision comes at another price, however. This bodily turn in environmental thinking has understandably shifted the gaze of environmental monitoring from the ecosystem to the body. What happens “out there,” ironically, matters less than what happens “in here.” And fear over public health risks has galvanized a more pressing need for scientific knowledge and political action, the interaction between the two breeding a landscape of new, reactionary or crisis disciplines charged with making sense of environmental hazards. That policy moves faster than science, and thereby shapes both the practice of knowledge gathering and its place in policymaking, has historically constituted one of the primary obstacles in the struggle for epistemic clarity when articulating threshold levels for mercury exposure. In somewhat related news, I received a copy of Frederick Rowe Davis’s book, Banned: A History of Pesticides and the Science of Toxicology, the other day. I have yet to get beyond the first chapter, but I look forward to seeing how he treats the messy politics of environmental toxicology, and especially the relationship between science and policy.

Lest this discussion seem more at home in the histories of science and policy, let me assert a place for it in environmental history as well. Mercury is a naturally occurring feature of the physical environment, but human activities have increased the amount of mercury in circulation beyond any quantities that could ever be considered normal. Atmospheric levels are seven times higher and ocean-surface levels are almost six times higher than they were in 2000 BC. Half of that increase has occurred since 1950, during the toxic century. In effect, human-industrial practices provoked and set in motion the need for establishing a reference dose for mercury. But this is also a story grounded in place—or, rather, places. While the preliminary history of mercury’s reference dose took place in laboratories, it was prompted by the discovery that mercury was present in significant quantities in various specific places. Similarly, with the advent of the acute poisoning cases in Iraq in the early 1970s, reference dose studies left the lab to attend to mercury in the field, thereby transforming the nature and parameters of knowledge construction. In so doing, they invite re-readings of how we might tell stories about nature and the numbers we use to make sense of it.

Post-Normal Science

In their 2007 book, Rethinking Expertise, Harry Collins and Robert Evans reiterated their contention that “science, if it can deliver truth, cannot deliver it at the speed of politics.” This is the enduring tension of the mercury project in general. Since the Commoner book, I’ve been drawn to some older work by Jerome Ravetz, where he introduces the notion of post-normal science: science conducted where social, political, and economic values weigh on the results. The project of this post-normal science (a derivative of Thomas Kuhn’s paradigm-based normal science) is not to collect and present definitive knowledge, but rather to function within a highly complex network of policymaking interests, best described by Latour’s notion of “co-production,” which marries the production of knowledge with the production of social order.

Ravetz is especially interested in public participation in science and in subsequent political decision-making, which he sees as a positive and viable—indeed necessary—direction for contemporary science. Post-normal science reflects the new nature of scientific inputs to policy processes. According to Ravetz, “only through post-normal science can scientific endeavor recover from the loss of morale and commitment that started with the Bomb … and is now rampant under the capture of science by globalization.” In a 1992 article in Theory, Culture & Society, Ulrich Beck raised another potential boon for scientific democracy. “The exposure of scientific uncertainty,” he wrote, “is the liberation of politics, law, and the public sphere from their patronization by technocracy.”[1] (I’ll need to devote another post to uncertainty; this is especially fertile ground.)

Public science has fostered and will continue to foster greater scientific literacy and a more informed public. That was certainly my interpretation of post-normal science in the Commoner book. Commoner was a scientist-activist who devoted an incredible amount of time and energy to ensuring that the public was informed and had the necessary tools with which to participate in public debate (I wrote about this kind of vernacular science the other day). In my book, I stressed the importance of Commoner’s activist apparatus, which consisted of science, dissent, and information. In many respects, Commoner was a model of what Ravetz had in mind, both in terms of praxis and as a means of restoring scientific integrity.

I still regard Commoner as a central and positive figure in twentieth-century history (I don’t feel at all uncomfortable with the more hagiographic elements of the book; Commoner’s story is essential to the environmental history of the twentieth century, and he was one of the most important players in American environmentalism), but the mercury project has me changing gears a little bit on the idea of post-normal science. Return to the relationship between science and politics: science moves less quickly. In a complex network of competing interests, science can be relegated to one participant at a diverse table, equal with economic interests or local knowledge or political imperatives. All well and good, perhaps, but it seems to me that science, while I would question its capacity to deliver unmitigated truths, is about the best and most reliable means of knowledge-gathering we have at our disposal. And sometimes expertise and democracy are at odds. The mercury project shows this in multiple case studies and iterations. So while I adhere to the democratic principle of post-normal science, I wonder, sometimes, about its universal validity.

My interest here is less to pass judgment on the moral nature of post-normal science than to recognize its mechanisms as a prevalent feature of the scientific landscape after World War II. Too, I’m fascinated by the intricate dance involving science, policy, publics, expertise, and uncertainty.

[1] Ulrich Beck, “From Industrial Society to the Risk Society: Questions of Survival, Social Structure, and Ecological Enlightenment,” Theory, Culture & Society 9 (1992), 97-123.

On Writing

I’m probably not supposed to admit to it quite so freely, but history is an exercise in storytelling. I revel in writing and in crafting narrative. I’m not especially adept at either, but these creative aspects of my discipline hold me captive and bring me considerable pleasure. The best days consist of uninterrupted time to write. Some of those days are good days. The writing comes smoothly, the ideas are sound, and the arguments develop flawlessly and almost on their own. Other days, writing is a struggle. Words, sentences, mental images, arguments get lost or muddled. Interesting material becomes dry and tangential.

Recently, I’ve been trying to breathe life into mercury’s biogeochemical cycle, the manner in which it circulates through the physical environment. While mercury possesses beguiling characteristics and flows across so much modern environmental history in its capacity as a pollutant and poison, my efforts to make its independent motility interesting (distinct from the fascinating debates in human science and politics) have been less successful. After spending almost a week hammering out three or four pages that described the steps of mercury’s biogeochemical cycle and how we should also start to think about an anthropogenic cycle for mercury, derived from human activities moving mercury around the planet, I became frustrated with just how bored I was. And if I was bored with a topic I found inherently stimulating, this couldn’t be a good sign. From multiple pages and extensive notes, I cut my description down to the following (still a bit rough):

It might suffice to assert that mercury has a natural biogeochemical flow. It goes up into the atmosphere and returns to the surface, where it shuffles around the lithosphere and biosphere, before evaporating again and rising up into the atmosphere in a perpetual, independent, and agonizingly slow cycle. Sometimes it returns to the earth’s sediment, replaced in the cycle by mercury loosed from the same source; this cycle can take centuries or millennia. Over the past 2000 years (and especially in the last 200) human activities—primarily digging holes in the ground and burning things—have dramatically increased the amount of mercury in circulation.

Mercury’s biogeochemical cycle is. Since mercury first iterated itself, it always was. This would be of little consequence to human history but for the role humans have played in augmenting the amount of mercury cycling through the environment and its concomitant effect on human health as it bioaccumulates up the food chain. As mercury cycles through the environment, it moves not just through inanimate matter, but also through living organisms. As organisms are eaten by bigger organisms, the mercury increases in concentration at every step. In aquatic systems, phytoplankton are eaten by zooplankton, who are eaten by fish, who are eaten by bigger fish, who find their way into the human diet. On land, the process of biomagnification traces a similar route through plants, birds, and mammals to the top of the food chain. Us.
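The stepwise magnification described above can be rendered as a toy model. The trophic levels and the uniform 10-fold factor per step are illustrative placeholders of mine; real biomagnification factors vary widely by species and ecosystem:

```python
# Toy model of biomagnification: mercury concentration multiplying at each
# trophic step. The levels and factors below are illustrative, not data.

FOOD_CHAIN = [
    ("phytoplankton", 1.0),  # baseline concentration (arbitrary units)
    ("zooplankton", 10.0),   # assumed 10x magnification per step
    ("small fish", 10.0),
    ("large fish", 10.0),
]

def concentrations(chain):
    """Return the cumulative mercury concentration at each trophic level."""
    levels = []
    conc = 1.0
    for name, factor in chain:
        conc *= factor
        levels.append((name, conc))
    return levels

for name, conc in concentrations(FOOD_CHAIN):
    print(f"{name}: {conc:g}")
# Under these assumptions, large fish end up at 1000x the phytoplankton baseline.
```

The point of the sketch is simply that multiplication, not addition, governs the process: a few trophic steps turn trace contamination at the bottom of the chain into dangerous concentrations at the top, where we eat.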

Far from perfect (and maybe still only interesting to me), but the prevailing lesson here is that simplicity trumps exhaustive description, especially when dealing with technical information. This could be longer, but for the purposes of stressing that mercury moves with human intervention and without it (and that both these processes have profound historical implications), maybe shorter is better. That, of course, raises interesting issues surrounding the craft of knowing the technical ins and outs of the science behind the processes and how best to translate them.

A Mercurial History

For a little too long now, I’ve been working on the history of knowing and regulating mercury pollution. The project is more or less global in scale and concentrates on the period since World War II, starting with Minamata in the 1950s. When asked what it’s about, I tend to say that it follows the struggle for epistemic clarity among and between scientists and policy makers—how science and politics work at different speeds and how the twentieth-century environmental crisis has pushed knowledge makers and brokers into novel and curious collaborations. It’s also a book about environmental toxicology, but I still have to work that out more clearly. I’ll be writing about this project from time to time on this blog, largely as a means of trying to organize my own thoughts and prod the writing along. Here’s a recent attempt at providing a summary overview of some of the project’s themes:

Twentieth-century mercury pollution is a slippery subject. Mercury’s transition from elemental isolation to unwelcome ecological integration—a physical and an epistemological journey—offers an intriguing blend of human and natural partnerships of the sort that make environmental history an important avenue of inquiry; in effect, the history of the global mercury problem affords scholars a valuable lens through which to examine interaction with an element that human practices invoke but do not define.

The challenges inherent in understanding and regulating this dangerous and prolific environmental pollutant across boundaries, jurisdictions, and constituencies constitute a vital testing ground for the examination of how environmental knowledge and policy travel in tandem over time and across boundaries; it also comprises one of the most critical chapters of a larger history of the hazardous chemicals regime—a series of independent but functionally related treaties and programs—that emerged after World War II to address the proliferation of new chemicals and pollutants introduced into the environment. In the decades after World War II, mercury was identified as a pollutant deriving from fungicides, mildew-resistant paint, run-off from gold mining, coal-fired power plant emissions, and the construction of hydroelectric reservoirs. Devastating mercury “epidemics” struck local populations in Japan, Guatemala, Ghana, Pakistan, Iraq, and Canada; high concentrations of mercury were discovered in water systems throughout the developed world, most notably in Sweden, Canada, and the United States; and as mercury became universally recognized as a toxic hazard, its disposal posed myriad new problems. In a focused study of this problem, I propose to examine the development of environmental toxicology in light of growing international concerns over mercury pollution after World War II, and put the budding scientific field in conversation with the policies that urgently sought to control mercury’s dangers.

While national and international governing bodies sought to develop legal and commercial mechanisms to reduce the release of mercury into the environment, sustainable resolutions have proven elusive, due in no small measure to the apparent disconnect between scientific knowledge and policy decisions. As mercury proliferated throughout the environment, scientists and policymakers around the world scrambled to make sense of and respond to this new hazard. Within the scientific community, environmental toxicology emerged as an important branch of toxicology that aimed to illuminate the relationship between environmental pollution and public health. For their part, politicians at both the national and international levels sought to reconcile competing industrial and public health interests. That these interests were frequently incommensurable only magnifies the tension between our exploitation of the physical environment and our understanding of it.

“All history is the history of unintended consequences,” writes historian David Blackbourn, “but that is especially true when we are trying to untangle humanity’s relationship with the natural environment.” In the case of mercury pollution, the proliferation of mercury and the difficulties inherent in regulating it were the direct result of a new science—and the scientific institutions that drove it—being asked to weigh in on the severity of a problem after the ecological hazard had already presented itself. The unintended consequences that drive the history of knowing and regulating mercury constitute an important lesson in the politics of scientific engagement.