Uncertainty, Fear, & Mercury at Minamata: A Brief Overview

The tragic mercury poisoning epidemic at Minamata, Japan, constitutes one of the critical first chapters in the history of the Toxic Century and one of the first expressions of the new landscapes that typify it. From 1932 to 1967, the Chisso Chemical Plant dumped mercury into Minamata Bay, whose fish formed the staple of local villagers' diets. By the early 1950s, a growing number of animals and then residents were afflicted with a mysterious disease that flummoxed medical experts; most typically, its symptoms involved debilitating damage to the nervous system. Researchers at Kumamoto University were able to identify heavy metal poisoning, but it took some time before they could point to methylmercury with confidence. (Minamata disease symptoms were first observed in humans in 1953; in 1959, studies definitively concluded that methylmercury was the source.)

Uncertainty ruled the early response. Hospitals quarantined sick patients, concerned that their ailment was contagious. “Whenever a new patient was identified,” Akio Mishima reported in Bitter Sea, “white-coated public health inspectors hurried to his or her house to disinfect every nook and cranny.” And still the fishing community ate the fish from the bay. Kibyo—strange illness—the locals said when another neighbour showed symptoms. In historical circles, we resist talking about passive victims, but the hapless not-knowingness of the early stages of the Minamata outbreak can be framed in a manner that would impress Alfred Hitchcock.

Fear: the delay in discovering that acute mercury poisoning was the source of the kibyo provoked fear of an ailment whose origins nobody knew. Subsequent victims also expressed fears about dying. Another form of fear manifested itself in the cultural response to victimhood. As science pointed toward the bay and the fish therein as the source of Minamata disease, divisions arose within the community between the afflicted and the fishermen who depended upon the bay for their livelihood. Patients’ families seeking compensation suffered discrimination from their neighbours. This ostracism stimulated new forms of fear.

Uncertainty: Mercury & the Politics of the Reference Dose

I keep coming back to the idea of uncertainty. It’s an omnipresent feature of the mercury project. Uncertainty, I think, is also at the heart of how toxic fear manifests itself. We’re afraid of what we don’t know—or don’t understand. And, yet, chemical pollution demands that we act quickly, and sometimes with incomplete information about the nature of the contaminant’s threat. So when uncertainty prevails, how do you develop baseline regulation? In the aftermath of the mercury poisoning epidemic at Minamata, national and global health agencies raced to identify acceptable exposure limits for mercury. These efforts were complicated by mercury’s ubiquity in industry and—scientists discovered—throughout the environment. As various organizations introduced reference dose recommendations that erred on the side of caution to accommodate unknowns in the available data, it became glaringly apparent that these preliminary numbers were not nearly conservative enough.

My focal point is the politics of establishing a reference dose for mercury and the manner in which uncertainty rests at the heart of this problem. The reference dose builds in a standard uncertainty factor to represent unknowns in the available data—such as differences in sensitivity across a population and the inability of a single study to address all possible adverse outcomes. The crux of the problem is establishing a regulatory line between safe and unsafe levels of mercury in human bodies—and doing that without relying on a trial-and-error approach.

I want to argue that mercury has a distinctive place in the ecosystem of quantifying chemical hazards, due in no small measure to the manner in which it impressed itself through a series of acute poisoning epidemics during the latter half of the twentieth century, but also in terms of how it was measured. The weak mortar that holds this presentation together is the contradiction between the scientific and legislative uses of toxicological research. Where the scientific endeavour seeks to identify acceptable parameters for chemical risk, legislative demands put scientific findings in conversation with competing economic and political imperatives.

To illustrate, consider the anecdote related by Nils-Erik Landell, reflecting on the Swedish mercury case of the 1960s. Sweden was the first developed country to locate widespread industrial mercury pollution in its water systems (discounting, of course, the acute mercury poisoning case in Minamata, Japan). Landell recalls:

I was working at the Public Health Institute to get money for my education as a medical doctor … and my chief had written a toxicological evaluation of the maximum limit of mercury in fish. I saw it on his table, and he had written [the safe limit of mercury content in fish] 0.5 milligrams per kilogram of wet weight. The next day, the paper was still there on the table, but now I saw that he had rubbed it out and it was now 1.0 milligrams per kilogram. And I asked him why … and he said in Lake Vänern, the biggest lake in Sweden, the fishermen had pointed out that the fish had a concentration of 0.7, so he had to raise it to 1.0. And I understood that the evaluation of toxicology was not so sharp as it should be, but it was illustrative of the pressure from different companies and economic interests on the scientists.

As a reference point, the current EPA reference dose for methylmercury is 0.1 µg per kilogram of body weight per day (there’s an interesting side-story here—maybe a post for another day).
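
To see what a number like that means in practice, here is a minimal sketch, assuming a hypothetical 60 kg adult eating a hypothetical 200 g serving of fish at Sweden's original 0.5 mg/kg wet-weight limit from Landell's anecdote; the scenario and variable names are mine, not the EPA's:

```python
# Hedged illustration: how a body-weight-based reference dose compares
# with a fish-concentration limit. Body weight and meal size are
# hypothetical; the 0.5 mg/kg figure comes from Landell's anecdote above.

RFD_UG_PER_KG_DAY = 0.1    # EPA reference dose for methylmercury
BODY_WEIGHT_KG = 60        # hypothetical adult
FISH_HG_MG_PER_KG = 0.5    # Sweden's first wet-weight limit for fish
MEAL_SIZE_KG = 0.2         # hypothetical 200 g serving

tolerable_daily_intake_ug = RFD_UG_PER_KG_DAY * BODY_WEIGHT_KG  # 6.0 µg/day
mercury_per_meal_ug = FISH_HG_MG_PER_KG * MEAL_SIZE_KG * 1000   # 100.0 µg

# Days of "allowance" consumed by a single meal at the Swedish limit:
print(mercury_per_meal_ug / tolerable_daily_intake_ug)  # about 16.7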

To start, allow me to move away from mercury to discuss the broader history of the reference dose. Measuring the safety factor of chemicals is a feature of post-World War II environmental praxis. Starting in the United States, efforts to identify safe levels for new food additives in the mid-1950s prompted interest in articulating safe levels of acute and chronic exposure to harmful chemicals. The first recommendations came from two scientists at the US Food and Drug Administration. In 1954, Arnold Lehman and O. Garth Fitzhugh posited that animal toxicity tests could be extrapolated qualitatively to predict responses in humans, but that quantitative predictions were more problematic. To articulate safe levels of a given toxin, they proposed that the reference dose be evaluated by the following formula:

Reference Dose (RfD) = NOAEL/Uncertainty Factor

Lehman and Fitzhugh set their uncertainty factor at a 100-fold margin. That is to say, exposure levels to harmful chemicals should be set a hundred times lower than the highest dose at which no adverse effects had been observed in the laboratory. The 100-fold safety factor was traditionally interpreted as the product of two separate default values of 10. The protocol worked on the assumption, first, that human beings were 10 times more sensitive than the test animal, and, second, that the variability of sensitivity within the human population could be managed within a 10-fold frame.
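
To make the arithmetic concrete, here is a minimal sketch of that calculation; the function and the NOAEL value are my own illustrations, not anything Lehman and Fitzhugh wrote down:

```python
# A minimal sketch of the reference dose calculation described above.
# The NOAEL value is hypothetical; the 10 x 10 decomposition follows
# the traditional interpretation of the 100-fold safety factor.

def reference_dose(noael_mg_per_kg_day, interspecies=10, intraspecies=10):
    """RfD = NOAEL divided by the product of the uncertainty factors."""
    return noael_mg_per_kg_day / (interspecies * intraspecies)

# An animal study observing no adverse effects at 10 mg/kg/day (hypothetical):
print(reference_dose(10))  # 0.1 mg/kg/day

# With human exposure data (as in the Iraqi studies discussed below), the
# interspecies factor can be dropped, leaving only the 10-fold margin for
# variability within the human population:
print(reference_dose(10, interspecies=1))  # 1.0 mg/kg/day
```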

The reference dose, as Lehman and Fitzhugh conceived it, was designed to address the untidiness of extrapolating animal data and applying them to human populations outside the lab. In effect, the initial 100-fold safety factor was arbitrary, without any real quantitative basis for or against it. It’s a principle that has stood up to more recent scientific scrutiny, and variants of it remain in practice sixty years later.

To mercury. Though mercury’s entry into the toxic century occurred at Minamata, it was the Swedish case study that galvanized growing interest in establishing a reference dose for mercury exposure. The Minamata case was the result of very specific mercury emissions into the bay, and there was considerable delay in identifying mercury as the source. A combination of not looking further for mercury in the environment and a broader lack of interest in international circles meant that much of the Japanese research was not revisited until the 1970s, when mercury was accepted as a ubiquitous environmental contaminant with universal reach. In the mid-1960s, Swedes found mercury prevalent in wild birds, a product of fungicidal mercury-coated seed grain, and subsequently throughout their water systems, a result of a variety of industrial uses. Swedish concerns over an appropriate reference dose for mercury rested on the hypothetical: they had discovered mercury, but had not experienced any cases of mercury poisoning. So what was the threshold? Their analyses debated the merits of measuring mercury content in the dry or wet weight of fish, measuring potential threats to the fishing industry, and determining the social and individual risks associated with mercury exposure.

But if the reference dose studies in Sweden were based on conjecture, mercury’s neurotoxic potential was realized in Iraq in 1972. Widespread poisoning resulted after a mishandled supply of mercury-coated Wonder Wheat arrived from Mexico too late to be planted. Desperate, hungry farmers started making homemade bread from the seed grain. The seeds had been dyed pink to warn that they had been treated with hazardous chemicals, but farmers assumed that washing off the dye also removed the mercury. Numbers on the severity of the mercury epidemic vary drastically: official Ba’athist counts suggest 4500 victims; more recent, independent observers estimate at least ten times that number.

Amidst the chaos and calamity, the Iraqi case provided a critical opportunity to measure mercury exposures in human subjects. Whereas the Swedes had been preoccupied with measuring mercury content in fish, the new evaluations could be rendered more precise by disregarding the first 10-fold protocol, effectively eliminating the interspecies uncertainty factor—getting rid of the middle-fish. Put another way, where Lehman and Fitzhugh had addressed uncertainty factors as part of a qualitative analysis of potential risk, data derived from Iraq enabled a more quantitative approach. As a result, numerous national and international agencies—the World Health Organization and the US Food and Drug Administration foremost among them—collected data from mercury victims in the provinces around Baghdad. These studies subsequently served as the cornerstone of national and international recommendations on acceptable mercury exposure for the next 25 years.

During the 1980s, however, researchers in Europe and in the United States raised concerns about the validity of the data. The measurements taken in Iraq stemmed from acute mercury poisoning—the rapid consumption of dangerously high levels of mercury. Were these findings—and the limits they proposed—consistent with the much more common chronic, low-level exposure? If mercury-contaminated fish was part of a regular diet over a longer period of time, how would mercury behave and what would be the epidemiological effects?

Two major epidemiological studies took up these questions. The first, composed of an international team and based at Harvard, undertook an assessment of possible brain function impairment in children due to prenatal exposure to mercury when the mothers’ diet was high in seafood. The researchers selected as their case study the small communities of the Faroe Islands, a traditional population that ate some fish and occasionally feasted on mercury-contaminated whale meat. I’ll leave out the specifics of the study, but the authors found that high levels of mercury passed from mother to child in utero produced irreversible impairment to specific brain functions in the children. By age 7, the 614 children with the most complete mercury-exposure data had lower scores in 8 of 16 tests of language, memory, and attention, suggesting that low-level mercury exposure caused neurological problems.

At roughly the same time, a team of researchers at the University of Rochester Medical Center carried out mental and motor tests on nine-year-old children born in the Seychelles. The study, begun in 1989, looked for an association between mercury exposure and behavior, motor skills, memory, and thinking in 779 children born to mothers who averaged a dozen fish meals a week. Around age 9, higher mercury exposure was associated with only two test results: boys, but not girls, were slower at one movement test, but only when using their less capable hand; and boys and girls exposed to more mercury were rated as less hyperactive by their teachers. The authors concluded, “These data do not support the hypothesis that there is a neurodevelopmental risk from prenatal methylmercury exposure resulting solely from ocean fish consumption.” So while the Faroes study indicated cause for concern about low-level mercury exposure through ocean fish consumption, the Seychelles study exonerated mercury. To complicate matters, a third study in New Zealand, which followed the Seychelles methodology, identified mercury risks more consistent with the Faroes findings.

By way of exit strategy, let me conclude by situating talk of reference doses in its larger context. Interest in and analysis of mercury pollution and its acceptable limits constitute part of the transformation of global environmentalism after World War II. Put very roughly, prior to 1945 concern for the environment consisted of protecting nature from the onslaught of civilization; after 1945 this concern—in actions and in rhetoric—shifted to protecting civilization from itself. The environmental lexicon supports this notion. New vocabulary—bioaccumulation, biomagnification, environmental risk, chemical hazard—became prevalent, transforming our environmental engagement. Similar transformations took place within toxicological vocabularies. Terms like environmental toxicology, toxicokinetics, and toxicodynamics suggest that specialized and nonspecialized forms of language use evolved during the second half of the twentieth century. None of this should come as a surprise, but it adds a layer of complexity to the traditional, post-materialist arguments that have typically explained the post-war environmental transformation.

The struggle for precision comes at another price, however. This bodily turn in environmental thinking has understandably shifted the gaze of environmental monitoring from the ecosystem to the body. What happens “out there,” ironically, matters less than what happens “in here.” And fear over public health risks has galvanized a more pressing need for scientific knowledge and political action, the interaction between the two breeding a landscape of new, reactionary or crisis disciplines to make sense of environmental hazards. Policy moves faster than science, and in so doing shapes the practice of knowledge gathering and its place in policymaking; historically, this mismatch has constituted one of the primary obstacles in the struggle for epistemic clarity when articulating threshold levels for mercury exposure. In somewhat related news, I received a copy of Frederick Rowe Davis’s book, Banned: A History of Pesticides and the Science of Toxicology, the other day. I have yet to get beyond the first chapter, but I look forward to seeing how he treats the messy politics of environmental toxicology—and especially the relationship between science and policy.

Lest this discussion seem more at home in the histories of science and policy, let me assert a place for it in environmental history as well. Mercury is a naturally occurring feature of the physical environment, but human activities have increased the amount of mercury in circulation beyond any quantities that could ever be considered normal. Atmospheric levels are seven times higher and ocean-surface levels are almost six times higher than they were in 2000 BC. Half of that increase has occurred since 1950, during the toxic century. In effect, human-industrial practices set in motion the need for establishing a reference dose for mercury. But this is also a story grounded in place—or, rather, places. While the preliminary history of mercury’s reference dose took place in laboratories, it was prompted by the discovery that mercury was present in significant quantities in various specific places. Similarly, with the advent of the acute poisoning cases in Iraq in the early 1970s, reference dose studies left the lab to attend to mercury in the field, thereby transforming the nature and parameters of knowledge construction. In so doing, they invite re-readings of how we might tell stories about nature and of the numbers we use to make sense of it.

Post-Normal Science

In their 2007 book, Rethinking Expertise, Harry Collins and Robert Evans reiterated their contention that “science, if it can deliver truth, cannot deliver it at the speed of politics.” This is the enduring tension of the mercury project in general. Since the Commoner book, I’ve been drawn to some older work by Jerome Ravetz, where he introduces the notion of post-normal science: science conducted where facts are uncertain and social, political, and economic values weigh on the results. The project of this post-normal science—a derivative of Thomas Kuhn’s paradigm-based normal science—is not to collect and present definitive knowledge, but rather to function within a highly complex network of policymaking interests, best described by Sheila Jasanoff’s notion of “co-production,” which marries the production of knowledge with the production of social order.

In effect, Ravetz is especially interested in public participation in science and in subsequent political decision-making. He sees it as a positive and viable—indeed necessary—direction for contemporary science. Post-normal science reflects the new nature of scientific inputs to policy processes. According to Ravetz, “only through post-normal science can scientific endeavor recover from the loss of morale and commitment that started with the Bomb … and is now rampant under the capture of science by globalization.” In a 1992 article in Theory, Culture & Society, Ulrich Beck raised another potential boon for scientific democracy. “The exposure of scientific uncertainty,” he wrote, “is the liberation of politics, law, and the public sphere from their patronization by technocracy.”[1] (I’ll need to devote another post to uncertainty; this is especially fertile ground.)

Public science has fostered, and will continue to foster, greater scientific literacy and a more informed public. That was certainly my interpretation of post-normal science in the Commoner book. Commoner was a scientist-activist who devoted an incredible amount of time and energy to ensuring that the public was informed and had the necessary tools with which to participate in public debate (I wrote about this kind of vernacular science the other day). In my book, I stressed the importance of Commoner’s activist apparatus, which consisted of science, dissent, and information. In many respects, Commoner was a model of what Ravetz had in mind, both in terms of praxis and as a means of restoring scientific integrity.

I still regard Commoner as a central and positive figure in twentieth-century history (I don’t feel at all uncomfortable with the more hagiographic elements of the book—Commoner’s story is essential to the environmental history of the twentieth century, and he was one of the most important players in American environmentalism), but the mercury project has me changing gears a little bit on the idea of post-normal science. Return to the relationship between science and politics: science moves less quickly. In a complex network of competing interests, science can be relegated to the role of one participant at a diverse table, equal with economic interests or local knowledge or political imperatives. All well and good, perhaps, but it seems to me that science—while I would question its capacity to deliver unmitigated truths—is about the best and most reliable means of knowledge-gathering we have at our disposal. And sometimes expertise and democracy are at odds. The mercury project shows this in multiple case studies and iterations. So while I adhere to the democratic principle of post-normal science, I wonder, sometimes, about its universal validity.

My interest here is less to pass judgment on the moral nature of post-normal science than to recognize its mechanisms as a prevalent feature of the scientific landscape after World War II. Too, I’m fascinated by the intricate dance involving science, policy, publics, expertise, and uncertainty.


[1] Ulrich Beck, “From Industrial Society to the Risk Society: Questions of Survival, Social Structure, and Ecological Enlightenment,” Theory, Culture & Society 9 (1992), 97-123.