The bias of industry-funded research is pervasive and well-documented. When industry funds a study, that study is more likely to reach pro-industry conclusions than independently funded research is. Companies regularly use this pro-industry science to cast doubt on research that hurts their bottom line. Classic cases come from the tobacco and fossil-fuel industries, which challenged evidence about the harms of smoking and the human causes of climate change. Businesses have marshaled pro-industry science to defend everything from asbestos, lead, and plastics to pharmaceutical drugs and even sugary drinks. But how exactly does industry use science to defend its products? What’s the methodology behind its political strategy?
Let’s look at a recent case involving organophosphates like chlorpyrifos, formerly used in the household pesticide Raid and now one of the most common pesticides in agricultural use. The EPA proposed to ban chlorpyrifos in 2015, based on a series of observational studies published in 2011 by independent teams at the University of California, Berkeley, Columbia University, and Mt. Sinai Medical School. These studies, conducted by epidemiologists (scientists who study patterns of disease in populations), linked low-dose exposure in children to deficits in IQ, working memory, and perceptual reasoning. All three used a prospective birth-cohort design, meaning that they recruited pregnant women and their children before birth, measured in-utero exposure, and tracked cognitive development across early childhood up to age 7.
While there were earlier warning signs, the studies found that exposure in utero was linked to adverse effects up to 7 years later: the greater the prenatal exposure, the poorer the cognitive development. These effects appeared at 12 months and continued through childhood. One study found that with each 10-fold increase in a pregnant woman’s organophosphate levels, her child’s IQ dropped by 6 points. In another study, each unit of chlorpyrifos exposure measured in the umbilical cord was associated with a 1.4% decline in childhood IQ and a 2.8% decline in working memory by age 7. Together the studies provided “strength in numbers”: a convergence of results from three independent teams studying three different populations across the US strongly suggested that organophosphates are developmental neurotoxins. Furthermore, much of the exposure occurs among farmworkers and their children, who are disproportionately Latinx.
Nevertheless, then-EPA Administrator Scott Pruitt denied the ban last year, citing a lack of “regulatory certainty” and calling the studies’ results “novel and uncertain.” Regarding the proposed ban, Pruitt and others met regularly with industry representatives and promised to hear their case. DOW Chemical, one of the major producers of pesticides, has close ties to the Trump Administration, having donated $1M to its inaugural activities; its CEO served as head of the American Manufacturing Council.
DOW rejected the evidence of harm from these studies. Following the triple publication in 2011, DOW and other insecticide manufacturers funded research that reinterpreted the earlier results and rejected the harms of their product. In three review articles, funded by DOW AgroSciences LLC and the European Crop Protection Association and all published in the same journal (Journal of Toxicology and Environmental Health, Part B: Critical Reviews), the reviewers contended that while individual studies might have found correlations, no larger pattern emerged. Accordingly, they concluded that “evidence of causality…is not compelling” and that “epidemiologic studies do not support a causal association.”
While these review articles suffer from many serious problems, I’ll focus on one particularly glaring and pervasive methodological error. All three sets of reviewers required results to be statistically significant across the different studies, a standard commonly used to avoid jumping to conclusions too hastily. For instance, Li and colleagues compared two studies that reported associations between chlorpyrifos in the mother or child and adverse outcomes like pervasive developmental disorder (PDD) and attention deficit hyperactivity disorder (ADHD) (see p. 134 & table 3). One study found statistically significant increased risks for both PDD and ADHD; the other also found increased risks, but these reached statistical significance only for PDD, and only for increased metabolites measured in the child, not the mother. The authors interpreted the findings as evidence against a causal connection: “exposures to pesticides other than just [chlorpyrifos] may be involved” (p. 129).
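This kind of study-by-study “vote counting” can mislead. A minimal sketch makes the point with hypothetical numbers (the odds ratios and standard errors below are illustrative, not taken from the actual studies): two studies can estimate similar increased risks, with only one significant on its own, while a standard fixed-effect inverse-variance pooled estimate across both is clearly significant.

```python
import math

# Hypothetical log-odds-ratio estimates and standard errors for two
# studies of the same exposure-outcome association (illustrative only).
studies = [
    {"name": "Study A", "log_or": math.log(2.0), "se": 0.30},  # significant alone
    {"name": "Study B", "log_or": math.log(1.6), "se": 0.35},  # not significant alone
]

def z_and_p(est, se):
    """Two-sided p-value from a normal approximation."""
    z = est / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

for s in studies:
    z, p = z_and_p(s["log_or"], s["se"])
    print(f"{s['name']}: OR={math.exp(s['log_or']):.2f}, p={p:.3f}")

# Fixed-effect inverse-variance pooling: weight each study by 1/se^2.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["log_or"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z, p = z_and_p(pooled, pooled_se)
print(f"Pooled: OR={math.exp(pooled):.2f}, p={p:.4f}")
```

With these made-up inputs, Study B alone falls short of p < 0.05, yet the pooled estimate is significant well below that threshold: counting significant studies is a weaker test than combining the evidence they contain.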
Like the other industry-funded reviews, when some of the results were not statistically significant, the authors interpreted the data as providing “no consistent patterns of adverse association across studies” and dismissed the argument that chlorpyrifos causes neurodevelopmental harm. However, this interpretation is highly problematic: while epidemiologists often use statistical significance as an indicator of a possible connection, it is incorrect to treat the lack of statistical significance as evidence that no harmful effect exists. Experimental studies take place in highly controlled environments; observational studies like birth cohorts do not, which makes it problematic to equate statistical significance with biological significance.
Why is this problematic? First, because these studies are observational rather than experimental, they do not fulfill the basic statistical prerequisites for requiring results to be statistically significant: experimental studies have protocols, such as randomization, that reduce the chance of lurking variables or coincidences, while observational studies do not. Without these precautionary procedures, the mathematical assumptions underlying statistical tests do not hold.
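There is a further statistical reason why “not significant” cannot be read as “no effect”: studies of modest size often lack the power to detect a real harm. A toy simulation (the sample size, effect size, and noise level are all made up for illustration, not drawn from the cohorts discussed above) builds in a true 3-point IQ deficit in every simulated exposed group, yet a large share of the simulated studies still fail to reach p < 0.05.

```python
import math
import random
import statistics

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value via a Welch-style z-approximation to the t-test."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

TRUE_DEFICIT = 3.0   # real IQ-point harm in the exposed group (hypothetical)
SD = 15.0            # standard deviation of IQ scores
N = 100              # children per group (hypothetical cohort size)
TRIALS = 2000        # number of simulated studies

significant = 0
for _ in range(TRIALS):
    exposed = [random.gauss(100 - TRUE_DEFICIT, SD) for _ in range(N)]
    unexposed = [random.gauss(100, SD) for _ in range(N)]
    if two_sample_p(exposed, unexposed) < 0.05:
        significant += 1

power = significant / TRIALS
print(f"Share of simulated studies reaching p < 0.05: {power:.2f}")
# The harm is real in every simulated study, yet most studies miss it.
```

Under these assumptions, well under half of the simulated studies are significant even though the harm exists in all of them, so a scattering of non-significant results across real cohorts is exactly what a genuine low-dose effect would be expected to produce.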
Second, drawing an inference of “safe,” or even “no evidence of harm,” from some of these data is akin to committing the fallacy of appeal to ignorance: the absence of knowledge about a substance’s risks does not mean it is not risky! Third, we do have evidence of harm in some cohorts, even if not all the data achieve statistical significance. Given the difficulty of documenting the effects of environmental exposures, a positive result should not be taken lightly.
Regrettably, arguments that focus on the scientific and logical errors these industry-funded researchers commit are unlikely to convince everyone, especially industry supporters. Scientific judgment is an interpretive enterprise, and scientists can appeal to different standards to draw different conclusions from the same data. It is nonetheless interesting and important to understand the specific methods industry uses to defend its products with publications in scientific journals.
As with many other cases, industry and its funded scientists often cite the need for scientific certainty, as Pruitt has, and then point to very demanding and inappropriate standards–like requiring statistical significance across observational studies. In the words of historians Naomi Oreskes and Erik Conway, these scientists act as “merchants of doubt” by demanding an unreasonable standard of certainty to undermine regulatory efforts. Until we can understand how industry subtly misuses scientific authority for its interests, we will be left in endless cycles of bad-faith interpretation and debate with self-interested parties. And, most importantly, we will sacrifice the health of children and communities in the process.