Scott Firestone works as a researcher in evidence-based surgery, and recently started blogging about public health and environmental issues at His Science Is Too Tight, where this post originally appeared. You can find him on Twitter at @scottfirestone.
Kevin Drum from Mother Jones has a fascinating new article detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise and fall of violent crime rates from the 1960s through the 1990s—at which point the compound was phased out of gasoline worldwide. It’s a good bit of public health journalism compared to much of what you see, but I’d like to provide a little bit of epidemiology background to the article. There are so many studies listed that it makes for a really good introduction to the types of study designs you’ll see in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet even in the face of such compelling stories as this one.
Drum correctly notes that the correlation is insufficient to draw any conclusions regarding causality. The research (pdf) published by economist Rick Nevin was simply looking at associations, and saw that the curves were heavily correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare with an indirect measure of exposure to some environmental risk factor such as levels of TEL in gasoline during that same time, the best you can say is that your alternative hypothesis of there being an association (null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it’s been documented that values for a population do not always match those of individuals when looking at cross-sectional data.
This is the ecological fallacy, and it’s a serious limitation in these types of studies. Finding a causal link between an environmental risk factor and a complex behavior such as violent crime, as opposed to something like a specific disease, is exceptionally difficult, and the burden of proof is very high. We also need several additional tests of our hypothesis using different study designs to really turn this into a viable theory. As Drum notes:
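To make the ecological fallacy concrete: two population-level time series can be almost perfectly correlated without telling you anything about individuals. Here is a minimal sketch with entirely made-up numbers (none of these values come from Nevin’s data), where an aggregate lead-exposure series and a crime-rate series rise and fall together:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical aggregates: per-capita TEL use (lagged by ~20 years)
# and a violent-crime rate per 100,000, both rising then falling.
tel = [1.0, 1.4, 1.8, 2.0, 1.7, 1.2, 0.6, 0.2]
crime = [300, 420, 530, 600, 520, 380, 210, 120]

r = pearson(tel, crime)
print(f"r = {r:.3f}")
# The correlation is strong, yet this alone cannot tell us whether
# the individuals who were exposed are the ones committing crimes.
```

A strong r at the population level is exactly the kind of result that justifies further investigation but not a causal claim.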
During the ’70s and ’80s, the introduction of the catalytic converter, combined with increasingly stringent Environmental Protection Agency rules, steadily reduced the amount of leaded gasoline used in America, but Reyes discovered that this reduction wasn’t uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you’d expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that’s exactly what she found.
I looked a bit further at Reyes’s study. In the study, she estimates prenatal and early childhood exposure to TEL based on population-wide figures, and accounts for potential migration from state to state, as well as other potential causes of violent crime, to get a stronger estimate of the effect of TEL alone. After all of this, she found that the fall in TEL levels by state accounted for a very significant 56% of the reduction in violent crime.
Again, though, this is essentially a measure of association on population-level statistics, estimated on the individual level. It’s well thought out and heavily controlled for other factors, but we still need more than this. Drum goes on to describe significant associations found at the city level in New Orleans. This is pretty good stuff too, but we really need a new type of study: one that measures many individuals’ exposure to lead and follows them over a long period of time to find out what happened to them. This type of design is called a prospective cohort study. Props again to Drum for directly addressing all of this.
It turns out there was in fact a prospective study done—but its implications for Drum’s argument are mixed. The study was a cohort study done by researchers at the University of Cincinnati. Between 1979 and 1984, 376 infants were recruited. Their parents consented to have the lead levels in their blood tested over time; these measurements were then matched, over subsequent decades, against the individuals’ arrest records, specifically arrests for violent crime. Ultimately, some of these individuals were dropped from the study; by the end, 250 remained in the final analysis.
The researchers found that for each increase of 5 micrograms of lead per deciliter of blood, there was a higher risk of being arrested for a violent crime, but a further look at the numbers shows a more mixed picture than they let on. For prenatal blood lead, the effect was not significant. If these infants had no additional risk relative to the median prenatal exposure level, the ratio would be 1.0. They found that for their cohort, the risk ratio was 1.34. However, the sample size was small enough that the confidence interval dipped as low as 0.88 (paradoxically indicating that an additional 5 µg/dl during this period of development would actually be protective) and rose as high as 2.03. This is not very convincing data for the hypothesis.
For early childhood exposure, the risk ratio is 1.30, but the sample size was larger, leading to a tighter confidence interval of 1.03–1.64. This range indicates that the effect could be as small as a 3% increase in violent crime arrests, but it is still statistically significant.
For 6-year-olds, it’s a much more significant 1.48 (confidence interval 1.15–1.89). It seems unusual to me that lead would have a progressively more profound effect the older the child gets, but I need to look into it further. For a quick review of the concept of CI, see my previous post on it. It really matters.
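For intuition about where those intervals come from, here is a minimal sketch of a Wald-type 95% confidence interval for a risk ratio, computed on the log scale, which is the standard textbook approach. The 2×2 counts are invented for illustration; they are not the Cincinnati cohort’s data:

```python
import math

def risk_ratio_ci(ev_exp, n_exp, ev_unexp, n_unexp, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale.

    ev_exp/n_exp: events and total in the exposed group;
    ev_unexp/n_unexp: events and total in the unexposed group.
    """
    rr = (ev_exp / n_exp) / (ev_unexp / n_unexp)
    se = math.sqrt(1/ev_exp - 1/n_exp + 1/ev_unexp - 1/n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts: 30 of 100 higher-lead children arrested
# versus 20 of 100 lower-lead children.
rr, lo, hi = risk_ratio_ci(30, 100, 20, 100)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The interval includes 1.0, so this hypothetical result would
# not be statistically significant at the 0.05 level.
```

This is the same situation as the prenatal estimate above: a point estimate well above 1.0 that nonetheless fails to reach significance because the interval dips below 1.0. Larger samples shrink the standard error, which is why the early-childhood and 6-year-old estimates have tighter intervals.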
Obviously, we can’t take this a step further into experimental data to test the hypothesis. We can’t purposefully expose some children to lead to see the direct effects. These cohort studies are the best we can do, and the findings are possibly quite meaningful, but perhaps not. There’s no way to say with much authority one way or another at this point, not just because of the smallish sample size and the mixed results on significance, but because the results haven’t been replicated. A cohort study is still measuring correlations, and thus we need more than one significant result. More prospective cohort studies like this, or perhaps retrospective ones done more quickly on previously collected blood samples, are absolutely necessary to draw a strong conclusion that lead is behind this effect. Right now, this all still amounts to a hypothesis without a clear mechanism of action, although it’s a hypothesis that definitely deserves more investigation.
There are a number of other studies mentioned in the article showing other negative cognitive and neurological effects that could certainly have an indirect impact on violent crime, such as ADHD, aggressiveness, and low IQ, but that’s not going to cut it either. By all means, we should try to make a stronger case for government to actively minimize exposure to lead in children more than we currently do, but we really, really should avoid statements like this:
Needless to say, not every child exposed to lead is destined for a life of crime. Everyone over the age of 40 was probably exposed to too much lead during childhood, and most of us suffered nothing more than a few points of IQ loss. But there were plenty of kids already on the margin, and millions of those kids were pushed over the edge from being merely slow or disruptive to becoming part of a nationwide epidemic of violent crime. Once you understand that, it all becomes blindingly obvious (emphasis mine). Of course massive lead exposure among children of the postwar era led to larger numbers of violent criminals in the ’60s and beyond. And of course when that lead was removed in the ’70s and ’80s, the children of that generation lost those artificially heightened violent tendencies.
That’s quite a bit overconfident. It’s beyond debate that lead can have terrible effects on people. But there is no real scientific basis for calling the violent crime link closed with such strong language. It’s a mostly benign case of confirmation bias, complete with placing the blame for inaction on powerful interest groups. Drum’s motive is clearly to argue that we can safely add violent crime reduction to the cost-benefit analysis of lead abatement programs paid for by the government. I’d love to, but we just can’t do that yet.
Image courtesy of Albert Lozano / Shutterstock