When journalists talk junk

Terence Corcoran wrote an opinion piece for the Financial Post recently that I take exception to.  The subheading of the article is “Junk Science Week: Finding that people who consume certain drinks also develop cancer is not an indicator that the drinks cause the disease”.  To summarize, Corcoran argues that much of the research in nutritional epidemiology is flawed, but that do-gooder socialists will still use it to justify regulation of one thing or another.

I think being critical of science is not only healthy, but essential to the scientific process.  However, Corcoran is not a scientist, and his critique is incomplete and unhelpful.  So, I sent him an email.  Here is the text of the email (edited from the original for style and clarity):

Dear Terence,

I agree with the general need to be skeptical about science, and have myself been accused of being overly skeptical by some of my peers. I think many of California’s regulations on carcinogens are nuts. I think we worry too much about air pollution in Canada, the impacts of wind turbines on sleep, fluoride in the drinking water and the profit motives of big pharma.

However, some journalists have a way of taking this kind of skepticism too far. Your article on alcohol and cancer is a perfect example of this.

It is true that some study designs are weaker than others; experimental study designs have clear strengths, and observational case studies based on self-reported data have clear weaknesses. The challenge is that almost all health research on human exposure to nutritional or environmental harm is forced to employ inferior study designs. This is because we can almost never do proper experimental research on links between the environment/nutrition and human health. It is often unethical and almost always impractical.

Given this, we could choose to dismiss all non-experimental research on humans as junk science. This would include almost all the research on tobacco and cancer, by the way, as well as all the research linking asbestos to mesothelioma. Indeed, if we hold all science to the highest standard of evidence, then we would probably have to conclude that salt has no demonstrated impact on hypertension, exercise may or may not extend life (or even improve the quality of life) and that drinking a two-four every weekend may or may not be harmful to your liver. We’d also have to say that there is no convincing evidence that free markets are good for economic productivity, that killing terrorists reduces the risk of terrorism, and that free expression is good for the human soul.

The fact is that we (as societies and people) still have to make decisions about the possibility that some things in our world may be harmful (or good) in spite of the absence of rigorous and convincing evidence. Almost any research on the health impacts of alcohol will be unsatisfying if we hold it up to the gold standard of experimental research designs. It’s for this reason that epidemiologists come up with heuristics (“Hill’s criteria” is one example) to make decisions under conditions of empirical uncertainty, considering things like effect size, replicability, study design, agency and other factors. It’s also why science works best as an iterative process of falsification, rather than truth finding.

While you seem to see research on alcohol and cancer as part of a pernicious socialist plan to regulate the world, I see it as part of the process of untangling a mess of mixed evidence that leads to incrementally better information over time.

It’s true that some regulators use inconclusive science to justify unnecessary regulation. The antidote to bad decisions is to raise the level of science literacy, not just to invoke general skepticism toward certain areas of research.

I understand that nuance is not good for selling papers (or click-baiting), but your article does little to explain the scientific process properly, or to help inform people about the challenges of making decisions about food and environmental safety.

Terence did not respond.

Changes of Income and Housing Value in Hamilton

Here’s an interactive map you can use to look at the average annual changes in income and dwelling value in Hamilton, Ontario from 2000 to 2015.  Zoom in with the +/- sign, and navigate around with your mouse.  Click on a house icon to see average annual change in dwelling value, and on the area immediately surrounding it to see the average annual change in income.  Positive values indicate an increase, and negative values indicate a decrease.

Methodological details

Data are from the census at the dissemination area level.  Missing data are excluded from the map; this includes any DA for which data were missing for any of the years between 2000 and 2015.  The numbers are the slopes of the linear trend fitted to the four census years (2000, 2005, 2010, 2015).  So in short, these represent average linear change over the 15-year period.  The values have been divided by 5 (the between-census interval) to give an annual average.  All data are in 2015 dollars using Bank of Canada inflation adjustments.
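For readers who want to see the calculation, here is a minimal sketch of the trend-slope approach described above.  The dwelling values are made up for illustration; only the method (least-squares slope over four equally spaced censuses, divided by the 5-year interval) comes from the note above.

```python
# Slope of a least-squares linear trend fitted to four equally spaced
# census observations, divided by 5 (the between-census interval) to
# give an average annual change.  Example values are hypothetical.

def annual_change(values, interval=5):
    """Average annual change from equally spaced census observations."""
    n = len(values)
    x = range(n)                      # census index: 0, 1, 2, 3
    x_mean = sum(x) / n
    y_mean = sum(values) / n
    num = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, values))
    den = sum((xi - x_mean) ** 2 for xi in x)
    slope_per_census = num / den      # change per between-census interval
    return slope_per_census / interval

# Hypothetical average dwelling values for 2000, 2005, 2010, 2015
# (in 2015 dollars):
print(annual_change([100000, 110000, 125000, 130000]))  # 2100.0 per year
```

A fitted slope is used rather than simply differencing the endpoints so that the middle two censuses also contribute to the estimate.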


Cancer clusters and social media

I use Google Alerts to notify me of English language reporting on cancer clusters.  I have been keeping track of stories about cancer clusters for a few years, and I have noticed a fairly persistent pattern in how concerns about cancer clusters enter the public sphere: initial media reports identify a handful of cases, a social media group forms and the case count grows as people self-report, and subsequent reporting repeats the larger number.

As an example, we can look at the Auburn ocular melanoma cluster in Alabama.  The first main reporting on the cluster was back on February 2, 2018.  At the time, reporting suggested 5 cases, 3 women and 2 men, all of whom had links to Auburn University over the same period.

Right around when these first media stories came out, a Facebook support group was formed.  By February 13th, another media outlet reported that the cluster now included 18 people.  The Facebook account reported 31 cases as of March 22, 2018.  By April 4, Healthline, an online media outlet, reported a cluster of 33 people.

The problem

It’s hard to say whether or not this cluster is a real concern.  Five cases may or may not be higher than expected by chance (it’s tricky to know for sure), but it is certainly high enough to justify some further investigation.  However, I find it very hard to believe that the cluster has 33 confirmed cases in it.  If it did, then there is something seriously, seriously wrong at Auburn.
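To illustrate why “higher than expected by chance” is tricky, here is a rough back-of-the-envelope Poisson sketch.  The incidence rate, population size and time window below are all hypothetical placeholders of my own choosing, not figures from the Auburn investigation:

```python
from math import exp, factorial

def p_at_least(k, lam):
    """P(X >= k) for a Poisson random variable with mean lam."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Hypothetical inputs: a background rate of ~5 cases per million
# person-years, 20,000 people with links to the university, observed
# over 30 years.  Expected count:
expected = 5e-6 * 20_000 * 30   # = 3.0 expected cases

# Probability of seeing 5 or more cases purely by chance:
print(round(p_at_least(5, expected), 3))  # ~0.185 -- not obviously anomalous
```

Small changes to those assumed inputs swing the answer considerably, which is exactly why a handful of cases can justify an investigation without proving anything.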

As everyone knows, social media is excellent at connecting people, but it isn’t excellent at sharing correct information, and is usually not a platform for rigorous analysis and decision making.  Online media sources (and some traditional sources) seem increasingly susceptible to ignoring good journalistic practice, focusing on the sensational–in this case, the large number of cases reported on Facebook–rather than carefully vetting information to confirm its validity.

The problem is that this journalistic failure can have many serious and tangible adverse consequences.

First, it can create unnecessary alarm.  I imagine many Auburn alumni are now very concerned about their eye health.  This concern has an emotional, financial and physical cost to them and their families.  The emotional strain and possible medical interventions that could follow may even lead to new health challenges.

Second, the reputation of Auburn University (and the town it is situated in) may have been damaged.  Even if there is a cluster, it’s possible that the cause has nothing to do with the university at all.  If an investigation does find fault–either presently, or historically–then someone at the university should be held responsible, but for the moment, there is insufficient evidence to even imply blame.

Third, the outcomes of cancer cluster investigations are rarely satisfying to the communities they affect.  The vast majority of the time, these investigations find no evidence of a cluster, or even of an elevated risk of cancer.  To the people in the community this is often inconceivable–especially once the media has amplified their concerns.  The result is dissatisfaction, a loss of faith in the institutions involved–including cancer experts and government–and even rifts in the community.

I don’t mean to imply that the media sources behind some of this reporting are being deliberately dishonest, or that the information shouldn’t make the news.  However, given the potential consequences of misinformation, they have a responsibility to be exceptionally careful about how they report the story.  Unfortunately, I see few examples of the media (traditional or otherwise) reporting this information with the necessary care or attention to detail.

The solution

As I have proposed before, one solution to this problem is to get out ahead of it.  Government agencies need to do routine surveillance of cancer and its main environmental determinants, and then routinely report this information to the public.  This openness can build trust, inform the public about what the risks actually are, and provide useful context to media reports that could emerge over time.  It also increases the rigour of cancer investigations.

There are many challenges to implementing such surveillance schemes; perhaps chief among them is the cost of implementation.  However, the costs of cancer cluster investigations are not trivial.  I am not aware of any analysis of the actual economic costs, but even if we assume that there are only 1,000 investigations a year in the US (probably a low estimate) and that each costs $100,000 in salaries, travel, lab costs, etc., then that’s $100 million a year spent on cluster investigations.  Routine cancer surveillance does not have to cost much money, as the data are already collected as standard practice in many jurisdictions, and the monitoring for clusters could be done using fairly simple machine learning systems.
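As a minimal sketch of what such automated monitoring might look like, the snippet below flags regions whose observed case counts sit far above expectation, using a normal approximation to the Poisson count.  All region names and counts are invented for illustration; a real system would be considerably more careful (for example, correcting for the thousands of regions being tested simultaneously):

```python
from math import sqrt

def flag_anomalies(counts, threshold=3.0):
    """Flag regions where observed cases exceed expectation by more than
    `threshold` standard deviations (normal approximation to a Poisson
    count: standard deviation = sqrt(expected))."""
    flagged = []
    for region, (observed, expected) in counts.items():
        z = (observed - expected) / sqrt(expected)
        if z > threshold:
            flagged.append(region)
    return flagged

# Invented example: (observed, expected) annual case counts per region.
counts = {
    "Region A": (12, 4.0),   # z = 4.0 -> worth a closer look
    "Region B": (13, 10.0),  # z ~ 0.95 -> within normal variation
}
print(flag_anomalies(counts))  # ['Region A']
```

The point of a routine like this is not to declare a cluster, but to prompt a timely human investigation before social media does the triage instead.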

Even setting the costs aside, the benefit of a routine surveillance approach is that real clusters are more likely to be detected in a timely manner.  Good surveillance systems may be able to identify statistical anomalies earlier in the process, which could help reduce the risk of future harm.

Conclusion

Cancer clusters have been a fraught subject for decades.  The people affected, statisticians, epidemiologists and physicians all have their own take on the subject generally, and on specific cases, and sometimes furiously disagree.  Unfortunately, some media participation in this subject stirs up controversy and concern.  Since social media and dubious online reporting are here to stay, we need to improve cluster surveillance practice to get ahead of the challenge.