
A primer on calculating attributable morbidity and mortality

Quantifying the impact of exposure to an environmental hazard on a population is probably easier than you think.  It requires basic math (subtraction, addition, multiplication and division) and data that are often available from public online sources.  It is useful because it puts the consequences of exposure to environmental hazards into proper perspective, which is very important for making decisions about the health information we often encounter in the world.

I am going to use this post to explain the process of quantifying risk in terms of attributable morbidity (the same process could be used to calculate attributable mortality as well).  I have demonstrated this simple technique to undergraduate students in my environment and health class for about 6 years, and this example is based on some work of a student from a few years ago.  The data I use in this calculation are from public sources, and I provide links to them in the text.  This process will give you a sense of how easy it is to approximate the impact of exposure to some environmental hazard on a community or population.

With this in mind, I would caution that this process should not be viewed as exact; this is a simplification based on data that probably have error.  As such, it is my practice to err on the side of overestimating risks of harm.  In environmental health ‘uncertainty factors’ are often used to accomplish this–they are basically multipliers of risk estimates that ensure human safety is protected whenever the science is unclear.

The example I will use is lung cancer risk due to exposure to radon in the city of Winnipeg, Manitoba, Canada.  My intent is to come up with an estimate of the number of cases of lung cancer due to residential radon exposure.  I will refer to this as attributable morbidity (AM).

The basic idea behind attributable risk is to calculate the difference in morbidity between persons in a population who are exposed to an environmental hazard and persons who are not.  For example, consider a population of 1000 people, half of whom are exposed to a hazard.  Among those who are exposed there are 10 sick people, and among the non-exposed there are 5 sick people.  If the exposed and unexposed populations were otherwise the same, it is easy to see that 5 cases of illness are attributable to the exposure.
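
As a quick sketch of that arithmetic in Python:

    # Toy example: 1000 people, half exposed to the hazard
    exposed_n, unexposed_n = 500, 500
    exposed_cases, unexposed_cases = 10, 5

    risk_exposed = exposed_cases / exposed_n        # 0.02
    risk_unexposed = unexposed_cases / unexposed_n  # 0.01

    # Attributable morbidity: the excess risk applied to the exposed population
    am = (risk_exposed - risk_unexposed) * exposed_n
    print(am)  # 5.0 cases attributable to the exposure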

In the real world we very often lack precise measures of incidence and exposure in the settings we’re interested in, so we need to cobble together estimates of AM based on other data.

What data do I need to calculate AM?  At the very least I need:

  1. The underlying or baseline incidence of lung cancer
  2. The risk of lung cancer due to radon exposure per dose of exposure (a dose-specific relative risk)
  3. An estimate of exposure to radon in the population

1. Baseline incidence

I found baseline lung cancer incidence for Canada from the Canadian Cancer Society, which reports lung cancer incidence at 58 per 100,000 for men and 48 per 100,000 for women.  I’ll use 52 per 100,000 as a weighted half-way point between the two (it’s weighted because there are more women than men, so the population average should be slightly closer to the estimate for women than to the estimate for men).

Baseline lung cancer incidence is 0.00052 (52 per 100,000) per year

2. Dose-specific risk

The WHO has published a handbook on the cancer risks associated with radon exposure, from which we can obtain a risk of lung cancer per dose of exposure.  In this report they conclude that there is a 16% increase in the risk of lung cancer for every 100 Bq/m³ of long-term residential exposure to radon.  I convert this into a measure of relative risk (RR = 1.16 per 100 Bq/m³).

Dose-specific relative risk is 1.16 per 100 Bq/m³
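
Under the linear excess-risk assumption used in the final calculations below, this dose-specific figure can be turned into a relative risk at any concentration.  A minimal sketch (the function name is mine; the 0.16 figure is the WHO estimate quoted above):

    def relative_risk(concentration_bq_m3, excess_rr_per_100=0.16):
        """Relative risk of lung cancer under a linear excess-risk model."""
        return 1 + excess_rr_per_100 * (concentration_bq_m3 / 100)

    print(relative_risk(100))  # 1.16
    print(relative_risk(200))  # 1.32 (used in the final calculations below)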

3. Exposure to radon in the population

According to a Health Canada report, 12.1% of Winnipeg homes have a radon concentration at or above 200 Bq/m³.  I will assume that 12.1% of homes is equivalent to 12.1% of the population, so if we multiply Winnipeg’s population (663,000) by 12.1%, we get 80,223 people exposed at the 200 Bq/m³ level.

Population exposed is 80,223 (note that this ignores lower and higher exposure levels)

4. Final calculations

If we assume that the risk from exposure is linear (so that doubling the dose of exposure doubles the excess risk), then the relative risk of lung cancer for these exposed persons is 1.32 (16% per 100 Bq/m³, doubled for 200 Bq/m³, and converted into a relative risk).  If these 80,223 people had the Canadian average risk of lung cancer, we would expect

80,223 x 0.00052 ≈ 42

cases of lung cancer in this population every year.  With the elevated relative risk of 1.32, the expected number is

80,223 x 0.00052 x 1.32 ≈ 55

cases of lung cancer every year.  The difference between these values is the lung cancer attributable morbidity (AM) for radon, in this example 13 cases per year.  Because I like to be conservative, I will multiply this value by an uncertainty factor of 2, resulting in an AM of 26.  I’ll treat this as the upper limit of my AM estimate.  This means that I estimate that Winnipeg has a radon-attributable lung cancer morbidity of somewhere between 13 and 26 cases per year.  Given that Winnipeg probably has somewhere around 350 lung cancer cases a year, this means that roughly 4 to 8% of lung cancer cases may be the result of residential radon exposure.
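
To pull the whole calculation together, here is a minimal sketch in Python.  All of the input values are the ones quoted in the sections above; the variable names are mine, and the uncertainty factor is the conservative multiplier of 2 described in the text:

    # Inputs from the sections above
    baseline_incidence = 52 / 100_000     # annual lung cancer incidence in Canada
    excess_rr_per_100_bq = 0.16           # WHO: 16% more risk per 100 Bq/m3
    exposure_level = 200                  # Bq/m3 (Health Canada threshold)
    population = 663_000                  # Winnipeg
    fraction_exposed = 0.121              # homes at or above 200 Bq/m3
    uncertainty_factor = 2                # conservative multiplier

    exposed_population = population * fraction_exposed        # ~80,223 people
    rr = 1 + excess_rr_per_100_bq * (exposure_level / 100)     # 1.32

    baseline_cases = exposed_population * baseline_incidence   # ~42 cases per year
    cases_with_radon = baseline_cases * rr                     # ~55 cases per year

    am = cases_with_radon - baseline_cases                     # ~13 cases per year
    am_upper = round(am) * uncertainty_factor                  # 26 cases per year
    print(round(am), am_upper)                                 # 13 26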

5. Limitations

There are many limitations to this approach.  For one, radon exposures higher or lower than 200 Bq/m³ may also increase the risk of lung cancer, so 13 cases might be a low estimate of the total AM.  This is partly why I multiplied the calculated AM by 2.  There is also a possible interaction between radon exposure and smoking; some of the risk of lung cancer is due to these factors combined, and this may not be accounted for in these calculations.  Furthermore, the data I used for the calculations are estimated with error, and some of that error propagates (is carried through) my calculations.  But the 13-26 AM I have calculated is probably in the right ballpark, and more importantly, I have demonstrated an easy but useful way to quantify the impact of exposure to an environmental hazard on health.

 

Are NHL hockey players getting older and better?

For the period 1988 to 2010, NHL hockey players playing the forward positions averaged 26.19 years of age.  This is fairly stable throughout the period; on average, the distribution of NHL player ages was roughly the same in 1990 as it was in 2000 and 2010.  While the average age of forwards in the NHL has not changed much overall, the figure below suggests that there is a demographic shift underway; there are more players over 30 in the past decade than in the previous decade.  Also note that I have intentionally excluded players 40 and over from my analysis because there weren’t any who played between 1990 and 1999.

[Figure: NHL player ages]

Over the same period of time, NHL forwards had their best per-game point production (BPGPP) at 24.61 years of age.  This observation is also fairly stable over time; when I divided the data into two time periods (1990-1999 and 2000-2009), the average age at BPGPP was very similar: 24.44 for 1990-1999 and 24.74 for 2000-2009.

However, the following figure seems to suggest a slight shift in point production with age: point production was higher earlier in the careers of players in the 1990-1999 period than in the careers of players in the 2000-2009 period.

[Figure: NHL points per game by age, raw]

The figure above does not correct for different league goal production over time; the shift in hockey strategy to the clutch-and-grab or ‘trap’ style in the late nineties coincides with a fairly substantial drop in the overall number of goals scored, which could explain part of the difference between the two series.  So I normalized the two series by subtracting from each one its period’s average points per game, in order to make them more comparable.
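
Here is a rough sketch of that centring step in Python, assuming a pandas data frame of player-seasons; the file name and the columns season, age and ppg (points per game) are placeholders, not the actual data set used here:

    import pandas as pd

    # One row per player-season; 'season', 'age' and 'ppg' are hypothetical column names
    df = pd.read_csv("nhl_forward_seasons.csv")  # placeholder file name

    # Label each season by era
    df["era"] = pd.cut(df["season"], bins=[1989, 1999, 2009],
                       labels=["1990-1999", "2000-2009"])

    # Centre each era's series by subtracting that era's average points per game
    df["ppg_centred"] = df["ppg"] - df.groupby("era")["ppg"].transform("mean")

    # Average centred production by age within each era, ready for plotting
    centred_by_age = df.groupby(["era", "age"])["ppg_centred"].mean().unstack("era")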

[Figure: NHL points per game by age, centred]

We can see here that there is less difference between the two series between ages 22 and 29, but, interestingly, greater differences at the margins of a player’s career; in the 2000-2009 period, normalized point production is comparatively higher at both older and younger ages.  So while the ages of players are not substantially different over the two time periods, older players seem to have generated more offense in the 2000-2009 period than in the previous decade.

Young players seem to be producing more goals in recent years than in the past.  Perhaps this is because only the very best young offensive players get to play in the NHL, while grinders and checkers spend more of their early career in the minors developing defensive skills, ‘character’ and the like.  This causes a selection bias where younger players in the NHL are mostly ‘skill’ players and middle-aged players are a more diverse mix.

A similar kind of selection bias is probably affecting the pattern for older players too; players with long NHL careers tend to be good players.  After all, it is because they are good and productive that teams are willing to sign them to contracts at older ages.  Unproductive players tend not to have long careers.  So the high average point production at older ages is probably because the lower skill players retire earlier.

My conclusion?  NHL hockey players are not getting older (the demographics of NHL players have been relatively constant over the last 20 years), but older players seem to be scoring more in recent years than in years previous.  The change over the two time periods is pretty striking, and may point to improvements in training and/or medicine that allow skill players to have longer and more productive careers.  It could also suggest a change in attitudes towards aging in sport, where good older players are less likely to be overlooked than they may have been in the past.  In any case, it would seem unwise for a coach or GM to assume that older players can’t contribute offense in hockey; clearly some players are doing just that.

[Image: 43 years young, and still producing points]

Differential scrutiny bias

When I set out to analyze data as part of the research process, I almost always have an hypothesis in mind.  I try to avoid being too attached to an hypothesis a priori, but I often do have expectations one way or another based on my reading of the current literature and/or theory.

When in my research I discover evidence that is not consistent with my expectations, I always look for what’s wrong.  I scrutinize notes and computer code, and if there is an error, I correct it, and re-analyze the data.  If the results are still inconsistent with expectation, I accept the unexpected findings, and try to write them up.  If the results are unpublishable because I can’t fix a genuine problem with the data or analysis, I discard them and drown the wasted effort in a cold beer.  Actually, I don’t drink much beer.  I’m more likely to drown my wasted effort in a Popsicle.

[Image: popsicles, better than beer]

However, if I find evidence consistent with my expectations, I am less likely to scrutinize my findings to the same degree.  If I see what I expect to see, then I assume I’ve done nothing wrong.  I do double check my notes and code as a regular practice, but I suspect that when I see what I expect to see, I don’t re-examine my results or analytical procedure with the same vigour as I would have if the results were unexpected.

This bias in rigour may be unique to me, but I doubt it.  I suspect that every scientist encounters this, whether they acknowledge it or not.  The problem is that the behaviour is probably correlated with expectation; scientists very often have an expected result, and that expectation corresponds to an expectation of what the findings should look like.  Deliberate deceptions to ensure results agree with expectations do occur (here is an interesting example).  Less often discussed is an accidental bias in the direction of expected findings.  This bias works very simply: the greater the departure of an observed finding from an expected finding, the greater the scrutiny following preliminary results, and the greater the chance of detecting error.  The closer an observed finding is to an expected finding, the less scrutiny, and the less chance of detecting error.  In the long run, this biases the population of findings towards expected results.  Importantly, if expectations are not randomly distributed in the population of researchers, the effect is a long-run bias in findings that would be apparent in meta-analysis.  I am calling this the Differential Scrutiny Bias (DSB).

The implications of the DSB for science might be profound.  Consider the following silly example.

Consider an attempt to estimate the height of Smurfs through repeated sampling of a Smurf population.  If we assume that experimental error is normally distributed around the truth, then we should expect that on average, research should still provide an unbiased (though perhaps not precise) perspective on reality.  Some samples will observe heights larger than the true height of Smurfs, some smaller, but provided the error is due to sampling or unsystematic measurement errors, the distribution of results of these studies should be approximately normal, and we should get a good sense of Smurf heights in the long run.

In the presence of the DSB, the distribution of research findings would be systematically wrong in the long run and, in particular, slow to detect changes in the properties of the phenomena being sampled.  Let’s assume that due to a change in nutrition, the true height of existing Smurf populations is now 2.8 apples high, but that the historically expected height is 3 apples high.  According to the DSB, researchers will more carefully scrutinize findings that do not conform to their expectation (in this case, 3 apples), which means that there will be more ‘3 apple’ study results than there should be due to simple random error.  As a result, any meta-analysis of these research findings will be biased towards an average Smurf height higher than 2.8 apples, and the crisis in Smurf malnutrition might never be detected.

[Image: hungry Smurf]
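
Here is a small simulation of that mechanism (a sketch only; the error rates and the scrutiny threshold are made-up numbers, not estimates of anything real).  Each study measures the true height of 2.8 apples with random error, some studies also contain a correctable mistake, and only results that look surprising relative to the expected 3 apples are scrutinized closely enough for the mistake to be caught:

    import numpy as np

    rng = np.random.default_rng(42)

    TRUE_HEIGHT = 2.8          # apples (current reality)
    EXPECTED = 3.0             # apples (what researchers expect to find)
    N_STUDIES = 10_000
    MISTAKE_RATE = 0.3         # fraction of studies containing a correctable error (assumed)
    SCRUTINY_THRESHOLD = 0.15  # results further than this from EXPECTED get re-checked (assumed)

    sampling_error = rng.normal(0, 0.1, N_STUDIES)
    has_mistake = rng.random(N_STUDIES) < MISTAKE_RATE
    mistake = rng.normal(0, 0.3, N_STUDIES) * has_mistake

    observed = TRUE_HEIGHT + sampling_error + mistake

    # Differential scrutiny: surprising results get re-checked and their mistakes corrected;
    # results close to the expectation are published as-is, mistakes and all.
    scrutinized = np.abs(observed - EXPECTED) > SCRUTINY_THRESHOLD
    published = np.where(scrutinized & has_mistake, TRUE_HEIGHT + sampling_error, observed)

    print(f"true height: {TRUE_HEIGHT}, mean published: {published.mean():.3f}")
    # mean published ends up above 2.8, pulled toward the expected 3.0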

It is hard to know how serious this effect is in practice, but it is a plausible demonstration of how many studies can be combined and yet still be wrong.  There are things we could do to reduce the impact of the DSB, however.  To minimize this effect in my own research I have taken to always doing my analysis first on synthetic data (that is, data with the same structure as my study data, but with randomized or shuffled measurements).  This way I resolve most technical and coding errors before I see the results.  It doesn’t entirely eliminate the DSB, but it goes part of the way by ensuring that there is a minimum level of scrutiny over the methods before I have any results to look at.
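
As a concrete sketch of that practice (the file name, column names and analysis function here are placeholders, not my actual pipeline): shuffle the outcome variable, run the full analysis on the shuffled data to flush out coding errors, and only then run it on the real data.

    import pandas as pd

    def run_analysis(df):
        # placeholder for the real analysis pipeline (models, tables, figures, etc.)
        return df.groupby("exposure")["outcome"].mean()

    df = pd.read_csv("study_data.csv")  # placeholder file name

    # Dry run: same structure, but the outcome is shuffled so results are meaningless
    shuffled = df.copy()
    shuffled["outcome"] = shuffled["outcome"].sample(frac=1, random_state=1).to_numpy()
    run_analysis(shuffled)   # debug the code here, before seeing any real results

    # Only after the pipeline runs cleanly on shuffled data, analyze the real thing
    results = run_analysis(df)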

A Tale of N Cities

When highlighting differences between cities, it is strikingly common for social scientists to include the words “a tale of two cities” in the titles of their publications.  Here is a very small sample of the seemingly hundreds of articles out there:

  • Sexual networks and sexually transmitted infections: a tale of two cities
  • Bodies, borders, and sex tourism in a globalized world: A tale of two cities-Amsterdam and Havana
  • Simulating trends in poverty and income inequality on the basis of 1991 and 2001 census data: a tale of two cities
  • A tale of two cities: The miniature frescoes from Thera and the origins of Greek poetry
  • How do predatory lending laws influence mortgage lending in urban areas? a tale of two cities
  • Cardiovascular disease in Edinburgh and north Glasgow–a tale of two cities
  • A tale of two cities: sociotenurial polarisation in London and the South East, 1966-1981
  • The economic incorporation of Mexican immigrants in southern Louisiana: A tale of two cities

[Image: A Tale of Two Cities]

Naturally, one may ask what titles researchers choose when looking at more than two cities.  Well, it seems they continue to borrow from Dickens:

  • A tale of three cities: persisting high HIV prevalence, risk behaviour and undiagnosed infection in community samples of men who have sex with men.
  • A tale of four cities: intrametropolitan employment distribution in Toronto, Montreal, Vancouver, and Ottawa-Hull, 1981-1996.
  • Declines in minority access: a tale of five cities.
  • Transfer of Water from an International Border Region: A Tale of Six Cities and the All American Canal.
  • The impact of fiscal limitation: A tale of seven cities.
  • Time delays in the diagnosis and treatment of acute myocardial infarction: A tale of eight cities.
  • Aboriginal over-representation in the criminal justice system: A tale of nine cities.
  • A tale of ten cities: the triple ghetto in American religious life.

I couldn’t find any peer-reviewed publications with the title “a tale of eleven cities”, but perhaps it’s just a matter of time.  Happily, there are also minimalist titles out there:

  • Student housing: a cautionary tale of one city.