Arson and the Black Swan effect


About seven years ago I had a graduate student working on geographic patterns of arson in Toronto.  We published one of the chapters from her work, but the other one lingers.  It lingers because it was pretty clear to me that, even though our analysis suggested some interesting processes at work, arson is a black swan.

What do I mean by black swan?  Well, I mean it in the sense used by Nassim Taleb, a cranky statistical philosopher who authored three important books: Fooled by Randomness (my favourite), The Black Swan and Antifragile.  Taleb argues that many of the things that appear to have explanations in this world are largely unexplained, and that we give false authority to experts who claim to understand these unexplained events.  Black swan events are those very rare but highly impactful phenomena that are very hard (if not impossible) to predict, but for which we often come up with explanations after the fact.  Examples include stock market collapses and major terror attacks.

Arson is a black swan spatial process because the realization of arson frequency in space is made up of a small amount of explainable variation (population, poverty, housing conditions, street permeability) and a whole lot of hard-to-explain variation.  The unexplained variation could be due to many processes, known and unknown.  For example, it could be driven by serial behaviour: an arsonist sets a large number of fires in a small area in one year, and then none the next.  We know that serial criminal behaviour occurs; predicting it, however, is hard (if not impossible).  Or perhaps the unexplained variation has to do with some unknown process.  In either case, our work on this problem strongly suggested that predicting where small clusters of intense arson activity will occur in the long run is a fool's errand.

What’s the problem?

It is fairly easy to publish research showing only the explained part of a system, even if the explained part is a small component of the system's overall variation.  This is because any explanation (even a small one) seems to be of some value.  If a physician tells you that you need to change your diet to reduce your blood pressure, she's not making a specific prediction about what will happen to your blood pressure if you don't change your diet.  That is impossible to know.  In fact, most variation in blood pressure is not caused by diet.  Nevertheless, she's using information that shows how diet explains some variation in blood pressure in populations as a whole to offer you advice that, on average, is probably helpful.

When we worked on the factors that explain geographic variation in arson, my student came up with a model that explained some of that variation.  She was even able to identify which areas of the city had higher and lower arson frequency in the long run.  However, year-to-year black swans (what I suspect are probably clusters of unpredictable serial arson behaviour) made the predictions of arson quite poor, usually leading to major under-predictions.  The following figure is illustrative:

Arson predictions

Purple dots are counts of arson across Toronto neighbourhoods; the counts differ substantially from place to place and vary considerably from year to year

The purple dots on this figure are the actual number of arson events in a given year across Toronto neighbourhoods.  In some neighbourhoods there are over 35 arson events in a year, but the city average is around 2 or 3.  Attempts to model these data using things like population, commercial activity, poverty, street permeability and other factors can't predict these extreme variations, and we never found a term to put into a model that picks up much of this variation in a training data set and can then predict the variation in other data sets.

(Technical note: what's particularly funny about the example above is that the 'best' performing model structure here is the old-fashioned linear model, seemingly because it picks up the possibility of extreme variations better than models that attempt to capture it through a specific link function or scaling parameter.)
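To make that note concrete, here is a minimal sketch (simulated data, not our original Toronto analysis) of the kind of comparison involved: an ordinary least squares model versus a Poisson GLM fit to neighbourhood counts that include occasional extreme spikes.  The predictor setup, simulation settings and use of statsmodels are all assumptions for illustration only.

```python
# A minimal sketch (simulated data, not the original analysis): comparing an
# old-fashioned linear model with a log-link count model on neighbourhood
# counts that contain occasional extreme spikes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 140  # roughly the number of Toronto neighbourhoods (assumption)

# Hypothetical standardized predictors standing in for population, poverty, etc.
X = sm.add_constant(rng.normal(size=(n, 2)))

# Baseline counts plus rare large clusters (a crude stand-in for serial arson)
baseline = rng.poisson(np.exp(0.8 + 0.3 * X[:, 1]))
spikes = rng.binomial(1, 0.05, size=n) * rng.poisson(20, size=n)
y = baseline + spikes

ols = sm.OLS(y, X).fit()                                 # linear model
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # Poisson GLM

# Both models badly under-predict the spiked neighbourhoods
print("max observed count:     ", y.max())
print("max OLS prediction:     ", round(ols.fittedvalues.max(), 1))
print("max Poisson prediction: ", round(pois.fittedvalues.max(), 1))
```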


Given enough data, anyone can model some of the pattern in almost any phenomenon.  The fundamental question is how well your model can predict future patterns.  For this arson project, the predictions were just not compelling enough; sure, we could predict the relatively higher and lower arson neighbourhoods, and some of the factors that may explain some variation from neighbourhood to neighbourhood.  However, the real challenge is being able to predict the extremes: the neighbourhoods that are suddenly targeted by serial arsonists and see a large number of arson events that whip up fear and threaten the safety of a community.  This is not a simple task, and we certainly had no luck with it, so the chapter sits unpublished.

This also points to the importance of context; a model that explains a small amount of variation in a system might be very useful if that knowledge can save lives or save money.  In this case, I did not feel that our model was useful for anything (not for arson prevention, policing, urban design, and so on), even if it did explain some variation in arson.  However, going back to the hypertension example, there is evidence that a little information about diet and hypertension might be useful at a population scale.

TAR: a model for university instruction

I present here a simple idea for breaking down how I typically plan out courses.

I have three considerations: time (T), accessibility (A) and rigour (R).  Accessibility is the breadth of audience that I reach; basically, the number of students who will get value from a lecture or class.  Rigour is the completeness of the material.  Time is the time available to teach.

With this in mind, I propose the following.

1. Time is proportional to the product of accessibility and rigour (T = A*R)

Time increases as rigour and accessibility increase

2. Accessibility is proportional to time divided by rigour (A = T/R)

The idea here is that if infinite time were available, it would be possible to teach any student anything with as much rigour as required.

3. Rigour is proportional to time divided by accessibility (R = T/A)

For a fixed period of time, any increase in accessibility will reduce rigour.

With this in mind, we get the following visual model to help understand the relationship:

As a university professor I have some control over time, but not much.  I do have control over accessibility and rigour.  For courses in which I know the material must remain accessible to a broad audience, I generally have to lower rigour.   If a course needs to be rigorous, then I expect accessibility to decline.
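As a toy numerical illustration of that trade-off (arbitrary units, not a calibrated model), the relation R = T/A means that with the time budget fixed, every increase in accessibility comes directly out of rigour:

```python
# Toy illustration of R = T / A with a fixed time budget (arbitrary units).
def rigour(time_budget: float, accessibility: float) -> float:
    """Rigour achievable given available time and the breadth of the audience."""
    return time_budget / accessibility

T = 36.0  # e.g., hours of instruction in a term (hypothetical)
for A in (1, 2, 3, 4):
    print(f"accessibility {A}: rigour {rigour(T, A):.1f}")
```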

While I have little control over classroom time, I have discovered that online tools can be useful for increasing instructional time.  Using readings, online quizzes, and video content, I can increase content without requiring more class time.  I use this extra time to delve into details I can't cover in class, adding rigour.


This is all obvious to experienced instructors; however, my treatment here is a bit more rigorous than what one typically sees in discussions of teaching strategies.  Which, unfortunately, means I very likely lost your attention several paragraphs ago.

The cost of research publications for Canadian universities

How much does it cost to publish research?  Obviously this varies considerably across disciplines, but I did a little comparison across a non-random selection of Canadian universities.  The sources of data include Web of Science publication counts, Web of Science counts of highly cited papers, and funding data from this source.

Once I combined these data, I came up with this file.

The data include research funding and paper publication totals by year and institution.  Arguably, research funds are used to do things other than fund the publication of research (like training graduate students), but this is a rough starting point.  Furthermore, published research is a key metric of research output for a university and, at the very least, is probably a good proxy for research output more generally, especially in comparisons over time and between universities.


These data tell us a few interesting things.  First, research productivity is trending upwards.  Click on the figure below to see a larger version of the trends over time.

Papers published by year for different universities

This pattern is true for all Web of Science publications (on the left) and highly cited Web of Science publications (on the right).  The rate of increase is fairly similar across institutions.

Second, universities receive funding in the neighbourhood of $50,000 to $100,000 per Web of Science paper.  This is probably an over-estimate of per-paper research costs, since many published papers do not end up in the Web of Science system.  Perhaps the real number is half this value, say between $25,000 and $50,000 per paper, but this gives us a basis for comparison.

Third, across the Canadian universities in this sample, there is some noteworthy variation.  Averaged over the period of time I used in this sample, funding per paper varies between $61K and $89K.

The University of Saskatchewan spends the most per paper, and the University of Toronto spends the least.  This could reflect some economy of scale effect; the U of T is big, and is able to leverage its size to be more productive per dollar spent.  If we look at costs per highly cited paper, we see a similar pattern, but it's more exaggerated.  The cost per highly cited paper is five times higher for the University of Saskatchewan than for the U of T.  Also note that while Queen's University publishes papers at a lower average cost than McMaster, McMaster spends much less for highly cited papers, roughly half as much per highly cited paper as Queen's.
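For readers who want to reproduce this kind of comparison, here is a minimal sketch of the calculation.  The file name and column names (institution, funding, papers, highly_cited) are hypothetical placeholders, not the exact layout of the file linked above.

```python
# A minimal sketch of the funding-per-paper comparison.
# File name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("funding_and_papers.csv")  # one row per institution-year

# Total funding and publication counts over the study period, by institution
totals = df.groupby("institution")[["funding", "papers", "highly_cited"]].sum()
totals["funding_per_paper"] = totals["funding"] / totals["papers"]
totals["funding_per_highly_cited"] = totals["funding"] / totals["highly_cited"]

print(totals.sort_values("funding_per_paper"))
```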

The good news (depending on your perspective, I suppose) is that the expenditure per paper published is trending downwards.

All Web of Science papers

Between 2008 and 2016, most institutions in this sample saw an improvement in publication efficiency in terms of overall numbers, mainly because of an increase in the number of papers published against a stable (or slightly declining) funding level.  Similarly, we seem to be getting more research 'bang' today when measured by highly cited publications as well.  Note the variation for the University of Saskatchewan; this is largely due to the small numbers involved.

Only highly cited papers in Web of Science


What does this all mean?  Well, we know, at least within an order of magnitude, how much we should expect to pay for publications, on average.  This certainly varies by discipline, but it gives us a ballpark for comparisons across research intensive universities.  If you can get one paper out for $10,000 or less, then you are probably doing well in most fields.  If it costs more than $100,000, then you probably have a lot of research overhead in lab equipment, staff, and so on.  Some universities seem to spend more, like the University of Saskatchewan, and some spend less, like the University of Toronto.  This makes sense given the location of these institutions; the University of Saskatchewan is more isolated geographically, and is the lone research intensive university in Saskatchewan (sorry, Regina).  Toronto is a large institution surrounded by other universities, and at the centre of economic activity in Canada.  This probably allows it to leverage a mix of resources that increases its efficiency at publication.

Second, between 2008 and 2016, research funding in Canada did not radically change, but cost per paper went down.  This is mostly because the number of papers and highly cited papers in Web of Science went up.  This could be a good sign; the universities in this sample managed to adapt to a stable (and slightly declining) research funding pot.

The statistic of a statistic problem

One easy (and not uncommon) mistake in data analysis is to calculate a statistic from a statistic without considering statistical weighting.  For illustration purposes, consider the following example.

Say I have data on neighbourhood income and population for a small city.  The table of data looks like this:


Perhaps the first thing I want to know is the average income for the city as a whole.  It seems pretty natural to simply take the average of the average incomes across these neighbourhoods.  This would give me an average income for the city of $68,712.  However, this number is incorrect.  Taking the average of the averages assumes that each neighbourhood contributes the same amount of information to the city average.  This is clearly not the case.  Gastown has 305 residents, so the average income of these residents clearly contributes less information to the city average than Zinger Park, which has 3,900 residents.

The solution

The solution is to simply take the weighted average.  In this example, the weight is the population of the neighbourhood (perhaps better would be the population of employed people in the neighbourhood).  If we sum the products of these weights and average income, and then divide by the sum of the weights, we get a weighted average.  Here is a table that illustrates this visually:

The red cell is the sum of the product of weights and average income.  The yellow cell is the sum of the weights (in this case, population).  If I divide the red cell by the yellow cell, I get the weighted average (in orange).
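The same calculation takes one line in code.  Here is a small sketch using made-up neighbourhood values; only Gastown's (305) and Zinger Park's (3,900) populations come from the example above, and the rest are invented for illustration.

```python
# Weighted average of neighbourhood average incomes.
# Only the Gastown (305) and Zinger Park (3,900) populations come from the
# example in the text; the other values are made up for illustration.
import numpy as np

avg_income = np.array([55_000, 72_000, 64_000, 83_000])  # neighbourhood averages ($)
population = np.array([305, 3_900, 1_200, 2_400])         # neighbourhood populations

naive = avg_income.mean()                              # average of averages (wrong)
weighted = np.average(avg_income, weights=population)  # population-weighted average

print(f"average of averages: ${naive:,.0f}")
print(f"weighted average:    ${weighted:,.0f}")
```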


Weights are common in statistical data analysis, and their role is usually to adjust a statistic based on the information it contains.  In this example it's pretty straightforward.  None of this is rocket science, but taking an unweighted average of averages (or average of proportions, or average of any statistic) happens all the time.  I see it in academic work, public reports and newspaper articles.  It's an easy mistake to make with a (usually) easy fix.

Is there a surgical mortality cluster in a Florida hospital?

I recently read a story in the National Post about a physician and hospital that were implicated in a CNN story about surgery related deaths in Florida.  It is a useful example of the challenge of communicating health risk in a way that is truthful and useful to the public. The specific issue concerns deaths related to a particular surgical intervention involving newborns with congenital heart anomalies, and whether or not the death rate among these patients at one hospital is higher than the death rates at hospitals across the country. Here are links to some CNN stories on the subject:

The CNN reports suggest that there were 9 surgical deaths between the end of 2011 and June 2015. Based on numbers provided in the stories, it seems there were 27 surgeries in 2012, 23 in 2013, and 18 in 2014. Using these numbers, I'll assume that there were 9 surgeries between January 1 and June 2015. This gives a total of roughly 77 surgeries over this period, and a surgical death rate of around 12%. The national average death rate is closer to 3%, meaning that this particular hospital's death rate is roughly 4 times the national average, if we assume all the above numbers are correct.

Now if we further assume that the expected number of deaths is equal to the national average rate times the number of surgeries performed, we should expect around 2.5 deaths at this hospital over this time period, with around a 1 in 1,000 chance of seeing as many as 9 deaths if the true risk of death at this hospital were actually the same as the national average.
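Here is a small sketch of that back-of-envelope calculation.  The 77 surgeries and 3% national rate are the rough figures assumed above; with these exact inputs the expected count comes out a little under the 2.5 quoted, and the tail probability lands in the same general ballpark as the 1 in 1,000 figure, depending on rounding of the inputs.

```python
# Rough check of the back-of-envelope numbers: about 77 surgeries,
# a national death rate near 3%, and 9 observed deaths.
from scipy.stats import binom, poisson

n_surgeries = 77
national_rate = 0.03
observed = 9

expected = n_surgeries * national_rate  # about 2.3 expected deaths
p_binomial = binom.sf(observed - 1, n_surgeries, national_rate)  # P(deaths >= 9)
p_poisson = poisson.sf(observed - 1, expected)                   # Poisson approximation

print(f"expected deaths: {expected:.1f}")
print(f"P(>= 9 deaths), binomial: {p_binomial:.4f}")  # a small fraction of a percent
print(f"P(>= 9 deaths), Poisson:  {p_poisson:.4f}")
```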

Based on these rough calculations, there would seem to be good reason for some follow-up investigation, and it would appear that CNN has uncovered an important problem.  However, in early June of 2015 the hospital released information questioning the data CNN used in their report, claiming that the true surgical mortality rate over the same period is 4.9%.  Furthermore, this hospital reports their data to the Congenital Heart Surgery Database maintained by the Society of Thoracic Surgeons (STS), which supports the 4.9% estimate.

A need for more prospective surveillance

I have not dug into what explains these different results, but I suspect that the hospital’s numbers adjust for differences in patient complexity, and perhaps other patient attributes.  Much of the focus so far has been on the hospital’s surgical record and the rigour of CNN’s reporting, but the deeper issue concerns risk communication and whether or not either of these parties can be expected to fully serve the public’s interest.  CNN is incentivized to tell an engaging story; hospitals are incentivized to perform procedures that are profitable.  Most of the time most people in both these organisations mean well, but these good intentions might sometimes be secondary to institutional, professional and other motivations.

One solution is to improve routine surveillance and public reporting.  Whether performed by government or merely regulated by government, routine and prospective surveillance of surgical outcomes by some impartial third party can help avoid some potential conflicts of interest.  Furthermore, ensuring a regular and routine flow of data into the public sphere could improve public trust.  To some extent, this is done by the STS in the routine reporting of surgical outcomes, though it’s not clear whether the reporting system has any regulatory oversight, and as of late 2017, only includes about two thirds of enrolled program participants across the U.S.

Public trust is not helped by the reporting of false information, or by delayed or unprofessional reactions from health professionals or hospitals.  Given the stakes of the problem, I feel that this episode would have been avoided altogether had a routine and regulated prospective surveillance system, with clear thresholds for investigation, already been in place.  Without such a system, these apparent clusters will continue to emerge, more stories will be told, and more members of the public will feel exasperated by conflicting information about surgical risk.