I recently read a story in the National Post about a physician and hospital that were implicated in a CNN story about surgery-related deaths in Florida. It is a useful example of the challenge of communicating health risk in a way that is truthful and useful to the public. The specific issue concerns deaths related to a particular surgical intervention involving newborns with congenital heart anomalies, and whether or not the death rate among these patients at one hospital is higher than the death rates at hospitals across the country. Here are links to some CNN stories on the subject:
The CNN reports suggest that there were 9 surgical deaths between the end of 2011 and June 2015. Based on numbers provided in the stories, it seems there were 27 surgeries in 2012, 23 in 2013, and 18 in 2014. Using these numbers, I’ll assume that there were 9 surgeries between January 1, 2015 and June 2015. This gives a total of 77 surgeries (approximately) over this period, and a surgical death rate of around 12%. The national average death rate is closer to 3%, meaning that this particular hospital’s death rate is about four times the national average, if we assume all the above numbers are correct.
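These back-of-envelope figures are easy to check. A quick sketch in Python (recall that the 2015 surgery count is my assumption, not a reported number):

```python
# Yearly surgery counts reported by CNN; the 2015 figure (Jan-June)
# is an assumption, not a reported number.
surgeries = {2012: 27, 2013: 23, 2014: 18, 2015: 9}
deaths = 9

total = sum(surgeries.values())   # 77 surgeries in all
rate = deaths / total             # ~0.117, i.e. around 12%
ratio = rate / 0.03               # roughly 4 times the ~3% national average
```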
Now if we further assume that the expected number of deaths is equal to the national average rate times the number of surgeries performed, we should expect around 2.3 deaths (0.03 × 77) at this hospital over this time period, with well under a 1 in 1000 chance of seeing as many as 9 deaths if the true risk of death at this hospital were actually the same as the national average.
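Treating each surgery as an independent trial with a 3% risk of death, this tail probability can be computed exactly from the binomial distribution. A minimal sketch, using the rounded counts from above:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k deaths
    among n surgeries if each carries an independent death risk p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

expected = 0.03 * 77                      # ~2.3 deaths expected at the national rate
p_nine_or_more = binom_tail(9, 77, 0.03)  # ~0.0005, on the order of 1 in 2000
```

A Poisson approximation with the same mean gives a similar answer; either way, 9 deaths would be a very surprising outcome if the hospital's underlying risk matched the national average.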
Based on these rough calculations, there would seem to be good reason for some follow-up investigation, and CNN has uncovered an important problem. However, in early June of 2015 the hospital released information questioning the data CNN used in its report, claiming that the true surgical mortality rate over the same period is 4.9%. Furthermore, this hospital reports its data to the Congenital Heart Surgery Database maintained by the Society of Thoracic Surgeons (STS), and that database supports the 4.9% estimate.
A need for more prospective surveillance
I have not dug into what explains these different results, but I suspect that the hospital’s numbers adjust for differences in patient complexity, and perhaps other patient attributes. Much of the focus so far has been on the hospital’s surgical record and the rigour of CNN’s reporting, but the deeper issue concerns risk communication and whether or not either of these parties can be expected to fully serve the public’s interest. CNN is incentivized to tell an engaging story; hospitals are incentivized to perform procedures that are profitable. Most of the time, most people in both of these organisations mean well, but these good intentions can sometimes be secondary to institutional, professional, and other motivations.
One solution is to improve routine surveillance and public reporting. Whether performed by government or merely regulated by government, routine and prospective surveillance of surgical outcomes by an impartial third party can help avoid some potential conflicts of interest. Furthermore, ensuring a regular and routine flow of data into the public sphere could improve public trust. To some extent, this is done by the STS through its routine reporting of surgical outcomes, though it’s not clear whether that reporting system has any regulatory oversight, and, as of late 2017, it only includes about two-thirds of enrolled program participants across the U.S.
Public trust is not helped by the reporting of false information, or by delayed or unprofessional reactions from health professionals and hospitals. Given the stakes of the problem, I feel that this episode would have been avoided altogether had there been a routine and regulated prospective surveillance system, with clear thresholds for investigation, already in place. Without such a system, these apparent clusters will continue to emerge, more stories will be told, and more members of the public will be left exasperated by conflicting information about surgical risk.