I was very interested to read that some scientists had been found guilty of manslaughter in relation to advice given about an earthquake:
Six Italian scientists and a government official have been found guilty of multiple manslaughter for underestimating the risks of a killer earthquake in L’Aquila in 2009.
They were sentenced to six years in jail in a case that has provoked outrage among scientists worldwide.
The experts were also ordered to pay more than €9 million ($11.5 million) in damages to survivors and inhabitants. Under the Italian justice system, the seven will remain free until they have exhausted two chances to appeal against the verdict.
All seven were members of the Major Risks Committee, which met in the central Italian town on March 31, 2009, six days before the quake devastated the region, tearing down houses and churches and leaving thousands homeless.
The government committee met after a series of small tremors in the preceding weeks had sown panic – particularly after a resident began making worrying unofficial earthquake predictions.
Italy’s top seismologists were called to evaluate the situation and the then-vice-director of the Civil Protection Agency, Bernardo De Bernardinis, gave media interviews saying the seismic activity in L’Aquila posed “no danger”.
“The scientific community continues to assure me that, to the contrary, it’s a favourable situation because of the continuous discharge of energy,” he said.
A son of one of the victims, Aldo Scimia, said, “We cannot call this a victory. It’s a tragedy, whatever way you look at it, it won’t bring our loved ones back. I continue to call this a massacre at the hand of the state, but at least now we hope that our children may live safer lives.”
This case interests me, because it raises so many legal issues and social policy questions. I posted a link to the article on my Facebook account yesterday and immediately got a chain of outraged responses in support of the scientists.
I’ve been working lately on the concepts of causation and remoteness in private law. The law holds that defendants should only be responsible for loss that they cause, generally in a ‘but for’ sense – i.e. ‘but for’ the defendant’s actions, would the plaintiff still have suffered the loss? Of course it gets more complex when you’ve got multiple sufficient causes, or where you’ve got later intervening events which might be a more direct cause of the loss. In those cases, ‘but for’ doesn’t really work.
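To make the shape of the ‘but for’ test concrete, here is a toy Python sketch – purely illustrative, not legal doctrine, and the ‘two fires’ facts are a hypothetical I’ve added. It also shows the well-known failure mode with multiple sufficient causes:

```python
# Toy model: the 'but for' test as a counterfactual check. A candidate event
# is a 'but for' cause if the loss occurs on the actual facts, but would not
# occur once that event is removed from the facts.

def loss_occurs(facts):
    """Hypothetical world model: the loss occurs if any sufficient cause is present."""
    return any(facts.values())

def but_for(facts, candidate):
    """Return True if `candidate` passes the 'but for' test for the loss."""
    counterfactual = {k: v for k, v in facts.items() if k != candidate}
    return loss_occurs(facts) and not loss_occurs(counterfactual)

# One sufficient cause: the test behaves as expected.
print(but_for({"fire": True}, "fire"))                      # True

# Two independently sufficient causes (the classic 'two fires' hypothetical):
# neither fire passes the test, yet intuitively both are causes of the loss.
print(but_for({"fire_A": True, "fire_B": True}, "fire_A"))  # False
```

The second case is exactly where ‘but for’ breaks down: remove either fire and the loss still occurs, so the test acquits both.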
Even if the defendant is found to have caused the loss, often that is not the end of the inquiry. In many areas of private law, there is still a question of whether the loss is too remote, particularly where the loss is unusual or a distant consequence of the defendant’s action. Mixed up in the concept of remoteness are ideas of personal responsibility and of the defendant’s ability to control the outcome of an event. If the defendant couldn’t have exercised control over the event, it seems more unfair to make him liable, unless there are other broader concerns which mean we might want to make him strictly liable. There is a concern not to place defendants under an unjust burden. There are also social utility concerns: we sometimes let defendants get away with conduct which causes loss to an individual when that conduct has social utility; conversely, we are unlikely to let defendants get away with loss-making conduct if there is no social utility in that conduct. Of course, the case here was in a criminal context, which makes the consequences for the defendants all the more important.
I want to suggest that it is these kinds of concerns which inform our instinctive doubt about the court’s conclusion with regard to the scientists above. Part of the issue is the nature of prediction. Recently I read Tim Harford’s book Adapt: Why Success Always Starts With Failure (Abacus, 2011). In the first chapter (pgs 6–8), Harford recounts an experiment undertaken by a psychologist, Philip Tetlock, who asked a wide range of experts to make specific, quantifiable predictions about certain complex occurrences. He then measured how accurate the predictions were. In fact, the experts’ predictions were rarely correct, although they were more accurate than a control group of undergraduate students. (Fascinatingly, the least accurate experts tended to be those who frequently gave expert predictions in the media.) But, before you crow too much over that latter tidbit…is the fault with the experts, or is it with the problems that they were asked to face? Harford says (at pgs 7–8):
‘his [Tetlock's] results clearly show that experts do outperform non-experts. These intelligent, educated and experienced professionals have insights to contribute – it’s just that those insights go only so far. The problem is not the experts; it is the world they inhabit – the world we all inhabit – which is simply too complicated for anyone to analyse with much success.’
Keep this in mind when experts make detailed predictions about the future. They are likely to be more accurate than the predictions a layperson might make, but perhaps only somewhat more accurate, and perhaps they are more likely to be wrong than right. This is not because the experts are stupid, and it’s not that they are trying to dupe us. This is because the world is a very, very complex place, and it is impossible to factor in all the complex variables. In fact, our solutions to problems are evolutionary, and Harford convincingly argues that we need to be accepting of and leave room for failures – they are all part of the evolutionary process. Because, sometimes, just sometimes, those crazy ‘out there’ ideas do not fail.
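Out of interest, here is one way a researcher might ‘measure how accurate the predictions were’. Tetlock’s actual scoring scheme was more elaborate, but the Brier score – a standard measure of probabilistic forecast accuracy, where 0 is perfect and higher is worse – gives the flavour. The numbers below are invented for illustration:

```python
# Illustrative only: the Brier score grades a probabilistic forecast by the
# squared gap between the stated probability and the 0/1 outcome.
# 0.0 is a perfect score; higher is worse.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident pundit who says 90% every time, but is right only half the time...
pundit = brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 0, 1])    # 0.41

# ...scores worse than someone who just shrugs and says 50% every time.
shrugger = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 0, 1])  # 0.25

print(pundit, shrugger)
```

Confident misses are penalised heavily, which is one mechanism consistent with Harford’s tidbit that the experts most often quoted in the media tended to be the least accurate.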
Harford’s example came back to me in relation to the seismologists who incorrectly predicted that the small tremors in L’Aquila did not presage a larger earthquake. Part of the issue, I suggest, is that when an expert says something, often people assume it must be true. And the family members of victims are really angry that the prediction in this case did not turn out to be true: they’ve learned the message of the Tetlock experiment in the hardest way. But…seismology deals with random and unpredictable forces of immense complexity. It is, if you like, an Act of Nature, something over which humans have very little control. We do our absolute best to understand it, but we cannot fully understand it, and we cannot fully predict everything that will occur. Indeed, the very nature of science is that you can never categorically say that something is true. All you can say is that on the evidence we have available, the present hypothesis as to what is occurring here appears to be the best explanation. And the scientists are giving their expert opinion from that basis – knowing that they can never be absolutely accurate. Hence the outrage of scientists around the world with regard to this case: science doesn’t work like that, we can never be absolutely confident in our hypotheses. (Indeed, Einstein is said to have quipped, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”) There’s a mismatch between the public’s expectations of the scientists’ opinion, and the scientists’ understanding of their opinion and the context in which it should be understood.
The other things which came to mind were the issues of causation and remoteness which have concerned me recently. The issue is this: the experts’ prediction was wrong. The small tremors were not a discharge of energy, they were a build-up to a giant earthquake which led to deaths. The experts allayed the fears of L’Aquila locals, and told them it was safe to be indoors. I’m sure the feeling among people who lost family members is that those family members might otherwise have left the area or stayed outdoors if the experts had not reassured them, and therefore the deaths would not have occurred. But then we get to the point of the ‘but for’ test. It’s really hard to say whether, ‘but for’ the statement by the experts, the deaths would still have occurred. Maybe if nothing had been said by experts, the deaths would still have occurred?
And that statement by the family member of a victim that this was ‘a massacre at the hand of the state’ bothers me, although it should be forgiven in the circumstances. The bottom line is this: the most direct cause of the deaths was the earthquake. ‘But for’ the huge earthquake, the deaths would not have occurred. It was a subsequent intervening cause (in legal-speak) – an event which came after the statement of the experts and most directly caused the deaths. It is hyperbole to say that this was a ‘massacre’, which suggests some kind of personal responsibility in a direct sense for the deaths, as if the experts went and personally shot the victims. There is no way in which the experts intended the death of the victims. I presume that they are devastated by the deaths and if they could take back their advice prior to the earthquake, I’m sure that they would.
So then the question is whether the actions of the experts are, or should be, too remote. Then we get to questions of responsibility and control, questions of unjust burdens, and questions of social utility. The prevailing feeling seems to be that the scientists were not directly responsible for the deaths (the earthquake was), that it would be an unjust burden to subject them to criminal liability for those deaths, and that holding them liable for the deaths may lead to undesirable social consequences (experts will not want to advise governments in future as to the likelihood of things such as this occurring). I would not have held the scientists liable for manslaughter. But I’d be interested in hearing your thoughts!