As Lex reports today, the reputation of nuclear technology has "suffered a mortal blow". For readers seeking an accessible technical background on nuclear safety, Charles Perrow's book "Normal Accidents" gives a lucid analysis of why older nuclear power stations, at least, are inherently risky, particularly once something starts to go wrong.
We are already hearing claims that Japan's nuclear problems "couldn't happen here" and that modern nuclear power can be "safe". These claims need rigorous examination. As Japan's nuclear travails again show, a mistake in balancing the probabilities against the potential harm can expose large numbers of people to exceptionally great harm.
Both politicians and CEOs regularly make bad decisions in balancing short-term gain (tax flows, profit flows, growth and votes) against a small probability of a very serious harm. Their decisions are regularly shown to be spectacularly bad once enough time has elapsed for the small probability in any one year to turn into a real and nasty event. Whether this is because they underestimate the risks, or because they correctly conclude that 'it' is unlikely to blow up on their watch, is a question for further research. Nassim Nicholas Taleb would say many are 'fooled by randomness'.
The problem is that whilst the basic risk seems small (one chance in a thousand per unit per year seems minuscule), over enough units and years the risk compounds into a substantial probability. And if that probability attaches to something very nasty and equally expensive, it matters. For some national examples, consider Japan's nuclear problems, Iceland's banks or Ireland's economy. And for companies, consider Andersen, Lehman, Northern Rock and oil companies drilling in deep water, not to mention what has become the "too big to fail" problem in the banking sector.
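The compounding effect described above can be made concrete with a few lines of arithmetic. The fleet size and operating life below are purely illustrative assumptions, not figures from the article:

```python
# Probability of at least one failure across a fleet, assuming
# independent, identical risks per unit per year (a simplification).
p_unit_year = 0.001      # illustrative: one chance in a thousand per unit per year
units = 50               # hypothetical fleet size
years = 40               # hypothetical operating life

# Chance that *nothing* goes wrong anywhere is (1 - p) raised to the
# number of unit-years; the complement is the chance of at least one event.
p_at_least_one = 1 - (1 - p_unit_year) ** (units * years)

print(f"Probability of at least one event: {p_at_least_one:.1%}")
```

With these assumed numbers the "minuscule" one-in-a-thousand annual risk becomes a likelihood of well over 80 per cent across the fleet's lifetime, which is the point the paragraph is making.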
There needs to be greater understanding of risk. This should begin with greater clarity in explaining risks, particularly those of the very low probability/very high impact type. Nuclear power is an excellent and topical example, particularly since the nuclear industry does not have a reputation for openness or honesty.
One approach is to show risk not as pure probability but as the probability (including any time factor) multiplied by the harm. At its simplest, a random “once in a hundred years” risk of a £10 billion harm can be visualised as having a present value of £100 million per year; or (give or take an assumption or two) £2 billion over 20 years.
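The "probability multiplied by harm" presentation above is a simple expected-value calculation; the figures are the ones given in the text, and the zero discount rate is the "give or take an assumption or two" caveat made explicit:

```python
# Expected cost of a "once in a hundred years" £10 billion harm,
# presented as probability multiplied by harm (no discounting assumed).
p_per_year = 1 / 100                   # once in a hundred years
harm = 10_000_000_000                  # £10 billion

expected_annual = p_per_year * harm    # £100 million per year
horizon_years = 20
expected_over_horizon = expected_annual * horizon_years  # £2 billion over 20 years

print(f"£{expected_annual:,.0f} per year")
print(f"£{expected_over_horizon:,.0f} over {horizon_years} years")
```

Presenting the risk as a running cost per year, rather than a bare probability, makes it directly comparable with the short-term gains it is being traded against.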
If the nuclear industry wants to regain public trust, it needs to be open and scrupulously honest in explaining nuclear risks. It should start doing things differently today.
This won't happen immediately, if at all, so the UK's civil service - not to mention its political masters - needs to become scientifically and statistically literate in short order. It is more than 40 years since Lord Fulton pointed out its scientific illiteracy, but there is still no sign of a change.