About Me

Reputability LLP are pioneers and leaders globally in the field of reputational risk and its root causes, behavioural risk and organisational risk. We help business leaders to find these widespread but hidden risks that regularly cause reputational disasters. We also teach leaders and risk teams about these risks. Here are our thoughts, and the thoughts of our guest bloggers, on some recent stories which have captured our attention. We are always interested to know what you think too.

Monday, 10 July 2017

Intelligent Dissent

On 13 May 1940, Sergeant Walther Rubarth was in the vanguard of the German army invading France. His company had survived a hail of French machine gun fire as it crossed the River Meuse and fought through French defences.

Having reached his objective, he was under orders to dig in, but he was surprised to find that a key part of the battlefield was undefended – for the time being. He saw a golden opportunity to advance and so further the army’s overall goal, but to exploit it he would have to disobey his orders. As he pondered the options, an officer arrived and ordered him to dig in. Rubarth challenged the order and won the argument. His subsequent actions went on to create “such destructive chaos that it unlocked the heart of the French defences and had decisive operational significance”.

This was not extraordinary. For decades, the German army had cultivated a culture of “intellectual development through curiosity, critical thinking, imagination and open-mindedness”, according to Professor Lloyd Clark (1), that permitted and encouraged considered dissent underpinned by a clear articulation of overall objectives. It was an essential element of what the Germans call auftragstaktik (mission-orientated command).

Adopted by the German army in the nineteenth century, it is widely used in the British and US armies today. To work, it requires a clear culture shared across the organisation, well-defined goals and mutual trust. Execution is delegated to subordinates, working within the ethos and culture they have been trained to share. Intelligent dissent is encouraged.

Provided you have a good enough reason, and stay within the cultural rules, you can disobey orders to achieve the overall goal. Culture is, therefore, a central pillar supporting leaders as they exert control over their military machine. The feedback provided by intelligent dissent is essential to keeping it in good working order and using its innate intelligence to the full.

Fast forward 76 years to the City of London in 2016. Andrew Bailey, then leading the Prudential Regulation Authority and now chief executive of the Financial Conduct Authority (FCA), recognised the crucial effect of culture on outcomes that matter to regulators. His assessment (2) of recent failures was damning of management and leadership. He said:
“There has not been a case of a major prudential or conduct failing in a firm which did not have among its root causes a failure of culture as manifested in governance, remuneration, risk management or tone from the top.”
So culture sowed the seeds of disasters,
“for instance where management are so convinced of their rightness that they hurtle for the cliff without questioning the direction of travel”.
People find it easy to discuss the familiar, such as market, credit, liquidity or conduct risk, but are reluctant to talk about risks from individual behaviour, let alone the behaviour of their leaders. Most people find it embarrassing, dangerous, or both, to raise such subjects.  Bailey did not mince his words, continuing:
“You can add to that [list], hubris risk, the risk of blinding over-confidence. If I may say so, it is a risk that can be magnified by broader social attitudes. Ten years ago, there was considerable reverence towards, and little questioning of, the ability of banks and bankers to make money or of whether boards demonstrated a sufficient diversity of view and outlook to sustain challenge. How things have changed. Healthy scepticism channelled into intelligent and forceful questioning of the self-confident can be a good thing.”

 A central aim of the FCA is to drive fair treatment of customers through a culture that puts customers first and a system that allocates responsibility unambiguously. Who can argue with its requirement that managers communicate that aim to staff? Or with the responsibility placed on managers, via the senior managers regime, to put customers at the heart of strategy, staff training, reward or controls? (3) But is that enough?

The FCA’s themes are sound. Allocating responsibility clearly ensures that all know who is  in charge of what. The FCA understands that culture is rooted in history and can take years to change. It recognises that bad cultures from the past leave toxic legacies that endure. A  company or industry that has recruited, rewarded and promoted on aggression, self-confidence and greed for decades has a problem that will take decades, or a cull, to fix.  Antony Jenkins, the former chief executive of Barclays, saw the enormity of the problem he faced when he wrote:
“There might be some who don’t feel they can fully buy into an approach which so squarely links performance to the upholding of our values. My message to those people is simple: Barclays is not the place for you.” (4)
The FCA emphasises tone from the top. How you behave matters even more than what you say. But in an industry that, for years or decades, has recruited and promoted for what are now seen as undesirable character and behavioural traits, where do you find leaders who combine technical competence with the traits, attitudes and values now required?

The answer is in the question. Desirable character traits should become an explicit part of the specification of every leader and potential leader and be given at least equal weight with skills, knowledge and experience in recruitment and promotion. As Canada’s respected Ivey Business School explained, good leaders balance confidence with humility; aggressiveness with patience; analysis with intuition; principle with pragmatism; deliberation with  decisiveness; candour with compassion. (5) Organisations that dig more deeply may be pleasantly surprised to discover seams of people who were previously overlooked as potential leaders, including women and minorities of many kinds, with both technical skills and desirable character traits.

Any potentially risky aspects of leaders’ characters should be discussed openly by boards and regulators. Those of senior leaders should feature prominently on the risk register. There are advantages in an enthusiastic, forceful or charismatic chief executive, but the corresponding risks should be recognised and managed. I was surprised when I first heard of a chief executive whose “dominant” character featured in the company’s risk register; but its presence there made it possible for his dominant tendencies to be managed in normal polite discussion.

Another aspect of tone is the company’s strategy and how it is expressed: not just what you are trying to achieve but also how you manage clashes between objectives and principles and with what consequences. This feeds through to reward patterns.

Of course bonuses matter because you can expect to get more of what you reward – although you should take care what you wish for. Bonuses drove payment protection insurance sales that produced pain. The same applies to other kinds of reward, from a pat on the back through public praise to promotion. These patterns determine who leaves, who stays and who rises as particular character traits are encouraged and a culture built and reinforced.

Most telling is how you respond when objectives clash with principles. How do you deal with someone who gets the right result by crossing your red lines? And what about someone who forgoes a deal because they would not cross them?

But let us move into your office, today. What do you do when faced with a rule that does not work in your real world of work? Do you shrug, obey the rule and achieve the wrong result? Do you “work around” or disregard the rule, perhaps after discussing the problem with colleagues? Or do you tell your superiors that the rule needs to change and why? My experience suggests that more people take the first two options than the third. These undermine the ground rules – risking serious breaches – whereas feedback from intelligent dissent reinforces and improves them.

Another question: what happens if something goes wrong? Not so badly that it is obvious to your boss, but bad enough to need fast or fancy footwork. Do you tell your superiors? Analyse what went wrong and why? Make sure weaknesses are fixed and lessons learned widely? More likely the problem is discussed locally, if at all, then buried; yet mishaps  without bad consequences provide valuable feedback as to how well the system is working, or not. They are often symptoms of systemic weaknesses where a bad outcome has been prevented by a mixture of luck and crisis management. When luck runs out, something far nastier happens. Consequences can be personal, painful and protracted.

Part of the reason for the persistence of risk areas is that leaders have not created psychologically safe spaces where subordinates, let alone leaders, can admit to mistakes and deal with them. Some leaders lack the humility and self-confidence to cope with contradiction, let alone regular intelligent dissent. The penal aspects of the UK senior managers regime, imposed by financial regulators, may play a part, by causing leaders to see admitting errors as a weakness rather than as a strength and an opportunity to learn from mistakes. Whatever the cause, the result is that rules are undermined and organisations fail to learn, leaving systemic weaknesses unresolved until something blows up.

Putting your customers first will please the FCA. But a more comprehensive route to sustainable success is to adapt auftragstaktik and intelligent dissent to achieve a culture that learns and repairs itself. It will also put your trusted team’s expensively bought brainpower to more productive use.

Anthony Fitzsimmons
Chairman,
Reputability LLP
London

Endnotes

1. Clark L (2017), ‘The Intelligently Disobedient Soldier’. Centre for Army Leadership. Available at www.army.mod.uk/documents/general/Centre_For_Army_Leadership_Leadership_Insight_No_1.pdf.
2. Bailey A (2016), ‘Culture in Financial Services – a regulator’s perspective’. Bank of England speech. Available at: www.bankofengland.co.uk/publications/Pages/speeches/2016/901.aspx.
3. Davidson J (2016), ‘Getting Culture and Conduct Right - the role of the regulator’. FCA speech. Available at: www.fca.org.uk/news/speeches/getting-culture-and-conduct-right-role-regulator.
4. ‘Antony Jenkins to staff: adopt new values or leave Barclays’, The Daily Telegraph, 27 January 2013. Available at: www.telegraph.co.uk/finance/newsbysector/banksandfinance/9808042/Antony-Jenkins-to-staff-adopt-new-values-or-leave-Barclays.html.
5. Gandz J et al. (2010), Leadership on Trial: a manifesto for leadership development. Ivey School of Business.


Anthony Fitzsimmons is joint author, with the late Professor Derek Atkins, of "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You"
  
This article was first published in the June/July 2017 edition of Financial World

Friday, 12 May 2017

WanaCrypt0r 2.0 Virus infects NHS and more

Large sections of the UK's National Health Service (NHS) were hit by a ransomware attack as were many other organisations worldwide.

According to the Financial Times, the virus was a weaponised development of the US National Security Agency's 'Eternal Blue' tool, part of a "highly classified NSA arsenal of digital weapons leaked online last year by a group called the Shadowbrokers".

WanaCrypt0r seems to have been distributed by the common route of an attachment to emails which were opened by numerous recipients who did not identify the attachments as suspicious.

The Guardian reported
"Many NHS trusts still use Windows XP, a version of Microsoft’s operating system that has not received publicly available security updates for half a decade, and even well-patched operating systems cannot help users who are tricked into running software deliberately."
and later:
"It’s our governments, via the intelligence agencies, that share a responsibility for creating vulnerabilities in our communication networks, surveilling our smart phones and televisions and exploiting loopholes in our operating systems,” said Dr Simon Moores, chair of the International eCrime Congress.
In an interview with Andrew Marr,
"Michael Fallon [was] forced to defend the Government's decision not to fund crucial updates for NHS computer systems, leaving them vulnerable to a global cyber attack which caused chaos at hospitals across the country."
The saving was apparently £5.5m for central government, money that could have been spent on keeping in place national support for XP in the NHS.  Apparently there had been repeated warnings of the risks of running systems on the unsupported XP operating system, including a warning by Microsoft two months ago.


Brad Smith, President and Chief Legal Officer of Microsoft, wrote:
"Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action."
According to Keren Elazari, the sectors where unsupported software systems are most prevalent are those where safety matters:
"healthcare, energy and transport; as well as finance and other industries where computer systems provide the foundations for modern functionality."

Assuming early reports are broadly correct, this attack raises behavioural, organisational, leadership and reputational risk issues.

Why are parts of the NHS using outdated, unsupported Windows XP? 

The obvious answer is cost-cutting by people who do not understand the consequences, in this case the risks of running outdated, unsupported operating systems.  This now seems to include a Government minister who did not listen to advice on a subject he did not understand.

If so, this is a classic case of cost-cutting to produce a short-term gain at the cost of a systemic weakness that goes on to cause great pain when the risk eventually manifests.  Cost-cutting in ignorance of the consequences is a risk that typically emanates from the highest levels of leadership and it regularly causes failures.

Why do NHS staff lack the training needed to operate an outdated, unsupported operating system?

It seems that NHS staff lacked the training to identify suspicious emails manually.  Candidate causes of this state of affairs include:
  • Ignorant leaders who did not realise that cost-cutting on operating systems created cyber risks to which training might provide a partial solution;
  • Leaders who recognised the risks but would not provide training, for example because it would cost money they were not prepared to spend;
  • The possibility that no amount of training would have been sufficient – but leaders either did not know this or did not care.
Leadership ignorance is an organisational and leadership risk that regularly causes failure.

Who else is using unsupported software in systemically important systems?  

Such systems include the supply chains for cash, food, power and water, and the internet itself.  What potential consequences might there be for the public?

Intelligence agencies

The UK intelligence agency GCHQ, backed by the UK Home Office under Theresa May, has already inserted backdoors into many encryption systems and recently gained statutory authority to demand backdoors into encryption and other systems, including computers, phones, TVs and anything else containing software.  It has statutory authority to hack into computers and other devices worldwide, and there can be little doubt that it, like the NSA, developed tools to achieve this years ago.  It also stockpiles vulnerabilities in operating systems, preventing companies like Microsoft from dealing with them.  As Brad Smith, Microsoft’s president and chief legal officer, said:
“An equivalent scenario with conventional weapons would be the US military having some of its Tomahawk missiles stolen.”

No organisation can guarantee the security of valuable tools such as these against a determined external attacker or internal leaker.  These risks will always be greater than zero.

If surveillance and cyber-warfare tools escape into the hands of criminals or hostile state actors, the potential for harm will broadly be in proportion to the versatility of the tools and the creativity and motivation of their users.  There can be no doubt that a determined, skilled and motivated group of hackers could design an event to cause great harm and outrage, just as Al Qaeda did with its carefully designed and planned "9/11" attack on the USA.  These are perfect weapons for the weak.

Given that there is a finite risk of cyber-warfare tools 'escaping', the question is whether intelligence agencies, and the politicians who ultimately control them, have considered the risks and consequences of the tools they develop being turned against their own countries and allies.  Even if the probability of theft of the tools is thought very low, a foolhardy assumption, the potential for harm to the public is unknowably great.

This is yet another example of the risks of balancing short-term gains against the long-term consequences of systemic weaknesses.  The problem with this balancing act is that it is rarely possible to quantify the consequences of systemic weaknesses, especially where deliberately caused harm is involved.  History shows that it is easy to overlook or underestimate them.  The problem is exacerbated by leaders’ tendency to give more weight to imminent than to distant consequences.

As to the security services, the likelihood is that the current cyber attack will come to be seen as small beer.  When that happens, the reputation, and licence to operate, of any security agency whose software has been turned against its own state or a friendly state will be balanced on a knife edge.  Other security agencies will be at risk of collateral damage.

As to the NHS, a series of scandals of incompetence, catalogued by Richard Bacon in his book "Conundrum", has left the NHS and its leaders with a poor reputation for competence when it comes to IT.  If it eventually emerges that the NHS IT system had weaknesses that left it vulnerable to this attack, its reputation for competence will be damaged further.   Evidence emerging suggests that it will also leave the reputation of the minister who cancelled the IT support contract in tatters.

Background reading:  You can read more about how behavioural, organisational and leadership risks cause immense harm to seemingly solid organisations in 'Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You'.  Lord David Owen wrote of it:
"An exceptional book for learning at every level – whether you are a business school student or a chief executive; Prime Minister or a new recruit into the civil service."
You can read reviews of the book here.


Anthony Fitzsimmons
Reputability LLP
London

www.reputability.co.uk
www.rethinkingreputationalrisk.com

Wednesday, 4 January 2017

Financial Times reviews 'Rethinking Reputational Risk'

Stefan Stern has reviewed 'Rethinking Reputational Risk' for the Financial Times.

Introducing his review, Stern wrote:

"Th[is] book offers a thorough analysis of the many ways in which apparently unexpected crises can destroy businesses and reputations. Boards, chief executives and their managers may believe they have a firm grip on the risks they face. They should think again."

He continued:
"The book contains a series of detailed case studies of some of the best-known corporate crises of recent years .... The authors draw more than 30 lessons from their schadenfreude-free research."
before concluding:
"Businesses and executives are therefore vulnerable on a number of levels. They would do well to reflect on the serious messages contained in this well-argued book."
You can read more reviews here.

You can read more about 'Rethinking Reputational Risk' here.

You can buy copies from the publishers here.

Thursday, 3 November 2016

Rethinking Reputational Risk

For too long, there has been an unspoken assumption in traditional risk management and regulation that organisations are quasi-mechanical and that decision making is essentially rational.  The implication is that if you can devise the right rules, risks will disappear because people will respond to them logically.

In truth, all organisations consist of real people who exhibit the range of normal human feelings, emotions and behaviours and have individual characters.  These, and well-understood mental short-cuts and biases, are as important as strict logic in making decisions in the real world.  Real people constantly react to real life in ways that, whilst predictable, are not strictly rational.  It is those who lack these feelings and emotions who are unusual, not those who exhibit them.  

You visit the baker to be faced with an aromatic array of fresh bread.  Do you rigorously compare the nutritional content of each loaf, run quality tests and carry out a price and product comparison with other bakers in the vicinity (not forgetting transport and opportunity costs) that, if you are strictly rational, you ought to consider?
Of course you don’t.   You follow your eyes, nose and feelings rapidly to choose what you feel is the best choice: today a bag of bagels; tomorrow scented spelt scones; and if you like sweet things you may scoff the scrummy sugared doughnut you know you should shun.  If you stuck to strict logic, the baker’s shelves would be empty by the time you made your decision.

Feelings and emotions are an important element in normal decision-making, and this is true of all normal people in all contexts – including the most intelligent and respected business leaders in their work.  Unfortunately the emphasis on ‘homo economicus’, economists’ rational, benefit-maximising model of man, leads many leaders to assume that this crude model represents reality, a double danger if they do not realise the extent to which their own decision-making depends on feelings and emotions.

Real people use what behavioural economists and psychologists call heuristics and biases in making decisions.

Heuristics are mental short cuts that we all use to simplify decision-making.  There are dozens of them, working beneath our consciousness.  For example, there is evidence that where we recognise one of a number of choices but have no better information, we are likely to place a lower value on the option we do not recognise.  That is the recognition heuristic.
Then there are biases. An important bias is the ‘optimistic bias’.  As healthy humans we tend to delude ourselves that bad events are less likely to happen than good ones.  And we tend to attribute positive events to our skill and adverse ones to ‘them’ or bad luck: the ‘self-serving bias’. 
Many more heuristics and biases provide the numerous unrecognised assumptions and short-cuts that make the life of a normal person – well, normal.

This matters.  People and the way they behave are what can make organisations great.  But one of the insights to emerge from our work on “Roads to Ruin”, the 2011 Cass Business School report for Airmic (we were two of the report’s four authors), is that people are almost always at the root of why organisations are derailed.  They are implicated twice over: first because individual and collective human behaviour, most of it perfectly normal and predictable, lies at the root of most crises; and then because it regularly tips potentially manageable crises into unmanageable reputational calamities.

We have since established that seniority amplifies the consequences of behaviour for good or ill, so that, other things being equal, behavioural and organisational risks related to leaders typically have far more serious consequences than analogous errors lower down the hierarchy.

Unfortunately this area of risk is not systematically recognised by classical risk management.  Some areas of people risk are captured by looking at process safety, but this leaves huge gaps.  And a restricted view of reputational risk has left large areas of risks to reputation doubly unprotected.  Leaders and risk professionals have a structural blind spot that leaves the organisation – and its leaders – predictably vulnerable.

We have solved the problem by rethinking reputation and reputational risk – and so can you.  The Financial Times lexicon defines reputation as: “Observers’ collective judgments of a corporation based on assessments of financial, social and environmental impacts attributed to the corporation over time”; and there is much bickering over the nature of reputational risk.   

Whilst the FT definition is good in parts, it is too narrow.  We prefer the deceptively simple: 

“Your reputation is the sum total of how your stakeholders perceive you.”

Think about it and you will find its hidden depths.  One is that you lose your reputation when stakeholders come to believe, rightly or wrongly, that you are not as good as, or are worse than, they previously believed you to be.  That leads to our definition of reputational risk: 

“Reputational risk is the risk of failure to fulfil the expectations of your stakeholders in terms of performance and behaviour.”

Many ‘performance’ failures are caught by enterprise risk management; but few risks from behaviour or organisation are captured.  The result is that risks that both cause crises and destroy reputations go unrecognised, so they remain unmanaged.  Worse, the research shows that behavioural and organisational risks can take many years to emerge.  In the meantime, leaders think all is well when, helped by the self-serving bias, they have been fooled into complacency by what is, in truth, a run of good luck; and they have lost the opportunity to deal with potentially lethal unrecognised risks before they cause harm.
 
And as Richard Feynman, the late lamented Nobel laureate who uncovered the people risks that caused NASA’s Challenger disaster, said: “The first principle is that you must not fool yourself; and you are the easiest person to fool.”

Professor Derek Atkins
Anthony Fitzsimmons


“Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You” will be published on 3 January.  You can read reviews of the book at www.koganpage.com/reputational-risk.  For a limited time you can (pre-)order the book there at a 20% discount: use code ABLRRR20.

This blog is based on an article first published in Management Today.

 

Friday, 18 December 2015

Blind Reliance on Models: a Recipe for Trouble

We are delighted that Professor John Kay has allowed us to reprint this column, on the dangers of relying on models to predict what is beyond their power to predict.

As the global financial crisis began to break in 2007, David Viniar, then chief financial officer of Goldman Sachs, reported in astonishment that his firm had experienced “25 standard deviation events, several days in a row”. Mr Viniar’s successor, Harvey Schwartz, has been similarly surprised. When the Swiss franc was unpegged last month, he described Goldman Sachs’ experience as a “20-plus standard deviation” occurrence.


Assume these experiences were drawn from the bell-shaped normal distribution on which such claims are generally based. If I were to write down as a percentage the probability that Goldman Sachs would encounter three 25 standard deviation events followed by a 20 standard deviation event, the next 15 lines of this column would be occupied by zeros. Such things simply do not occur. So what did?
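As a rough check of the scale involved (our sketch, not part of Kay’s column, assuming independent and normally distributed daily moves), the tail probabilities can be computed directly. The combined probability underflows ordinary 64-bit floating point, so this Python sketch uses the mpmath library:

    # A sketch (ours, not Kay's) of the arithmetic behind "25 standard
    # deviation events": upper tails of the standard normal distribution.
    from mpmath import mp, erfc, sqrt, log10

    mp.dps = 50  # work with 50 decimal digits of precision

    def upper_tail(sigma):
        """P(Z > sigma) for a standard normal variable Z."""
        return erfc(sigma / sqrt(2)) / 2

    p25 = upper_tail(25)   # roughly 3e-138
    p20 = upper_tail(20)   # roughly 2.8e-89
    p_all = p25**3 * p20   # three 25-sigma days, then a 20-sigma day

    print("log10 P(one 25-sigma day):", log10(p25))   # about -137.5
    print("log10 P(all four events): ", log10(p_all)) # about -501
    # Written out as a percentage, p_all begins with roughly 500 zeros --
    # comfortably Kay's "15 lines of this column".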

The Swiss franc was pegged to the euro from 2011 to January 2015. Shorting the Swiss currency during that period was the epitome of what I call a “tailgating strategy”, from my experience of driving on European motorways. Tailgating strategies return regular small profits with a low probability of substantial loss. While no one can predict when a tailgating motorist will crash, any perceptive observer knows that such a crash is one day likely.

Some banks were using “risk models” in which volatility was drawn from past daily movements in the Swiss franc. Some even employed data from the period during which the value of the currency was pegged. The replacement of common sense by quantitative risk models was a contributor to the global financial crisis. And nothing much, it seems, has changed.

It is true that risk managers now pay more attention to “long-tail” events. But such low-probability outcomes take many forms. The Swiss franc revaluation is at one end of the spectrum — a predictable improbability. Like the tailgater’s accident this is, on any particular day, unlikely. Like the tailgater’s accident, it has not been observed in the historical data set — but over time the cumulative probability that it will occur becomes extremely high. At the other end of the spectrum of low-probability outcomes is Nassim Taleb’s “black swan” — the event to which you cannot attach a probability because you have not imagined the event. There can be no such thing as a probability that someone will invent the wheel because to conceive of such a probability is to have invented the wheel.
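The arithmetic of a “predictable improbability” is simple: if the daily chance of the crash is p, the chance of at least one crash over n days is 1 - (1 - p)^n. A minimal Python sketch (ours, with illustrative numbers):

    # A minimal sketch (ours, with made-up numbers) of a predictable
    # improbability: unlikely on any given day, likely over a long horizon.
    p_daily = 1 / 2500        # assumed chance of the tailgater's crash per day

    for years in (1, 5, 10, 20):
        days = 250 * years    # roughly 250 trading days per year
        p_ever = 1 - (1 - p_daily) ** days
        print(f"{years:>2} year(s): P(at least one crash) = {p_ever:.1%}")

    # ~9.5% after one year, ~39% after five, ~63% after ten, ~86% after
    # twenty: any single day looks safe; the horizon does not.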

But most of what is contingent in the world falls somewhere in between. We can describe scenarios for developments in the stand-off between Greece and the eurozone, or for the resolution of the crisis in Ukraine, but rarely with such precision that we can assign numerical probabilities to these scenarios. And there is almost zero probability that any particular scenario we might imagine will actually occur.

What Mr Viniar and Mr Schwartz meant — or should have meant — is that events had occurred that fell outside the scope of their models. When the “off-model” event was the breakdown of parts of the wholesale money market in 2007, their surprise was just about forgivable: in the case of the Swiss revaluation, to have failed to visualise the possibility is rank incompetence.

Extremes among observed outcomes are much more often the product of “off-model” events than the result of vanishingly small probabilities. Sometimes the modellers left out something that plainly should have been included. On other occasions they left out something no one could have anticipated. The implication, however, is that most risk models — even if they have uses in everyday liquidity management — are unsuitable for the principal purpose for which they are devised: protecting financial institutions against severe embarrassment or catastrophic failure.

First published in the Financial Times.

Tuesday, 17 November 2015

The Rise and Fall of BP

The tale of BP is a story of a sleepy British corporate, transformed by Lord Browne into one of the world's largest and apparently most successful oil companies, only to be cut down to size by a series of tragedies.

When it emerged that BP’s apparent success was built on foundations of charisma, a flawed safety culture, cost-cutting and lost internal expertise, its reputation and its market value were destroyed.  It became a pariah, publicly berated by a US President, downgraded to BBB by Fitch and discussed as a takeover target.  It was probably saved from takeover by its toxic litigation legacy, only recently resolved at a cost estimated at more than $50 billion.

Journalists have feasted on stories from the Texas City explosion and the Deepwater Horizon disaster.  We deconstructed the root causes from reports internal and external, to extract more lessons from these stories than from almost any other company in crisis.

But to our delight, Professor Andrew Hopkins has done better.  He taught the US Chemical Safety and Hazard Investigation Board, charged with investigating the Texas City refinery explosion, about culture, safety cultures and learning cultures.  His insider knowledge of the investigation and his track record of thinking and teaching about oil industry disasters put him in a good position to write about both disasters.  But he also identified, as we had not, a treasure trove of material: depositions and internal BP documents published on a US website devoted to the Texas City disaster, created by Eva Rowe in memory of two of its victims, her parents James and Linda Rowe.

The result is two exceptional books.  The first, 'Failure to Learn', takes apart the story of Texas City with a confidence that comes from direct contact with the evidence of witnesses and other primary sources.  Based on these sources, he devotes a chapter to the failure of BP’s top leadership.

'Disastrous Decisions', the second of the pair, deconstructs the Deepwater Horizon disaster.  It is worth quoting its opening words, which we endorse:
"The blowout in the Gulf of Mexico on the evening of 20 April 2010 caught everyone by surprise although it shouldn't have."
Before the Deepwater Horizon disaster, BP was so riddled with systemic behavioural, organisational and reputational risks that a bad accident was to be expected, even if it was not possible to predict its timing or precise form.  This is not an uncommon situation: systemic risks typically lie latent for years, sometimes decades, before causing what can be catastrophic damage.  In the meantime, the absence of a catastrophe leads insiders, particularly leaders, to become complacent, believing that "it won't happen to us" because it hasn't - yet.  Outsiders, and frequently lower-level staff, know otherwise.  But no-one listens to them until luck takes a day off.

Whereas the special focus of 'Failure to Learn' is leadership, Hopkins uses 'Disastrous Decisions' to explain and illustrate the power of error management and root cause analysis in finding out why accidents happen, so that their root causes can be dealt with before they cause harm.  He also focuses on decision-making by middle managers, using the community of engineers to illustrate his points.

Both books make striking contributions to the literature.  Their additional sources have reinforced our analysis with new and strong evidence. Both should be compulsory reading for business leaders, board members and everyone with risk responsibilities.

And unlike most required reading, they tell a captivating story too.

Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk

Friday, 23 October 2015

Why Government Repeats Mistakes - and What to Do about it


We are delighted to be able to post, with his permission, an essay by Richard Bacon MP.  Richard is Member of Parliament for South Norfolk, Deputy Chairman of the Public Accounts Committee, its longest-serving member and, with Christopher Hope, the author of Conundrum.  Richard has probably been involved in the study of more governmental mishaps than any other parliamentarian.

Sir Michael Barber once observed that the “How” question is relatively neglected in the writing of history and politics. A textbook would say of some medieval king that “he gathered an army and hastened north” without pausing to consider just how difficult that was to do. Yet when governments embark on anything  new, it is quite normal for things not to turn out as planned – and the problems are nearly always to do with the “How” question.

We have seen an NHS dental contract which left large numbers of people without a dentist; a new system for marking school tests where up to three quarters of the marking was wrong; a pension regulatory body which had no objectives; and an urban regeneration project which had no budget. People have died because flawed hospital computer systems meant they were not told about their next vital check-up until it was too late. Holidays have been ruined because the Passport Office couldn’t issue passports on time. Failed asylum applicants with no right to be in the country – who happened to be murderers, kidnappers and rapists – have been released from jail to wander free in our community because no one could be found to deport them.

Farmers have committed suicide because of the Kafkaesque horrors of the Rural Payments Agency. The NHS mismanaged its recruitment of junior doctors so badly that medics – whose training had been paid for by British taxpayers – were forced to flee abroad in search of work, only to be urged to return soon afterwards, at the highest agency rates, due to a government-induced shortage of doctors. Some failures are so infamous they have become household words – the Child Support Agency or the Criminal Records Bureau (CRB) – even surviving Orwellian rebranding efforts to stamp out memories of a fiasco; no one I know calls the CRB the “Disclosure and Barring Service”.

Ministers routinely enter office with no knowledge of why things have gone wrong so often in the past. Few civil servants are around long enough to tell them. After only eighteen months as an education minister in charge of academies policy, Andrew Adonis found he had been in post longer than any of the officials who were supposed to be advising him. The Department for Transport somehow managed to have four permanent secretaries in two years. Given the track record, one might expect the quality of government spending to be a matter of sustained national concern. One can’t say “Oh, that’s management” and expect someone else to do it. It turns out that the “How” question can seriously affect the “What” question or even “Whether” anything happens at all.

The case for examining much more closely the quality of what we are doing has never been stronger. In a rapidly changing world there is an almost perfect storm of problems. As we get better at keeping people alive longer, we face inexorable rises in the cost of pensions and healthcare systems. As our population gets older and the tax base shrinks, our need to invest in better infrastructure – including better broadband connections, roads, railways and airports – only grows more urgent. We have an ongoing skills crisis. Our people need to be more numerate, literate and IT-savvy. We need to produce more housing but we have a dysfunctional model that fluctuates between near-stasis and a market bubble. Across the globe we face a burgeoning population and the need to produce more food on less land with much less water. We also know that if we can’t help the world’s people in situ they will instead come to us, compounding the pressures we already face. And we grapple with all these problems while struggling under a growing mountain of public debt, because successive governments seem quite unable to live within their means.

Squeezing much more out of the lemon is simply essential. We know that our governments must cost us less while being much more efficient and effective, to help us to deliver the changes we need. All this is probably common ground among most political parties, but the truth is that we are very bad at learning from our mistakes. Many politicians, civil servants and journalists are more interested in getting on with the next policy initiative, the next project or the next story.

Who is responsible for all this failure? Many screw-ups are plainly the result of poor decisions by ministers, who either try to do things too quickly or who won’t listen. Officials advising ministers on the Common Agricultural Policy were explicit that using the “dynamic hybrid” method for calculating single farm payments would be “madness” and a “nightmare” to administer; ministers chose it anyway.

The big regional contracts in the NHS’s National Programme for IT were agreed at indecently high speed – and duly signed before the NHS knew what it wanted to buy and the suppliers knew what was expected of them – because of pressure from Downing Street; the result was an expensive catastrophe. Tax credits still cause misery for thousands of low income families who have been overpaid, because HMRC demands repayments they cannot afford; the policy was Gordon Brown’s from its inception.

But what about civil servants? When managers at the Learning and Skills Council failed to count the money for the FE Colleges building programme while handing it out – thus pledging billions of pounds which they didn’t have – the Innovation and Skills Secretary John Denham said grimly that “there was a group of people that we might have expected to know what was going on who did not themselves have a full grasp of it”. In the InterCity West Coast franchising competition, the officials in charge at the Department for Transport were unaware of advice from external lawyers that the Department’s actions were unlawful. And even in the case of the Rural Payments Agency, where decisions were very ministerially driven, the choice of the “dynamic hybrid” method for determining single farm payments was made – as Dame Helen Ghosh, the Permanent Secretary, eventually told MPs – because “ministers were being told it was possible when it was not in fact possible.”

The reality is that there is more than enough blame to go around. We need to spend less time blaming and more time seeking to understand what is going on. In recent decades there has been a whole string of attempts to reform the Civil Service, including Continuity and Change, the Citizen’s Charter and Taking Forward Continuity and Change. Then came Modernising Government and Civil Service Reform: Delivery and Values. Imaginatively, this was followed by Civil Service Reform: Delivery and Values – One Year on, which in turn was followed by the “Capability Reviews”, then Putting the Frontline First: Smarter Government and The Civil Service Reform Plan. Now we have The Civil Service Reform Plan – One Year on. That’s roughly one white paper or major initiative every two years for twenty years. And eight years after the Capability Reviews – more than the time required to fight the Second World War – the Government launched the Civil Service Capabilities Plan. A year later the new head of the Major Projects Authority identified that there was “a lack of distributed capability around delivery across Government”. The problem is not a lack of "to do" lists.

For sure, it is down to the Civil Service and its accounting officers to make sure there is a system that works. As Richard Heaton, Head of the Cabinet Office, put it: “It is our job, without ministerial pushing, to create a civil service that has the capabilities that the Government need”. But what should a civil servant do when a powerful minister is on the rampage and demanding the impossible? The epic scale of the failures should tell us that the problem is systemic. As the former head of Tesco, Sir Terry Leahy, put it: “Management and democratic process are not a good mix”. But we will only solve the problem when we stop looking in the wrong place. As Bill Clinton nearly said: “It’s behaviour, stupid.”

Of course, influencing behaviour is almost a new Holy Grail among policymakers. We are told it will help us reduce crime, tackle obesity, ensure environmental sustainability and make sure people pay their taxes on time. It works – and it’s not that new. Making unleaded petrol cheaper than the leaded stuff sees more people buying it. Making it easier for people to recycle achieves better results than moral hectoring.

But what about the behaviour of civil servants and ministers? And the behaviour of Parliamentarians? What about the behaviour of suppliers to government, such as big IT firms, who – unsurprisingly – have a preference for large IT projects regardless of what might actually be best for taxpayers? What if you have a civil servant running an IT project whom no one dares challenge? Or a team of civil servants foisted on a project without the right skills? What should you do when you have a permanent secretary and a Cabinet minister who barely talk to each other for months? Just as in Margaret Atwood’s novel The Handmaid’s Tale, this has all actually happened, somewhere, sometime.

As HM Treasury’s Permanent Secretary, Sir Nicholas Macpherson, has observed: “I have worked under Tory governments where Chancellor and Chief Secretary weren’t really speaking to each other. I have certainly worked under Labour governments where that was the case”. Many billions of pounds have been squandered this way. If we really want better outcomes, then understanding this – and changing it – is much more important even than policymakers’ efforts at “influencing” the behaviour of citizens.

Economics has seen a big shift towards studying how people actually behave, rather than how they are supposed to behave. We need a similar shift inside government and politics. The London 2012 Olympics showed we can get it right. The outstanding feature of the Olympics, as Head of Programme Control David Birch put it, was that “we worked hard to generate and recognise one source of truth”.

The world’s most successful organisations, whether in manufacturing or in services, spend a disproportionate amount of time and effort developing people. Our governments need to do the same. MPs are among the most determined people you will meet – otherwise they would rarely have become MPs – but as a class they need much better preparation for ministerial office. In the British Civil Service we have one of the world’s best talent pools but we don’t get the best out of them. Instead of incessant exhortation, we need to think harder about what makes people tick. Sir Ken Robinson, a teacher renowned worldwide for his work in the development of creativity, wrote that “human resources, like natural resources, are often buried deep. In every organisation there are all sorts of untapped talents and abilities”. Don’t we need every hand on deck in order to get out of the mess we have landed ourselves in? It is always sensible to make the most of what you have. The answer is to look more closely at ourselves and our nature – and to act on what we find.

This essay was first published by Reform in "How to run a country: a collection of essays".  

You will find our blog on the competence of civil servants here.