About Me

Reputability LLP are thought leaders in the field of reputational risk and its root causes, behavioural risk and organisational risk. Our book 'Rethinking Reputational Risk' received excellent reviews: see www.rethinkingreputationalrisk.com. We help business leaders to find these widespread but hidden risks, which regularly cause reputational disasters. We also teach leaders and risk teams about these risks. Here are our thoughts, and the thoughts of our guest bloggers, on some recent stories that have captured our attention. We are always interested to know what you think too.

Sunday, 12 November 2017

Do Boards Understand Behavioural Risks to Reputation?

As regular readers know, we have analysed the annual reports of about 40 FTSE 100 companies.  Our aim is to ascertain the extent to which boards and their companies demonstrate a good understanding of reputational risk, behavioural risk, organisational risk and cultural risk, together with the extent to which the companies show an understanding of learning from errors and experience.  This approach is derived from our research insights, summarised in "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You".

We assessed their performance against five criteria.

Regular readers will recall our scoring system, described in the post of 31 October below.

The disappointing results represent a combination of accurate reporting of reality and poor reporting of better-quality work.  We believe the former is much more likely than the latter: boards that understand these areas and their importance are unlikely to hide their company's strengths.

The least disappointing results emerge from 'cultural risk', with an average score of 2.8 and a median of 2.5.  Given the strong emphasis placed on culture by politicians and regulators, it is perhaps not surprising that culture has produced the least bad performance.  There is still considerable room for improvement.

The worst results, sharing a disappointing average score of 1.7, were 'reputational risk' and 'learning from errors and experience'. Their medians were 1.5 and 2 respectively.

'Learning from errors and experience' was highly skewed: eleven companies scored zero whilst two scored 4 and three scored 3.5. This kind of learning is critical to long term success and stability.  A company has to get many underlying behavioural, organisational and cultural factors right to achieve a justified high score.  That makes this measure a particularly powerful pointer that regulators, investors and D&O insurers can use to differentiate between fragile companies and those that are systemically resilient.

These results also suggest widespread board skill gaps in this risk area.  The FRC anticipated this when it added behavioural, organisational and reputational risks to boards' explicit responsibilities.   The Risk Guidance provides that boards should consider:

"whether it, and any committee or management group to which it delegates activities, has the necessary skills, knowledge, experience, authority and support to enable it to assess the risks the company faces and exercise its responsibilities effectively. Boards should consider specifically assessing this as part of their regular evaluations of their effectiveness."
These risks include the explicitly added areas of behavioural, organisational and reputational risk.

 The FRC also recommends that the board should:
"satisfy itself that [its] sources of assurance [on risk] have sufficient authority, independence and expertise to enable them to provide objective information and advice to the board."
Where shortcomings are found, the remedy is clear: arrange board education from people with "authority, independence and expertise".

In the meantime, we are extending our cohort to include regulators while we watch for correlations between bad scores and disastrous performance.  We shall report on results as they emerge.

Anthony Fitzsimmons
Reputability LLP
London


Tuesday, 31 October 2017

Finding Future Failures

Our research, summarised in 'Rethinking Reputational Risk', shows that behavioural, organisational and board risks are the root causes of most major crises.  These systemic risks typically lie latent for years, encouraging complacency, before they trigger a major reputational crisis that takes leaders by surprise, shreds shareholder value and often damages careers.

We have long known that it is possible to identify, in advance, organisations that have systemic weaknesses that make them more likely to fail in this way.  Preliminary findings from our latest research provide indications of a new analytical approach.

Regular readers will recall that recent rules from the UK Financial Reporting Council require companies it regulates to report clearly on important reputational, behavioural and organisational risks.  We have analysed the Annual Reports of about half of the FTSE 100 constituents to discover how they were getting on.

We used a simple scale, awarding from 0 to 5 points, to score their reporting performance on each of five axes.
The five (inevitably overlapping) axes we chose were:
  1. Behavioural risk
  2. Organisational risk
  3. Cultural risk
  4. Reputational risk
  5. Learning from errors and experience
We averaged these scores to produce a composite score for each company, also with a maximum of 5.
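
For readers who like to see the arithmetic, here is a minimal Python sketch of how such a composite could be computed. It is an illustration only: the axis keys, the function name and the example figures are ours for this sketch, not data or tooling from the survey.

```python
from statistics import mean

# The five (inevitably overlapping) axes described above.
AXES = [
    "behavioural_risk",
    "organisational_risk",
    "cultural_risk",
    "reputational_risk",
    "learning_from_errors",
]

def composite_score(scores: dict) -> float:
    """Average the five axis scores (each 0-5) into a composite, also 0-5."""
    return round(mean(scores[axis] for axis in AXES), 1)

# Hypothetical company with illustrative figures only.
example = {
    "behavioural_risk": 2.0,
    "organisational_risk": 2.5,
    "cultural_risk": 3.0,
    "reputational_risk": 1.5,
    "learning_from_errors": 0.0,
}
print(composite_score(example))  # 1.8
```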

By way of example, our cohort included seven financial services companies.  Plotting their results on a chart reveals the picture below.

[Chart: scores on the five axes for the seven financial services companies, FS1–FS7]

This picture is revealing even without more information.  You can separate financial services (FS) companies that talk about learning from mistakes from those that do not; evidence of the Financial Conduct Authority and Prudential Regulation Authority campaigns to improve culture is ubiquitous; and two seeming weaklings emerge: FS6 and FS7, with averaged composite scores of 1.6 and 1.1.

To give a little perspective, FS2 was the top scoring company across our entire survey with a composite score of 3.6 that leaves plenty of room for improvement. The bottom company managed to score a zero on all five dimensions.

Annual Reports may portray a company as better or worse than it actually is.  A poor score may reflect poor risk management or inadequate reporting by the board.  Contrariwise a higher score may represent better risk management or exaggeration by the board.  At present we suspect the former more than the latter.

This analysis provides a new and solid starting point for identifying UK companies that are particularly vulnerable to unpleasant surprises.  Huge volumes of differentiating information exist in the public domain.  Our experience is that, with a suitable analytical framework and methodology, this yields revealing and predictive insights into the extent of a company's vulnerability to crises and the nature of its fault lines.  The framework can equally be used to compare and rank companies, identifying both which companies are more and less accident-prone and which are more, or less, likely to survive a reputational crisis.

Our methodology is obviously relevant to leaders of companies, both in reducing the risk of being held responsible for the unexpected sudden collapse of their company and in ensuring that outsiders gain a fair perspective on risk management in these areas.  With access to inside information the analysis can be made far more granular, robust and revealing, supporting improvements in both risk management and risk reporting.  We are talking to a number of companies about this.

Our research insights and methodology are also relevant to:
  • Investors who wish to avoid unpleasant surprises;
  • D&O insurers ranking board risks;
  • General liability insurers ranking operational risks;
  • Banks assessing credit risks.
These groups have access to public information.  Armed with a suitable analytical framework, they can ask questions to probe areas that they regard as particularly important.

We plan to report further on our findings in the coming months.

In the meantime you can learn more about reputational, behavioural and organisational risks in "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You", written by the late Professor Derek Atkins and me.  Publishers Kogan Page offer our readers a 20% discount using code BBLRRR20.

Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk
@reputability


Friday, 29 September 2017

Learning from mistakes: the key to flying high



“He hit me first,” whines the indignant two-year-old. We learn the ‘blame game’ young. With luck it develops into asking “why (did he hit me/take my sweets…etc.)?” 

Inadequate investigations

“Why?” is a powerful question. Inexpertly used it leads to quick but superficial attribution of causes: “Why did the rogue trader emerge?” “Because he was bad.” In times past, air accident investigations often concluded that the tragedy was caused by ‘pilot error’. But as Stanley Roscoe, a pioneering aviation psychologist of the 1980s pithily put it, blaming an accident on ‘pilot error’ was “the substitution of one mystery for another”.

At the time, air accidents remained uncomfortably frequent, with deaths running at around a thousand per year. Roscoe’s insight was a key to transforming aviation from the somewhat hazardous to an activity so safe that the prospect of an aircraft crashing onto London as it approaches Heathrow has barely featured in the debate over a new runway for London’s airports. Terrorism apart, air accidents on western-built aircraft globally killed about 300 people per year in the decade to 2015, by which time the number of flights had more than doubled. For comparison, over 1800 were killed on UK roads in 2013 alone (over 34,000 on US roads).

Aviators learnt to learn better

The transformation was no accident. The airline industry foresaw that growth in flying might lead to a monthly air disaster featured on all front pages if they could not improve safety. As aviation investigators and academics dug deeper into the causes of accidents, asking “why?”, significant themes emerged.

Digging deeper uncovers system failures

One concerned communication failures. High workloads played a part, but some were due to hierarchies. A co-pilot needed to tell his commander (in those days pilots were always men) that something was going wrong, but the difference in status led him to mince his words in a way that masked the message; or the message was clear but his commander was unable to absorb information that did not fit his expectations. Sometimes the co-pilot said nothing at all because a challenge was socially unthinkable, even when the alternative was imminent death.

The Kegworth crash


The problem grew worse as the gap in status increased, with an even higher barrier between the flight deck crew and the cabin crew, even though the latter might have really important information. When the commander of the aircraft that crashed at Kegworth in 1989 announced to all that there was a problem with the right engine, which he was shutting down, many in the cabin could see that it was the left, not the right, engine that was on fire. Whilst some cabin crew were too preoccupied with their emergency duties to notice the announcement, there was no attempt to tell the flight crew that the left, not the right, engine seemed to be on fire. The aircraft crashed just short of the runway after the functioning right engine was shut down; and the left engine’s fire was made worse when extra fuel was pumped into it. 47 died and, of the 79 survivors, 74 suffered serious injuries.

The co-pilot who was sucked out of the cockpit

Another theme was system failures. When accidents are investigated there is of course an immediate cause. Soon after a BAC 1-11 aircraft took off from Birmingham airport in 1990, there was a loud bang as a newly installed cockpit windscreen disappeared at 17,000 feet. The co-pilot, who had undone his safety harness, was sucked out of the aircraft and left hanging on by his knees. He was saved by cabin crew holding his legs as the pilot regained control of the aircraft and landed it safely.

The immediate cause was that the windscreen had been installed using bolts that were either too small in diameter or too short. The next, deeper level of causes included a fundamental design error in the windscreen and a mechanic deprived of sleep. But even this was not enough for the investigators, who identified fundamental system failings, including that “the number of errors perpetrated on the night of this job came about because procedures were abused, 'short-cuts' employed and mandatory instructions ignored. Even when doubt existed about the correct size of bolt to use, the authoritative documents were not consulted.”

The airline had failed to detect the slipping standards because it did not monitor more senior mechanics. It did not help that its procedure for gathering feedback about the effectiveness of the maintenance system was not working properly: the AAIB estimated that the ratio of near misses to serious accidents might be as high as 600 to one, so successful detection of system failures depends on reporting a substantial proportion of near misses.
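
To make the arithmetic concrete, here is a small illustrative sketch: the 600:1 ratio is the AAIB estimate quoted above, while the reporting rates and the code itself are our assumptions for illustration.

```python
# Illustrative only: if near misses outnumber serious accidents by up to
# 600 to one, the feedback available for detecting system failures depends
# heavily on what fraction of near misses actually gets reported.
NEAR_MISSES_PER_ACCIDENT = 600  # AAIB's upper estimate quoted above

for reporting_rate in (0.01, 0.10, 0.50):
    expected_reports = NEAR_MISSES_PER_ACCIDENT * reporting_rate
    print(f"Reporting rate {reporting_rate:.0%}: "
          f"~{expected_reports:.0f} near-miss reports per serious accident")
```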

What aviators learnt

The success of commercial aviation in flight safety is built on two pillars:

  • Analysis of accidents and near misses down to their root causes, including system failures and the effects of human psychology and behaviour at all levels;
  • Remedying systemic weaknesses and managing behavioural and psychological issues uncovered.

These systemic issues include weaknesses caused by human behaviour: aviators have overcome the idea, common elsewhere, that a system is just a set of processes. Systems do include processes but, recognising that humans are an integral part of their systems, aviators treat normal, predictable human behaviour as an integral part of the flight safety problem and integrate lessons about human behaviour into flight safety.

Practical lessons for all

Thus even the most experienced pilots are taught to listen to subordinates and welcome challenge. Everyone is trained to challenge whenever necessary and ensure they are heard. All are trained to listen to each other and to cooperate, especially under stress. And through what is known as “just culture” the whole commercial aviation system encourages even self-reporting of near-misses and errors as well as accidents so they can be analysed to root causes and the lessons fed back to all. The deal is spelt out on the CAA website:

“Just culture is a culture that is fair and encourages open reporting of accidents and incidents. However, deliberate harm and wilful damaging behaviour is not tolerated. Everyone is supported in the reporting of accidents and incidents.”

This is not whistleblowing to bypass belligerent bosses: it is a routine system that applies to everyone, every day and at whatever level. It applies to all directly involved in flight operations, including leaders on aircraft and those who lead the manufacture, maintenance and support of aircraft and the systems that keep them flying. No-one in the system is above it; and the CAA statement of the just culture is an endorsement of the flight safety culture from aviation’s highest level: its regulator.

Everyone in the system now accepts it, though it was initially resisted, just as Professor Atul Gawande’s surgical checklists were initially resisted by some surgeons. It was no surprise to psychologists that most of the minority who resisted Gawande’s checklists thought that, though they did not need to use checklists themselves, any surgeon operating on them should use one.

The story of flight safety illustrates how carefully thought-through culture change has brought about a system so safe that few even think about flight safety. Aviation has achieved this despite the system’s complexity, which spans legions of organisations, large and small, worldwide.

Applying the lessons beyond aviation

Can it be replicated elsewhere? The fact that airlines can have serious failures beyond flight safety, such as British Airways’ recent IT failure, confirms that the cultural transition between flight safety and the rest of the business is not automatic, even where the group chief executive was once a pilot.

There can be no doubt that senior UK financial regulators understand that cultural, management and leadership failures in and around finance are among the root causes of the 2007/8 financial crisis. Some of these roots – such as the accumulation and promotion of undesirable character traits among staff hired primarily for greed and aggression – go deep. But many, even if not transient, are less deep-rooted.

A better culture, and the incentives and other drivers to support it, can be designed and launched surprisingly fast, though embedding it will take longer. Incorporating a culture of learning from errors and near-misses, as well as from serious failings in conduct, will help identify systemic weak spots so they can be remedied.

But just as it was crucial that even the most senior pilots learned to welcome analysis and challenge of their actions, so too must business leaders. Their perceived character, culture, incentives and behaviour are crucial models for their subordinates. And just as the CAA overtly underpins aviation’s culture of learning from error, so regulators, and their political masters, must embrace the importance of an open, analytical - and forgiving - attitude to honest mistakes.

Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk

Anthony Fitzsimmons is joint author, with the late Professor Derek Atkins, of "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You"   www.rethinkingreputationalrisk.com

This article was first published in the August/September 2017 edition of Financial World.




Thursday, 27 July 2017

What is wrong with “efficiency”? Plenty.

We are delighted to welcome a guest post from Professor Henry Mintzberg, a prolific writer on management issues, including The Rise and Fall of Strategic Planning and Managers Not MBAs, which outlines what he believes to be wrong with modern management education.

Efficiency is like motherhood. It gets us the greatest bang for the buck, to use an old military expression. Herbert Simon, winner of one of those non-Nobel prizes in economics, called efficiency a value-free, completely neutral concept. You decide what benefits you want; efficiency gets you them at the least possible cost. Who could possibly argue with that?

Me, for one.

I list below a couple of things that are efficient. Ask yourself what I am referring to—the first words that pop into your head.

A restaurant is efficient.

Did you think about speed of service? Most people do. Few think about the quality of the food. Is that the way you choose your restaurants?

My house is efficient.

Energy consumption always comes out way ahead. Tell me: who ever bought a house for its energy consumption, compared with, say, its design, or its location?

What’s going on here? It’s quite obvious as soon as we realize it. When we hear the word efficiency we zero in―subconsciously―on the most measurable criteria, like speed of service or consumption of energy. Efficiency means measurable efficiency. That’s not neutral at all, since it favors what can best be measured. And herein lies the problem, in three respects:

1. Because costs are usually easier to measure than benefits, efficiency often reduces to economy: cutting measurable costs at the expense of less measurable benefits. Think of all those governments that have cut the costs of health care or education while the quality of those services has deteriorated. (I defy anyone to come up with an adequate measure of what a child really learns in a classroom.) How about those CEOs who cut budgets for research so that they can earn bigger bonuses right away, or the student who found all sorts of ways to make an orchestra more efficient?

2. Because economic costs are typically easier to measure than social costs, efficiency can actually result in an escalation of social costs. Making a factory or a school more efficient is easy, so long as you don’t care about the air polluted or the minds turned off learning. I’ll bet the factory that collapsed in Bangladesh was very efficient.  

3. Because economic benefits are typically easier to measure than social benefits, efficiency drives us toward an economic mindset that can result in social degradation. In a nutshell, we are efficient when we eat fast food instead of good food.

So beware of efficiency, and of efficiency experts, as well as of efficient education, health care, and music, even efficient factories. Be careful too of balanced scorecards, because, while inclusion of all kinds of factors may be well intentioned, the dice are loaded in favor of those that can most easily be measured.

By the way, Twitter is efficient. Only 140 characters! This blog is less so.

References

Herbert A. Simon, Administrative Behavior, Second Edition (Macmillan, 1957), page 14.

This TWOG derives from my article “A Note on the Dirty Word Efficiency”, Interfaces (October, 1982: 101-105)

This post was first published on Henry Mintzberg's own blog at http://www.mintzberg.org
 

Monday, 10 July 2017

Intelligent Dissent

On 13 May 1940, Sergeant Walther Rubarth was in the vanguard of the German army invading France. His company had survived a hail of French machine gun fire as it crossed the River Meuse and fought through French defences.

He had reached his objective and his orders were to dig in, but he was surprised to find that a key part of the battlefield was undefended – for the time being. He saw a golden opportunity to further the army’s overall goal by advancing, but to exploit it he had to disobey his orders. As he pondered the options, an officer arrived and ordered him to dig in. Rubarth challenged the order and won the argument. His subsequent actions went on to create “such destructive chaos that it unlocked the heart of the French defences and had decisive operational significance”.

This was not extraordinary. For decades, the German army had cultivated a culture of “intellectual development through curiosity, critical thinking, imagination and open-mindedness”, according to Professor Lloyd Clark,(1) that permitted and encouraged considered dissent, underpinned by a clear articulation of overall objectives. It was an essential element of what the Germans call auftragstaktik (mission-orientated command).

Adopted by the German army in the nineteenth century, it is widely used in the British and US armies today. To work, it requires a clear culture shared across the organisation, well-defined goals and mutual trust. Execution is delegated to subordinates, working within the ethos and culture they have been trained to share. Intelligent dissent is encouraged.

Provided you have a good enough reason, and stay within the cultural rules, you can disobey orders to achieve the overall goal. Culture is, therefore, a central pillar supporting leaders as they exert control over their military machine. The feedback provided by intelligent dissent is essential to keeping it in good working order and using its innate intelligence to the full.

Fast forward 76 years to the City of London in 2016. Andrew Bailey, then leading the Prudential Regulation Authority and now chief executive of the Financial Conduct Authority (FCA), recognised the crucial effect of culture on outcomes that matter to regulators. His assessment (2) of recent failures was damning of management and leadership. He said:
“There has not been a case of a major prudential or conduct failing in a firm which did not have among its root causes a failure of culture as manifested in governance, remuneration, risk management or tone from the top.”
So culture sowed the seeds of disasters,
"for instance where management are so convinced of their rightness that they hurtle for the cliff without questioning the direction of travel".
People find it easy to discuss the familiar, such as market, credit, liquidity or conduct risk, but are reluctant to talk about risks from individual behaviour, let alone the behaviour of their leaders. Most people find it embarrassing, dangerous, or both, to raise such subjects.  Bailey did not mince his words, continuing:
“You can add to that [list], hubris risk, the risk of blinding over-confidence. If I may say so, it is a risk that can be magnified by broader social attitudes. Ten years ago, there was considerable reverence towards, and little questioning of, the ability of banks and bankers to make money or of whether boards demonstrated a sufficient diversity of view and outlook to sustain challenge. How things have changed. Healthy scepticism channelled into intelligent and forceful questioning of the self-confident can be a good thing.”

 A central aim of the FCA is to drive fair treatment of customers through a culture that puts customers first and a system that allocates responsibility unambiguously. Who can argue with its requirement that managers communicate that aim to staff? Or with the responsibility placed on managers, via the senior managers regime, to put customers at the heart of strategy, staff training, reward or controls? (3) But is that enough?

The FCA’s themes are sound. Allocating responsibility clearly ensures that all know who is  in charge of what. The FCA understands that culture is rooted in history and can take years to change. It recognises that bad cultures from the past leave toxic legacies that endure. A  company or industry that has recruited, rewarded and promoted on aggression, self-confidence and greed for decades has a problem that will take decades, or a cull, to fix.  Antony Jenkins, the former chief executive of Barclays, saw the enormity of the problem he faced when he wrote:
“There might be some who don’t feel they can fully buy into an approach which so squarely links performance to the upholding of our values. My message to those people is simple: Barclays is not the place for you.” (4)
The FCA emphasises tone from the top. How you behave matters even more than what you say. But in an industry that, for years or decades, has recruited and promoted for what are now seen as undesirable character and behavioural traits, where do you find leaders who combine technical competence with the traits, attitudes and values now required?

The answer is in the question. Desirable character traits should become an explicit part of the specification of every leader and potential leader and be given at least equal weight with skills, knowledge and experience in recruitment and promotion. As Canada’s respected Ivey Business School explained, good leaders balance confidence with humility; aggressiveness with patience; analysis with intuition; principle with pragmatism; deliberation with  decisiveness; candour with compassion. (5) Organisations that dig more deeply may be pleasantly surprised to discover seams of people who were previously overlooked as potential leaders, including women and minorities of many kinds, with both technical skills and desirable character traits.

Any potentially risky aspects of leaders’ characters should be discussed openly by boards and regulators. Those of senior leaders should feature prominently on the risk register. There are advantages in an enthusiastic, forceful or charismatic chief executive, but the corresponding risks should be recognised and managed. I was surprised when I first heard of a chief executive whose “dominant” character featured in the company’s risk register; but its presence there made it possible for his dominant tendencies to be managed in normal polite discussion.

Another aspect of tone is the company’s strategy and how it is expressed: not just what you are trying to achieve but also how you manage clashes between objectives and principles and with what consequences. This feeds through to reward patterns.

Of course bonuses matter because you can expect to get more of what you reward – although you should take care what you wish for. Bonuses drove payment protection insurance sales that produced pain. The same applies to other kinds of reward, from a pat on the back through public praise to promotion. These patterns determine who leaves, who stays and who rises as particular character traits are encouraged and a culture built and reinforced.

Most telling is how you respond when objectives clash with principles. How do you deal with someone who gets the right result by crossing your red lines? And what about someone who forgoes a deal because they would not cross them?

But let us move into your office, today. What do you do when faced with a rule that does not work in your real world of work? Do you shrug, obey the rule and achieve the wrong result? Do you “work around” or disregard the rule, perhaps after discussing the problem with colleagues? Or do you tell your superiors that the rule needs to change and why? My experience suggests that more people take the first two options than the third. These undermine the ground rules – risking serious breaches – whereas feedback from intelligent dissent reinforces and improves them.

Another question: what happens if something goes wrong? Not so badly that it is obvious to your boss, but bad enough to need fast or fancy footwork. Do you tell your superiors? Analyse what went wrong and why? Make sure weaknesses are fixed and lessons learned widely? More likely the problem is discussed locally, if at all, then buried; yet mishaps  without bad consequences provide valuable feedback as to how well the system is working, or not. They are often symptoms of systemic weaknesses where a bad outcome has been prevented by a mixture of luck and crisis management. When luck runs out, something far nastier happens. Consequences can be personal, painful and protracted.

Part of the reason for the persistence of risk areas is that leaders have not created psychologically safe spaces where subordinates, let alone leaders, can admit to mistakes and deal with them. Some leaders lack the humility and self-confidence to cope with contradiction, let alone regular intelligent dissent. The penal aspects of the UK senior managers regime, imposed by financial regulators, may play a part by causing leaders to see admitting errors as a weakness rather than a strength and an opportunity to learn from mistakes. Whatever the cause, the result is that rules are undermined and organisations fail to learn, leaving systemic weaknesses unresolved until something blows up.

Putting your customers first will please the FCA. But a more comprehensive route to sustainable success is to adapt auftragstaktik and intelligent dissent to achieve a culture that learns and repairs itself. It will also put your trusted team’s expensively bought brainpower to more productive use.

Anthony Fitzsimmons
Chairman,
Reputability LLP
London

Endnotes

1. Clark L (2017), ‘The Intelligently Disobedient Soldier’. Centre for Army Leadership. Available at www.army.mod.uk/documents/general/Centre_For_Army_Leadership_Leadership_Insight_No_1.pdf.
2. Bailey A (2016), ‘Culture in Financial Services – a regulator’s perspective’. Bank of England speech. Available at: www.bankofengland.co.uk/publications/Pages/speeches/2016/901.aspx.
3. Davidson J (2016), ‘Getting Culture and Conduct Right - the role of the regulator’. FCA speech. Available at: www.fca.org.uk/news/speeches/getting-culture-and-conduct-right-role-regulator.
4. ‘Antony Jenkins to staff: adopt new values or leave Barclays’, The Daily Telegraph, 27 January, 2017. Available at: www.telegraph.co.uk/finance/newsbysector/banksandfinance/9808042/Antony-Jenkins-to-staff-adopt-new-values-or-leave-Barclays.html.
5. Gandz J et al. (2010), Leadership on Trial: a manifesto for leadership development. Ivey School of Business


Anthony Fitzsimmons is joint author, with the late Professor Derek Atkins, of "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You"
  
This article was first published in the June/July 2017 edition of Financial World






Friday, 12 May 2017

WanaCrypt0r 2.0 Virus infects NHS and more

Large sections of the UK's National Health Service (NHS) were hit by a ransomware attack as were many other organisations worldwide.

According to the Financial Times, the virus was a weaponised development of the US National Security Agency's 'Eternal Blue' tool, part of a "highly classified NSA arsenal of digital weapons leaked online last year by a group called the Shadowbrokers".

WanaCrypt0r seems to have been distributed by the common route of an attachment to emails which were opened by numerous recipients who did not identify the attachments as suspicious.

The Guardian reported
"Many NHS trusts still use Windows XP, a version of Microsoft’s operating system that has not received publicly available security updates for half a decade, and even well-patched operating systems cannot help users who are tricked into running software deliberately."
and later:
"It’s our governments, via the intelligence agencies, that share a responsibility for creating vulnerabilities in our communication networks, surveilling our smart phones and televisions and exploiting loopholes in our operating systems,” said Dr Simon Moores, chair of the International eCrime Congress."
In an interview with Andrew Marr,
"Michael Fallon [was] forced to defend the Government's decision not to fund crucial updates for NHS computer systems, leaving them vulnerable to a global cyber attack which caused chaos at hospitals across the country."
The cost saving was apparently £5.5m for central government, money that could have been spent on keeping national support for XP in place across the NHS.  Apparently there had been repeated warnings of the risks of running systems on the unsupported XP operating system, including a warning by Microsoft two months ago.


Brad Smith, Microsoft's president and chief legal officer, wrote:
"Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action."
According to Keren Elazari, the sectors where unsupported software systems are most prevalent are those where safety matters:
"healthcare, energy and transport; as well as finance and other industries where computer systems provide the foundations for modern functionality."

Assuming early reports are broadly correct, this attack raises behavioural, organisational, leadership and reputational risk issues.

Why are parts of the NHS using outdated, unsupported Windows XP? 

The obvious answer is cost-cutting by people who do not understand the consequences, in this case the risks of running out-dated, unsupported operating systems.  This now seems to include a Government minister who did not listen to advice on a subject he did not understand.

If so, this is a classic case of cost-cutting to produce a short-term gain at the cost of a systemic weakness that goes on to cause great pain when the risk eventually manifests.  Cost-cutting in ignorance of the consequences is a risk that typically emanates from the highest levels of leadership and it regularly causes failures.

Why do NHS staff lack the training needed to operate an outdated, unsupported operating system?

It seems that NHS staff lacked the training to identify suspicious emails manually.  Candidates as causes of this state of affairs include:
  • Ignorant leaders who did not realise that cost-cutting on operating systems created cyber risks to which training might provide a partial solution;
  • Leaders who recognised the risks but would not provide training, for example because it would cost money they were not prepared to spend;
  • The possibility that no amount of training would have been sufficient, but that leaders either did not know this or did not care.
Leadership ignorance is an organisational and leadership risk that regularly causes failure.

Who else is using unsupported software in systemically important systems?  

These include supply chains for cash, food, power, water and the internet itself.  What potential consequences might there be for the public?

Intelligence agencies

The UK intelligence agency GCHQ, backed by the UK Home Office under Theresa May, has already inserted backdoors into many encryption systems and recently gained statutory authority to demand backdoors into encryption and other systems, including computers, phones, TVs and anything else containing software.  It has statutory authority to hack into computers and other devices worldwide, and there can be little doubt that it, like the NSA, developed tools to achieve this years ago.  It also stockpiles vulnerabilities in operating systems, preventing companies like Microsoft from dealing with them.  As Brad Smith, Microsoft's president and chief legal officer, said:
"An equivalent scenario with conventional weapons would be the US military having some of its Tomahawk missiles stolen."

No organisation can guarantee the security of valuable tools such as these against a determined external attacker or internal leaker.  These risks will always be greater than zero.

If surveillance and cyber-warfare tools escape into the hands of criminals or hostile state actors, the potential for harm will be broadly in proportion to the versatility of the tools and the creativity and motivation of their users.  There can be no doubt that a determined, skilled and motivated group of hackers could design an event to cause great harm and outrage, just as Al Qaeda did with its carefully designed and planned "9/11" attack on the USA.  These are perfect weapons for the weak.

Given that there is a finite risk of cyber-warfare tools 'escaping', the question is whether intelligence agencies, and the politicians who ultimately control them, have considered the risks and consequences of the tools they develop being turned against their own countries and allies.  Even if the probability of theft of the tools is thought very low, a foolhardy assumption, the potential for harm to the public is unknowably great.

This is yet another example of the risks of balancing short term gains against the long term consequences of systemic weaknesses.  The problem with this balancing act is that it is rarely possible to quantify the consequences of systemic weaknesses, especially where deliberately caused harm is involved.  History shows that it is easy to overlook or underestimate them.  The problem is exacerbated by leaders' tendency to give more weight to imminent than to distant consequences.

As to the security services, the likelihood is that the current cyber attack will come to be seen as small beer.  When that happens, the reputation, and licence to operate, of the security agency whose software has been turned against its own state or a friendly state will be balanced on a knife edge.  Other security agencies will be at risk of collateral damage.

As to the NHS, a series of scandals of incompetence, catalogued by Richard Bacon in his book "Conundrum", has left the NHS and its leaders with a poor reputation for competence when it comes to IT.  If it eventually emerges that the NHS IT system had weaknesses that left it vulnerable to this attack, its reputation for competence will be damaged further.  Emerging evidence suggests that the attack will also leave the reputation of the minister who cancelled the IT support contract in tatters.

Background reading:  You can read more about how behavioural, organisational and leadership risks cause immense harm to seemingly solid organisations in "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You".  Lord David Owen wrote of it:
"An exceptional book for learning at every level – whether you are a business school student or a chief executive; Prime Minister or a new recruit into the civil service."
You can read reviews of the book at www.rethinkingreputationalrisk.com.


Anthony Fitzsimmons
Reputability LLP
London

www.reputability.co.uk
www.rethinkingreputationalrisk.com