About Me

Reputability LLP are pioneers and leaders globally in the field of reputational risk and its root causes, behavioural risk and organisational risk. We help business leaders to find these widespread but hidden risks that regularly cause reputational disasters. We also teach leaders and risk teams about these risks. Here are our thoughts, and the thoughts of our guest bloggers, on some recent stories which have captured our attention. We are always interested to know what you think too.

Friday, 29 September 2017

Learning from mistakes: the key to flying high



“He hit me first,” whines the indignant two-year-old. We learn the ‘blame game’ young. With luck it develops into asking “why (did he hit me/take my sweets…etc.)?” 

Inadequate investigations

“Why?” is a powerful question. Inexpertly used it leads to quick but superficial attribution of causes: “Why did the rogue trader emerge?” “Because he was bad.” In times past, air accident investigations often concluded that the tragedy was caused by ‘pilot error’. But as Stanley Roscoe, a pioneering aviation psychologist of the 1980s pithily put it, blaming an accident on ‘pilot error’ was “the substitution of one mystery for another”.

At the time, air accidents remained uncomfortably frequent, with deaths running at around a thousand per year. Roscoe’s insight was a key to transforming aviation from the somewhat hazardous to an activity so safe that the prospect of an aircraft crashing onto London as it approaches Heathrow has barely featured in the debate over a new runway for London’s airports. Terrorism apart, air accidents on western-built aircraft globally killed about 300 people per year in the decade to 2015, by which time the number of flights had more than doubled. For comparison, over 1800 were killed on UK roads in 2013 alone (over 34,000 on US roads).

Aviators learnt to learn better

The transformation was no accident. The airline industry foresaw that, if it could not improve safety, growth in flying might lead to a monthly air disaster featured on all front pages. As aviation investigators and academics dug deeper into the causes of accidents, asking “why?”, significant themes emerged.

Digging deeper uncovers system failures

One concerned communication failures. High workloads played a part but some were due to hierarchies. A co-pilot needed to tell his commander (in those days pilots were always men) that something was going wrong, but the difference in status led him to mince words in a way that masked the message; or the message was clear but his commander was unable to absorb information that did not fit his expectations. Sometimes the co-pilot said nothing at all because a challenge was socially unthinkable, even when the alternative was imminent death.

The Kegworth crash


The problem grew worse as the gap in status increased, with an even higher barrier between the flight deck crew and the cabin crew even though the latter might have really important information. When the commander of the aircraft that crashed at Kegworth in 1989 announced to all that there was a problem with the right engine, which he was shutting down, many in the cabin could see that it was the left, not right, engine that was on fire. Whilst some cabin crew were too pre-occupied with their emergency duties to notice the announcement, there was no attempt to tell the flight crew that the left, not right, engine seemed to be on fire. The aircraft crashed just short of the runway when the functioning right engine was shut down; and the left engine’s fire was made worse when extra fuel was pumped into it. 47 died and, of 79 survivors, 74 suffered serious injuries.

The pilot who was sucked out of the cockpit

Another theme was system failures. When accidents are investigated there is of course an immediate cause. Soon after a BAC 1-11 aircraft took off from Birmingham airport in 1990, there was a loud bang as a newly installed cockpit windscreen disappeared at 17,000 feet. The captain, who had undone his safety harness, was sucked out of the aircraft and left hanging on by his knees. He was saved by cabin crew holding his legs as the co-pilot regained control of the aircraft and landed it safely.

The immediate cause was that the windscreen had been installed using bolts that were either too small in diameter or too short. The next deeper level of causes included a fundamental design error in the windscreen and a mechanic deprived of sleep. But even this was not enough for the investigators, who identified fundamental system failings, including that “the number of errors perpetrated on the night of this job came about because procedures were abused, 'short-cuts' employed and mandatory instructions ignored. Even when doubt existed about the correct size of bolt to use, the authoritative documents were not consulted.”

The airline had failed to detect the slipping standards because it did not monitor its more senior mechanics. It did not help that its procedure for gathering feedback about the effectiveness of the maintenance system was not working properly: the AAIB estimated that the ratio of near misses to serious accidents might be as high as 600 to one, so successful detection of system failures depends on reporting a substantial proportion of near misses.

What aviators learnt

The success of commercial aviation in flight safety is built on two pillars:

  • Analysis of accidents and near misses to their root causes, including system failures and the effects of human psychology and behaviour at all levels;
  • Remedying systemic weaknesses and managing behavioural and psychological issues uncovered.

These systemic issues include weaknesses caused by human behaviour: aviators have overcome the idea, common elsewhere, that ‘systems’ just means ‘processes’. Systems do include processes, but recognising that humans are an integral part of their systems, aviators treat normal, predictable human behaviour as an integral part of the flight safety problem and integrate lessons about human behaviour into flight safety.

Practical lessons for all

Thus even the most experienced pilots are taught to listen to subordinates and welcome challenge. Everyone is trained to challenge whenever necessary and ensure they are heard. All are trained to listen to each other and to cooperate, especially under stress. And through what is known as “just culture” the whole commercial aviation system encourages even self-reporting of near-misses and errors as well as accidents so they can be analysed to root causes and the lessons fed back to all. The deal is spelt out on the CAA website:

“Just culture is a culture that is fair and encourages open reporting of accidents and incidents. However, deliberate harm and wilful damaging behaviour is not tolerated. Everyone is supported in the reporting of accidents and incidents.”

This is not whistleblowing to bypass belligerent bosses: it is a routine system that applies to everyone, every day and at whatever level. It applies to all directly involved in flight operations including leaders on aircraft and those who lead the manufacture, maintenance and support of aircraft and the systems that keep them flying. No-one in the system is above it; and the CAA statement of the just culture is endorsement of the flight safety culture from aviation’s highest level: its regulator.

Everyone in the system now accepts it, though it was initially resisted, just as Professor Atul Gawande’s surgery checklists were initially resisted by some surgeons. It was no surprise to psychologists that most of the minority who resisted Gawande’s checklists thought that, though they did not need to use checklists, any surgeon operating on them should use one.

The story of flight safety illustrates how carefully thought through culture change has brought about a system so safe that few even think about flight safety. Aviation has achieved this despite the system’s complexity, which includes legions of organisations, huge and small, worldwide.

Applying the lessons beyond aviation

Can it be replicated elsewhere? The fact that airlines can have serious failures beyond flight safety (witness British Airways’ recent IT failure) confirms that the cultural transition from flight safety to the rest of the business is not automatic, even where the group chief executive was once a pilot.

There can be no doubt that senior UK financial regulators understand that cultural, management and leadership failures in and around finance are among the root causes of the 2007/8 financial crisis. Some of these roots – such as the accumulation and promotion of undesirable character traits among staff hired primarily for greed and aggression – go deep. But many, even if not transient, are less deep-rooted.

A better culture, and the incentives and other drivers to support it, can be designed and launched surprisingly fast, though embedding it will take longer. Incorporating a culture of learning from errors and near-misses, as well as from serious failings in conduct, will help identify systemic weak spots so they can be remedied.

But just as it was crucial that even the most senior pilots learned to welcome analysis and challenge of their actions, so too must business leaders. Their perceived character, culture, incentives and behaviour are crucial models for their subordinates. And just as the CAA overtly underpins aviation’s culture of learning from error, so regulators, and their political masters, must embrace the importance of an open, analytical - and forgiving - attitude to honest mistakes.

Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk

Anthony Fitzsimmons is joint author, with the late Professor Derek Atkins, of "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You"   www.rethinkingreputationalrisk.com

This article was first published in the August/September 2017 edition of Financial World.




Thursday, 27 July 2017

What is wrong with “efficiency”? Plenty.






We are delighted to welcome a guest post from Professor Henry Mintzberg, a prolific writer on management issues including The Rise and Fall of Strategic Planning and Managers Not MBAs, which outlines what he believes to be wrong with modern management education.




 


Efficiency is like motherhood. It gets us the greatest bang for the buck, to use an old military expression. Herbert Simon, winner of one of those non-Nobel prizes in economics, called efficiency a value-free, completely neutral concept. You decide what benefits you want; efficiency gets you them at the least possible cost. Who could possibly argue with that?

Me, for one.

I list below a couple of things that are efficient. Ask yourself what am I referring to—the first words that pop into your head.

A restaurant is efficient.

Did you think about speed of service? Most people do. Few think about the quality of the food. Is that the way you choose your restaurants?

My house is efficient.

Energy consumption always comes out way ahead. Tell me: who ever bought a house for its energy consumption, compared with, say, its design, or its location?

What’s going on here? It’s quite obvious as soon as we realize it. When we hear the word efficiency we zero in―subconsciously―on the most measurable criteria, like speed of service or consumption of energy. Efficiency means measurable efficiency. That’s not neutral at all, since it favors what can best be measured. And herein lies the problem, in three respects:

1. Because costs are usually easier to measure than benefits, efficiency often reduces to economy: cutting measurable costs at the expense of less measurable benefits. Think of all those governments that have cut the costs of health care or education while the quality of those services has deteriorated. (I defy anyone to come up with an adequate measure of what a child really learns in a classroom.) How about those CEOs who cut budgets for research so that they can earn bigger bonuses right away, or the student who found all sorts of ways to make an orchestra more efficient?

2. Because economic costs are typically easier to measure than social costs, efficiency can actually result in an escalation of social costs. Making a factory or a school more efficient is easy, so long as you don’t care about the air polluted or the minds turned off learning. I’ll bet the factory that collapsed in Bangladesh was very efficient.  

3. Because economic benefits are typically easier to measure than social benefits, efficiency drives us toward an economic mindset that can result in social degradation. In a nutshell, we are efficient when we eat fast food instead of good food.

So beware of efficiency, and of efficiency experts, as well as of efficient education, health care, and music, even efficient factories. Be careful too of balanced scorecards, because, while inclusion of all kinds of factors may be well intentioned, the dice are loaded in favor of those that can most easily be measured.

By the way, Twitter is efficient. Only 140 characters! This blog is less so.

References

Herbert A. Simon, Administrative Behavior, Second Edition (Macmillan, 1957), page 14.

This TWOG derives from my article “A Note on the Dirty Word Efficiency”, Interfaces (October 1982: 101-105).

This blog was first published on Henry Mintzberg's own blog at http://www.mintzberg.org

Monday, 10 July 2017

Intelligent Dissent

On 13 May, 1940, Sergeant Walther Rubarth was in the vanguard of the German army invading France. His company had survived a hail of French machine gun fire as it crossed the River Meuse and fought through French defences.

Having reached his objective his orders were to dig in, but he was surprised to find that a key part of the battlefield was undefended – for the time being. He saw a golden opportunity to further the army’s overall goal and advance, but to exploit it he had to disobey his orders. As he pondered the options, an officer arrived and ordered him to dig in. Rubarth challenged the order and won the argument. His subsequent actions went on to create “such destructive chaos that it unlocked the heart of the French defences and had decisive operational significance”.

This was not extraordinary. For decades, the German army had cultivated a culture of “intellectual development through curiosity, critical thinking, imagination and openmindedness”, according to Professor Lloyd Clark (1), a culture that permitted and encouraged considered dissent, underpinned by a clear articulation of overall objectives. It was an essential element of what the Germans call Auftragstaktik (mission-orientated command).

Adopted by the German army in the nineteenth century, it is widely used in the British and US armies today. To work, it requires a clear culture shared across the organisation, well-defined goals and mutual trust. Execution is delegated to subordinates, working within the ethos and culture they have been trained to share. Intelligent dissent is encouraged.

Provided you have a good enough reason, and stay within the cultural rules, you can disobey orders to achieve the overall goal. Culture is, therefore, a central pillar supporting leaders as they exert control over their military machine. The feedback provided by intelligent dissent is essential to keeping it in good working order and using its innate intelligence to the full.

Fast forward 76 years to the City of London in 2016. Andrew Bailey, then leading the Prudential Regulation Authority and now chief executive of the Financial Conduct Authority (FCA), recognised the crucial effect of culture on outcomes that matter to regulators. His assessment (2) of recent failures was damning of management and leadership. He said:
“There has not been a case of a major prudential or conduct failing in a firm which did not have among its root causes a failure of culture as manifested in governance, remuneration, risk management or tone from the top.”
So culture sowed the seeds of disasters,
“for instance where management are so convinced of their rightness that they hurtle for the cliff without questioning the direction of travel”.
People find it easy to discuss the familiar, such as market, credit, liquidity or conduct risk, but are reluctant to talk about risks from individual behaviour, let alone the behaviour of their leaders. Most people find it embarrassing, dangerous, or both, to raise such subjects.  Bailey did not mince his words, continuing:
“You can add to that [list], hubris risk, the risk of blinding over-confidence. If I may say so, it is a risk that can be magnified by broader social attitudes. Ten years ago, there was considerable reverence towards, and little questioning of, the ability of banks and bankers to make money or of whether boards demonstrated a sufficient diversity of view and outlook to sustain challenge. How things have changed. Healthy scepticism channelled into intelligent and forceful questioning of the self-confident can be a good thing.”

 A central aim of the FCA is to drive fair treatment of customers through a culture that puts customers first and a system that allocates responsibility unambiguously. Who can argue with its requirement that managers communicate that aim to staff? Or with the responsibility placed on managers, via the senior managers regime, to put customers at the heart of strategy, staff training, reward or controls? (3) But is that enough?

The FCA’s themes are sound. Allocating responsibility clearly ensures that all know who is  in charge of what. The FCA understands that culture is rooted in history and can take years to change. It recognises that bad cultures from the past leave toxic legacies that endure. A  company or industry that has recruited, rewarded and promoted on aggression, self-confidence and greed for decades has a problem that will take decades, or a cull, to fix.  Antony Jenkins, the former chief executive of Barclays, saw the enormity of the problem he faced when he wrote:
“There might be some who don’t feel they can fully buy into an approach which so squarely links performance to the upholding of our values. My message to those people is simple: Barclays is not the place for you.” (4)
The FCA emphasises tone from the top. How you behave matters even more than what you say. But in an industry that, for years or decades, has recruited and promoted for what are now seen as undesirable character and behavioural traits, where do you find leaders who combine technical competence with the traits, attitudes and values now required?

The answer is in the question. Desirable character traits should become an explicit part of the specification of every leader and potential leader and be given at least equal weight with skills, knowledge and experience in recruitment and promotion. As Canada’s respected Ivey Business School explained, good leaders balance confidence with humility; aggressiveness with patience; analysis with intuition; principle with pragmatism; deliberation with  decisiveness; candour with compassion. (5) Organisations that dig more deeply may be pleasantly surprised to discover seams of people who were previously overlooked as potential leaders, including women and minorities of many kinds, with both technical skills and desirable character traits.

Any potentially risky aspects of leaders’ characters should be discussed openly by boards and regulators. Those of senior leaders should feature prominently on the risk register. There are advantages in an enthusiastic, forceful or charismatic chief executive, but the corresponding risks should be recognised and managed. I was surprised when I first heard of a chief executive whose “dominant” character featured in the company’s risk register; but its presence there made it possible for his dominant tendencies to be managed in normal polite discussion.

Another aspect of tone is the company’s strategy and how it is expressed: not just what you are trying to achieve but also how you manage clashes between objectives and principles and with what consequences. This feeds through to reward patterns.

Of course bonuses matter because you can expect to get more of what you reward – although you should take care what you wish for. Bonuses drove payment protection insurance sales that produced pain. The same applies to other kinds of reward, from a pat on the back through public praise to promotion. These patterns determine who leaves, who stays and who rises as particular character traits are encouraged and a culture built and reinforced.

Most telling is how you respond when objectives clash with principles. How do you deal with someone who gets the right result by crossing your red lines? And what about someone who forgoes a deal because they would not cross them?

But let us move into your office, today. What do you do when faced with a rule that does not work in your real world of work? Do you shrug, obey the rule and achieve the wrong result? Do you “work around” or disregard the rule, perhaps after discussing the problem with colleagues? Or do you tell your superiors that the rule needs to change and why? My experience suggests that more people take the first two options than the third. These undermine the ground rules – risking serious breaches – whereas feedback from intelligent dissent reinforces and improves them.

Another question: what happens if something goes wrong? Not so badly that it is obvious to your boss, but bad enough to need fast or fancy footwork. Do you tell your superiors? Analyse what went wrong and why? Make sure weaknesses are fixed and lessons learned widely? More likely the problem is discussed locally, if at all, then buried; yet mishaps  without bad consequences provide valuable feedback as to how well the system is working, or not. They are often symptoms of systemic weaknesses where a bad outcome has been prevented by a mixture of luck and crisis management. When luck runs out, something far nastier happens. Consequences can be personal, painful and protracted.

Part of the reason for the persistence of risk areas is that leaders have not created psychologically safe spaces where subordinates, let alone leaders, can admit to mistakes and deal with them. Some leaders lack the humility and self-confidence to cope with contradiction, let alone regular intelligent dissent. The penal aspects of the UK senior managers regime, imposed by financial regulators, may play a part by causing leaders to see admitting errors as a weakness rather than a strength and an opportunity to learn from mistakes. Whatever the cause, the result is that rules are undermined and organisations fail to learn, leaving systemic weaknesses unresolved until something blows up.

Putting your customers first will please the FCA. But a more comprehensive route to sustainable success is to adapt Auftragstaktik and intelligent dissent to achieve a culture that learns and repairs itself. It will also put your trusted team’s expensively bought brainpower to more productive use.

Anthony Fitzsimmons
Chairman,
Reputability LLP
London

Endnotes

1. Clark L (2017), ‘The Intelligently Disobedient Soldier’. Centre for Army Leadership. Available at www.army.mod.uk/documents/general/Centre_For_Army_Leadership_Leadership_Insight_No_1.pdf.
2. Bailey A (2016), ‘Culture in Financial Services – a regulator’s perspective’. Bank of England speech. Available at: www.bankofengland.co.uk/publications/Pages/speeches/2016/901.aspx.
3. Davidson J (2016), ‘Getting Culture and Conduct Right - the role of the regulator’. FCA speech. Available at: www.fca.org.uk/news/speeches/getting-culture-and-conduct-right-role-regulator.
4. ‘Antony Jenkins to staff: adopt new values or leave Barclays’, The Daily Telegraph, 27 January, 2017. Available at: www.telegraph.co.uk/finance/newsbysector/banksandfinance/9808042/Antony-Jenkins-to-staff-adopt-new-values-or-leave-Barclays.html.
5. Gandz J et al. (2010), Leadership on Trial: a manifesto for leadership development. Ivey School of Business.


Anthony Fitzsimmons is joint author, with the late Professor Derek Atkins, of "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You"
  
This article was first published in the June/July 2017 edition of Financial World






Friday, 12 May 2017

WanaCrypt0r 2.0 Virus infects NHS and more

Large sections of the UK's National Health Service (NHS) were hit by a ransomware attack as were many other organisations worldwide.

According to the Financial Times, the virus was a weaponised development of the US National Security Agency's 'Eternal Blue' tool, part of a "highly classified NSA arsenal of digital weapons leaked online last year by a group called the Shadowbrokers".

WanaCrypt0r seems to have been distributed by the common route of an attachment to emails which were opened by numerous recipients who did not identify the attachments as suspicious.

The Guardian reported
"Many NHS trusts still use Windows XP, a version of Microsoft’s operating system that has not received publicly available security updates for half a decade, and even well-patched operating systems cannot help users who are tricked into running software deliberately."
and later:
"It’s our governments, via the intelligence agencies, that share a responsibility for creating vulnerabilities in our communication networks, surveilling our smart phones and televisions and exploiting loopholes in our operating systems,” said Dr Simon Moores, chair of the International eCrime Congress."
In an interview with Andrew Marr,
"Michael Fallon [was] forced to defend the Government's decision not to fund crucial updates for NHS computer systems, leaving them vulnerable to a global cyber attack which caused chaos at hospitals across the country."
The saving was apparently £5.5m for central government, money that could have been spent on keeping national support for XP in place in the NHS. Apparently there had been repeated warnings of the risks of running systems on an unsupported XP operating system, including a warning by Microsoft two months ago.


Brad Smith, Microsoft's president and chief legal officer, wrote:
"Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action."
According to Keren Elazari, the sectors where unsupported software systems are most prevalent are those where safety matters:
"healthcare, energy and transport; as well as finance and other industries where computer systems provide the foundations for modern functionality."

Assuming early reports are broadly correct, this attack raises behavioural, organisational, leadership and reputational risk issues.

Why are parts of the NHS using outdated, unsupported Windows XP? 

The obvious answer is cost-cutting by people who do not understand the consequences, in this case the risks of running out-dated, unsupported operating systems.  This now seems to include a Government minister who did not listen to advice on a subject he did not understand.

If so, this is a classic case of cost-cutting to produce a short term gain at the cost of a systemic weakness that goes on to cause great pain when the risk eventually manifests.  Cost-cutting in ignorance of the consequences is a risk that typically emanates from the highest levels of leadership and it regularly causes failures.

Why do NHS staff lack the training needed to operate an outdated, unsupported operating system?

It seems that NHS staff lacked the training to identify suspicious emails manually.  Candidates as causes of this state of affairs include:
  • Ignorant leaders who did not realise that cost-cutting on operating systems created cyber risks to which training might provide a partial solution;
  • Leaders who recognised the risks but would not provide training, for example because it would cost money they were not prepared to spend;
  • The possibility that no amount of training would be sufficient, though leaders either did not know this or did not care.
Leadership ignorance is an organisational and leadership risk that regularly causes failure.

Who else is using unsupported software in systemically important systems?  

These include supply chains for cash, food, power, water and the internet itself.  What potential consequences might there be for the public?

Intelligence agencies

The UK intelligence agency GCHQ, backed by the UK Home Office under Theresa May, has already inserted backdoors into many encryption systems and recently gained statutory authority to demand backdoors into encryption and other systems, including computers, phones, TVs and anything else containing software.  It has statutory authority to hack into computers and other devices worldwide and there can be little doubt that it, like the NSA, developed tools to achieve this years ago.  It also stockpiles vulnerabilities in operating systems, preventing companies like Microsoft from dealing with them.  As Brad Smith, Microsoft's president and chief legal officer, said:
“An equivalent scenario with conventional weapons would be the US military having some of its Tomahawk missiles stolen.”

No organisation can guarantee the security of valuable tools such as these against a determined external attacker or internal leaker.  These risks will always be greater than zero.

If surveillance and cyber-warfare tools escape into the hands of criminals or hostile state actors, the potential for harm will broadly be in proportion to the versatility of the tools and the creativity and motivation of users.  There can be no doubt that a determined, skilled and motivated group of hackers could design an event to cause great harm and outrage, just as al-Qaeda did with its carefully designed and planned "9/11" attack on the USA.  These are perfect weapons for the weak.

Given that there is a finite risk of cyber-warfare tools 'escaping', the question is whether intelligence agencies, and the politicians who ultimately control them, have considered the risks and consequences of the tools they develop being turned against their own countries and allies.  Even if the probability of theft of the tools is thought very low, a foolhardy assumption, the potential for harm to the public is unknowably great.

This is yet another example of the risks of balancing short term gains against the long term consequences of systemic weaknesses.  The problem with this balancing act is that it is rarely possible to quantify the consequences of systemic weaknesses, especially where deliberately caused harm is involved.  History shows that it is easy to overlook or underestimate them.  The problem is exacerbated by leaders' tendency to give more weight to imminent than to distant consequences.

As to the security services, the likelihood is that the current cyber attack will come to be seen as small beer.  When that happens, the reputation, and licence to operate, of the security agency whose software has been turned against its own state or a friendly state will be balanced on a knife edge.  Other security agencies will be at risk of collateral damage.

As to the NHS, a series of scandals of incompetence, catalogued by Richard Bacon in his book "Conundrum", has left the NHS and its leaders with a poor reputation for competence when it comes to IT.  If it eventually emerges that the NHS IT system had weaknesses that left it vulnerable to this attack, its reputation for competence will be damaged further.   Evidence emerging suggests that it will also leave the reputation of the minister who cancelled the IT support contract in tatters.

Background reading:  You can read more about how behavioural, organisational and leadership risks cause immense harm to seemingly solid organisations in "Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You".  Lord David Owen wrote of it:
"An exceptional book for learning at every level – whether you are a business school student or a chief executive; Prime Minister or a new recruit into the civil service."
You can read reviews of the book here.


Anthony Fitzsimmons
Reputability LLP
London

www.reputability.co.uk
www.rethinkingreputationalrisk.com

Wednesday, 4 January 2017

Financial Times reviews 'Rethinking Reputational Risk'

Stefan Stern has reviewed 'Rethinking Reputational Risk' for the Financial Times.

Introducing his review, Stern wrote:

"Th[is] book offers a thorough analysis of the many ways in which apparently unexpected crises can destroy businesses and reputations. Boards, chief executives and their managers may believe they have a firm grip on the risks they face. They should think again."

He continued:
"The book contains a series of detailed case studies of some of the best-known corporate crises of recent years .... The authors draw more than 30 lessons from their schadenfreude-free research."
before concluding:
"Businesses and executives are therefore vulnerable on a number of levels. They would do well to reflect on the serious messages contained in this well-argued book."
You can read more reviews here.

You can read more about 'Rethinking Reputational Risk' here.

You can buy copies from the publishers here.

Thursday, 3 November 2016

Rethinking Reputational Risk



For too long, there has been an unspoken assumption in traditional risk management and regulation that organisations are quasi-mechanical and that decision making is essentially rational.  The implication is that if you can devise the right rules, risks will disappear because people will respond to them logically.

In truth, all organisations consist of real people who exhibit the range of normal human feelings, emotions and behaviours and have individual characters.  These, and well-understood mental short-cuts and biases, are as important as strict logic in making decisions in the real world.  Real people constantly react to real life in ways that, whilst predictable, are not strictly rational.  It is those who lack these feelings and emotions who are unusual, not those who exhibit them.  

You visit the baker to be faced with an aromatic array of fresh bread.  Do you rigorously compare the nutritional content of each loaf, run quality tests and carry out a price and product comparison with other bakers in the vicinity (not forgetting transport and opportunity costs) that, if you are strictly rational, you ought to consider?
Of course you don’t.   You follow your eyes, nose and feelings rapidly to choose what you feel is the best choice: today a bag of bagels; tomorrow scented spelt scones; and if you like sweet things you may scoff the scrummy sugared doughnut you know you should shun.  If you stuck to strict logic, the baker’s shelves would be empty by the time you made your decision.

Feelings and emotions are an important element in normal decision-making, and this is true of all normal people in all contexts – including the most intelligent and respected business leaders in their work.  Unfortunately the emphasis on ‘homo economicus’, economists’ rational, benefit-maximising model of man  leads many leaders to assume this crude model represents reality, a double danger if they do not realise the extent to which their own decision-making depends on feelings and emotions. 

Real people use what behavioural economists and psychologists call heuristics and biases in making decisions.

Heuristics are mental short cuts that we all use to simplify decision-making.  There are dozens of them, working beneath our consciousness.  For example, there is evidence that where we recognise one of a number of choices but have no better information, we are likely to set less store by the option we do not recognise.  That is the recognition heuristic.
Then there are biases. An important bias is the ‘optimistic bias’.  As healthy humans we tend to delude ourselves that bad events are less likely to happen than good ones.  And we tend to attribute positive events to our skill and adverse ones to ‘them’ or bad luck: the ‘self-serving bias’.
Many more heuristics and biases provide the numerous unrecognised assumptions and short-cuts that make the life of a normal person – well, normal.

This matters.  People and the way they behave are what can make organisations great.  But one of the insights to emerge from our work on “Roads to Ruin”, the 2011 Cass Business School report for Airmic (we were two of the report’s four authors), is that people are almost always at the root of why organisations are derailed.  They are implicated twice over: first because individual and collective human behaviour, most of it perfectly normal and predictable, lies at the root of most crises; and then because it regularly tips potentially manageable crises into unmanageable reputational calamities.

We have since established that seniority amplifies the consequences of behaviour for good or ill, so that, other things being equal, behavioural and organisational risks related to leaders typically have far more serious consequences than analogous errors lower down the hierarchy.

Unfortunately this area of risk is not systematically recognised by classical risk management.  Some areas of people risk are captured by looking at process safety, but this leaves huge gaps.  And a restricted view of reputational risk has left large areas of risks to reputation doubly unprotected.  Leaders and risk professionals have a structural blind spot that leaves the organisation – and its leaders – predictably vulnerable.

We have solved the problem by rethinking reputation and reputational risk – and so can you.  The Financial Times lexicon defines reputation as: “Observers’ collective judgments of a corporation based on assessments of financial, social and environmental impacts attributed to the corporation over time”; and there is much bickering over the nature of reputational risk.   

Whilst the FT definition is good in parts, it is too narrow.  We prefer the deceptively simple: 

“Your reputation is the sum total of how your stakeholders perceive you.”

Think about it and you will find its hidden depths.  One is that you lose your reputation when stakeholders come to believe, rightly or wrongly, that you are not as good as, or are worse than, they previously believed you to be.  That leads to our definition of reputational risk: 

“Reputational risk is the risk of failure to fulfil the expectations of your stakeholders in terms of performance and behaviour.”

Many ‘performance’ failures are caught by enterprise risk management; but few risks from behaviour or organisation are captured.  The result is that risks that both cause crises and destroy reputations are not captured, so they remain unmanaged. Worse, the research shows that behavioural and organisational risks can take many years to emerge.  In the meantime, leaders think all is well when, helped by the self-serving bias, they have been fooled into complacency by what is, in truth, a run of good luck; and they have lost the opportunity to deal with potentially lethal unrecognised risks before they cause harm.
 
And as Richard Feynman, the late lamented Nobel laureate who uncovered the people risks that caused NASA’s Challenger disaster, said: “The first principle is that you must not fool yourself; and you are the easiest person to fool.”

Professor Derek Atkins
Anthony Fitzsimmons


“Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You” will be published on 3 January.  You can read reviews of the book at www.koganpage.com/reputational-risk. For a limited time you can (pre-)order the book there at a 20% discount: use code ABLRRR20.

This blog is based on an article first published in Management Today.

 

Friday, 18 December 2015

Blind Reliance on Models: a Recipe for Trouble


 



We are delighted that Professor John Kay has allowed us to reprint this column, on the dangers of relying on models to predict what is beyond their power to predict.
 





As the global financial crisis began to break in 2007, David Viniar, then chief financial officer of Goldman Sachs, reported in astonishment that his firm had experienced “25 standard deviation events, several days in a row”. Mr Viniar’s successor, Harvey Schwartz, has been similarly surprised. When the Swiss franc was unpegged last month, he described Goldman Sachs’ experience as a “20-plus standard deviation” occurrence.


Assume these experiences were drawn from the bell-shaped normal distribution on which such claims are generally based. If I were to write down as a percentage the probability that Goldman Sachs would encounter three 25 standard deviation events followed by a 20 standard deviation event, the next 15 lines of this column would be occupied by zeros. Such things simply do not occur. So what did?
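
To give a rough sense of the scale involved, here is a minimal sketch (ours, not part of Professor Kay’s column) of what such figures imply if daily moves really were independent draws from a normal distribution; it uses SciPy’s norm.logsf to keep the vanishingly small probabilities representable:

```python
# A rough illustration, not part of the original column: how improbable
# "25 standard deviation" moves are if daily returns really were independent
# draws from a normal distribution (the assumption being questioned here).
import math
from scipy.stats import norm

def log10_tail_prob(sigmas: float) -> float:
    """Base-10 logarithm of P(Z > sigmas) for a standard normal Z."""
    return norm.logsf(sigmas) / math.log(10)  # logsf avoids underflow

p25 = log10_tail_prob(25)      # a single 25 standard deviation day
p20 = log10_tail_prob(20)      # a single 20 standard deviation day
combined = 3 * p25 + p20       # three 25-sigma days followed by a 20-sigma day

print(f"P(25-sigma day)      ~ 10^{p25:.0f}")       # about 10^-138
print(f"P(20-sigma day)      ~ 10^{p20:.0f}")       # about 10^-89
print(f"P(all four together) ~ 10^{combined:.0f}")  # about 10^-501
```

The exact figures matter less than their scale: under the normal assumption, such a run of events simply should not happen.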

The Swiss franc was pegged to the euro from 2011 to January 2015. Shorting the Swiss currency during that period was the epitome of what I call a “tailgating strategy”, from my experience of driving on European motorways. Tailgating strategies return regular small profits with a low probability of substantial loss. While no one can predict when a tailgating motorist will crash, any perceptive observer knows that such a crash is one day likely.

Some banks were using “risk models” in which volatility was drawn from past daily movements in the Swiss franc. Some even employed data from the period during which the value of the currency was pegged. The replacement of common sense by quantitative risk models was a contributor to the global financial crisis. And nothing much, it seems, has changed.

It is true that risk managers now pay more attention to “long-tail” events. But such low-probability outcomes take many forms. The Swiss franc revaluation is at one end of the spectrum — a predictable improbability. Like the tailgater’s accident this is, on any particular day, unlikely. Like the tailgater’s accident, it has not been observed in the historical data set — but over time the cumulative probability that it will occur becomes extremely high. At the other end of the spectrum of low-probability outcomes is Nassim Taleb’s “black swan” — the event to which you cannot attach a probability because you have not imagined the event. There can be no such thing as a probability that someone will invent the wheel because to conceive of such a probability is to have invented the wheel.
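
The “predictable improbability” can also be put into rough numbers. The sketch below is our illustration rather than the column’s, and the daily probability is invented purely to show the arithmetic of how an event that is unlikely on any one day becomes close to inevitable over enough days:

```python
# Our illustration of a "predictable improbability": the daily probability
# below is invented purely to show the arithmetic.
daily_prob = 1 / 2000           # assumed chance of the "crash" on any one day

for days in (250, 1000, 2500):  # roughly 1, 4 and 10 trading years
    at_least_once = 1 - (1 - daily_prob) ** days
    print(f"{days:>5} days: P(at least one crash) = {at_least_once:.1%}")
```

The date of the crash cannot be predicted, but its eventual occurrence can.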

But most of what is contingent in the world falls somewhere in between. We can describe scenarios for developments in the stand-off between Greece and the eurozone, or for the resolution of the crisis in Ukraine, but rarely with such precision that we can assign numerical probabilities to these scenarios. And there is almost zero probability that any particular scenario we might imagine will actually occur.

What Mr Viniar and Mr Schwartz meant — or should have meant — is that events had occurred that fell outside the scope of their models. When the “off-model” event was the breakdown of parts of the wholesale money market in 2007, their surprise was just about forgivable: in the case of the Swiss revaluation, to have failed to visualise the possibility is rank incompetence.

Extremes among observed outcomes are much more often the product of “off-model” events than the result of vanishingly small probabilities. Sometimes the modellers left out something that plainly should have been included. On other occasions they left out something no one could have anticipated. The implication, however, is that most risk models — even if they have uses in everyday liquidity management — are unsuitable for the principal purpose for which they are devised: protecting financial institutions against severe embarrassment or catastrophic failure.

First published in the Financial Times.