About Me

Reputability LLP are pioneers and global leaders in the field of behavioural and organisational risk. We help business leaders to find the widespread but hidden behavioural and organisational risks that regularly cause reputational disasters. We also teach leaders and risk teams about these risks. Here are our thoughts, and the thoughts of our guest bloggers, on some recent stories which have captured our attention. We are always interested to know what you think too.

Wednesday, 6 May 2015

Reputational risk

Reputations are universally seen as valuable, but reputational risk is poorly understood.  As a result, reputations are left unnecessarily at risk.

Historically, risk managers and internal auditors struggled to define reputational risk. Some saw it as the ultimate result of the failure of the organisation to manage other risks properly. Others saw reputational risk as being a separate category of risk in its own right.  What united both groups, and business leaders, was the view that reputational risk was the most serious risk facing their organisation; and that they had to avoid the kinds of outcomes that had regularly plagued and destroyed reputations in the past.

As an example, experience has shown that if a clothing company sources stock from a supplier that uses child labour, pays what consumers see as exploitative rates of pay or provides dangerous working conditions, the company's reputation will be at risk when consumers, and their proxies the media, find out.  Companies that might face this or analogous problems regularly recognise this kind of source of reputational risk.

'Roads to Ruin', the Cass Business School report for Airmic, shows that this approach is fundamentally inadequate.  Reputational damage does indeed happen when an organisation fails to manage other risks properly. But when root causes are considered, the deeper insight is that reputations are usually lost when stakeholders come to believe that the organisation is not as “good” as they previously thought.

So what is reputational risk?  To arrive at a sound answer, we need first to ask what reputation is.  A useful working definition is:

"Your reputation is the sum total of how your stakeholders perceive you"
This definition emphasises four points.
  • Your reputation is about how you are perceived, which is not necessarily the same as how you really are;
  • Your reputation is not about how you perceive yourself; it is about how your stakeholders perceive you;
  • As it is your stakeholders who hold that critical perception, if your stakeholders come to perceive you in another way, your reputation changes; and
  • That 'sum total' may vary depending on which stakeholders are most influential at any particular time.
The definition does not say which perceptions matter; but the research clearly illustrates that reputations are lost when stakeholders come to think that you are not as 'good' as they thought you were.  When it comes to organisations and their leaders, what matter most are characteristics such as ethos, culture, trustworthiness, honesty, humanity and competence, as well as whether the organisation itself is coherent or dysfunctional.

In our experience, a good working definition of reputational risk is therefore:
“Reputational risk is the risk of failure to fulfil the reasonable expectations of all of your stakeholders in terms of performance and behaviour”
This definition emphasises the root causes of reputational damage, which are all to do with performance and behaviour.

Thus the damage in the clothing company example may, superficially, be due to child labour, exploitative rates of pay or dangerous working conditions.  But looking through those immediate causes to root causes, the use of very cheap labour may emerge from the strategy of the company (e.g. buy as cheaply as we can), the ethos of the company (e.g. source cheaply and don’t ask questions), internal incentives (we encourage saving cost above ethicality), a leadership which doesn’t think about ethicality at all, or other individual or collective behaviours or features of the way the organisation is put together.  Understanding those root causes, and dealing with them, will not just prevent a recurrence of the same problem but will prevent new problems with similar root causes.  That is how aviators have made commercial aviation so safe that the most dangerous leg of a long overseas trip is the journey to the airport.

This insight is now widely recognised.  It lies at the root of the latest Financial Reporting Council Guidance on Risk; and at the root of the growing emphasis by financial regulators on human behaviour as the origin of all financial failures.

The challenge is for organisations to find these often deep-rooted risks before they cause harm.

Our experience is that most business leaders are unaware of these risks and their implications.  So too are many in risk teams.  This is because behavioural and organisational risks are recent additions to the risk lexicon and not all risk professionals yet understand them.

That is why the latest FRC risk guidance explicitly sets out to ensure that boards and risk teams learn about these risks as a prelude to finding and dealing with them.  With the right kind of education and evaluation, these lethal but under-recognised vulnerabilities can be understood, found and fixed  before they cause harm.

Anthony Fitzsimmons
Chairman
Reputability LLP
London
www.reputability.co.uk





Monday, 13 April 2015

Vulnerability Evaluations in Investment


Is it useful for investment managers to know which companies are intrinsically vulnerable to crises?

Asked to answer "yes or no", there can only be one answer.  But posing the question so narrowly hides more interesting issues.  How might an investment manager use the knowledge?  And, more fundamentally, is it really possible to estimate corporate vulnerability from the outside?




Usefulness was recently highlighted by the FT's Judith Evans.  In practice the answer varies depending on whether the funds are active or passive.

Active managers are typically driven by investment returns.  A recent FT Opinion asserted:
"On average, 74 per cent of remuneration is paid in cash, and tied to outperforming an annual stock market benchmark. The result is an obsession with next quarter’s earnings rather than the next 10 years"
Helena Morrissey of Newton Investment Management spotted weakness at Northern Rock and sold out before it crashed, a natural reaction for a manager driven by short term horizons.

Since then, John Kay's Review of Equity Markets concluded that promoting good corporate governance is a core function of equity markets.  Morrissey buys this; but it left her wondering whether she should have tried to fix, rather than ditch, Northern Rock.  Either way Morrissey would undoubtedly use information about corporate vulnerability.

Selling out is one thing.  Stock selection is another.  Here we have encountered growing interest in the idea of screening companies for behavioural and organisational vulnerabilities and strengths that are visible, if not widely recognised, from the outside.  Many fund managers would like to understand, in advance, who seems most vulnerable to a bad fall from grace.

The externally apparent vulnerability of BP in the years before Deepwater Horizon blew up makes a telling example.  Examples of companies with hidden strengths are less common, perhaps because leaders typically trumpet strengths but gloss over, or can't see, weaknesses.

An example reported in the press recently concerns the way that Ardevora, an investment boutique, screens investments to buy or short by looking at behavioural insights such as
"favouring companies whose chief executives have little scope to meddle in the belief that [CEOs] are prone to excessive risk taking due to their "egotistical" personality type and skewed remuneration structures".

Index funds are in a different position.  As Vanguard's Glenn Booraem has pointed out:
"We have no choice but to stick with the companies we invest in because they are in the indices we track.  We can’t sell like an active manager can. We are permanent shareholders. It is like riding in a car that we can’t get out of.”
Different incentives lead to different behaviour.  Index fund managers, and the savers and pensioners they serve, have a greater interest in the long term performance of their portfolios, and investors may come to see encouragement of good management as a differentiator between passive funds.  This is probably one reason for Booraem's active stance on the long term:
"We want to make sure that the vast majority of boards and management teams are focused on long-term value objectives, like we are. As permanent shareholders, we have a key role to play in stewardship and responsible investing."
But what of the fundamental question: is it possible to evaluate corporate vulnerability from the outside?

Our process for evaluating corporate vulnerability from the inside begins with a preliminary but systematic evaluation based on publicly available sources.  It looks for symptoms of the behavioural and organisational risks that regularly tip crises into reputational calamities as well as causing crises in the first place.

Whilst the confidence level attached to such an evaluation is inevitably lower than for an evaluation carried out with extensive inside knowledge, our experience is that an external evaluation is both possible and useful.  It can provide detached insights into the company and identify important risk areas where information is lacking.
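
To make the idea of a 'preliminary but systematic evaluation' concrete, here is a minimal sketch of a screening scorecard built from public sources.  The symptoms, weights and scoring below are illustrative assumptions made for this post, not Reputability's actual criteria or methodology.

```python
# Illustrative sketch only: the symptoms, weights and scoring below are
# assumptions for this post, not Reputability's actual screening criteria.

# Warning signs one might look for in public sources (annual reports,
# press coverage, regulatory findings), each with a rough weight.
SYMPTOM_WEIGHTS = {
    "dominant_unchallenged_ceo": 3,
    "board_lacks_relevant_experience": 2,
    "incentives_reward_short_term_results": 2,
    "history_of_near_misses_or_minor_scandals": 2,
    "complex_opaque_group_structure": 1,
}

def screen_company(observed_symptoms, unknown_symptoms):
    """Return a crude vulnerability score plus the information gaps left open.

    observed_symptoms: symptoms judged present from public sources.
    unknown_symptoms:  symptoms on which public information is lacking.
    """
    score = sum(w for s, w in SYMPTOM_WEIGHTS.items() if s in observed_symptoms)
    gaps = [s for s in SYMPTOM_WEIGHTS if s in unknown_symptoms]
    return {
        "score": score,
        "max_score": sum(SYMPTOM_WEIGHTS.values()),
        "information_gaps": gaps,   # areas a shareholder could probe directly
    }

# Example: two symptoms visible from the outside, one area with no information.
print(screen_company(
    observed_symptoms={"dominant_unchallenged_ceo",
                       "incentives_reward_short_term_results"},
    unknown_symptoms={"board_lacks_relevant_experience"},
))
```

The 'information_gaps' output matters as much as the score: as the next paragraph notes, an external evaluation identifies important risk areas where information is lacking, which the recipient can then watch and probe.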

With an external evaluation in hand, its recipient can watch for information that confirms, modifies or fills gaps in the initial evaluation.  And a major shareholder, such as a Vanguard or a Newton, can systematically probe areas of uncertainty as opportunities arise in discussions with the CEO and board members. 

Vulnerability evaluations are potentially useful investment tools for both active and passive fund managers.  Whilst their use is in its infancy, we believe it is set to grow.

Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk


Thursday, 9 April 2015

John Kay on Learning from Mistakes



We are delighted that Professor John Kay has allowed us to reprint this column, first published in the Financial Times, which discusses an important organisational risk: inability and unwillingness to learn from mistakes.

Having just booked a flight to Berlin in June with Germanwings, I thought it might be useful to explain why.

Air travel is extraordinarily safe. About 1,300 people died in commercial aviation accidents last year, the highest figure in a decade. Almost half of these were victims of the two incidents involving Malaysia Airlines. Tyler Brûlé has described the fear of flying that led him to abandon a flight to London after hearing about last week’s Germanwings Airbus crash in the French Alps. The chance of a man of Mr Brûlé’s age, 46, dying in any two-hour interval is about one in 1m. There is an additional one in 5m chance of being killed during a two-hour flight. On the other hand, sitting in an aircraft protects you from many more common causes of death, such as a car crash or a fall down stairs.

Despite the continued growth in traffic, aviation deaths have been declining. Improvements in aircraft design have reached a stage at which it is almost inconceivable that a major incident will be the result of a mechanical failure. Modern Airbus and Boeing models “fly by wire”, which means that every action by a pilot is mediated by a computer and most of the time aircraft literally fly themselves. Chesley Sullenberger’s 2009 “miracle on the Hudson” landing was an exceptional feat of skilled aviation — but as his Airbus landed on the river it was the machine, not the pilot, that had selected the gliding speed and angle.

The dangerous moments on board an aircraft are when the pilot is overriding the electronic systems. They may do so for good reason but with bad outcomes, as when the crew of Air France 447 from Rio to Paris misjudged their response to adverse weather conditions and lost the plane in the Atlantic; or with malevolent intent, as in the Germanwings incident. Passengers should worry, not that the crew are not in control but that they are.

But another reason modern air travel is reassuringly safe is that investigation into accidents is honest and thorough. Airlines and aircraft manufacturers do not like the public exposure of defects in their products; but they have tended to respond by addressing the defects rather than resisting the exposure. And generally accident investigators have been allowed to do a thorough job without political interference.

The most notable exception was the attempt by Egypt, under the dictatorship of Hosni Mubarak, to influence the findings of US investigators into the loss of EgyptAir 990 in 1999. The passenger jet disappeared into the north Atlantic under the control, like the Germanwings aircraft, of a pilot widely thought to have been suicidal. And the full truth of the downing of Malaysia Airlines MH17 over Ukraine is never likely to emerge. But when French President François Hollande, German Chancellor Angela Merkel and Spanish Prime Minister Mariano Rajoy travelled to the crash area last week, they went to show concern and establish what had happened rather than to deflect blame.

This is the behaviour we are entitled to expect in any industry: but it is not what we generally see. The bodies that regulate drug safety do not enjoy the same protection from lobbying as air accident investigators. And so the pharmaceutical industry has largely lost the public trust achieved by the “just culture” of the airline industry, which is more concerned to encourage openness than to attribute blame.

And the contrast with finance could hardly be greater. It is unimaginable that we might have had a dispassionate investigation of the financial crash of 2008. Nobody died in that crash — but to avoid mistakes in the future it is first necessary, in any given situation, to undertake honest assessment of the mistakes of the past. That is why our planes are growing safer and our finances are not.

First published in the Financial Times on 1 April 2015. 
© John Kay 2015  http://www.johnkay.com


Monday, 16 March 2015

De-risking Bonuses

Matching long term incentives to long term risks sounds easy until you consider that long term risks can fester for decades whereas a typical long term incentive has run its course within a few years.

Banks have started to address the issue.  Goldman Sachs requires top management to hold up to 75% of bonuses as share awards until they leave the company, with many senior staff similarly obliged to hold 25% of any bonus award as shares until retirement. This leaves leaders with the temptation to time their departure, as they are free to realise their career-long accumulation of shares as soon as they have left.

HSBC has followed Goldman’s example, requiring bonus shares to be held to retirement with a clawback arrangement.  UBS not only has a bonus/malus system but pays most senior bonuses in bonds and shares and requires Executive Board members to hold at least 350,000 shares and the CEO to hold 500,000. At about twenty Swiss francs apiece, this means holding from 7 to 10 million Swiss francs in UBS shares. And Credit Suisse pays part of bonuses in "bail-in-able” bonds that have to be held for 3 years and can be converted to equity or wiped out in the case of trouble.

But none of these arrangements matches the incentive time scale to the period at risk after top management leave, a time when longer term risks can still come home to roost.

In their insightful new book "Risky Rewards", Professor Andrew Hopkins and Sarah Maslen have proposed a solution that includes a 'malus' element that can be set to run for a period after departure to match the duration of post-departure risks.

The core of the scheme is a unitised trust fund into which a proportion of all senior staff bonuses, long or short term, are paid.  On payment in, participants receive units in the fund at their current value.  In the meantime the fund is invested in assets that are not connected with the employer.

The employee or former employee may only cash in their units after the deferral period.  This is done at the unit value prevailing at the time.  However the first charge on the trust fund is to pay the company compensation in the event of defined catastrophic events. 
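
A minimal sketch of how such a fund might operate is below.  The unit pricing, growth and compensation mechanics are simplifying assumptions made for illustration; Hopkins and Maslen's proposal is described only in outline above.

```python
# Simplified sketch of a unitised deferred-bonus trust fund with a 'malus'
# first charge, loosely following the outline above.  All figures and the
# detail of the compensation mechanism are illustrative assumptions.

class DeferredBonusFund:
    def __init__(self, unit_price=1.0):
        self.unit_price = unit_price      # current value of one unit
        self.holdings = {}                # participant -> units held
        self.year = 0

    def pay_in(self, participant, amount):
        """A slice of a bonus buys units at the current unit price."""
        units = amount / self.unit_price
        self.holdings[participant] = self.holdings.get(participant, 0.0) + units

    def grow(self, annual_return):
        """The fund is invested in assets unconnected with the employer."""
        self.unit_price *= 1 + annual_return
        self.year += 1

    def compensate_company(self, loss):
        """First charge on the fund: pay the company after a defined
        catastrophic event, diluting every participant's unit value."""
        total_units = sum(self.holdings.values())
        fund_value = total_units * self.unit_price
        payout = min(loss, fund_value)
        if total_units:
            self.unit_price = (fund_value - payout) / total_units
        return payout

    def cash_out(self, participant, deferral_ends_year):
        """Units may only be cashed in after the deferral period,
        at the unit value prevailing at that time."""
        if self.year < deferral_ends_year:
            raise ValueError("still within the deferral period")
        units = self.holdings.pop(participant, 0.0)
        return units * self.unit_price


fund = DeferredBonusFund()
fund.pay_in("chief_executive", 500_000)   # a portion of one year's bonus
for _ in range(5):
    fund.grow(0.04)                       # five years of assumed 4% returns
fund.compensate_company(100_000)          # a defined catastrophic event occurs
print(round(fund.cash_out("chief_executive", deferral_ends_year=5)))
```

The deferral_ends_year parameter is where the matching discussed below comes in: a less influential employee might have a short deferral, a chief executive one running several years past retirement.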

The authors plausibly suggest that such a scheme would create a highly motivated corps of trusted, knowledgeable and experienced former senior staff keen to provide practical advice to current decision-makers.  Those people would also know where any bodies were buried.

Such a scheme should also reduce sharply any temptation felt by senior executives to focus on short term profit (for example by cutting investment or maintenance or turning a blind eye to unethical sales) at the expense of the long term success of the firm.  This tension between long term success and short term profit can become even more acute as a leader approaches departure.

In a scheme of this kind, the deferral period can be matched to the influence and seniority of the individual.  Thus a less senior employee with less influence might have a smaller proportion of his or her bonuses paid into the scheme, and the deferral period might be shorter.

In contrast, a highly influential individual severely exposed to the temptation to cut corners or save cost to garner short term profit (and a bigger bonus) at the expense of exposing the organisation to long term catastrophic risks might have a substantial portion of her bonuses paid into the scheme, and payment out might be deferred for a long period, which could run until several years after her retirement.  The ideal deferral period would depend on the possible latency period of potentially catastrophic risks to which the company might be exposed.

Another practicality is the issue of compensation for the company if something goes wrong.  This has two parts: defining 'what goes wrong' and quantifying compensation.

As to defining the event, Hopkins and Maslen were writing in the context of the oil industry.  There the major accident they most had in mind was perhaps some kind of fire or explosion.  However, taking that example, oil companies do other things: for example many have trading arms which might give birth to a rogue trader, whose ability to operate can be as much the product of systemic risk - perhaps a neglected culture or corners cut on control systems - as any oil accident.  This illustrates that the 'event' needs to be defined broadly if it is to capture all potentially catastrophic events driven by systemic forces.

As to compensation, this is perhaps more complicated than Hopkins and Maslen envisage.  BP, with whose travails Hopkins is particularly familiar, is unusual in that it did not buy insurance.  The result was that all losses resulting from the systemic failures at BP at Texas City and in the Gulf of Mexico were paid for by BP and thus suffered by its shareholders.

More typically, the consequences of a major physical accident will be insured at least in part, with only a modest part of the costs falling on the company.  However, reputational damage is a widespread consequence of catastrophic events.  It may not be insurable, let alone insured, but it is most acutely felt by shareholders through the share price.  If a company comes to be seen as dysfunctional or its management comes to be seen as ineffective, the share price will fall.  The scheme would need to reflect this kind of loss to shareholders.  Making good shareholders' loss through a payment to the company confers a benefit directly on the company and indirectly on shareholders.

Indeed such a fund could come to be seen as the indirect insurer of damage to the reputation of the company resulting from the actions or inactions of senior management, important behavioural and organisational risks with which readers of this blog will be very familiar.  Insurance of reputational risk is something of a 'Holy grail' for many corporate leaders and this could provide a practical solution to which insurers might then be prepared to add greater depth.

This interesting idea needs more thought and development before it, or something like it, becomes a practical proposition. We would welcome readers' comments on how it might be improved and, importantly, made attractive to Chief Executives and Finance Directors.

In the meantime we welcome "Risky Rewards".  It is a readable, research-based analysis and explanation of many risks inherent in incentive and bonus schemes.  It ranges over subjects as diverse as whether and when bonuses and other incentives actually work, how to target them effectively and how to avoid unintended consequences.  And while its roots are in the world of physical accidents, its insights are relevant to all organisations that are run by people. 


Anthony Fitzsimmons
Chairman
Reputability LLP
London
www.reputability.co.uk




Sunday, 15 March 2015

Latency in Systemic Risks

Behavioural, organisational and process risks typically lie latent for years, even decades, before they erupt to cause a serious accident or reputational damage.  This is the finding of the latest analysis from Reputability.

We analysed two sets of data to ascertain how long the root cause lay latent before it caused harm or damage.

The first set of data was taken from the 24 accidents and crises analysed in 'Roads to Ruin', the Cass Business School report for Airmic.  Almost all the events analysed occurred between 1999 and 2009 and almost all had their origins in what we now call behavioural and organisational risks.

To this we added a set of 12 major UK accidents analysed by Turner and Pidgeon in their seminal book "Man-Made Disasters" (1997).  These occurred between 1966 and 1975 and included the Aberfan disaster, the Coldharbour Hospital fire, the Hixon level crossing disaster, the Summerland leisure centre fire, the 1973 London smallpox outbreak and the Flixborough explosion, all of which counted behavioural or organisational risks among their root causes.

Since accidents and disasters rarely have a single cause, we, like Turner and Pidgeon, considered when the root causes began to develop and accumulate, unrecognised by those in authority.  This inevitably involves estimates.  Turner and Pidgeon grouped their accidents into bands ranging from "less than one month" through "3 to 8 years" to "about 80 years".  We did likewise.

The graph below does not show the last two categories - incidents where the root causes had been at work for more than 20 years.  The graph therefore ends with 94% of events having emerged by 20 years.

[Graph: cumulative percentage of crises emerged by N months]


The results show:
  • Only 45% of crises had manifested within 3 years;
  • 30% emerged within 3 to 8 years;
  • 25% took longer than 8 years to emerge; and
  • 6% had yet to emerge after 20 years' incubation.
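
For readers who want to reproduce this kind of cumulative picture from banded latency data, the short sketch below shows the calculation.  The event counts per band are invented purely to illustrate the arithmetic; they are not the underlying Cass or Turner and Pidgeon figures.

```python
# Illustrative only: the counts per latency band are invented to show the
# calculation and are not the underlying data from the two studies.

latency_bands = [       # (upper bound of band in years, number of events)
    (3, 16),
    (8, 11),
    (20, 7),
    (None, 2),          # root causes still incubating after 20 years
]

total = sum(count for _, count in latency_bands)
emerged = 0
for upper, count in latency_bands:
    if upper is None:
        continue        # not yet emerged, so never enters the cumulative line
    emerged += count
    print(f"emerged within {upper:>2} years: {100 * emerged / total:.0f}%")
```

With these invented counts the cumulative line reaches roughly 44%, 75% and 94%, the same shape as the results above.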


These very long incubation periods for what are mainly, possibly all, unrecognised behavioural and organisational risks have important implications for risk managers and boards.

First, the long delay before damage emerges from these slowly incubating risks allows an organisation to appear successful for long periods.  The unfortunate truth is that the organisation may be suffering from the delusion that all is well because nothing yet appears to have gone badly wrong.  Boards and risk managers should guard against this complacency, which our recent research has illustrated.

The emergence of child abuse in churches and other respected institutions around the world, and the Deepwater Horizon explosion, are recent examples of this delusion that all is well and under control.

Second, this long latency period has implications for how incentives, particularly so-called long term incentives, are structured.  We have discussed that problem here.


Anthony Fitzsimmons
Reputability LLP
London
www.reputability.co.uk




Monday, 2 February 2015

Complacency - a Behavioural and Organisational Risk

Robert Shrimsley, one of the FT's satirical columnists (when he isn't managing FT.com), wrote about a recent Mayfair dinner hosted, I suspect, by Edelman to promote its Global Trust Barometer.  The Barometer is a valuable institution that has been going for 15 years.  We have written about it in the past and we expect to return to it.

The dinner was attended by former ministers, former and current CEOs and senior journalists (such as him).  They made a well-educated, well-heeled cohort of 'serious concerned people'.

One of the Barometer's findings was a drop in trust in leaders, institutions and elites.  The decline in trust in the CEO as a credible spokesperson continued for the third consecutive year, with trust levels now at 31 percent in developed markets. Globally, CEOs (43%) and government officials and regulators (38%) continue to be the least credible sources for information.  CEOs lagged far behind academic or industry experts (70%) or "people like me" (60%).

The reaction of those diners is interesting.  Shrimsley did not mention anxious discussion of the possibility that the Barometer might be onto something fundamental - for example that elites' collective behaviour leads outsiders to regard elites as untrustworthy - let alone soul-searching to understand why people like those present are seen as so untrustworthy.  On the contrary, Shrimsley summed up the event as
"an evening of high-level hand-wringing, the kind of self-reinforcing event that goes on all over London most weeks."
We recently carried out a poll on perceptions of behavioural and organisational risk.  The cohort consisted of almost 100 company secretaries and senior in-house lawyers, who had recently completed an intense training session we had run to introduce them to behavioural and organisational risk and its tragic reputational consequences.  These were people who often know more about 'where bodies are buried' than most.

We asked two pairs of questions, separating them as much as we could (which wasn't much) to reduce the effect of 'anchoring bias'. 

The first pair of questions concerned the extent to which behavioural and organisational risks were understood across business generally and in their own organisation. 

The results clearly imply that those present thought that their own organisation deals with behavioural and organisational risks better than organisations in general do.

Our second pair of questions produced a similar pattern.  

Whilst not suggesting a solid working knowledge, those present clearly thought that their own board understood these risks rather better than boards in general.

In his second 2014 Reith Lecture, Atul Gawande looked at analogous behaviour in a different context: surgery.  After introducing a checklist system for surgeons, modelled on those used by airline pilots for both routine and emergency procedures, he surveyed surgeons on their attitudes.

Whilst most surgeons had become very happy to use the checklist system, about 20% really disliked it even after three months' use.  So he asked those who really disliked the system whether, if they were to have an operation, they would wish their surgeon to use such checklists.  94% wanted their surgeon to use the checklists!  The implication was clear: I don't need such a system but everyone else sure does!

Drivers fit a similar pattern: apparently almost all of us think we are above-average drivers!

This is a very common behaviour.  Shrimsley mentioned confirmation bias.  He might also have mentioned the availability heuristic, optimism bias and above all superiority bias and the overconfidence effect.  All are widespread behavioural phenomena and all lead to a corresponding behavioural risk that we can summarise as complacency risk.

Add herding behaviour and the effects of social norms, and 'groupthink', a dangerous organisational risk, is the inevitable result. 

At a dinner party, the consequences are likely to be dull and self-reinforcing conversation because participants' knowledge and experiences all come from the same box.  Trapped inside, they may be unable to see they are in a box, let alone see the box from the outside; still less can they examine the beliefs and assumptions in and around the box.

Translated to the leadership teams of companies, governments and other organisations, the effect of these relatively homogeneous groups (which probably see themselves as diverse) is as predictable as it is devastating.  Whilst these behavioural and organisational risks are predictable, and blindingly obvious to an outsider given access to insiders' knowledge, they commonly lie unrecognised and untreated for long periods before they blow up.  We plan a blog with our latest research on this, but you already know why this is so.

The challenge for leaders is to understand not just how others see them but how others would see them if they had an insider's knowledge. 

Or as Robert Burns eloquently put it:

O wad some Pow'r the giftie gie us
To see oursels as others see us!
It wad frae mony a blunder free us,
An' foolish notion!
Anthony Fitzsimmons
Reputability LLP
London