The late Andrew Grove, a founder of Intel, was well known for his guiding motto that ‘only the paranoid survive’ and published a management book of the same title (1996). Grove is famously quoted as saying that “Success breeds complacency. Complacency breeds failure. Only the paranoid survive”.
What is complacency? How does paranoia help? And how do these concepts relate to major incidents such as Nimrod?
Following two serious in-flight anomalies during the launch of a Space Shuttle in July 1999, a Shuttle Independent Assessment Team (SIAT) was commissioned to review various aspects of the Shuttle program. This review came more than a decade after the loss of the NASA Shuttle Challenger, which broke apart 73 seconds into its flight 30 years ago, on 28 January 1986. The team’s report, issued in March 2000, identified significant problems that it considered must be addressed in order to maintain an effective Shuttle program.
In their report, the assessment team used the phrase ‘success-engendered safety optimism’ and stated that the Shuttle program “must rigorously guard against the tendency to accept risk solely because of prior success” (SIAT, 2000, p.2). The team noted several instances of an environment that accepted known risks, largely because many successful Shuttle flights had created a false sense of security.
This independent assessment of the Shuttle program not only discussed the culture at NASA since Challenger, but also highlighted an issue that was to become one of the key factors in the loss of another Shuttle and her crew of seven.
Less than three years after the Shuttle Independent Assessment Team issued their report, NASA experienced another Shuttle tragedy when Columbia disintegrated during re-entry on 1 February 2003. As I discuss in my article on the Columbia incident, foam-debris damage of the kind that led to the loss of Columbia had been seen on 79 of 113 Shuttle flights between 1981 and 2003. However, over the years the Shuttles continued to arrive home safely, and this damage came to be seen as a maintenance issue – something to fix when the Shuttle returned successfully. But Shuttle Columbia flight STS-107 never returned. Although the immediate causes of the two Shuttle disasters were quite different, the organisational and cultural failures were eerily similar. ‘Success-engendered safety optimism’ will forever be associated with the organisational culture at NASA.
The Nimrod Review discusses a similar issue in relation to the Nimrod Safety Case. Like the Shuttle program, the track record of the Nimrod aircraft led to a high level of confidence in the safety of the fleet. Those involved in writing, reviewing and accepting the Safety Case were somewhat blinded by the assumption that, based on 30 years of operating experience, the Nimrod was ‘safe’. This turned production of the Safety Case into a paperwork exercise rather than an opportunity to challenge assumptions. It became an archaeological activity, simply documenting what was already ‘known’ about the aircraft:
“Unfortunately, the Nimrod Safety Case was a lamentable job from start to finish. It was riddled with errors. It missed the key dangers. Its production is a story of incompetence, complacency and cynicism. The best opportunity to prevent the accident to XV230 was, tragically, lost” (Nimrod Review, 2009, p.161).
Several other high-profile events highlight how past success, combined with a failure to act effectively on warnings, contributed to disaster. Before the escalator fire at King’s Cross station on the London Underground (1987), in which 31 people died, fires were seen as inevitable. In fact, there had been at least 46 fires on the wooden escalators between 1956 and 1988, and in 32 instances the cause was attributed to smokers’ materials. In considering the potential for fires, London Underground management focused their attention on damage to escalators and disruption to services, rather than on the danger to passenger safety. In all previous events the fires had been relatively small and contained, and no passengers had ever been burned or suffered the effects of smoke inhalation. This history determined the management response (or lack of it).
Other investigations have also made the link between success – such as a long period without process safety incidents – and complacency. For example, the independent safety review panel (Baker Panel) that audited all five of BP’s US refineries following the Texas City disaster discussed the need to remain vigilant in order to counter complacency:
“Preventing process accidents requires vigilance. The passing of time without a process accident is not necessarily an indication that all is well and may contribute to a dangerous and growing sense of complacency. When people lose an appreciation of how their safety systems were intended to work, safety systems and controls can deteriorate, lessons can be forgotten, and hazards and deviations from safe operations can be accepted. Workers and supervisors can increasingly rely on how things were done before, rather than rely on sound engineering principles and other controls” (The Baker Panel, 2007, p.i).
The review found complacency toward serious process safety risks at all five of BP’s US refineries, which had perhaps drifted into a position where they could no longer see those risks. Note that the intention here isn’t to use complacency to explain the behaviours of individuals, but rather to understand organisational failures.
On 20 April 2010, the day of the blowout on the Deepwater Horizon drilling rig (which led to 11 fatalities and the massive oil spill in the Gulf of Mexico), company executives were visiting the rig to recognise its excellent total recordable injury rate. So, a good safety record one day doesn’t necessarily prevent disaster the next (particularly if the definition of ‘safety’ isn’t appropriate, but that’s something for another blog. . .).
In a short phrase that nicely sums up the failures of safety leadership, The Baker Panel concluded that “People can forget to be afraid” (2007, p.i).
So, how do we remain safe? How do we turn the words of Andrew Grove (‘only the paranoid survive’) into something that we can apply in practice? And what does it mean for an organisation to be ‘mindful’?
A phrase that has become popular is the need to exhibit ‘chronic unease’ – despite all being well for some time, to remain continually aware that things could still go wrong, to actively seek out the bad news and do something about it. Chronic unease can help an organisation to be afraid. Like Grove’s paranoia, it can help to address optimism based on previous successes and help to inoculate organisations against complacency. James Reason (1997) explains chronic unease as assuming that each day will be a bad day and acting accordingly. He states that “If eternal vigilance is the price of liberty, then chronic unease is the price of safety” (1997, p.37).
The Nimrod Review stated that “The good track record of the Nimrod led to the prevailing ‘high level of confidence’ in the safety of the fleet. The view that the Nimrod was ‘safe anyway’ and ‘acceptably safe to operate’ blinded many of those involved in the Nimrod Safety Case” (Nimrod Review, 2009, p.452). Asking the following questions of yourself, your managers/leaders and your organisation may help to prevent your past successes from creating blind spots in how you assess the future. These questions are signposts to mindful leadership and becoming a mindful organisation.
Success and failure
- Is the leadership aware of the potential for failure? Do they understand the key risks?
- What are the potential warning signs that all is not well?
- Are these warning signs collated, assessed and understood?
- Does everyone involved in the activity or project understand what could go wrong, why, and what you’re doing to ensure that it won’t?
- When things are going well, do you ask more questions (rather than fewer)?
Risk and assumptions
- What level of risk is your organisation comfortable accepting? Has this comfort level changed recently?
- What assumptions are you making on each key activity or risk assessment?
- Do senior managers challenge these assumptions?
- Does anyone ask ‘what if . . . ?’, or act as a devil’s advocate?
Priorities
- How much time do senior managers and Executives spend on safety-related activities? (this may be related to their sense of ‘worry’ or unease)
- Do senior leaders and Executives only focus on health and safety after there has been an accident?
Audit
- Do your audits simply seek assurance that all is well, or do they aim to seek out (and expect to find) significant issues?
- Do your leaders challenge audits that suggest all is well?
- If you assign traffic lights (Green, Amber, Red) to your issues or indicators, do leaders ‘challenge the greens’?
Paper safety versus real safety
- Do senior managers understand the gap between ‘work as imagined’ (what they expect will happen) and ‘work as done’ (what actually happens in practice)? Do they know how that gap is bridged?
- Do high-level decisions (such as cost-reductions) suggest a lack of understanding of the realities of work activities?
- Are staff encouraged to raise concerns or doubts, and are these taken seriously?
You may remember how Inspector Jacques Clouseau in the 1970s Pink Panther films was randomly attacked by his manservant Cato in order to keep his martial arts skills sharp. Inspector Clouseau stressed that you must learn to “expect the unexpected”. Perhaps that’s not a bad company motto.