Normalisation of deviance

One of the parallels identified in The Nimrod Review between the organisational causes of the loss of Nimrod XV230 and the loss of the NASA Space Shuttle Columbia was something called the “normalisation of deviance”. The term was popularised by Diane Vaughan in her analysis of the 1986 Shuttle Challenger disaster. This post explains what the term means, why it’s important and how it can be countered. Attempting to understand this concept also leads us into some interesting discussions of what ‘safe’ actually means in practice.

The fuel leak involved in the Nimrod XV230 incident was not the first the fleet had experienced. Fuel leaks had come to be seen as inevitable:

“There was (and remains) a prevailing attitude that leaks in aviation fuel systems are an inevitable fact of life” (The Nimrod Review, 2009, p.83).

A report in 1988 noted that the fuel leak rate for the Nimrod was believed to be 30 leaks per year. The Nimrod Review was informed by an engineer that he estimated a leak every fortnight. However, it’s not necessarily the absolute number of leaks that is key, but the increase in leak rates over time: the Nimrod fleet experienced a four-fold increase in fuel coupling leaks between 1983 and 2006. As leaks were expected, almost routine, the focus was on eliminating sources of ignition. Unfortunately, the assumption that the aircraft were ‘leak tolerant’ (i.e. that all potential ignition sources had been eliminated) proved incorrect.

In the case of Shuttle Columbia (2003), a large piece of insulating foam came off the external fuel tank shortly after lift-off, striking the Shuttle’s left wing. This damaged the thermal protection system on the leading edge of the wing, allowing superheated air to penetrate during re-entry 16 days later. The intense heat melted the wing’s aluminium structure, causing Columbia to break up.

Had this happened before – and what were the consequences?

Foam impacts had occurred on many of the previous 113 Shuttle missions. The average number of debris ‘hits’ on the thermal protection system over the life of the Shuttle program was 143 per mission. Impacts from foam debris had become routine, inevitable even, just like the fuel leaks on Nimrod aircraft. The organisation had become conditioned over time not to regard foam debris as a flight safety issue. Foam shedding and the resulting debris impacts therefore became ‘normal’.

This finding from the Columbia incident is eerily similar to the findings of The Nimrod Review, which was informed that leaks “were seen more as an operational issue as opposed to a flight safety issue” (2009, p. 84).

Let’s wind the clock back to the design of the Space Shuttle. Early in the program, foam loss was considered a dangerous problem. Design engineers were extremely concerned about potential damage to the fragile thermal protection system. In fact, the design specification stated:

3.2.1.2.14 Debris Prevention: The Space Shuttle System, including the ground systems, shall be designed to preclude the shedding of ice and/or other debris from the Shuttle elements during prelaunch and flight operations that would jeopardize the flight crew, vehicle, mission success, or would adversely impact turnaround operations.
3.2.1.1.17 External Tank Debris Limits: No debris shall emanate from the critical zone of the External Tank on the launch pad or during ascent except for such material which may result from normal thermal protection system recession due to ascent heating.

Despite these original design requirements that the external tank should not shed debris and that debris should not hit the Shuttle, NASA came to accept that events precluded in the original design could in fact be tolerated.

And so we could define “normalisation of deviance” as the process by which deviations become incorporated into the routine. Small changes, slight deviations from the norm, gradually become the norm. In other words, what counts as ‘safe’ or ‘acceptable risk’ is in part socially constructed. Individuals, teams and organisations repeatedly drift from agreed design standards or working practices until the drift becomes normal. Past decisions and behaviours (such as risk assessments prior to Shuttle launches) are assumed to have been made for rational reasons based on sound judgement; these precedents (the corporate memory) help to support future deviations and make them legitimate.

This ‘new normal’ then allows further deviance to become acceptable: a new baseline is created and the organisation shifts what it perceives to be acceptable. And so the cycle of drift continues. With this explanation, we can perhaps understand how organisations such as NASA or the Ministry of Defence come to accept warning signals, and act as though nothing is wrong, when in fact something is gravely wrong. The boundaries of acceptable risk gradually expand with time and the comfort zone widens.
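The cycle of drift can be illustrated with a small thought experiment, sketched here in Python. The leak counts are entirely made up for illustration (they are not the actual Nimrod figures): the point is that if each year is judged against the previous year’s ‘new normal’ rather than the original design baseline, a four-fold rise can occur without any single year ever looking abnormal.

```python
# Hypothetical yearly leak counts for a fleet (illustrative numbers only):
# a gradual four-fold rise over a dozen years.
leaks_per_year = [8, 9, 10, 12, 13, 15, 17, 19, 22, 25, 28, 32]

DESIGN_BASELINE = 8   # assumed original 'acceptable' rate at entry into service
TOLERANCE = 1.5       # flag anything more than 50% above the reference rate

def flags_against_fixed_baseline(counts, baseline, tolerance):
    """Compare every year to the original design baseline."""
    return [c > baseline * tolerance for c in counts]

def flags_against_last_year(counts, tolerance):
    """Re-baseline each year to the previous year's rate (the 'new normal')."""
    return [False] + [c > prev * tolerance
                      for prev, c in zip(counts, counts[1:])]

fixed = flags_against_fixed_baseline(leaks_per_year, DESIGN_BASELINE, TOLERANCE)
drift = flags_against_last_year(leaks_per_year, TOLERANCE)

print(sum(fixed))  # → 8: most later years exceed the original standard
print(sum(drift))  # → 0: no year looks abnormal next to the year before it
```

The design choice being illustrated is the reference point, not the arithmetic: a fixed baseline flags the trend, while a rolling ‘last year’ baseline quietly absorbs it, which is the drift cycle in miniature.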

In her analysis of Shuttle Challenger, Diane Vaughan summed up this phenomenon perfectly when she stated that “what began as a break in the pattern becomes the pattern”.

Although decisions appear deviant with the benefit of hindsight, they were not necessarily seen as deviant by the individuals and organisations at the time. The deviance is institutionalised and becomes part of the culture, the way of doing things. This can make normalisation of deviance difficult to address before disaster strikes.

To help identify and manage deviations before they become the new normal, you may wish to consider the following:

  • What behaviours, working practices or conditions do you accept today that you would not have previously accepted?
  • What standards are routinely not observed, or what ‘short-cuts’ are taken on a daily basis?
  • Do you operate with safety critical equipment not working or in a degraded state?
  • Are systems operated in a significantly different manner than originally intended?
  • Do you change the rules of what is acceptable in order to allow the deviations which experience tells you can be tolerated?
  • What rules are routinely broken by the majority, in order to ‘get the job done’?
  • Are certain alarms or warnings routinely ignored, perhaps even seen as ‘nuisance alarms’?
  • Do you make key decisions after in-depth analysis and objective assessment, or do you use past successes to redefine what is acceptable?
  • Are there any slices of cheese in your Swiss cheese model (i.e. your barriers) that you routinely neglect?
  • How does your management system detect patterns of abnormal conditions or working practices before they are accepted as the norm?
  • How does your organisation treat individuals who raise concerns?

Can this happen in your organisation? James Reason said of safety culture: “… it is worth pointing out that if you are convinced that your organisation has a good safety culture, you are almost certainly mistaken” (1997, p.220). I’d suggest that the same applies here – if you think that your organisation is not susceptible to normalisation of deviance, then you are probably wrong.