This article was prompted by the 2019 television drama series ‘Chernobyl’, and focuses on human error, safety culture and designing for safety. It provides some discussion points to engage with your key stakeholders and to facilitate safety moments on various human factors topics. We can’t change history, but we can change tomorrow.
I recently watched the five-part historical drama mini-series ‘Chernobyl’, produced by HBO/Sky. This was an extremely well-researched script, with strong performances throughout – I found it both distressing and compelling. The drama follows both the nuclear and political fallout of the disaster. As someone who has spent their career understanding and managing the impact of human failures, I was already pretty familiar with the disaster. However, it has been interesting to discuss this drama with those who know less about the event.
The mini-series largely revolves around two historical figures: Soviet nuclear scientist Valery Legasov and Soviet Deputy Chairman Boris Shcherbina, along with a fictional character – Soviet nuclear physicist Ulana Khomyuk.
General understanding is that the nuclear reactor exploded and unleashed radioactive material – and that ‘human error’ was to blame. So, can a television drama such as the HBO series on Chernobyl play a role in raising awareness of human factors?
I was pleased to see that the dramatisation didn’t just focus on human error, but examined wider organisational, societal and political issues. As a human factors specialist, I often stress the limitations of simply blaming an incident on human error. The Chernobyl mini-series has created discussion about these wider issues and provided us with an opportunity to communicate the limitations (and potential misuse) of the ‘human error’ concept. We should therefore be utilising this drama series as a means to engage with, and educate, our key stakeholders on human factors.
The RBMK-type reactors at Chernobyl were unstable during startup and shutdown. At low power, with a loss of cooling water, the nuclear chain reaction (which under normal circumstances produces electric power) can accelerate and become uncontrollable. On 26 April 1986, a sudden power surge during systems testing led to a steam explosion that destroyed one of the nuclear reactors (Unit 4) and released massive amounts of radioactive material over the former Soviet Union and much of Europe.
As with any complex event, a variety of factors combined to cause the Chernobyl disaster. There are many books, scientific papers and articles that describe these factors in detail – I’m not going to repeat all that material here. My aim is simply to help you use the themes in the Chernobyl drama to create discussion (and hopefully action) in your organisation – whatever the industry. However, to provide some context, here’s a grossly simplified outline of the main causes.
The incident occurred during an experiment to test the operation of the independent power supply in the event of a loss of external power sources.
- The plant was operated in an unstable condition without adequate safety precautions.
- Personnel were inadequately trained (they were unaware of RBMK characteristics that made low power operation extremely hazardous).
- Inadequate containment structures allowed radioactive material into the atmosphere.
- Initial emergency response and countermeasures were inadequate.
The disaster was a product of flawed reactor design, poor safety culture and serious mistakes by Chernobyl staff. When failures in complex technologies and human failures combine, the consequences are often multiplied.
What about ‘human error’?
Many people believe that the disaster was caused by ‘human error’, based on early reports that placed substantial blame on the operators at Chernobyl.
A report by the International Atomic Energy Agency (IAEA), published in September 1986, placed considerable emphasis on the role of plant operators. The report was based on a meeting organised by the IAEA in Vienna in August 1986, attended by international nuclear experts, who discussed the causes of the disaster and the lessons learnt. This meeting included the first official presentation of the investigation by experts from the Soviet Union, including nuclear scientist Valery Legasov.
The IAEA 1986 report (produced by the International Nuclear Safety Advisory Group and known as INSAG-1) stated that:
“… the accident was caused by a remarkable range of human errors and violations of operating rules in combination with specific reactor features which compounded and amplified the effects of the errors and led to the reactivity excursion”.
The HBO Chernobyl drama includes a scene where Legasov meets Charkov, a senior KGB official, in the back of Charkov’s limo. They discuss a newspaper report of the IAEA Vienna meeting. Charkov says “I think it’s fair to say you made an excellent impression at the conference. It turns out you’re quite good at this” (‘this’ meaning ‘Statecraft’). Charkov continues: “The West is now satisfied that Chernobyl was solely the result of operator error, which it essentially was. We have you to thank for that”.
Prior to the Vienna meeting, in Episode 4 we see Legasov discussing what he will say with Shcherbina and Khomyuk. Whilst Khomyuk wants Legasov to present the reactor design issues, Shcherbina cautions against going public with this information. It later emerges that Legasov made a deal: he would omit certain details from his Vienna presentation, and in return the remaining sixteen RBMK reactors in the Soviet Union would be fixed.
The IAEA updated their conclusions in a 1993 report (INSAG-7). In their own words, new information since the 1986 report led them:
“to shift the emphasis of its conclusions from the actions of the operating staff to faulty design of the reactor’s control rods and safety systems. Deficiencies in the regulation and management of safety matters throughout the Soviet nuclear power industry have also been revealed and are discussed”.
A question that should be asked during investigations is: would a different team of operators have behaved differently? In many incidents, including the events of 26 April 1986, there is no evidence that another team would have acted differently. Given this, there is an even greater need to understand what led to their actions. With hindsight, there’s little doubt that people didn’t perform as expected; but unless we understand why, there’s little value to be gained from the label ‘human error’, or from blame.
However, the focus on human error in the early reports (from credible and influential institutions such as the IAEA and the US Nuclear Regulatory Commission) means that, for many, human error and violation of operating procedures by the Chernobyl plant staff will always be remembered as the main causes of this disaster. This is unfortunate, given the wider range of human factors issues at play.
Safety culture (where it all started)
The term ‘safety culture’ was first used in the International Atomic Energy Agency (IAEA) 1986 report on Chernobyl. Safety culture was defined then as follows:
“Safety culture is that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance” (INSAG-1)
There has been significant research into the safety culture concept since 1986, and it remains a core human factors topic. Across many industries, safety culture is seen as central to safety performance and has been highlighted in major events around the world. A number of factors that influence safety culture have been defined, and upon watching the Chernobyl mini-series it is clear that there were several improvement opportunities. However, I’d like to stress that more than thirty years later, incident investigations are still reaching the same conclusions regarding safety culture. Don’t make the mistake of watching the Chernobyl mini-series and simply concluding that this was a problem with the Soviet Union, its nuclear industry or the Chernobyl staff.
Reactor 4 was started up to meet a deadline (a need for energy production) and some commissioning activities were not undertaken. One of these was the test that was rescheduled for 25 April 1986, whilst the reactor was to be shut down for routine maintenance; a test that Legasov later described as:
“like airline pilots experimenting with the engines in flight”.
The aim of the test was to determine whether, in a total loss of normal power, the turbine winding down could provide enough electricity to operate the cooling water pumps until the emergency power supply started. This test should have been completed before the initial start-up of Reactor 4 and the test procedure assumed a totally inactive unit. But that wasn’t the case in April 1986.
The test had been delayed from the day shift, and the night shift appear not to have known about it until the start of their shift. In a series of flashbacks to the incident in Episode 5 we see that they are put under some pressure from Dyatlov, who has great authority over them. In the trial that runs through this final episode of the drama, nuclear physicist Khomyuk likens the test scenario to Yuri Gagarin (the first human in space) not knowing anything of his mission until he was on the launchpad, with a list of instructions that he hadn’t seen before.
But even before the deadlines to commission the reactor, there were pressures at the early design stages – at a phase that is sometimes referred to as Concept Select – when the reactor design was chosen. The choice of reactor type was influenced by construction time, given the country’s ambitious power generation targets. Unfortunately the reactor chosen had several inherent design faults, and sadly, some of these faults were known.
Design, design, design
The HBO Chernobyl drama series makes several references to issues with the reactor design and the balance of responsibility between plant designers and plant operators. Highly-trained and experienced operators can often compensate for a poor design; however, in this case, the designers gave the operators too difficult a task. The design was not at all forgiving of operator mistakes. After watching the drama, you may also argue that it would be difficult for the operators to have compensated for design failures that they did not know about.
Not only were the plant operators unaware of key design weaknesses; so, at least initially, were the designers themselves.
Several issues with the reactor design are discussed in the television series. For example, control rods made of boron are lowered into the reactor core to slow the nuclear reaction. However, the tips of the control rods at Chernobyl were made of graphite, which temporarily increases the reaction as the rods enter the core. (Graphite is a weaker neutron absorber than water, so as the rods are inserted the graphite tips displace water and reactivity increases.) Under stable conditions, that short burst in fission activity is not a problem.
The emergency ‘scram’ button AZ-5 (mentioned several times in the television drama) reinserts all of the control rods, which shuts down the reaction. When this button was pressed as a last resort by the Chernobyl control room supervisor, the large number of descending graphite tips led to a huge surge in reactor power. Then, as parts of the system ruptured, the control rods were blocked from moving further down and so the graphite tips continued to accelerate the reaction, leading to the inevitable explosion.
In Episode 4 of the drama, Legasov explains this to Shcherbina by comparing it to using a water hose to put out a building fire – but initially, instead of water, the hose sprays petrol onto the fire. The operators at Chernobyl were under the impression that the AZ-5 button was a fail-safe shut-down. They were not aware of this design flaw. If they had understood, it may have influenced some of their decisions.
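The positive-then-negative effect described above can be sketched as a toy illustration. This is not a physics model of an RBMK reactor: the function name, thresholds and magnitudes are all arbitrary assumptions, chosen only to show the shape of the effect, where the first part of the rod’s travel (the graphite tip displacing water) adds reactivity and only the later travel (the boron section) removes it.

```python
# Toy sketch (NOT a reactor physics model): all numbers are arbitrary,
# chosen only to illustrate why inserting a graphite-tipped control rod
# can briefly ADD reactivity before the boron section reduces it.

def rod_reactivity(insertion: float) -> float:
    """Net reactivity contribution of one rod at a given insertion depth.

    insertion: 0.0 (fully withdrawn) to 1.0 (fully inserted).
    Early travel pushes the graphite tip into the core, displacing
    neutron-absorbing water (positive effect); once the boron section
    follows, the rod's net contribution falls and ends strongly negative.
    """
    TIP_TRAVEL = 0.15  # hypothetical fraction of travel dominated by the tip
    if insertion <= TIP_TRAVEL:
        # Graphite tip displaces water: reactivity rises towards a peak.
        return 1.0 * (insertion / TIP_TRAVEL)
    # Boron enters the core: reactivity falls from the peak to negative.
    boron_depth = (insertion - TIP_TRAVEL) / (1.0 - TIP_TRAVEL)
    return 1.0 - 4.0 * boron_depth

if __name__ == "__main__":
    # Pressing AZ-5 drives every withdrawn rod through the positive-spike
    # region at the same time - the surge described in the drama.
    for depth in (0.0, 0.05, 0.15, 0.5, 1.0):
        print(f"insertion {depth:.2f} -> reactivity {rod_reactivity(depth):+.2f}")
```

The point of the sketch is the shape, not the numbers: with the reactor already unstable, many rods passing through the positive region simultaneously produces a surge rather than a shutdown.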
But were these design issues unknown to everyone? In fact, there were earlier warnings about the design faults, but these did not receive the attention that they deserved. Legasov explains to Shcherbina and Khomyuk that a decade earlier, the issue of the graphite tips was observed during an incident at the RBMK reactor in Leningrad.
In Episode 5, we see Legasov giving expert witness testimony at the July 1987 trial of Chernobyl plant director Viktor Bryukhanov, along with his chief and deputy chief engineers Nikolai Fomin and Anatoly Dyatlov. (Legasov may not have actually given such evidence, but hey, this is a drama). They received sentences of 10 years in labour camps.
During the trial Legasov presents several design failures and is critical of the Soviet nuclear industry. After his presentation at the trial, Legasov is told that his testimony will be erased and he’s warned never to speak of Chernobyl again.
Before criticising the trial and punishment of Chernobyl staff, we should note that some major companies continue to blame control room operators, pilots, train drivers and offshore drillers, rather than explore the design issues or leadership behaviours that set these staff up to fail.
The Chernobyl plant wasn’t just operated by humans, it was also designed by humans; and all humans can make mistakes. Some of those operating the Chernobyl plant on that fateful day were imprisoned for their role in the disaster, whereas the plant designers were not.
Legasov recorded a personal account of the disaster before taking his own life on 26 April 1988, the second anniversary. And this is how the HBO mini-series starts, in Episode 1 (titled “1:23:45”), as Legasov hangs himself in his Moscow apartment at the exact time of the explosion two years earlier.
Stop for a (safety) moment
The purpose of this article is to help you use the Chernobyl mini-series as a way of engaging your colleagues and leaders on the topics of human error, design failures and safety culture. Given the current profile of the Chernobyl drama series, consider facilitating a ‘human factors’ safety moment in your organisation. Here are a few possible topics to get you started:
- Besides the behaviours of front-line staff, human factors can also refer to flawed design, misplaced priorities and other wider organisational issues. How does your organisation interpret or define human factors?
- What human factors influenced the behaviours of the operators in the control room? What factors might influence the key decisions that you and your team make?
- Does your organisation impose artificial deadlines that may lead to corner-cutting or shortcuts, in order to meet targets?
- Investigations sometimes conclude that an incident is the result of ‘human error’ and blame a few individuals. If we take this approach, how effective will we be at preventing similar events?
- The design flaws of the RBMK nuclear reactors were known elsewhere, but not communicated to the personnel at Chernobyl. How does your organisation learn lessons? Do you really learn lessons (e.g. change your designs or processes), or do you simply share the messages? Does anyone follow up to confirm that learnings have been embedded?
- Are near-misses investigated and lessons shared within your company (and with the wider industry)? What might prevent such sharing?
- How do you prepare staff for unusual situations, process upsets and emergencies?
- How can you improve the effectiveness of on-site emergency plans (those implemented by the company) and off-site emergency plans (those implemented by the authorities)?
- The designers did not foresee the criticality accident. How can we identify and manage accident scenarios that are not currently addressed in existing analyses, safety reports or safety cases?
- What are the disadvantages of relying upon operators to follow written instructions? What would be a more reliable approach?
Resources

- Chernobyl: The HBO mini-series. Starring Jared Harris, Stellan Skarsgård and Emily Watson, ‘Chernobyl’ dramatises the story of the 1986 nuclear accident and the sacrifices made to save Europe from unimaginable disaster
- The Chernobyl Podcast, HBO. The official podcast of the mini-series ‘Chernobyl’, from HBO and Sky. There is a podcast episode for each episode of the mini-series
- INSAG-7 The Chernobyl accident: Updating of INSAG-1, published by the International Nuclear Safety Advisory Group (INSAG) of the International Atomic Energy Agency (IAEA)
Categories: human factors