On 1 June 2009, an Airbus A330 operated by Air France as Flight 447, flying from Rio de Janeiro to Paris, crashed into the Atlantic Ocean. The aircraft was fully intact at the time of impact and there was no evidence of fire or explosion. The crash wasn’t survivable and all 228 people on board were killed. The main wreckage wasn’t found until almost two years later, on 2 April 2011. Until the flight recorders were recovered in early May 2011, very little was known about the circumstances of the accident.
On initial analysis of the Cockpit Voice Recorder, a Human Factors working group was established to understand the behaviour of the pilots. The human factors issues largely revolve around technical competence, Non-Technical Skills (such as communications and decision-making) and cockpit design.
The initiating factor was ice crystals disabling the airspeed indicators (obstruction of the pitot probes). There was nothing mechanically wrong with the aircraft, other than these airspeed indicators failing. Due to a lack of airspeed inputs, the autopilot and autothrust disengaged. The crew reacted incorrectly and put the aircraft into an aerodynamic stall from which they didn’t recover. The crew took the aircraft past its safe operating limits, outside the ‘flight envelope’.
With the autopilot disengaged, the pilots had to fly the aircraft manually, something that they were not used to doing. The controls were also more sensitive in this mode, and the pilots struggled to keep the aircraft steady, rolling side-to-side by as much as 40 degrees. Within one minute of the autopilot disconnecting, the aircraft was outside its flight envelope due to these manual inputs.
The angle of attack was increased, exceeding 40 degrees (i.e. the nose of the aircraft was raised), and the aircraft reached its maximum altitude. Although the vertical speed was still high, the airspeed along the flight path reduced and the aircraft lost lift, entering what is known as an aerodynamic stall. It is notable that the aircraft’s angle of attack was not directly displayed to the pilots.
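The physics behind this loss of lift can be sketched with the standard lift equation, Lift = ½ρv²SC<sub>L</sub>. The sketch below is purely illustrative – the critical angle, wing area and air density are generic textbook-style values, not the A330’s actual aerodynamic data:

```python
# Illustrative sketch of why lift collapses in a stall.
# All numbers here are generic assumptions, NOT the A330's real aerodynamics.
import math

def lift_coefficient(aoa_deg, critical_aoa_deg=15.0):
    """Toy model: CL grows roughly linearly with angle of attack up to a
    critical angle, then collapses as the airflow separates from the wing."""
    if aoa_deg <= critical_aoa_deg:
        return 2 * math.pi * math.radians(aoa_deg)  # thin-airfoil approximation
    # Past the critical angle the flow separates and lift falls off sharply.
    return lift_coefficient(critical_aoa_deg) * max(0.2, 1 - 0.05 * (aoa_deg - critical_aoa_deg))

def lift(aoa_deg, airspeed_ms, wing_area_m2=360.0, air_density=0.35):
    """Lift = 0.5 * rho * v^2 * S * CL (density roughly that of cruise altitude)."""
    return 0.5 * air_density * airspeed_ms**2 * wing_area_m2 * lift_coefficient(aoa_deg)

# Raising the nose past the critical angle while losing speed *reduces* lift:
print(lift(5, 240) > lift(40, 150))  # True: a stalled wing at low speed makes far less lift
```

This is why pulling the nose up at 40 degrees angle of attack could not arrest the descent: beyond the critical angle, more nose-up produces less lift, not more.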
There was nothing wrong with the engines, which operated and responded as normal; the aircraft itself had been supplied new to Air France four years earlier, in April 2005. The stall warning sounded repeatedly in the cockpit – announcing in English, “STALL”, “STALL” – and over the course of around four minutes the aircraft dropped out of the sky, crashing into the Atlantic Ocean to remain hidden for almost two years.
“The crew never referred either to the stall warning or the buffet that they had likely felt. This prompts the question of whether the two co-pilots were aware that the aeroplane was in a stall situation. In fact the situation, with a high workload and multiple visual prompts, corresponds to a threshold in terms of being able to take into account an unusual aural warning. In an aural environment that was already saturated by the C-chord warning, the possibility that the crew did not identify the stall warning cannot be ruled out”.
– Bureau d’Enquêtes et Analyses (BEA, French Civil Aviation Safety Investigation Authority), Final Report (English translation), 2012, p. 179
The flight had three crew – a Captain and two co-pilots. It would not be unusual on a long-haul flight for one of the crew to be resting away from the cockpit during the flight, the crew taking turns to sleep. When the autopilot was disconnected, one pilot took the role of Pilot Flying (PF), manually controlling the aircraft, while the other – the Pilot Not Flying (PNF) – was tasked with trouble-shooting and monitoring the flight path.
Surprised and confused
This incident is an example of how human factors can contribute to a disaster. Communications between the two pilots show that they were surprised, confused, and not in control at all. The Pilot Flying (PF) and the Pilot Not Flying (PNF) took control from each other frequently and informally. At the time of the autopilot disconnection, the Captain was resting.
Here’s an extract from the Cockpit Voice Recorder:
- “We still have the engines! What the hell is happening? I don’t understand what’s happening“
- “Damn it, I don’t have control of the plane, I don’t have control of the plane at all!“
The Captain was woken from his planned sleep, and when he entered the cockpit he was recorded as saying:
- “What the hell are you doing?“
- “We’ve totally lost control of the plane. We don’t understand at all… We’ve tried everything“
- “What do you think? What do you think? What should we do?“
- “Well, I don’t know!“
At no point did the crew mention the stall warning, and the crew didn’t appear to realise that they had stalled.
The situation deteriorated rapidly. No announcement was made to the passengers and no emergency message was sent by the crew. The oxygen masks were not deployed, the cabin crew were not seated and the lifejackets were not touched. The BEA Director Jean-Paul Troadec stated that “There is no evidence whatsoever that the passengers in the cabin had been prepared for an emergency landing”.
Thinking habits, shortcuts and mental models
A loss of cognitive control of the situation led to a loss of physical control of the aircraft.
The reactions of the pilots seem incredible. However, the concept of ‘mind traps’ can be used to understand their actions. Mind traps are essentially habits of thinking: we take mental shortcuts and rely on models based on previous experience; we see what we want to see and we jump to conclusions. The aircraft was no longer on autopilot, but ironically the crew were.
The crew focused on certain indicators, but ignored others. They expected bad weather, and so they thought that the aircraft shaking was due to turbulence, rather than a stall. This expectation was combined with their mental model that the aircraft would not allow them to cause a stall – but with the autopilot disconnected, different control rules applied and their assumptions were incorrect. In this mode, there are fewer restrictions on pilot inputs: it IS possible to stall the aircraft.
After losing airspeed, the Pilot Flying thought that they were losing height and pulled the nose up, contributing to the stall. Most pilot training for upsets had involved scenarios where the aircraft is close to the ground, and so the automatic response was to pull the aircraft up. Automatic responses are more likely when stressed or under time pressure. The corrective action in this rapidly-deteriorating situation would have been to initiate a descent.
The crew of Flight 447 didn’t challenge their assumptions, or fully reflect on their actions or the actions of the others.
‘Mind traps’ (or cognitive biases) contributed heavily to this disaster. From the Flight Data Recorder and the Cockpit Voice Recorder we can determine the role that these biases played. Here are some of the mind traps that can be identified in this incident:
- Attentional tunnelling: the crew focused on the altitude and electronic alerts
- Selective perception: the crew ignored the clear and repeated “STALL” warning in the cockpit
- Expectation bias: the crew anticipated some turbulence and therefore associated the aircraft shaking and loss of altitude with the expected weather conditions (rather than associating these symptoms with stalling)
- Automatic response: the well-trained response to unusual situations was to apply full ‘nose-up’ (but this response was not applicable to this situation)
- Confirmation bias: discounting conflicting cues, the crew held onto the notion that they had lost all instruments (rather than consider an aerodynamic stall)
- Overconfidence/optimism effect: the crew had undertaken many simulations in training that had positive outcomes
- Reliance on technology: it was assumed that the flight computers would prevent a stall (under normal circumstances this is correct, but not once the aircraft had reverted to alternate law). It was a common belief that the aircraft could not stall.
- Team dynamics: the Pilot Not Flying didn’t want the Pilot Flying to ‘lose face’.
“The PF was also confronted with the stall warning, which conflicted with his impression of an overspeed. The transient activations of the warning after the autopilot disconnection may have caused the crew to doubt its credibility. During previous events studied, crews frequently mentioned their doubts regarding the relevance of the stall warning”.
– Bureau d’Enquêtes et Analyses, 2012, p. 180
Of the three crew, the Pilot Not Flying (PNF) was more experienced on this type of aircraft and the route than both the Captain and the Pilot Flying (PF). The PNF also held a managerial position at Air France’s Operations Control Centre. However, before the Captain took his in-flight rest, he designated the Pilot Flying as the relief Captain. This designation was rather informal and was made whilst the Pilot Not Flying was absent. The Captain left the flight deck for his rest just after 2am; within 15 minutes, the 12 crew and all 216 passengers would be dead.
Despite the PF being relief Captain, the PNF started to take the lead role, changing the hierarchy dynamics in the cockpit. Although the leadership role during the incident was passed to the more experienced PNF, this transfer of command was informal and implicit. The crew’s training did not address the role of a relief Captain and this may have contributed to the team dynamics and role-allocation.
Disengaging the autopilot changes the response of the flight controls, which is why the aircraft rolled left to right several times. Pilots have very little experience of the aircraft’s handling qualities when flying manually at such altitudes. The crew had received no in-flight training on manual aircraft handling at high altitudes. In this event, the pilots were also experiencing turbulence.
A pilot is unlikely to encounter an approach to a stalling situation more than a few times during their career, and therefore may not recall the expected actions when the situation arises. Practical training (on a flight simulator) for responding to a STALL warning was limited to understanding the onset of a stall at low altitudes.
There are several indications that the Non-Technical Skills, although appropriate at the start of the incident, gradually deteriorated. A contributory factor may have been that as the Captain was resting, there were two First Officers on the flight deck, and possibly less structure than with a Captain and one First Officer.
“The failure of both crew members to formalise and share their intentions made the identification and resolution of the problem more difficult”.
– Bureau d’Enquêtes et Analyses, 2012, p. 178
The crew didn’t work together well as a team: changes to the flight path were made without telling each other, and at one point the Pilot Flying was pulling back on the stick whilst the Pilot Not Flying was pushing forward. The PF did not verbally share his intentions or actions with the PNF, which reduced the situation awareness of the PNF. The inputs applied to a sidestick by one pilot cannot easily be observed by the other, and it’s possible that the PNF was initially unaware of the aircraft’s climb and pitch attitude. Some key decisions were made unilaterally and the sharing of tasks became unstructured.
The scenarios demonstrated in simulator training are predetermined, well-known to crews and do not vary significantly – trainees are familiar with the predictable failures to which they must respond. However, during in-flight failures such as in this case, there would have been a significant ‘startle effect’, which would have destabilised the crew and increased their stress levels. Initial or recurrent training did not test the capability of crews to react to the unexpected. Training did not introduce surprises.
“The difficulty, or even the impossibility, of reproducing on a simulator both the complexity and variability of the failure signals, combined with the lack of a startle effect for a known scenario, prevented the training from being appropriate to the situation actually encountered”.
– Bureau d’Enquêtes et Analyses, 2012, p. 186
The PF was possibly overloaded by the combination of attempting to understand the developing situation with the demanding task of handling the aircraft. The oscillations reveal that the handling of the aircraft was clearly very difficult and most likely demanded the PF’s full attention.
The PNF was likely confused by the various pieces of information being presented, which may not have made sense to either pilot, and his attention was distracted from the key parameter (the aircraft’s inappropriate pitch attitude). The numerous messages presented by the monitoring system (ECAM) likely contributed to the workload:
“The reading of the ECAM by the PNF, and possibly also by the PF, was time-consuming and used up mental resources to the detriment of handling the problem and monitoring the flight path”.
– Bureau d’Enquêtes et Analyses, 2012, p. 188
Each STALL warning sounded for two seconds; however, other warnings sounded continuously for over 30 seconds, saturating the environment and quite possibly masking perception of the briefer STALL warning.
Cockpit design / ergonomics
The loss of airspeed inputs (and the subsequent loss of autopilot and autothrust) led to the aircraft operating in a different configuration (sometimes known as ‘alternate law’). These alternate laws change how the aircraft behaves and responds to inputs. There was very little explicit indication in the cockpit of the alternate law in which the aircraft was operating, which may have contributed to the crew not identifying that they were approaching a stall. When operating in alternate law, the computers do not prevent pilot actions that may endanger the aircraft.
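As a much-simplified sketch of this difference: in normal law the flight computers cap the achievable angle of attack, so the pilot cannot stall the aircraft; in alternate law that protection is lost and the pilot’s demand passes through. The function and threshold below are illustrative assumptions, not Airbus’s actual control logic:

```python
# Simplified sketch of normal law vs alternate law.
# Names and thresholds are illustrative assumptions only.

def commanded_aoa(pilot_demand_deg, control_law, alpha_max_deg=15.0):
    """In 'normal' law the computers cap the achievable angle of attack
    at alpha_max, so a sustained nose-up cannot stall the aircraft.
    In 'alternate' law that protection is gone and the demand passes through."""
    if control_law == "normal":
        return min(pilot_demand_deg, alpha_max_deg)  # hard AoA protection
    return pilot_demand_deg  # alternate law: no envelope protection

print(commanded_aoa(40, "normal"))     # 15.0 -- protection limits the command
print(commanded_aoa(40, "alternate"))  # 40 -- a sustained nose-up can now stall the aircraft
```

The crew’s mental model matched the first branch; the aircraft was operating in the second.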
Once the Pitot probes became obstructed by ice crystals, the aircraft systems identified inconsistencies in speed measurements, and this led to the autopilot and autothrust being disconnected, and the transition to alternate law. The crew were informed of these three consequences, but not the reasons why. Although the crew received several messages on the ECAM monitoring system, no system messages enabled the crew to rapidly diagnose the situation. There were no clear displays of the airspeed inconsistencies that were being received by the aircraft computers. The various aircraft systems had identified the origin of the failures, but didn’t communicate this to the crew.
As aviation has become more automated, some information is ‘hidden’ from the crew, and it becomes more difficult for pilots to take manual control when the automated systems fail. I’ve written about this elsewhere in The Ironies of Automation.
STALL warnings can be triggered easily at cruising altitudes, particularly when encountering turbulence. In these cases, stall warnings can be transient and sometimes incomplete. Experience shows that crews often treat these warnings as spurious, which may explain the lack of reaction to this warning by the crew of Flight AF447. The ‘spurious’ warnings experienced in the past may have reduced the credibility of the STALL warning. In this disaster, the STALL warning repeatedly activated and deactivated (as the aircraft’s airspeed and angle of attack varied), which would have hindered the crew in diagnosing the situation.
Extreme angles of attack cancelled the STALL warning, which is counter-intuitive – and on a few occasions a nose-down input from the pilots actually triggered the STALL warning, prompting them to increase the angle of attack again. So, unfortunately, actions taken to recover from the aerodynamic stall actually triggered the STALL warning; we can only begin to appreciate the crew’s confusion.
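The BEA report describes how, below around 60 knots of measured airspeed, the angle-of-attack values were treated as invalid and the warning was inhibited – even though the aircraft was deeply stalled. A toy sketch of that counter-intuitive logic (the function itself is an illustrative assumption, not the real avionics code):

```python
# Sketch of the counter-intuitive stall-warning behaviour described above.
# The 60 kt inhibit reflects the behaviour described in the BEA report;
# the function and stall threshold are illustrative assumptions.

def stall_warning_active(airspeed_kt, aoa_deg, stall_aoa_deg=10.0):
    """Below ~60 kt the measured angle-of-attack values were treated as
    invalid, so the warning was silenced -- even in a deep stall."""
    if airspeed_kt < 60:
        return False  # AoA considered invalid: warning inhibited
    return aoa_deg > stall_aoa_deg

# Deeply stalled at very low airspeed: no warning.
print(stall_warning_active(45, 40))   # False
# A nose-down input raises airspeed past 60 kt: the warning returns.
print(stall_warning_active(80, 35))   # True
```

The correct recovery action (nose down) thus brought the warning back, while the incorrect action (nose up) silenced it.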
Visual indications to support the aural warning were insufficient (on what is described as a very ‘visual’ aircraft type); better visual cues would have helped the crew to ‘make sense’ of the warning. Note that this STALL warning is rarely encountered during training.
A few seconds before the impact, the PNF became aware that the PF had been trying to climb throughout the emergency. The Cockpit Voice Recorder reveals that 45 seconds before impact, when the PNF instructs “Climb… climb… climb… climb…” the PF states “But I’ve been at maxi nose-up for a while”. At this point, the PNF realises the error “No, no, no… Don’t climb… no, no” and attempts to descend. However, at that point, approaching the sea at 11,000 feet per minute, there isn’t enough height left to dive and gain airspeed to correct the stall.
The design of the controls is such that the PNF’s control stick remains in a neutral position regardless of the inputs that the PF is making. Throughout the disaster, the PNF did not comprehend that his colleague was continually raising the aircraft’s nose. Furthermore, as part of the Airbus design, the aircraft has a ‘dual input’ mode: while the PF continued to pull back on his sidestick, the PNF was pushing forward – but in this dual input mode the system averages the control inputs of the two pilots, and so the aircraft’s nose remained too high.
The flight director crossbars appeared and disappeared from the display; and at times the guidance from the flight director system was in contradiction with the required crew inputs.
Investigations and blame
Besides reviewing the official investigation reports, in preparing this short summary of a complex incident, I read many newspaper reports and the words of many commentators. In several of these unofficial reports, there was a suggestion that the pilots were to blame. Although I have summarised some of the key crew behaviours above, we have to remember that the crew’s actions made sense to them at the time (otherwise they would not have acted in that way).
They were faced with an unusual set of circumstances, for which they were ill-prepared; and the design of the instrumentation did not provide the crew with the information that was required to make appropriate decisions. Several articles in the public domain simplify the incident and focus on one or two crew actions. Disasters such as these are arguably too complex to be discussed in a short newspaper article or television news item (both of which are often focused on ‘headlines’).
The official investigation by the French aviation authority BEA is quite mature, and addresses systemic failures, rather than simply focusing on crew behaviours. The final report illustrates the complexity of this disaster. The BEA Director Jean-Paul Troadec said that there was no doubt that this accident could happen to other crews.
In the years before this disaster, Air France pilots reported numerous failures of the airspeed indicators; however these reports were not acted upon. The issues with the pitot tubes on the A330 were well-known by Airbus and Air France.
If we are to learn from this disaster, we have to look behind the simple label of pilot error and explore the underlying factors that are often more difficult to address. You can blame or learn, but you cannot do both.
BEA Summary Report, “Safety Investigation Following the Accident on 1st June 2009 to the Airbus A330-203, Flight AF 447”, published 5 July 2012