The Ironies of Automation

The ‘ironies of automation’ are a set of unintended consequences of automation that can detrimentally affect human performance on critical tasks. Far from eliminating human performance issues, automation may actually increase them.

While working as a human factors consultant on human performance aspects of the European railway system in the late 1990s, I first became aware of a scientific paper called ‘Ironies of Automation’ by psychologist Lisanne Bainbridge. The paper was published in 1983, although Bainbridge had published on the topic previously.

Bainbridge’s thoughts on automation from nearly forty years ago are just as relevant today, if not more so. As well as applications in aviation and process control, I’ll discuss automation in vehicles such as the Tesla Autopilot.

In aiming to prevent human performance issues, it is often more effective to ‘design out‘ the potential for human failure at the start, rather than address it later with softer controls such as competence or procedures. The discipline of Human Factors Engineering (HFE) provides a suite of tools and guidance to align designs with the capabilities and limitations of humans. There has been a significant increase in the application of Human Factors Engineering in many industries in recent years – particularly nuclear, rail, aviation, healthcare and oil/gas/chemicals. This interest in ‘designing for humans’ partly stems from significant incidents in these industries highlighting failures in previous designs that ‘set people up to fail‘.

In the aviation industry, automated flight systems have been used successfully for many years and have contributed to significant improvements in safety and efficiency. However, incidents show that pilots may rely too heavily on these automated systems and are often reluctant to intervene, and that errors in the programming of flight management systems continue to occur.

As automation increases in the cockpit, the pilot’s role has changed significantly

The incident statistics certainly show that, across most industries, human actions and decisions play a key role. In an attempt to reduce human failures in the operation of a system (such as the control of a railway system, power network or chemical process), some designers may seek to remove the human from the system as far as possible. Automation can be seen as the solution to human failure – replace human planning, actions and decisions with automatic devices, computer control or artificial intelligence.

Irony: combination of circumstances, the result of which is the direct opposite of what might be expected

Lisanne Bainbridge, 1983

So, what are some of these ‘Ironies of Automation’ that Bainbridge was referring to – and are they still relevant today?

Irony 1: Designers are human too!

One of the ironies of attempting to eliminate humans from the system is that operating problems may occur due to errors introduced by the designers themselves. This is sometimes referred to as design-induced error, and may be due to a lack of human factors expertise on design teams. These latent errors may lie dormant for many years (‘mistakes waiting to happen’).

“Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking”

James Reason, ‘Human error’, 1990

Irony 2: Tasks that are not automated rely on humans

Once people are eliminated as far as possible in the design, the human operator is left with an arbitrary collection of tasks that the designer could not eliminate. Whatever human tasks remain may not comprise a cohesive, meaningful or satisfying job role. Allocation decisions also tend to consider only whether a task is given to the human or to the automated system, when in fact a more dynamic allocation of function may be more appropriate (i.e. allocation depending upon the scenario), as sketched below.
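
To make ‘dynamic allocation of function’ concrete, here is a minimal, purely illustrative sketch in Python – none of the names, thresholds or rules below come from Bainbridge’s paper or from any real control system – showing the idea of deciding at run time whether a task sits with the human, the automation, or both, depending on the scenario:

```python
from dataclasses import dataclass
from enum import Enum


class Agent(Enum):
    HUMAN = "human"
    AUTOMATION = "automation"
    SHARED = "shared"          # human supervises, automation executes


@dataclass
class Scenario:
    """Hypothetical snapshot of the operating context."""
    operator_workload: float   # 0.0 (idle) to 1.0 (overloaded)
    system_degraded: bool      # e.g. sensor fault or unusual plant state
    time_pressure_s: float     # seconds available to act


def allocate_function(task: str, scenario: Scenario) -> Agent:
    """Illustrative dynamic allocation: the same task may go to the human,
    the automation, or both, depending on the scenario (contrast a static
    'human OR machine' decision made once at design time)."""
    if scenario.system_degraded:
        # Degraded conditions: keep the human in the loop rather than
        # handing them an unfamiliar system only at the worst moment.
        return Agent.SHARED
    if scenario.time_pressure_s < 1.0:
        # Faster than human response time: automation must act.
        return Agent.AUTOMATION
    if scenario.operator_workload < 0.3:
        # Low workload: give the task to the human, to maintain skills
        # and situation awareness.
        return Agent.HUMAN
    return Agent.SHARED


print(allocate_function("speed control", Scenario(0.2, False, 10.0)))  # Agent.HUMAN
```

The particular rules do not matter; the point is that the allocation is revisited as conditions change, rather than fixed once at design time.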

“By taking away the easy parts of the task, automation can make the difficult parts of the human operator’s task more difficult”

Lisanne Bainbridge, 1983

Irony 3: The human may have to ‘take over’ if the system fails

When a process has been highly automated, the role of the human often becomes more of a monitoring role. People may have fewer executive actions to take; instead, they oversee that the automated system is performing as intended. If the system is not performing as intended and the human is required to intervene, several concerns arise:

  • It takes time for a person to shift between activity ‘modes’, such as switching from monitoring to controlling activities.
  • Physical skills deteriorate when they are not used, and so when a person takes over from the automated system or autopilot, their actions may not be as refined as they once were.
  • People may be required to take over from the automated system at short notice, and without sufficient time to gain the full context (having been ‘out of the loop’). The person taking control may not have complete or accurate Situation Awareness.
  • When the human has to intervene, there is likely to be something wrong or degraded with the process and so the human may need to be as skilled, experienced and aware as possible.
  • If the personnel monitoring the system did not have practical hands-on experience of the system or this particular scenario prior to automation, they may not have the understanding required to successfully take control.
  • If automation increases the reliability of the system, then those monitoring the system may pay less attention to it.
  • We know that even highly motivated people cannot maintain vigilance indefinitely, and so people will not be effective at monitoring automated systems for long periods. It is for this reason that Air Traffic Controllers may only work for short shifts at a time.

The Air France AF447 example below outlines what can happen when people are required to intervene when the automatic system fails. In this case, the autopilot was disabled by a minor system failure and the crew were unable to regain control of the aircraft, which crashed following an aerodynamic stall. The cockpit voice recording highlights the crew’s confusion as they try to fly the aircraft manually (something that they rarely did mid-flight).

When the autopilot disengaged mid-flight, the pilots had to take over manual control of the aircraft, with disastrous results – crashing into the Atlantic Ocean, killing all 228 passengers and crew

Irony 4: Retrofitting automation can add complexity

From aircraft cockpits to process control rooms, the addition of new automated features or retrofits can introduce complexity and confusion. This is particularly the case where a human-centred design process has not been adopted.

Irony 5: Competent, but in what?

As the nature of human activities changes due to automation, the competencies that were required for manual human control may not be the same as those required to work with highly automated systems. With the introduction of more technology, Non-Technical Skills such as decision-making and communication become more important, and yet many organisations do not help staff to develop these competencies.

In summary – if we fully automate a system, then the human in the system has two key functions:

  1. monitoring an automated system that is designed not to fail, and
  2. taking over control if the system fails.

Unfortunately, humans are not so good at prolonged monitoring tasks (especially if the system rarely fails) and will have difficulty if called to intervene in a demanding situation at short notice.

Tesla Autopilot and ironies of automation

Let’s review the Tesla Autopilot system in relation to these ironies of automation. The system is currently (May 2020) available in two flavours: Autopilot and Full Self-Driving Capability.

Tesla promises that the Autopilot features ‘assist you with the most burdensome parts of driving’

The basic Autopilot system automatically controls acceleration and braking, and provides Autosteer to keep the car within the current lane. The Full Self-Driving Capability, with auto navigate and auto lane change, enables fully automatic driving on highways (with city driving a future upgrade). When you step out of the car it will find itself a parking space nearby – and then when summoned later, it will come and find you.

However, these Autopilot functions do not make the vehicle autonomous – and still require the ‘driver’ to be in control:

“Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment. The currently enabled features require active driver supervision and do not make the vehicle autonomous”.

tesla.com/support/autopilot

The Tesla Autopilot system has been implicated in several high-profile crashes. The US National Highway Traffic Safety Administration (NHTSA) has a special team to investigate crashes that potentially involve the Autopilot system.

In some fatal crashes involving Tesla vehicles, the NHTSA concluded that a key factor was driver inattention, linked to overconfidence in such systems. The US National Transportation Safety Board (NTSB) raises similar concerns – for example, in its report into the March 2018 crash of an Uber self-driving test vehicle which killed a pedestrian, the NTSB uses the phrase ‘automation complacency’. Although there is a focus on the vehicle operator (who was distracted by her mobile phone), the NTSB states that the underlying issue was an inadequate safety culture within Uber’s Advanced Technologies Group.

The NTSB report on a Tesla vehicle that crashed into a stationary fire truck on the highway (California, 22 January 2018) identified the following probable causes:

  • “The Tesla driver’s lack of response to the stationary fire truck due to his inattention and over-reliance on the vehicle’s advanced driver assistance system”
  • “The Tesla Autopilot design, which permitted the driver to disengage from the driving task”
  • “The driver’s use of the system in ways inconsistent with guidance and warnings from the manufacturer”.

In several cases, the issue appears not to be the automated system itself (whether provided by Tesla or other manufacturers), but either (1) the perception by drivers that the system is more capable than it actually is, or (2) the driver’s application of the automated system in conditions that the designer did not intend. For example, Tesla’s Autopilot system is intended to be used on highways with a central divider – however, the Tesla firmware does not restrict use of the system to highways. The driver can choose to operate this system on city streets, or roads with crossing traffic, for which it was not designed.
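
To illustrate what restricting a system to its ‘design envelope’ could look like, here is a minimal hypothetical sketch in Python – it is not Tesla’s firmware, and the road classes, fields and checks are invented for illustration – of an engagement gate that only allows an automated driving feature to be switched on in the conditions it was designed for:

```python
from dataclasses import dataclass


@dataclass
class DrivingContext:
    """Hypothetical description of where the vehicle currently is."""
    road_class: str           # e.g. "divided_highway", "city_street", "rural_road"
    has_central_divider: bool
    has_crossing_traffic: bool


# Invented example of a design envelope: the conditions the feature
# was actually designed and validated for.
DESIGN_ENVELOPE = {
    "allowed_road_classes": {"divided_highway"},
    "requires_central_divider": True,
    "allows_crossing_traffic": False,
}


def may_engage(context: DrivingContext) -> bool:
    """Return True only if the current context lies inside the design
    envelope; otherwise the driver keeps manual control."""
    if context.road_class not in DESIGN_ENVELOPE["allowed_road_classes"]:
        return False
    if DESIGN_ENVELOPE["requires_central_divider"] and not context.has_central_divider:
        return False
    if context.has_crossing_traffic and not DESIGN_ENVELOPE["allows_crossing_traffic"]:
        return False
    return True


print(may_engage(DrivingContext("divided_highway", True, False)))  # True
print(may_engage(DrivingContext("city_street", False, True)))      # False
```

A check of this kind moves the decision about where the automation may be engaged from the driver’s judgement into the design itself.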

Despite the use of significant technology and automation in these vehicles, the ironies of automation remain: (1) the driver chooses under what conditions to engage certain automated functions – and these conditions may be outside the design envelope; (2) the driver must remain in control of the vehicle and monitor the environment at all times; and (3) some of the automated features are not always reliable.

Tesla provides an Autopilot function called ‘Traffic Light and Stop Sign Control (Beta)’. This system identifies stop signs and traffic lights, automatically controlling the speed of the vehicle so that it stops at the intersection. However, Tesla warns that “As with all Autopilot features, you must be in control of your vehicle, pay attention to its surroundings and be ready to take immediate action including braking. This feature is in Beta and may not stop for all traffic controls”.

The more we rely on technology, the more crucial it is that we understand human factors. The challenge for designers is to make the best use of available automation technologies whilst continuing to consider the changing role of the human within the system.

I’ll finish with a quote from Peter F. Drucker, often described as the founder of modern management:

“We are becoming aware that the major questions regarding technology are not technical but human questions”

Peter F. Drucker, 1967

Further reading

Ironies of automation, Lisanne Bainbridge, Automatica, Volume 19, Issue 6, November 1983, Pages 775-779
