"I'm sorry Dave, I'm afraid I can't do that..."
What can science fiction tell us about how to navigate the race towards automation in the transport sector?
Science fiction and science fact
Over this second COVID-affected Christmas in a row I’ll inevitably be sitting down to watch a few science fiction movies. This will be instructive as well as fun, as science fiction constantly predicts the future. This is not by accident. Did you know, for example, that Elon Musk drew the inspiration for SpaceX from reading Isaac Asimov novels? As Arthur C Clarke, the maestro of blending science fiction and science fact, said:
The limits of the possible can only be defined by going beyond them into the impossible.
So, what can we learn from these stories, and how can we ensure that we don’t accidentally drift towards the dystopian futures imagined by creators like Charlie Brooker in the chillingly entertaining ‘Black Mirror’? If you’ll indulge me, I thought I’d share some thoughts on a couple of sci-fi films and what they say about a pressing challenge for the transport sector: automation.
Deskilling - a warning from ‘Wall-E’
When I sat down to watch the Pixar classic Wall-E some years ago with my two young boys, I was expecting an hour and a half of cheerful cartoon fare. Instead I was greeted with a terrifying and fully realised extrapolation of where the world might drift over time. The story is of humans leaving an environmentally destroyed world and gradually degenerating into obesity and lethargy, completely deskilled by the machinery that caters to their every whim.
So, how credible is this vision? Many driver functions have already been automated, from parking sensors to adaptive cruise control and autonomous emergency braking. Yet these changes are but an incremental step towards the autonomous vehicles that will soon be ubiquitous around us. For many, gone are the days when driving was a leisure activity in itself. The objective now is to create an environment where the driver can ignore the road and engage fully in something else.
This change is fundamentally altering the nature of the driving task. A key challenge, of course, is whether the human will be capable of intervening should the system go wrong. Automation in factories generally removed the workforce from harm’s way, so there was a fundamental safety driver for it to happen. Transportation, however, has the potential for rare, catastrophic events. People cannot be removed from the vicinity of the technology (or its associated energy) when they are transported in large volumes, often at high speed (or high altitude). So, for the time being at least, the competence of the human must be consciously maintained in an environment with less opportunity to practise.
Artificial intelligence and loss of control
The malign, sentient computer taking over control from the human is a well-worn trope in science fiction, from the ‘Skynet’ AI in the ‘Terminator’ films to the rogue ED-209 in ‘Robocop’. But it has never been realised more chillingly or convincingly than in Stanley Kubrick’s ‘2001: A Space Odyssey’ - a story crafted from the mind of Arthur C Clarke himself:
I was reminded of HAL 9000’s refusal to allow ‘Dave’ into the ship when I recently read that in November five hundred Tesla drivers were locked out of their cars after an outage struck the carmaker's app. Kubrick’s film presents a destination we have progressed significantly towards since its release in 1968. Just in the last month there have been high-profile incidents of automated systems choosing to do something unpredictable and potentially dangerous. In November nearly twelve thousand Tesla vehicles were the subject of a safety recall because it was thought that they might issue a false collision warning or unexpectedly activate the emergency brakes. On December 16th an autonomous, electric shuttle vehicle operated by Durham Region Transit in Toronto crashed into a tree, critically injuring the attendant on board.
Road accidents happen regularly with human operators, of course, and technology has the potential to significantly improve safety in the long term. However, automation pushes safety requirements deeper into the system and its design. When things go wrong, this shifts accountability and blame. Until things settle, and the technology is proven virtually infallible, this will create unease and application of the ‘precautionary principle’. On December 16th a taxi firm in Paris suspended the use of Tesla Model 3 cars in its fleet after one of the vehicles was involved in an accident which killed one person and injured 20 others. This was despite Tesla stating that there was no suggestion the incident was linked to a technical problem with the vehicle.
More technologically advanced regulators will inevitably be needed. An EU agency is being set up to oversee investigations into crashes involving automated systems. And of course, ‘intelligence’ doesn’t have to be artificial - cyber threats increasingly give remote brains access to our critical systems. Expect to see all of these topics rapidly increase in prominence as we all start the transition to the next generation of computerised, environmentally sound automotive technology.
Learning from fables
As technology continues its relentless advance, science fiction stories can serve as warnings of traps to avoid - like the fables of Aesop or the Brothers Grimm. The key lesson for me is not to ‘over-automate’ without considering the full implications of doing so. For the foreseeable future, people will retain a key role in the delivery of transport technology. Their long-term competence and behaviours need to be considered seriously in system design. Human input needs to be on engaging tasks, with healthy levels of activity. Ultimately, when unexpected circumstances occasionally occur, the input of competent people will be needed if tragedy is to be avoided. To finish, a final quote from the maestro, Sir Arthur C Clarke:
I don't pretend we have all the answers. But the questions are certainly worth thinking about.
In the next issue…
I hope you enjoyed this edition of Tech Safe Transport. In the next issue I’ll be taking you through yet another topic on the safety of modern transportation. Please subscribe now so you don’t miss it.
Thanks for reading
All views are my own and I reserve the right to change my opinion when the facts change. If you have any thoughts or comments please feel free to send me a message on Twitter. Many thanks again to my editor, Nicola Gray.
I hope you’ve enjoyed the first year of ‘Tech Safe Transport’. Happy holidays, stay safe and expect plenty more in 2022.
George, an interesting read, and it made me wonder if I can claim watching old sci-fi movies and reading my books as CPD. Not being an expert in this area, I wonder how much safer a completely automatic personal transport device would have to be than a human-controlled one to be acceptable. I also wonder at what point a human becomes so stale that, no matter how skilled they were to start with, they have lost the ability to deal with the routine, let alone an emergency. Thinking of emergencies, should there be regular testing of the fallback system - i.e. the human?
Another interesting read. The generational difference in the response to automation failure is striking. The classic example for me was the airliner over Germany that had the wrong co-ordinates programmed into the autopilot, so it took a left rather than a right turn. While the young co-pilot reacted by typing away at the touch screen and battling the various systems, the older pilot switched the autopilot off and swung the plane through 180 degrees onto what he knew was roughly the right track... then he joined the battle with the systems.