Will you stay in command?

 

As AI technology advances, the final authority of the pilot in command may be reconsidered.

 

Most of us use autonomous vehicles frequently. Suspended hundreds of feet above the ground, we put our trust and our lives into an elevator control computer, a worn-out emergency call button, an interphone line to a call center that is hopefully manned and, of course, the certificate that is supposedly on file with the city. Elevators are part of everyday life. We step in, press the button and go up or down. High-tech elevators in new buildings, such as the Intercontinental hotel in Los Angeles, even have elevator banks that decide for themselves where they go and in what sequence. They accept your wish as entered on the keypad, and then make their own plan.

 

Most elevators were manned by liftboys until the middle of the last century, but even today there are still some elevator operators in older buildings. Other fairly simple vehicles, such as airport shuttle trains, have been unmanned and following computer-programmed routines for decades. There is always a remote control center, but otherwise these systems work quite well and we hear little about them.

 

Autonomous cars are in real-life testing on our streets. In Norway, an autonomous container ship is being built. And autonomous aircraft concepts are on the mind of every aspiring aviation engineer.

 

However, we assume that humans are in final control at all times. An emergency stop button or emergency brake and escape routes are always provided. If the autonomous system runs amok, we just cut it off and disembark. But this concept may soon be challenged. In aviation, it may not even be workable, as we cannot just stop the aircraft and walk away in midair.

 

As pilots, we put our lives on the line every duty day as we operate our aircraft. We therefore rightfully think that we should have the final authority over our aircraft and the people on board, whose safety and wellbeing we are entrusted with. And the law is clearly on our side: the pilot in command, or aircraft commander as he is called in Europe, is by law the final authority on board. The buck stops here, at the left seat.

 

But as we read about recent accidents, such as the B737 MAX accidents in Indonesia and Ethiopia, some of us may wonder whether technology that we were not told about may limit our ability to control the aircraft. This discussion is not new. When the Airbus A320 was introduced in the late 80s, the fact that there was always a computer between the pilot's sidestick inputs and the flight control surfaces caused major discussions in the pilot community. Many pilots believed that a true and honest wire between the yoke and the control surfaces should always be available, at least as a last resort if all hydraulic and flight control computer systems fail.

 

There is no way around it: the flight envelope protections Airbus introduced at the time are in fact infringements on the authority of the pilots. By now they are widely accepted by authorities, flight crews and passengers, as they are supposed to protect the aircraft from stalls and overload stresses. However, some diehard traditionalists will point to the accident history of Airbus fly-by-wire aircraft over the last three decades. The protections clearly do not always work as intended; in some cases they even contributed to accidents. Just think of Air France AF447 stalling over the South Atlantic Ocean and dropping to the sea, out of control. All 228 people on board died in that terrible 2009 accident. In light of the Boeing 737 MAX story it is surprising that the entire Airbus fleet was not grounded at the time, as there were other Airbus fly-by-wire accidents in which well-intended protections at least contributed to disaster. I will just mention Air France AF296 (Basel-Mulhouse, 1988), Iberia IB1456 (Bilbao, 2001), Lufthansa LH2904 (Warsaw, 1993) and XL Airways 888 (Perpignan, 2008). There are many more.

 

Despite these problems, there is a good reason for flight envelope protections. I had the opportunity to test many of them on a live test flight in an Airbus A318 with Airbus chief test pilot Jacques Rosay over Toulouse, France, and was quite impressed at the time. I could not overspeed or stall the aircraft; every time I tried (as instructed by Jacques), the protections took over.

 

Protections are designed and certified to shield the aircraft from handling errors by the pilots. But what if we could design a protection system that shields the aircraft even from the pilots themselves? Should the aircraft be able to take control away entirely from a pilot who does not perform?

 

You may think that this is a far-fetched thought. But the idea is closer than you think, and I will explain. As with many things in life today, it has to do with Amazon and Google.

 

Understanding the meaning of spoken words and sentences is easy for us as long as we know the language, and sometimes even if we do not. For machines, however, correctly understanding spoken words is a major challenge. Gadgets such as Amazon's Echo with its Alexa assistant or Google's Home need very sophisticated engineering and programming to read our wishes from our words. The limitations of the technology are evident and have been documented in countless videos. But a tremendous amount of research effort and funding goes into perfecting the algorithms that try to understand us.

 

Massive amounts of data are collected to enable deep learning by machines, improving the listening skills of our digital assistants. Massive amounts of data are also collected by video surveillance of citizens on city streets, at airports and at railway stations, most notably in China, but also in Singapore, London and ever more cities and critical infrastructure locations.

 

If you had to look at each video feed to find criminal or non-conforming behaviour, you would need many workers in the surveillance control room. In fact, you could probably only observe half the population, as you would need the other half to watch.

 

Facial recognition technology combined with deep machine learning is the answer to this challenge. Machines can already recognize faces with an astonishing degree of accuracy. The next step is to understand the intentions, mood and behaviour of humans for efficient surveillance and prediction of intent. If you are thinking of the 2002 Spielberg movie “Minority Report” right now, you are on the right track.

 

The Boston company Affectiva has put speech recognition and facial recognition technologies together with astonishing results. They call their technology Human Perception AI. The website states: “Our software detects all things human: nuanced emotions, complex cognitive states, behaviors, activities, interactions and objects people use.”

 

Affectiva's patented technology uses deep learning, computer vision and speech processing. The technology is called Emotion AI and can be integrated into apps, games and other products to measure human emotions and behaviour. The idea is to enrich the digital experience and enable emotion awareness in your digital gadget. As a nice side effect, Affectiva's big data collection grows as well, enabling the self-learning software to become ever more sophisticated and precise.

 

For us pilots, the relevant technology is the part that is designed for cars. It is called Automotive AI and monitors the driver and other occupants of the car. With the help of cameras and microphones, facial and vocal emotion and cognitive state metrics are gathered from the driver and the other occupants. The driver's state is monitored to improve road safety, while the mood and reactions of the vehicle's occupants are observed to deliver a personalized transportation experience.

 

This all works by analyzing the spontaneous facial expressions that people show in their daily interactions. Computer vision algorithms identify key landmarks on the face, such as the corners of the eyebrows, the tip of the nose and the corners of the mouth, and the machine then analyzes the pixels in those regions to classify facial expressions. Combinations of these facial expressions are then translated into emotions.
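
 

To make the idea concrete, here is a minimal sketch in Python. All landmark names, thresholds and the expression-to-emotion mapping are invented for illustration; a real system such as Affectiva's learns these relationships from large labeled datasets with deep neural networks rather than from hand-written rules.

    # Toy landmark-based expression classifier. Everything here is
    # illustrative: real systems learn features and mappings from data.

    def classify_expressions(lm, face_width, face_height):
        """Derive simple expressions from 2D landmark pixel coordinates."""
        mouth_open = abs(lm["upper_lip"][1] - lm["lower_lip"][1]) / face_height
        brow_raise = abs(lm["left_brow"][1] - lm["left_eye"][1]) / face_height
        mouth_width = abs(lm["mouth_left"][0] - lm["mouth_right"][0]) / face_width
        expressions = set()
        if mouth_open > 0.08:        # mouth clearly open
            expressions.add("mouth_open")
        if brow_raise > 0.12:        # eyebrows lifted away from the eyes
            expressions.add("brow_raise")
        if mouth_width > 0.45:       # mouth corners pulled wide
            expressions.add("smile")
        return expressions

    # Combinations of expressions are then translated into emotions.
    EMOTION_RULES = {
        frozenset({"smile"}): "joy",
        frozenset({"brow_raise", "mouth_open"}): "surprise",
    }

    def to_emotion(expressions):
        return EMOTION_RULES.get(frozenset(expressions), "neutral")

    # Made-up landmark coordinates for a surprised face:
    landmarks = {
        "upper_lip": (120, 180), "lower_lip": (120, 198),
        "left_brow": (95, 95), "left_eye": (95, 120),
        "mouth_left": (100, 188), "mouth_right": (136, 188),
    }
    expr = classify_expressions(landmarks, face_width=90, face_height=200)
    print(expr, "->", to_emotion(expr))   # -> surprise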

 

The facial emotions joy, anger and surprise are derived from this monitoring, as well as overall positivity or negativity. Drowsiness shows in facial markers such as eye closure, yawning and blink rate. Head pose is estimated, as are facial expressions such as a smile, wide eyes, a brow raise, a brow furrow, a cheek raise, the opening of the mouth, an upper-lip raise and a nose wrinkle. And do not forget anger and laughter in your voice. Our voice also reveals our degree of alertness, excitement and engagement.
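
 

Drowsiness detection in particular can be made concrete. Driver monitoring research commonly uses the fraction of time the eyes are closed over a recent window, a metric known as PERCLOS. Here is a minimal sketch, assuming a per-frame eye-openness score has already been extracted from the eye landmarks; the window length and both thresholds are illustrative values, not calibrated ones.

    from collections import deque

    class DrowsinessMonitor:
        """Track eye closure over a sliding window of video frames."""

        def __init__(self, fps=30, window_s=60, closed_below=0.15):
            self.closed_below = closed_below           # eye-openness cutoff
            self.samples = deque(maxlen=fps * window_s)

        def update(self, eye_openness):
            # Record whether the eyes count as closed in this frame.
            self.samples.append(eye_openness < self.closed_below)

        def perclos(self):
            """Fraction of recent frames with the eyes closed."""
            return sum(self.samples) / len(self.samples) if self.samples else 0.0

        def is_drowsy(self):
            return self.perclos() > 0.30               # sustained closure

    monitor = DrowsinessMonitor()
    for openness in [0.8, 0.1, 0.05, 0.9, 0.1, 0.1]:   # fake frame stream
        monitor.update(openness)
    print(monitor.perclos(), monitor.is_drowsy())      # 0.666... True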

 

To develop metrics that provide a deep understanding of the state of the occupants of a car, large amounts of real-world data were needed. People volunteer their data and emotions in their homes, phones and cars. A broad cross-section of age groups, ethnicities and genders is represented.

 

For us pilots, the driver state monitoring is the interesting part. Affectiva states that it analyzes “both face and voice for levels of driver impairment caused by physical distraction, mental distraction from cognitive load or anger, drowsiness and more.” Not only can the car's infotainment system be designed to take appropriate action, for example by selecting different music; in semi-autonomous vehicles, the machine can also make sure that it trusts the driver before handing over control. This is called the “handoff” challenge.

 

By monitoring levels of driver fatigue and distraction, appropriate alerts and interventions to correct dangerous driving can be activated. And make sure to avoid expletives while driving: driver anger is also watched closely to avoid road rage. Affectiva states: “When sensing driver fatigue, anger or distraction, the autonomous AI can determine if the car must take over control. And when the driver is alert and engaged, the vehicle can pass back control.”
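
 

Affectiva does not publish its decision logic, but the handoff idea can be sketched as a simple state machine. One plausible detail, assumed here rather than documented anywhere, is hysteresis: two different thresholds, so that a brief spike in the impairment score does not toggle control back and forth on every frame.

    from enum import Enum

    class Control(Enum):
        DRIVER = "driver"
        VEHICLE = "vehicle"

    class HandoffController:
        """Toy handoff state machine for a semi-autonomous vehicle.

        impairment is a combined 0..1 score (fatigue, distraction,
        anger); both thresholds are invented for illustration.
        """
        TAKE_OVER_ABOVE = 0.7   # vehicle takes control above this
        HAND_BACK_BELOW = 0.3   # driver gets control back only below this

        def __init__(self):
            self.state = Control.DRIVER

        def update(self, impairment):
            if self.state is Control.DRIVER and impairment > self.TAKE_OVER_ABOVE:
                self.state = Control.VEHICLE    # driver deemed unfit
            elif self.state is Control.VEHICLE and impairment < self.HAND_BACK_BELOW:
                self.state = Control.DRIVER     # driver alert and engaged again
            return self.state

    ctrl = HandoffController()
    for score in [0.2, 0.8, 0.5, 0.25]:
        print(score, "->", ctrl.update(score).value)
    # 0.2 -> driver, 0.8 -> vehicle, 0.5 -> vehicle (hysteresis), 0.25 -> driver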

 

That says it all. If we as drivers (or, a few years further down the road, as pilots) do not behave according to the wishes of the vehicle's emotion AI, we will be shut out and no longer permitted to operate the vehicle until we are nice again and the machine deems us worthy of controlling it and the fate of its occupants.

 

By the way, the mood and reactions of the occupants are monitored as well. Affectiva notes: “This becomes critically important in autonomous vehicles, robo-taxis and ridesharing, where passengers are a captive audience in an entertainment hub, selecting transportation brands based on the most optimal and personalized experience.” Levels of comfort and drowsiness are observed, and the autonomous driving style may be adjusted if it makes passengers anxious or uncomfortable.

 

So, here it is. The technology is well underway. In the automotive environment, well-intended government regulation may require these features in the near future. Is it just a matter of time before these technologies will be required in airplanes as well?

 

That brings us to the very central question of final authority. Should an algorithm, certified and well-intended as it may be, decide over our lives? We have to realize that there is a big difference between a vehicle that can simply be stopped and evacuated, such as an elevator, train or car, and a vehicle that has no emergency brake, such as a ship or an aircraft. While ships, depending on the situation, may be abandoned or anchored, aircraft always have to move forward through the air to keep flying, and they have to be landed on a safe runway at the end of every flight before the people on board can leave the aircraft and separate their destiny from that of the aircraft.

 

Autonomous flight is far away, but semi-autonomous flight is already under serious discussion. Ultra-long-haul flights require an augmented crew. It is not surprising that some in the industry would like to reduce the manning of the cockpit during the cruise portion to just one pilot. This is called the single-pilot cruise concept.

 

But as the single pilot cruises along while his buddy is resting in the bunk, he will not be alone. He will be monitored by the aircraft AI and by a ground control center. How this is all going to work is still quite murky. The ground control center may be the airline's operational control center or even ATC. The monitoring of the single pilot by the aircraft itself may be done by a system similar to Affectiva's.

 

But who will be in command? Should the people on board be in control of their own destiny? Or a remote control center with no stakes other than financial liability and reputation? Or the machine itself, with no override once the computer has assumed command?

 

If a remote ops center has overriding authority over the aircraft, the human operators at the ops center would have to be monitored somehow too, by a machine or by other humans. As we know, nothing flies in the air without FAA certification, so the algorithms that control the passenger-carrying aircraft and the ops center will have to be certified as well. But is that enough for all possible and seemingly impossible scenarios? Or will we always need the well-trained human pilot who can solve the unsolvable, as Sully did when he landed in the Hudson River, saving all lives on board?

 

The question of final authority on an airplane is philosophical as well as technical. We are just starting the discussion.

 

J. Peter Berendsen