Future for Transit Automation? – Washington, DC Metrorail Crash May Exemplify Automation Paradox

July 1, 2009 at 3:12 pm

(Source: Washington Post)

Image Courtesy: Gothamist via Apture - DC Metro Crash

Sometime soon, investigators will piece together why one train on Metro’s Red Line hurtled into another last Monday, killing nine people and injuring dozens. Early indications suggest a computer system may have malfunctioned, and various accounts have raised questions about whether the driver of the speeding train applied the brakes in time.

The trouble, said several experts who have studied such accidents, is that these investigations invariably focus attention on discrete instances of machine or human error, when the real problem often lies in the relationship between humans and their automated systems.

Metro officials have already begun a review of the automated control systems on the stretch of track where the crash occurred and have found “anomalies.” While such measures are essential, said Lee, one of those experts, making automated systems safer leads to a paradox at the heart of all human-machine interactions: “The better you make the automation, the more difficult it is to guard against these catastrophic failures in the future, because the automation becomes more and more powerful, and you rely on it more and more.”

Automated systems are often designed to relieve humans of repetitive tasks. When the algorithms become sophisticated, however, humans start to relate to them as if they were fellow human beings. The autopilot on a plane, the cruise control on a car and the automated speed-control systems in mass transit are conveniences, but every one of them can become a crutch. The more reliable the system, the more likely it is that the humans in charge will “switch off” and lose concentration, and the greater the likelihood that some confluence of unexpected factors the algorithm cannot handle will end in catastrophe.

Several studies have found that regular training exercises in which operators turn off their automated systems and run everything manually help them retain both skill and alertness. Understanding how automated systems are designed to work lets operators detect not only when a system has failed but also when it is on the brink of failing. In last week’s Metro accident, it remains unclear how much time the driver of the train had to react once she recognized the problem.

New cruise-control and autopilot systems in cars and planes are being designed to give better feedback in a variety of ways. When sensors detect another car too close ahead on the road, for example, they make the gas pedal harder to depress. Pilots given auditory as well as visual warnings about impending problems also seem to respond better.
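To make that feedback idea concrete, here is a rough Python sketch of how a “firm pedal” cue might be computed; the two-second headway rule, the force values and the function names are illustrative assumptions for this post, not any manufacturer’s actual design.

```python
# Minimal sketch (not any manufacturer's actual system) of the haptic-feedback
# idea described above: as the gap to the car ahead shrinks below a safe
# following distance, the accelerator pedal is made progressively harder to press.
# All names, units and thresholds here are hypothetical illustrations.

def safe_gap_m(speed_mps: float, headway_s: float = 2.0) -> float:
    """Safe following distance using a simple time-headway rule (assumed 2 s)."""
    return speed_mps * headway_s

def pedal_resistance(gap_m: float, speed_mps: float,
                     base_n: float = 20.0, max_extra_n: float = 60.0) -> float:
    """Pedal resistance in newtons: a baseline force plus extra force that
    grows linearly as the measured gap falls below the safe gap."""
    safe = safe_gap_m(speed_mps)
    if safe == 0 or gap_m >= safe:
        return base_n                      # normal pedal feel
    shortfall = (safe - gap_m) / safe      # 0.0 at the safe gap .. 1.0 at no gap
    return base_n + max_extra_n * shortfall

if __name__ == "__main__":
    # At 25 m/s (~56 mph) the assumed safe gap is 50 m; a 20 m gap feels stiff.
    print(pedal_resistance(gap_m=20.0, speed_mps=25.0))   # ~56 N
    print(pedal_resistance(gap_m=60.0, speed_mps=25.0))   # 20 N (baseline)
```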

One researcher has even found that the manner in which machines provide feedback matters. When they are “polite” (waiting, for example, until a human operator has responded to one issue before interrupting with another), the improved human-machine relationship yields measurable safety gains that rival those from technological leaps.
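For readers who like to see the pattern spelled out, below is a small, hypothetical Python sketch of that “polite” behavior: lower-priority alerts wait in a queue until the operator acknowledges the current one, while critical alerts are assumed to interrupt immediately. The class and its rules are illustrative only, not the researcher’s actual system.

```python
# Minimal sketch of the "polite" feedback pattern described above: instead of
# interrupting the operator with every new alert, the system holds lower-priority
# alerts in a queue until the current one has been acknowledged. The class name,
# its methods and the interrupt-only-when-critical rule are hypothetical.

from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    critical: bool = False   # critical alerts are assumed to interrupt immediately

class PoliteAlertManager:
    def __init__(self) -> None:
        self.active: Alert | None = None
        self.pending: deque[Alert] = deque()

    def raise_alert(self, alert: Alert) -> None:
        """Present the alert now only if nothing is active or it is critical;
        otherwise it waits politely in the queue."""
        if self.active is None:
            self.active = alert
            self._present(alert)
        elif alert.critical:
            self.pending.appendleft(self.active)   # shelve the current alert
            self.active = alert
            self._present(alert)
        else:
            self.pending.append(alert)

    def acknowledge(self) -> None:
        """The operator has responded to the active alert; show the next one, if any."""
        self.active = self.pending.popleft() if self.pending else None
        if self.active:
            self._present(self.active)

    def _present(self, alert: Alert) -> None:
        print(f"[{'CRITICAL' if alert.critical else 'advisory'}] {alert.message}")

if __name__ == "__main__":
    mgr = PoliteAlertManager()
    mgr.raise_alert(Alert("Check door sensor, car 3"))
    mgr.raise_alert(Alert("Scheduled maintenance reminder"))            # queued quietly
    mgr.raise_alert(Alert("Obstruction detected ahead", critical=True)) # interrupts
    mgr.acknowledge()   # back to the shelved door-sensor alert
    mgr.acknowledge()   # then the maintenance reminder
```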

Click here to read the entire article. (Hat Tip: TheTransitWire.com)