Adoption of AI tech a matter of life and death; scenarios show gravity of maintaining human control

Fourth of four parts

Elaine Herzberg was walking a bicycle across the street one night in Tempe, Arizona, when an Uber vehicle crashed into her and killed her — one of more than 36,000 traffic deaths recorded in 2018.

What made her death different was that the Uber vehicle was part of the company’s self-driving experiment. Herzberg became the first known victim of an AI-powered robot car.

It was seen as a watershed moment, comparable to the first known automobile crash death in the late 1800s, and it made concrete questions about killer robots that until then had been largely hypothetical.

Five years on, artificial intelligence has gone mainstream, with applications in medicine, the military and other industries. In some quarters, the pace of change and the dangers of runaway AI, familiar from dystopian movies, have produced intense hand-wringing. Some leading technology experts foresee a significant chance that the technology will eradicate humanity.

AI is already at work in doctors’ offices, helping with patient diagnosis and monitoring. AI applications can diagnose skin cancer better than a dermatologist can, and an app that hit the market this year uses AI to help people with diabetes predict their glucose responses to foods.

In short, AI is already saving countless lives, tipping the balance sheet clearly to the plus side.

“We’re far, far in the positive,” said Geoff Livingston, founder of Generative Buzz, which helps companies use AI.

Take traffic, where millions of vehicles already offer driver assistance systems, such as keeping vehicles in a lane, warning of an impending collision and, in some cases, automatically braking. Once most vehicles on the road use the technology, AI could save nearly 21,000 lives and prevent nearly 1.7 million injuries a year in the U.S., according to the National Safety Council.
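
A rough sketch of the kind of check behind those features is below. It is a simplified illustration only; the time-to-collision thresholds, values and function name are assumptions made for this example, not any automaker’s actual logic.

    # Simplified forward-collision logic of the kind driver assistance systems use.
    # Thresholds and names here are illustrative assumptions, not a real product's code.
    def collision_response(gap_m: float, closing_speed_mps: float,
                           warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.5) -> str:
        """Return the assistance action based on time-to-collision (TTC)."""
        if closing_speed_mps <= 0:        # not closing on the vehicle ahead
            return "none"
        ttc = gap_m / closing_speed_mps   # seconds until impact at the current closing speed
        if ttc < brake_ttc_s:
            return "automatic_braking"
        if ttc < warn_ttc_s:
            return "collision_warning"
        return "none"

    # Example: a 30-meter gap closing at 15 m/s gives a 2-second TTC, so a warning is issued.
    print(collision_response(30.0, 15.0))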

The benefits may be even more significant in medicine, where AI isn’t so much replacing doctors as assisting them in decision-making — sometimes called “intelligent automation.”

In their 2021 book by that name, Pascal Bornet and his fellow researchers wrote that intelligent drones are delivering blood supplies in Rwanda and that IA applications are diagnosing burns and other skin wounds from smartphone photos of patients in countries with doctor shortages.

Mr. Bornet calculated that intelligent automation could reduce early deaths and extend healthy life expectancy by 10% to 30%. With some 60 million deaths worldwide each year, that works out to 6 million to 18 million early deaths that could be prevented annually.
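
As a quick arithmetic check of those figures, assuming roughly 60 million deaths worldwide per year:

    # Sanity check of the range quoted above: 10% to 30% of roughly 60 million
    # annual deaths worldwide gives 6 million to 18 million.
    annual_deaths = 60_000_000
    low, high = 0.10 * annual_deaths, 0.30 * annual_deaths
    print(f"{low:,.0f} to {high:,.0f} early deaths potentially prevented per year")
    # -> 6,000,000 to 18,000,000 early deaths potentially prevented per year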

Less dramatic AI applications can improve home workouts or flag harmful bacteria to protect food safety. Scientists see more efficient farming and reductions in food waste. The United Nations says AI has a role in combating climate change by providing earlier warnings of looming weather-related disasters and by reducing greenhouse gas emissions.

Of course, AI is also being used on the other side of the equation.

Israel is reportedly using an AI system called Habsora, Hebrew for “the Gospel,” to select retaliation targets in Gaza after Hamas’ murderous terrorist attack in October. The system can produce far more targets than human analysts can. It is a striking high-tech response to Hamas’ initial low-tech assault, in which terrorists used paragliders to cross the Israeli border.

Go a bit north, and the Russia-Ukraine war has turned into an AI arms race, with autonomous Ukrainian drones striking Russian targets. Meanwhile, Russia uses AI to try to win the propaganda battle, and Ukraine uses AI to counter it.

Devising an exact scorecard for deaths versus lives saved is impossible, experts said, partly because so much of AI use is hidden.

“Frankly, I haven’t a clue how one would do such a tally with any confidence,” one researcher said.

Several agreed with Mr. Livingston that the positive side of AI is winning right now. So why the lingering unease?

Experts said scary science fiction scenarios have something to do with it. Clashes between AI-powered armies and underdog humans are staples of the genre, though even less apocalyptic versions pose uneasy questions about human-machine interactions.

Big names in technology have fueled the fears with dire predictions.

Elon Musk, the world’s richest man, has been on a doom tour warning that AI could cause “civilization destruction.” At the Yale CEO Summit in June, 42% of chief executives surveyed said AI could eradicate humanity within five to 10 years, according to data shared with CNN.

An incident in May brought home those concerns.

Col. Tucker “Cinco” Hamilton, the Air Force chief of AI test and operations, was delivering a presentation in London on future combat capabilities when he mentioned a simulated test in which an AI-enabled drone was asked to destroy missile sites. The AI was told that a human would make the final go/no-go decision but that destroying the missile sites was the priority.

After several instances of the human blocking an attack, the AI got fed up with the simulation.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Col. Hamilton said.

Fear and outrage ensued. Some outlets seemingly did not care that the colonel said it was a simulation.

The Air Force said it wasn’t a simulation but a “thought experiment” that Col. Hamilton was posing to the audience.

In a follow-up piece for the Royal Aeronautical Society in London, the colonel took the blame and said the story took off because pop culture primed people to expect “doom and gloom.”

“It is not something we can ignore, nor is it something that should terrify us. It is the next step in developing systems that support our progress as a species. It is just software code — which we must develop ethically and deliberately,” he wrote.

He gave an example of the Air Force using AI to help aircraft fly in formation: if the AI suggests a flight maneuver that is too aggressive, the software automatically cuts the AI out.

This approach ensures the safe and responsible development of AI-powered autonomy that keeps the human operator as the preeminent control authority.
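
A minimal sketch of that cutout pattern, in the spirit of what Col. Hamilton describes, might look like the following. The limits, names and values are assumptions for illustration, not the Air Force’s software.

    # Illustrative envelope check: if the AI's suggested maneuver exceeds preset
    # limits, the AI is disengaged and the human pilot keeps control.
    # The limit values and names are hypothetical.
    MAX_BANK_DEG = 60.0   # assumed bank-angle limit, in degrees
    MAX_G_LOAD = 4.0      # assumed g-load limit

    def apply_ai_maneuver(bank_deg: float, g_load: float) -> str:
        """Accept the AI's suggested maneuver only if it stays inside the envelope."""
        if abs(bank_deg) > MAX_BANK_DEG or g_load > MAX_G_LOAD:
            return "AI disengaged: maneuver too aggressive, pilot retains control"
        return "AI maneuver accepted"

    print(apply_ai_maneuver(bank_deg=75.0, g_load=3.2))  # -> AI disengaged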

Lauren Kahn, a senior research analyst at Georgetown University’s Center for Security and Emerging Technology, said she wasn’t shocked but rather relieved when she heard about Col. Hamilton’s presentation.

“While it seems very scary, I thought this would be a good thing if they were testing it,” she said.

The goal, she said, should be to give AI tools increasing autonomy within parameters and boundaries.

“You want something that the human is able to understand how it operates sufficiently that they can rely on it,” she said. “But, at the same time, you don’t want the human to be involved in every step. Otherwise, that defeats the purpose.”

She said the extreme cases are less of a threat than “the very boring real harms it can cause today,” such as bias in algorithms or misplaced reliance.

“I’m worried about, say, if using an algorithm makes mishaps more likely because a human isn’t paying attention,” she said.

That brings us back to Herzberg’s death in 2018.

The National Transportation Safety Board’s review said the autonomous driving system detected Herzberg 5.6 seconds before the crash but failed to identify her as a pedestrian and couldn’t predict where she was going. By the time it determined a crash was imminent, it was too late, and the system relied on the human operator to take control.

Rafaela Vasquez, the 44-year-old woman behind the wheel, had spent much of the ride looking at her cellphone, where she was streaming a television show, reportedly the talent show “The Voice,” in violation of the company’s rules.

A camera in the SUV showed she was looking down for most of the six seconds before the crash and looked up only a second before hitting Herzberg. She spun the steering wheel just two-hundredths of a second before the crash, and the Volvo plowed into Herzberg at 39 mph.

In a plea deal, Vasquez was convicted of one count of endangerment under Arizona law and sentenced to three years of probation.

NTSB Vice Chairman Bruce Landsberg said there was blame to go around, but he was particularly struck by the driver’s complacency in trusting the AI. Vasquez spent more than one-third of the trip looking at her phone and glanced at the device 23 times in the three minutes before the crash.

“Why would someone do this? The report shows she had made this exact same trip 73 times successfully. Automation complacency,” Mr. Landsberg said.

Put another way, the problem wasn’t the technology but misplaced reliance on it.

Mr. Livingston, the AI marketing expert, said that’s the more realistic danger lurking in AI right now.

“The caveat isn’t that the AI will turn on humans; it’s humans using AI on other humans,” he said.

Source: WT