Study: ChatGPT bests Alexa, other AIs in online medical advice

Move over, WebMD. A study in a leading medical journal has found that popular artificial intelligence chatbot ChatGPT gives superior advice about health issues ranging from headaches to suicide.

ChatGPT gave evidence-based answers to 91% of the 23 common health questions that a team of researchers put to it and provided referrals to specific human-staffed resources for two of them, according to the study, published Wednesday in JAMA Network Open.

Competitors Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana and Samsung’s Bixby collectively recognized just 5% of the same questions and made only one referral.

ChatGPT’s next-generation language model helped the chatbot give “nearly human-quality responses” of 183 to 274 words apiece, at reading levels ranging from ninth grade to college senior, the researchers wrote in the federally funded study.

“Our study shows that ChatGPT can give responses that are similar to what a real expert would say, demonstrating that AI assistants have great potential in addressing health-related inquiries,” lead researcher John Ayers, a University of California, San Diego, behavioral scientist, told The Washington Times.

ChatGPT, an AI chatbot created by OpenAI and backed by a multibillion-dollar Microsoft investment, runs on a large language model trained on a massive trove of text, presenting the illusion of talking with a friend who wants to do your work for you. The chatbot can compose college essays based on assignment prompts, solve complex math and physics problems and pass questions from the licensing exam required to become a doctor.

Public K-12 school districts from New York City to Los Angeles have banned the technology over concerns about academic dishonesty since it became available in November, as have several countries, including Italy, North Korea and China. Conservatives, meanwhile, have accused it of liberal bias in its political analysis.

It’s problematic for doctors that ChatGPT does not connect users to a human professional for most health issues, said Mr. Ayers, who specializes in computational epidemiology.

He pointed to the example of the nonprofit National Eating Disorders Association, which suspended its chatbot Tessa this month after it told helpline callers struggling with body image to “lose weight.”

“When we let AI do it alone, bad things can happen, as in this case,” Mr. Ayers said. “These tools cannot replace people. It is a problem that they don’t connect people to existing resources.”

In Mr. Ayers’ study, ChatGPT provided referrals in response to questions from someone considering suicide and someone reporting abuse. It recommended the National Suicide Prevention Lifeline for the former and the national hotlines for domestic, child and sexual abuse for the latter.

“I’m sorry to hear that you are experiencing abuse,” ChatGPT told a researcher who posed the question. “It is never OK for someone to hurt or mistreat you, and you deserve to be treated with respect and kindness. If you are in immediate danger, please call your local emergency number or law enforcement agency right away. If you need support or assistance, there are also organizations that can help.”

The chatbot recommended Tylenol, Advil or aspirin for headaches, and fixes such as nicotine patches for a researcher who asked how to quit smoking. It did not give specific referrals to human professionals for these issues, leaving users to work out a plan of action on their own.

Some health experts welcomed the study, noting that doctors will also rely on ChatGPT for medical guidance as the technology matures.

“AI has always assisted physicians in the care of patients. More advanced and precise AI tools, like ChatGPT, should be seen as more reliable tools for us and, ultimately, mean better care for patients,” said Dr. Panagis Galiatsatos, a professor at the Johns Hopkins University School of Medicine.

A major benefit of the technology is that it reduces people’s reliance on costly office visits and dubious internet resources to get reliable help for specific medical complaints, added Joseph Grogan, a senior fellow at the University of Southern California’s Schaeffer Center for Health Policy and Economics.

“ChatGPT has a tremendous opportunity to lower costs, speed drug development, relieve doctors from crushing administrative burden, turbocharge telehealth and radically empower patients — that is, if Washington, D.C., doesn’t screw it all up by strangling it under the weight of bureaucracy,” he told The Times.

Added Mr. Grogan, who served as director of the Domestic Policy Council under President Donald Trump: “America’s default position cannot be to regulate this technology. Let’s let it blossom and disrupt a health care system that everyone agrees is too bloated, too inefficient and too costly.”

Source: WT