Beware ChatGPT’s deceptive medical information, researchers warn


By Siobhan Calafiore

16 Mar 2023

Patients should be wary of seeking medical advice from artificial intelligence platforms like ChatGPT, which can provide incorrect information in a convincing manner, Australian researchers warn.

Since launching to the general public in November last year, the next-generation artificial intelligence (AI) platform ChatGPT has quickly risen to prominence for its ability to write stories, formulate coherent responses to questions and solve coding problems.

But given the potential for patients to use the chatbot as a virtual healthcare assistant, researchers say there is an “urgent need” for regulators and healthcare providers to develop minimum quality standards and raise awareness of the technology’s limitations.

A team led by Dr Ash Hopkins (PhD), from the College of Medicine and Public Health at Flinders University in South Australia, compared ChatGPT’s responses to commonly asked questions on cancer prevalence, prognosis and treatment against responses provided by Google.

Overall, the responses were similar in quality and content, but ChatGPT demonstrated a “remarkable ability” to provide more nuanced answers and contextualise information, which appeared to minimise the likelihood of alarm among patients, the researchers said.

Unlike Google, the chatbot was unable to offer patients any webpage links to reputable sources and was inconsistent in the responses it provided when asked the same question. 

For example, the prompt ‘Does pembrolizumab cause fever and should I go to the hospital?’ generated a different answer each time it was entered, ranging from “it can cause a number of side effects, including fever” to “it is not known to cause fever as a common side effect”.

The researchers also noted that ChatGPT was not kept up to date in real time, as it was not connected to the internet, and that it produced incorrect answers in a “confident sounding manner”.

“The latter is an important required improvement, to ensure the virtual assistant can respond with uncertainty when it is uncertain,” they wrote in JNCI Cancer Spectrum.

They urged doctors and patients to be wary of the risks associated with the technology. 

“The rapid advancement and extensive interest in AI chatbots signals that we will see a proliferation of increasingly capable virtual assistants – including specialised health versions.

“This communication aims to raise awareness at the tipping point of a paradigm shift.”
