Urgent regulation needed on ‘risky’ medical AI

e-health

By Geir O'Rourke

8 Apr 2024

The growing use of artificial intelligence in health care is creating a real risk of harm to patients that warrants strict oversight from Australian regulators, a leading expert warns.

Potential threats include data and privacy breaches, poor transparency around how the programs work and hidden bias within the ‘black box’ of AI algorithms, none of which is fully addressed under current laws.

Beyond that, the rapid pace of change in the field means that governments, including Australia’s, are often unable to keep up, says paediatrician and AI researcher Dr Sandra Johnson.

“We will need to work together across a number of disciplines to bring regulation forward; to deal with the issue of responsibility and accountability and have a rigorous process of compliance,” she told the Annual Medico Legal Congress last month.

“This will need to involve regulation, data documentation, accuracy, security, vigilance and so on. We must set in place guardrails for these systems.”

Dr Johnson, a University of Sydney academic and president of the Australasian College of Legal Medicine from 2017 to 2019, stressed the medical AI revolution was not without benefits, particularly in the Australian context.

AI programs were already being used in a diverse array of medical applications, from robotic surgery to genomics, and AI apps were becoming increasingly popular with both doctors and patients seeking to diagnose common health problems, she noted.

“Google has a health risks app, for example, and at a clinical level, automation of repetitive administrative tasks is absolutely essential for all of us into the future,” she said.

“But when you talk about medicine, think about how you write your notes and how you comment on your patient etc.”

“How is the computer going to interpret and process that information? It’s a black box and we must keep in mind the fact that deep learning systems will make predictions but they are not all meaningful.”

While the concept of human bias was well understood, machine bias would be a major concern going forward, particularly in view of the ‘black box’ problem, where end users did not understand the workings of AI algorithms, according to Dr Johnson.

Sources of bias in AI were myriad and started with the datasets that machine learning models were trained on. Often sourced from cohorts of well-off US patients, these datasets were poorly matched to people in other countries, particularly women and non-Caucasian patients, Dr Johnson said.

Questions should always be asked around how a model was trained, who was included in its data set, and who funded its development.

“If you’re funded by a particular organisation, it is really hard to be independent of what they think,” she said.

“Think about how algorithms can be over- or under-estimating risk. Think about the amplification of any inequalities that may exist in the population. Think about the black box issue, that the process from input to output is not always clear.”

Adding weight to all these concerns was the emerging problem of “confabulation” by predictive text programs like ChatGPT, which tended to do “very badly” when confronted with problems outside their training data, Dr Johnson said.

“AI systems at the present time have no reasoning capacity, no human-like problem-solving ability. Therefore, as doctors, we have to be very careful when we are using them that we don’t slip into this concept of automation bias and rely on them too much.”
