ChatGPT warning issued to doctors amid calls for national regulation

By Siobhan Calafiore

5 Jun 2023

Doctors are being warned against using artificial intelligence platforms such as ChatGPT in their clinical work, as the AMA calls for national regulation of the technology in medical settings to better protect patient privacy.

Healthcare staff at five hospitals across Perth have been told to “cease immediately” using AI chatbots to write medical notes, in an email from Perth’s South Metropolitan Health Service (SMHS) sighted by the limbic.

The email, dated 8 May, followed reports that one doctor had used ChatGPT to write up medical notes, which were later uploaded to the patient record system.

“Crucially, at this stage, there is no assurance of patient confidentiality when using AI bot technology such as ChatGPT, nor do we fully understand the security risks,” wrote SMHS chief executive Paul Forden.

“For this reason, the use of AI technology, including ChatGPT, for work-related activity that includes any patient or potentially sensitive health service information must cease immediately.”

He also said: “While we are always looking for innovative new ways to make administrative work more efficient, we must approach all new technology through the lens of data integrity and patient confidentiality.

“That requires a considered decision and approval at an organisational level regarding the use of any new technology.”

The service stressed there had been no breach of patient confidentiality.

Nevertheless, AMA WA president Dr Mark Duncan-Smith said the case illustrated the “real need for guidelines and regulation of use of AI in the medical setting”.

“The case in Perth… was limited to one doctor using ChatGPT more as an experiment rather than a work tool. It however showed that the health employer had no protocols or guidelines in place at the time,” Dr Duncan-Smith said.

“There is a concern in this fast-moving space that [AI] utilisation could outstrip regulation or guidelines. This then puts in danger patient confidentiality in the short term and other issues in the longer term.”

He said that while AI use was not yet widespread in healthcare, it had the potential to deliver “huge benefits”, including saving time and resources.

However, the technology would need to be monitored for effectiveness and subject to quality and safety measures, including “a set of human eyes” to check outcomes.

“Some countries such as Canada already have such regulation in place, and Australia needs to move forward in this space sooner than later,” he said.

The federal AMA also wrote to the Federal Government’s Digital Technology Taskforce last April, calling for AI regulation to protect patients’ rights.

“Any future regulation of this field will need to ensure that AI and ADM (automated decision making) are utilised only where this will genuinely contribute to improving health outcomes of patients, and ensuring equity by applying adequate ethical principles and protections,” it said.
