Artificial intelligence not yet smart enough to identify diabetic retinopathy

By Michael Woodhead

5 Oct 2018

Efforts to use artificial intelligence to detect diabetic retinopathy have been plagued by high rates of false positives in a WA pilot trial.

An evaluation of an AI system for grading retinal images obtained by GPs in Perth showed there was potential for such a system but it lacked accuracy due to poor image quality and a low background incidence of the disease in the community.

Published in JAMA Network Open, the study evaluated results from an AI algorithm used to grade retinal images from 193 patients obtained by four primary care physicians using a fundus camera.

The system was deployed for six months in 2017 at a practice in Midland, Perth, at which all patients were invited to have macular-centred images taken. The images were processed by a software algorithm, with those appearing to show diabetic retinopathy referred to an ophthalmologist for further evaluation.

The AI grading system judged 17 patients as having retinopathy of sufficient severity to require referral. On ophthalmologist review, only two of these patients had true disease; the other 15 referrals were false positives.

The system was effective in ruling out disease, with a specificity of 92%, but the positive predictive value was only 12%, said study lead investigator Dr Yogesan Kangasingam (PhD), Research Director at the Australian e-Health Research Centre.

The high rate of false positives was driven by the low incidence of diabetic retinopathy (two of 193 people screened) and by image quality problems caused by factors such as a dirty camera lens, he said.
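The reported figures can be reconciled with a quick calculation. A minimal sketch, using the numbers stated in the article (193 screened, two true cases, 17 flagged, 15 false positives); the true-negative count is inferred from these figures rather than reported directly:

```python
# Reconstructing the study's confusion-matrix figures from the article.
screened = 193    # patients imaged at the Midland practice
true_cases = 2    # patients with genuine referable retinopathy
flagged = 17      # patients the AI system referred
true_pos = 2      # flagged patients confirmed on specialist review

false_pos = flagged - true_pos                    # 15 false positives
negatives = screened - true_cases                 # 191 patients without disease
true_neg = negatives - false_pos                  # 176 correctly ruled out

specificity = true_neg / negatives                # 176/191 ~ 0.92
ppv = true_pos / flagged                          # 2/17  ~ 0.12

print(f"specificity = {specificity:.0%}, PPV = {ppv:.0%}")
```

This illustrates the point Dr Kangasingam makes below: even with 92% specificity, a disease prevalence of roughly 1% means most positive calls will be false alarms.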

“The low incidence rate of diabetic retinopathy we observed in our study is the norm in primary care; therefore, false-positives are likely to be an issue unless the specificity of our system or other systems is much higher.

“Given this limitation, we believe retinopathy images identified as having illness by an AI system should be reviewed by an ophthalmologist before a referral is made,” he concluded.

An accompanying commentary said AI systems may perform well in experimental settings, but the study showed the importance of trialling them in real-world clinical practice.

“Although multiple studies have demonstrated that AI can perform on par with clinical experts in disease diagnosis, most of these tools have not been evaluated in controlled clinical studies to assess their effect on health care decisions and patient outcomes.

“While AI tools have the potential to improve disease diagnosis and care, premature deployment can lead to increased strain on the health care system, undue stress to patients, and possibly death owing to misdiagnosis.”
