AI spots differences in axSpA images

Spondyloarthritis

By Mardi Chapman

10 Nov 2020

Artificial intelligence can accurately detect definite radiographic sacroiliitis relevant to the diagnosis and classification of axial spondyloarthritis (axSpA), research shows.

Speaking at ACR Convergence 2020, Professor Denis Poddubnyy said an X-ray of the sacroiliac joints is the first investigation when SpA is suspected, and sometimes the only one if MRI is not available.

However, evidence suggested there could be huge discrepancies between local and central expert assessment of radiographic sacroiliitis.

“So we thought maybe artificial intelligence is a way to standardise the assessment of radiographic sacroiliitis,” he said.

Professor Poddubnyy, from the Charité University Hospital in Berlin, presented information on the development and validation of an artificial neural network for the detection of radiographic sacroiliitis.

The training and internal validation of the model were performed in the multinational PROOF cohort of more than 1500 patients.

Professor Poddubnyy, a member of the executive committee of the Assessment of SpondyloArthritis international Society (ASAS), said the developed algorithm was then independently validated in the German GESPIC cohort of 525 patients with axSpA.

“We were really surprised by the performance of the network both in the validation and the independent test sets,” he said.

“The artificial neural network achieved an absolute agreement on classification as compared to expert readers of 90%.”

“We think that such an approach could be a useful tool or an additional aid in clinical practice but also for clinical studies where classification should be done in a reliable way.”

“The next step of course would be to develop a similar network for MRI of sacroiliac joints.”

In Australia, the RANZCR has recently released standards of practice for the application of artificial intelligence (AI) in clinical radiology.

RANZCR president Dr Lance Lawler told the limbic that AI was clearly the future and would eventually deliver lower healthcare costs and improved outcomes.

“We have the vision but we are not quite there yet. A lot of the algorithms that are coming out now are very specific – answering a specific clinical question – which is good.”

“And depending on the difficulty of the question, the accuracy of the algorithm and all sorts of other confounding factors, they perform as well as, better than, or slightly worse than human experts.”

“It’s the most exciting part of medicine, at the moment. We’ve just got to get it right.”

For example, algorithms had to be appropriate for the specific populations in which they were used.

“You’ve got to be careful you’re not selecting against ethnic groups which may not perform as well under the algorithm. You might be disadvantaging a certain subset of the population because it doesn’t perform as well for that group but you don’t realise that upfront.”
