New AI Approach to Optical Coherence Tomography Images

Laird Harrison

October 30, 2018

CHICAGO — Artificial intelligence programs can be trained to read optical coherence tomography (OCT) images in the same way that humans learn, by looking at the world around them, researchers report.

"The problem with deep learning is that it requires hundreds of thousands or tens of thousands of images," said Michael Goldbaum, MD, from the University of California, San Diego.

It can be difficult to obtain that many images showing a disease, but "we found the system could learn from even a small set," he told Medscape Medical News.

He reported findings from a study of image-based learning here at the American Academy of Ophthalmology (AAO) 2018 Annual Meeting. Results were previously published in Cell (2018;172:1122-1131.e9).

Neural Networks

Computer scientists have been working for decades to create artificial intelligence systems that can analyze volumes of data matching or exceeding what humans can process. But those systems were limited by the sophistication of the algorithms that human programmers could write.

In the past decade, though, the increase in computing power opened the door to a new approach: neural networks that resemble the networks of cells in the human brain.

Instead of an algorithm programmed by human beings, these networks teach themselves by trial and error, creating their own algorithms in an approach known as deep learning.

For example, before defeating masters at the game of Go, a Google program played itself over and over, creating strategies not used by masters.

To apply this system to diagnostics, medical experts label a set of images according to whether or not they show a pathology. The artificial intelligence system then looks for patterns in the pixels that correspond to the labels.

The approach has worked so well in fundus photography that one such system, the IDx-DR, is more accurate at identifying diabetic retinopathy than retina specialists. It was approved by the US Food and Drug Administration in April, and is the first device ever approved that makes screening decisions without the involvement of a clinician.

How to Tell a Spaniel From a Stingray

But OCT images have proven more of a challenge. Each is a composite of multiple images and contains much more data than a fundus photo.

So Goldbaum and his colleagues copied the way humans learn. Doctors don't start out in life looking at OCT images of diabetic retinopathy. Instead, they spend years looking at the world around them, learning to distinguish a bicycle from a button or a face from a fish. Only with that background can doctors begin to analyze medical images.

They fed the system hundreds of thousands of readily available images of everyday objects — a ladle, a hatchet, a restaurant, a Blenheim spaniel, a stingray — to prepare it to interpret OCT scans from a set of only 1000 images: 250 of drusen, 250 of diabetic macular edema, 250 of choroidal neovascularization, and 250 labeled normal.
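In code, this kind of transfer learning amounts to reusing a network already trained on everyday images and training only a small, new classification head on the labeled OCT scans. The sketch below is illustrative only: it assumes an ImageNet-pretrained Inception V3 backbone and a hypothetical folder of labeled OCT images, neither of which is specified in the report.

```python
# Minimal transfer-learning sketch (illustrative): reuse a network pretrained
# on everyday images and train only a new classification head on a small set
# of labeled OCT scans. Backbone choice and folder layout are assumptions.
import tensorflow as tf

NUM_CLASSES = 4  # drusen, diabetic macular edema, choroidal neovascularization, normal

# Backbone already trained on roughly a million everyday images (ImageNet).
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # keep the general-purpose visual features fixed

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new OCT-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of 1000 labeled OCT images, 250 per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "oct_train/", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=10)
```

Because the pretrained backbone already knows general visual features such as edges and textures, only the final layer has to learn the OCT-specific categories, which is why a training set of 1000 images can suffice.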

The researchers compared their approach, called transfer learning, with a system trained with 108,312 OCT images: 8617 of drusen, 11,349 of diabetic macular edema, 37,206 of choroidal neovascularization, and 51,140 labeled normal.

Getting 108,312 OCT images was not easy; the researchers had to acquire them from three medical institutions in the United States and two in China.

The team assessed diagnoses made by the two systems using a new set of 1000 OCT images.

The gold standard for correct decisions was established by a set of retina experts who independently graded the images. Both artificial intelligence systems made their diagnoses with high sensitivity and specificity. The area under the receiver operating characteristic curve (AUC), a summary measure of how well a diagnostic system separates disease from normal across all decision thresholds, was also extremely high.
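For readers who want to see exactly what those numbers mean, here is a minimal sketch of how sensitivity, specificity, and AUC are typically computed with scikit-learn. The labels and scores below are made-up stand-ins, not the study's data.

```python
# Sketch of the evaluation metrics the study reports, using scikit-learn.
# y_true and y_score are invented examples, not values from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = disease, 0 = normal
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.1, 0.3, 0.7, 0.6])  # predicted probability of disease
y_pred = (y_score >= 0.5).astype(int)          # apply a decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # fraction of diseased eyes correctly flagged
specificity = tn / (tn + fp)   # fraction of normal eyes correctly cleared
auc = roc_auc_score(y_true, y_score)  # area under the ROC curve, threshold-independent

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```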

"Transfer learning keeps promoting the field to be better and better," Michael Chiang, MD, from Stanford University in Palo Alto, California, said during a news conference devoted to artificial intelligence.

To measure the error rates of the two systems, the researchers assigned more weight to a false-negative than to a false-positive result. They classified the drusen images as a routine referral, and the images of diabetic macular edema and choroidal neovascularization as an urgent referral.

Although a false positive can cause undue distress or expense, the researchers reasoned, a false negative in a patient with choroidal neovascularization or diabetic macular edema could result in irreversible visual loss.

Table. Effectiveness of the Two Artificial Intelligence Systems
Outcome, %          System Trained With Everyday Images    System Trained With OCT Images Only
Sensitivity                        96.6                                   97.8
Specificity                        94.0                                   97.4
AUC                                98.8                                   99.9
Weighted error                     12.7                                    6.6
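The article does not give the exact penalties used, but the idea of a weighted error can be sketched simply: each missed referral costs more than each false alarm, and the total penalty is normalized against the worst possible score. The weights and helper function below are assumptions for illustration only, not the study's actual scoring rule.

```python
# Illustration of a weighted error that penalizes false negatives (missed
# referrals) more than false positives. The weights are made-up examples;
# the article does not state the actual values used in the study.
def weighted_error(records, fn_weight=4.0, fp_weight=1.0):
    """records: list of (true_label, predicted_label); 1 = refer, 0 = normal."""
    penalty, worst = 0.0, 0.0
    for truth, pred in records:
        if truth == 1 and pred == 0:     # false negative: risk of irreversible vision loss
            penalty += fn_weight
        elif truth == 0 and pred == 1:   # false positive: undue distress or expense
            penalty += fp_weight
        worst += fn_weight if truth == 1 else fp_weight  # worst case: every call wrong
    return penalty / worst               # 0 = perfect, 1 = every decision wrong

# Toy example: one missed referral hurts more than one unnecessary referral.
print(weighted_error([(1, 1), (1, 0), (0, 0), (0, 1)]))  # 0.5
```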

Six different retina specialists also graded the images. The weighted error for the humans ranged from 0.4% to 10.5% (mean, 4.8%), suggesting that the best artificial intelligence system was less prone to error than some of the human experts. The transfer-learning system wasn't far behind.

The researchers next queried the system trained exclusively with OCT images to see which regions of the images it had picked out to distinguish one diagnosis from another. These areas corresponded to the parts of the images that the experts would identify as most important.

This step is important because some clinicians are uncomfortable trusting artificial intelligence systems to make correct diagnoses without knowing what criteria the systems are using. "This opens up the black box," Goldbaum said.
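The report does not name the technique the team used to highlight those regions. One common way to ask "which regions mattered?" is an occlusion test: cover one patch of the image at a time and measure how much the model's confidence in its diagnosis drops. The sketch below assumes a Keras-style model with a predict method and is purely illustrative.

```python
# Generic occlusion test: mask one patch at a time and record how much the
# model's confidence in the target diagnosis falls. Large drops mark regions
# the model relied on. This is an illustrative sketch, not the study's method.
import numpy as np

def occlusion_map(model, image, target_class, patch=32, stride=16):
    """image: (H, W, C) float array; returns a (rows, cols) importance grid."""
    h, w, _ = image.shape
    baseline = model.predict(image[None], verbose=0)[0][target_class]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masked = image.copy()
            masked[i*stride:i*stride+patch, j*stride:j*stride+patch, :] = 0.5  # gray patch
            score = model.predict(masked[None], verbose=0)[0][target_class]
            heat[i, j] = baseline - score   # big drop = region the model depended on
    return heat
```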

In the future, artificial intelligence might find patterns in images that have not occurred to human experts. "I foresee that artificial intelligence will help physicians learn how to diagnose better and make better clinical decisions," he explained.

But Goldbaum and Chiang agree that artificial intelligence will not replace ophthalmologists anytime soon, although it might relieve them of having to review large volumes of images.

And primary care physicians could image the eyes of apparently healthy patients and then use artificial intelligence to decide who to refer to an ophthalmologist. "Think of AI as an assistant for people who don't have much experience," Chiang said.

Such screenings might be possible even without the assistance of physicians. "You've got cameras coming out that can be used in a nonclinical setting, like a CVS, to screen people," Goldbaum suggested.

Goldbaum and Chiang have disclosed no relevant financial relationships.

American Academy of Ophthalmology (AAO) 2018 Annual Meeting. Presented October 27, 2018.

Follow Medscape on Twitter @Medscape and Laird Harrison @LairdH
