From the Journals

AI-supported breast screens may reduce radiologist workload

FROM LANCET ONCOLOGY

Artificial intelligence (AI)–supported breast cancer screening appears safe and at least as accurate as standard double reading of mammograms by two breast radiologists, according to early results from a large, randomized, population-based trial.

The AI-supported screening also reduced radiologist workload by nearly 44%, researchers estimated.

The trial also found a 20% increase in cancer detection using AI support compared with routine double mammography reading, underscoring AI’s potential to improve screening accuracy and efficiency.

The findings, published online in Lancet Oncology, come from a planned interim safety analysis of the Swedish Mammography Screening with Artificial Intelligence (MASAI) trial.

To date, AI has shown promise in mammography screening, with retrospective evidence demonstrating accuracy comparable to that of standard double reading as well as reduced workload for radiologists. Still, randomized trials assessing the efficacy of AI-supported breast screening are needed.

The aim of the current interim analysis of the randomized trial was to assess early screening performance, including cancer detection rates, recall rates, and false-positive rates, as well as the types of cancer detected and screen-reading workload.

The MASAI trial randomized 80,033 women, with a median age of 54, to AI-supported screening (n = 40,003) or double reading without AI (n = 40,030).

The AI system provided malignancy risk scores from 1 to 10, with low-risk scores ranging from 1 to 7, intermediate risk from 8 to 9, and high risk at 10. These risk scores were used to triage screening exams to single radiologist reading (scores of 1-9) or double reading (score of 10), given that cancer prevalence “increases sharply” for those with a risk score of 10, the researchers explained. The AI system also provided radiologists with computer-aided detection marks for exams with risk scores of 8-10.

Among nearly 40,000 women screened with AI support, 244 cancers were detected, including 184 invasive cancers (75%) and 60 in situ cancers (25%), and 861 participants were recalled. Among the 40,024 participants receiving standard screening, radiologists detected 203 cancers, including 165 invasive cancers (81%) and 38 in situ cancers (19%), and 817 participants were recalled.

Overall, the cancer detection rate was 6.1 per 1,000 screened participants with AI support versus 5.1 per 1,000 with standard screening. The recall rates were 2.2% versus 2.0%, respectively.

The false-positive rates were the same in both groups (1.5%), while the positive predictive value (PPV) of recall (the likelihood that a recalled participant was ultimately diagnosed with cancer) was higher in the AI group: 28.3% versus 24.8%.
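These percentages follow directly from the counts reported above; as a simple check (not a calculation presented this way in the paper):

\[
\mathrm{PPV}_{\text{AI}} = \frac{244}{861} \approx 28.3\%, \qquad \mathrm{PPV}_{\text{standard}} = \frac{203}{817} \approx 24.8\%
\]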

The cancer detection rate in the high-risk group (patients with a risk score of 10) was 72.3 per 1,000 participants screened, or one cancer per 14 screening exams. Overall, 189 of the 490 screening exams flagged by AI as extra-high risk (the highest 1% of risk) were recalled. Of those 189 recalled participants, 136 had cancer, representing a PPV of recall of 72%.

Overall, “we found that the benefit of AI-supported screening in terms of screen-reading workload reduction was considerable,” the authors said.

Assuming a radiologist can read 50 mammograms an hour, the researchers estimated that it would take a radiologist 4.6 fewer months to read the more than 46,000 screening exams in the intervention group than to read the more than 83,000 exams in the control group.
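A rough sketch of the arithmetic behind that estimate (the difference of roughly 37,000 readings is rounded from the figures above, and the monthly reading hours are an assumption not stated in the article):

\[
\frac{\approx 37{,}000\ \text{fewer readings}}{50\ \text{readings per hour}} \approx 740\ \text{hours} \approx 4.6\ \text{months at roughly 160 reading hours per month}
\]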

Although these early safety results are “promising,” the findings “are not enough on their own to confirm that AI is ready to be implemented in mammography screening,” lead author Kristina Lång, PhD, of Lund (Sweden) University, said in a press release.

“We still need to understand the implications on patients’ outcomes, especially whether combining radiologists’ expertise with AI can help detect interval cancers that are often missed by traditional screening, as well as the cost-effectiveness of the technology,” she said, adding that “the greatest potential of AI right now is that it could allow radiologists to be less burdened by the excessive amount of reading.”

In an accompanying editorial, Nereo Segnan, MD, and Antonio Ponti, MD, both of CPO Piemonte in Torino, Italy, said that the AI risk score for breast cancer in the trial “seems very accurate at being able to separate high-risk from low-risk women.”

However, the potential for overdiagnosis or overdetection of indolent lesions in the intervention group should “prompt caution in the interpretation of results that otherwise seem straightforward in favoring the use of AI,” the editorialists noted.

The authors agreed that the increased detection of in situ cancers with AI-supported screening compared with standard screening (25% versus 19% of detected cancers) “could be concerning in terms of overdiagnosis,” as the risk of overtreatment is greater with these low-grade cancers.

In the final analysis, Dr. Lång and colleagues plan to characterize the biological features of detected lesions to provide further insight on AI-supported screening, including the risk for overdiagnosis.

In a statement to the U.K.-based Science Media Centre, Stephen Duffy, professor of cancer screening, Wolfson Institute of Population Health, Queen Mary University of London, commented that the “results illustrate the potential for artificial intelligence to reduce the burden on radiologists’ time,” which is “an issue of considerable importance in many breast screening programs.”

The MASAI study was funded by the Swedish Cancer Society, the Confederation of Regional Cancer Centres, and government funding for clinical research. Dr. Lång has been an advisory board member for Siemens Healthineers and has received lecture honorarium from AstraZeneca. Dr. Segnan and Dr. Ponti reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.
