
Category: AI MAMMO

June 16, 2025 by g4qwj

Classification of Mammographic Breast Microcalcifications Using a Deep Convolutional Neural Network: A BI-RADS–Based Approach

Abstract

Purpose 

The goal of this retrospective cohort study was to investigate the potential of a deep convolutional neural network (dCNN) to accurately classify microcalcifications in mammograms with the aim of obtaining a standardized observer-independent microcalcification classification system based on the Breast Imaging Reporting and Data System (BI-RADS) catalog.

Materials and Methods 

Over 56,000 images from 268 mammograms of 94 patients were labeled into 3 classes according to the BI-RADS standard: “no microcalcifications” (BI-RADS 1), “probably benign microcalcifications” (BI-RADS 2/3), and “suspicious microcalcifications” (BI-RADS 4/5). Using the preprocessed images, a dCNN was trained and validated, generating 3 types of models: BI-RADS 4 cohort, BI-RADS 5 cohort, and BI-RADS 4 + 5 cohort. For the final validation of the trained dCNN models, a test data set of 141 images from 51 mammograms of 26 patients, labeled according to the BI-RADS classification in the radiological reports, was used. The performance of the dCNN models was evaluated by classifying each of the mammograms and computing the accuracy against the classification from the radiological reports. For visualization, probability maps of the classification were generated.
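To make the training setup above more tangible, here is a minimal sketch (not the authors' code) of a small convolutional classifier with a softmax over the three BI-RADS-based classes, written in Keras; the patch size, layer widths, and training settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a small CNN that assigns mammogram
# patches to the three BI-RADS-based classes used in the study.
# Patch size, layer widths, and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["no_microcalcifications",      # BI-RADS 1
           "probably_benign",             # BI-RADS 2/3
           "suspicious"]                  # BI-RADS 4/5

def build_dcnn(input_shape=(128, 128, 1), n_classes=len(CLASSES)):
    """Small convolutional classifier with a softmax over the 3 classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in the study, labeled mammogram patches were used.
    x = np.random.rand(32, 128, 128, 1).astype("float32")
    y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 32), 3)
    model = build_dcnn()
    model.fit(x, y, epochs=1, batch_size=8, verbose=0)
    print(model.predict(x[:2], verbose=0).round(3))  # per-class probabilities
```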

Results 

The accuracy on the validation set after 130 epochs was 99.5% for the BI-RADS 4 cohort, 99.6% for the BI-RADS 5 cohort, and 98.1% for the BI-RADS 4 + 5 cohort. Confusion matrices of the “real-world” test data set were generated for the 3 cohorts, with the radiological reports serving as ground truth. The resulting accuracy was 39.0% for the BI-RADS 4 cohort, 80.9% for the BI-RADS 5 cohort, and 76.6% for the BI-RADS 4 + 5 cohort. The probability maps exhibited excellent image quality with correct classification of the microcalcification distribution.
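The evaluation step, comparing the dCNN output with the BI-RADS labels from the radiological reports, can be illustrated with a few lines of scikit-learn; the label vectors below are invented solely to show the mechanics.

```python
# Illustrative only: confusion matrix and accuracy of dCNN predictions against
# BI-RADS labels taken from radiological reports (used as ground truth).
from sklearn.metrics import confusion_matrix, accuracy_score

LABELS = ["BI-RADS 1", "BI-RADS 2/3", "BI-RADS 4/5"]

# Hypothetical example labels; in the study these come from the test mammograms.
ground_truth = ["BI-RADS 1", "BI-RADS 2/3", "BI-RADS 4/5", "BI-RADS 4/5", "BI-RADS 1"]
predictions  = ["BI-RADS 1", "BI-RADS 4/5", "BI-RADS 4/5", "BI-RADS 2/3", "BI-RADS 1"]

cm = confusion_matrix(ground_truth, predictions, labels=LABELS)
print(cm)                                          # rows: truth, columns: prediction
print("accuracy:", accuracy_score(ground_truth, predictions))
```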

Conclusions 

dCNNs can be trained to successfully classify microcalcifications on mammograms according to the BI-RADS classification system, so that they can act as a standardized quality control tool providing the expertise of a team of radiologists.


June 16, 2025 by g4qwj

Automatic and standardized quality assurance of digital mammography and tomosynthesis with deep convolutional neural networks

Abstract

Objectives

The aim of this study was to develop and validate a commercially available AI platform for the automatic determination of image quality in mammography and tomosynthesis based on a standardized set of features.

Materials and methods

In this retrospective study, 11,733 mammograms and synthetic 2D reconstructions from tomosynthesis of 4200 patients from two institutions were analyzed by assessing the presence of seven features that affect image quality with regard to breast positioning. Deep learning was applied to train five dCNN models to detect the presence of anatomical landmarks and three dCNN models for localization features. The validity of the models was assessed by calculating the mean squared error on a test dataset and by comparison with the readings of experienced radiologists.
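As a rough illustration of the two kinds of models described above, the sketch below pairs a classification-style network (landmark present or not) with a regression-style network (a continuous positioning measurement) evaluated by mean squared error; the backbone, input size, and features named in the comments are assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions, not the authors' models): one classification-style
# dCNN that detects whether a landmark (e.g. the nipple) is depicted, and one
# regression-style dCNN that predicts a positioning measurement (e.g. an angle),
# evaluated with mean squared error on a held-out test set.
import numpy as np
from tensorflow.keras import layers, models

def backbone(input_shape=(256, 256, 1)):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
    ])

# Landmark-presence model: binary output, cross-entropy loss.
landmark_model = models.Sequential([backbone(), layers.Dense(1, activation="sigmoid")])
landmark_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Localization model: continuous output (e.g. angle in degrees), MSE loss.
localization_model = models.Sequential([backbone(), layers.Dense(1, activation="linear")])
localization_model.compile(optimizer="adam", loss="mse")

# Dummy test data; in the study, radiologist readings serve as the reference.
x_test = np.random.rand(8, 256, 256, 1).astype("float32")
y_true_angle = np.random.uniform(0, 90, size=(8, 1)).astype("float32")
y_pred_angle = localization_model.predict(x_test, verbose=0)
print("test MSE:", float(np.mean((y_true_angle - y_pred_angle) ** 2)))
```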

Results

Accuracies of the dCNN models ranged between 93.0% for nipple visualization and 98.5% for depiction of the pectoralis muscle in the CC view. Calculations based on regression models allow for precise measurements of distances and angles of breast positioning on mammograms and synthetic 2D reconstructions from tomosynthesis. All models showed almost perfect agreement with human reading, with Cohen’s kappa scores above 0.9.
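The agreement statistic quoted above can be computed directly with scikit-learn; the ratings in this snippet are invented for illustration.

```python
# Illustrative computation of Cohen's kappa between the dCNN rating and a
# human reading of the same image quality feature (invented example labels).
from sklearn.metrics import cohen_kappa_score

human = ["adequate", "inadequate", "adequate", "adequate", "inadequate", "adequate"]
dcnn  = ["adequate", "inadequate", "adequate", "inadequate", "inadequate", "adequate"]
print("Cohen's kappa:", round(cohen_kappa_score(human, dcnn), 2))
```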

Conclusions

An AI-based quality assessment system using a dCNN allows for precise, consistent and observer-independent rating of digital mammography and synthetic 2D reconstructions from tomosynthesis. Automation and standardization of quality assessment enable real-time feedback to technicians and radiologists, which should reduce the number of examinations rated inadequate according to PGMI (Perfect, Good, Moderate, Inadequate) criteria, reduce the number of recalls, and provide a dependable training platform for inexperienced technicians.

Key points

  1. Deep convolutional neural network (dCNN) models have been trained for classification of mammography imaging quality features.

  2. AI can reliably classify diagnostic image quality of mammography and tomosynthesis.

  3. Quality control of mammography and tomosynthesis can be automated.


June 16, 2025 by g4qwj

Fully automatic classification of automated breast ultrasound (ABUS) imaging according to BI-RADS using a deep convolutional neural network

Abstract

Purpose

The aim of this study was to develop and test a post-processing technique for detection and classification of lesions according to the BI-RADS atlas in automated breast ultrasound (ABUS) based on deep convolutional neural networks (dCNNs).

Methods and materials

In this retrospective study, 645 ABUS datasets from 113 patients were included; 55 patients had lesions classified as high malignancy probability. Lesions were categorized as BI-RADS 2 (no suspicion of malignancy), BI-RADS 3 (probability of malignancy < 3%), and BI-RADS 4/5 (probability of malignancy > 3%). A deep convolutional neural network was trained after data augmentation with images of lesions and normal breast tissue, and a sliding-window approach for lesion detection was implemented. The algorithm was applied to a test dataset containing 128 images, and performance was compared with the readings of 2 experienced radiologists.
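A sliding-window detector of the kind mentioned above can be sketched in a few lines; window size, stride, and the stand-in classifier below are illustrative assumptions rather than the published implementation.

```python
# Minimal sliding-window sketch (illustrative assumptions, not the published code):
# a trained patch classifier is slid across an ABUS slice and the class
# probabilities are collected per window position for lesion detection.
import numpy as np

def sliding_window_predict(image, classify_patch, window=64, stride=32):
    """Return an array of per-window class probabilities over the slice."""
    h, w = image.shape
    rows = []
    for y in range(0, h - window + 1, stride):
        row = []
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window]
            row.append(classify_patch(patch))   # e.g. [p_birads2, p_birads3, p_birads45]
        rows.append(row)
    return np.array(rows)

if __name__ == "__main__":
    slice_img = np.random.rand(256, 256)                     # stand-in ABUS slice
    dummy_classifier = lambda p: np.array([0.7, 0.2, 0.1])   # stand-in for the dCNN
    prob_map = sliding_window_predict(slice_img, dummy_classifier)
    print(prob_map.shape)   # (rows, cols, 3) grid of class probabilities
```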

Results

On single images, the dCNN achieved an accuracy of 79.7% and an AUC of 0.91 [95% CI: 0.85–0.96] for categorization according to BI-RADS. Moderate agreement between the dCNN and the ground truth was achieved (κ: 0.57 [95% CI: 0.50–0.64]), which is comparable to human readers. Analysis of the whole dataset improved the categorization accuracy to 90.9% with an AUC of 0.91 [95% CI: 0.77–1.00], and achieved almost perfect agreement with the ground truth (κ: 0.82 [95% CI: 0.69–0.95]), performing on par with human readers. Furthermore, the object localization technique allowed slice-wise detection of the lesion position.
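The abstract does not state how the confidence intervals were derived; one common way to obtain a 95% CI for an AUC is bootstrapping, sketched here with invented labels and scores.

```python
# One common way to obtain a 95% CI for the AUC is bootstrapping (it is not
# stated that the authors used exactly this method); labels/scores are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true  = rng.integers(0, 2, size=200)                    # 0 = benign, 1 = suspicious
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))       # resample with replacement
    if len(set(y_true[idx])) < 2:                         # need both classes present
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC {roc_auc_score(y_true, y_score):.2f} [95% CI: {lo:.2f}-{hi:.2f}]")
```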

Conclusions

Our results show that a dCNN can be trained to detect and distinguish lesions in ABUS according to the BI-RADS classification with an accuracy similar to that of experienced radiologists.

Key Points

 A deep convolutional neural network (dCNN) was trained for classification of ABUS lesions according to the BI-RADS atlas.

 A sliding-window approach allows accurate automatic detection and classification of lesions in ABUS examinations.


June 16, 2025 by g4qwj

Diagnostic accuracy of automated ACR BI-RADS breast density classification using deep convolutional neural networks

Abstract

Objectives

High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two deep convolutional neural networks (DCNNs), one for MLO and one for CC views, for automatic breast density classification on synthetic 2D tomosynthesis reconstructions.

Methods

In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to the ACR (American College of Radiology) density categories (A-D). Two DCNNs, each with 11 convolutional layers and 3 fully connected layers, were trained on 70% of the data, while 20% was used for validation. The remaining 10% served as a separate test dataset of 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with 2 and reader 2 with 11 years of dedicated experience in breast imaging), and their consensus served as the reference standard. Inter- and intra-reader reliabilities were assessed by calculating Cohen’s kappa coefficients, and diagnostic accuracy measures of the automated classification were evaluated.
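The architecture and data split described above can be sketched as follows; only the counts of convolutional and fully connected layers and the 70/20/10 split follow the text, while filter sizes, pooling placement, and other settings are assumptions.

```python
# Sketch of a DCNN with 11 convolutional and 3 fully connected layers, as described
# above, plus a 70/20/10 train/validation/test split. Filter counts, kernel sizes,
# and pooling placement are assumptions; only the layer counts follow the text.
import numpy as np
from tensorflow.keras import layers, models

def build_density_dcnn(input_shape=(256, 256, 1), n_classes=4):   # ACR A-D
    model = models.Sequential([layers.Input(shape=input_shape)])
    filters = [32, 32, 64, 64, 128, 128, 128, 256, 256, 256, 256]  # 11 conv layers
    for i, f in enumerate(filters):
        model.add(layers.Conv2D(f, 3, activation="relu", padding="same"))
        if i % 2 == 1:
            model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))                # 3 fully connected
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# 70/20/10 split of image indices (one such model each for the MLO and CC views).
n_images = 4605
rng = np.random.default_rng(42)
idx = rng.permutation(n_images)
train_idx, val_idx, test_idx = np.split(idx, [int(0.7 * n_images), int(0.9 * n_images)])
print(len(train_idx), len(val_idx), len(test_idx))
```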

Results

The two models for MLO and CC projections had a mean sensitivity of 80.4% (95%-CI 72.2–86.9), a specificity of 89.3% (95%-CI 85.4–92.3), and an accuracy of 89.6% (95%-CI 88.1–90.9) in the differentiation between ACR A/B and ACR C/D. The DCNN-versus-human agreement and the inter-reader agreement were both “substantial” (Cohen’s kappa: 0.61 versus 0.63).
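For readers who want to see how such figures are derived, the snippet below computes sensitivity, specificity, and accuracy for the binarized ACR A/B versus C/D task from a 2x2 table; the counts are made up purely to illustrate the formulas.

```python
# Illustrative computation (made-up counts): sensitivity, specificity and accuracy
# for the binarized task of separating ACR A/B from ACR C/D density categories.
tp, fn = 82, 20     # dense (C/D) correctly / incorrectly called non-dense
tn, fp = 301, 36    # non-dense (A/B) correctly / incorrectly called dense

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, accuracy {accuracy:.1%}")
```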

Conclusion

The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system.

Key Points

 A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions.

 The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.

 


June 16, 2025 by g4qwj

Classification of Mammographic Breast Microcalcifications Using a Deep Convolutional Neural Network

Abstract

The aim of this study was to investigate the potential of a machine learning algorithm to classify breast cancer solely by the presence of soft tissue opacities in mammograms, independent of other morphological features, using a deep convolutional neural network (dCNN). Soft tissue opacities were classified based on their radiological appearance using the ACR BI-RADS atlas. We included 1744 mammograms from 438 patients to create 7242 icons by manual labeling. The icons were sorted into three categories: “no opacities” (BI-RADS 1), “probably benign opacities” (BI-RADS 2/3) and “suspicious opacities” (BI-RADS 4/5). A dCNN was trained (70% of data), validated (20%) and finally tested (10%). A sliding window approach was applied to create colored probability maps for visual impression. Diagnostic performance of the dCNN was compared to human readout by experienced radiologists on a “real-world” dataset. The accuracies of the models on the test dataset ranged between 73.8% and 89.8%. Compared to human readout, our dCNN achieved a higher specificity (100%, 95% CI: 85.4–100%; reader 1: 86.2%, 95% CI: 67.4–95.5%; reader 2: 79.3%, 95% CI: 59.7–91.3%), and the sensitivity (84.0%, 95% CI: 63.9–95.5%) was lower than that of human readers (reader 1: 88.0%, 95% CI: 67.4–95.4%; reader 2: 88.0%, 95% CI: 67.7–96.8%). In conclusion, a dCNN can be used for the automatic detection as well as the standardized and observer-independent classification of soft tissue opacities in mammograms independent of the presence of microcalcifications. Human decision making in accordance with the BI-RADS classification can be mimicked by artificial intelligence.
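The colored probability maps mentioned in the abstract can be approximated by upsampling a grid of sliding-window probabilities and overlaying it on the mammogram; the sketch below uses random stand-in data and matplotlib, and is not the authors' visualization code.

```python
# Sketch (illustrative assumptions): turning a grid of per-window "suspicious"
# probabilities into a colored probability map overlaid on the mammogram.
import numpy as np
import matplotlib.pyplot as plt

mammogram = np.random.rand(512, 512)          # stand-in image
prob_grid = np.random.rand(16, 16)            # stand-in sliding-window probabilities

# Upsample the coarse probability grid to image size (nearest-neighbour repeat).
scale_y = mammogram.shape[0] // prob_grid.shape[0]
scale_x = mammogram.shape[1] // prob_grid.shape[1]
prob_map = np.kron(prob_grid, np.ones((scale_y, scale_x)))

plt.imshow(mammogram, cmap="gray")
plt.imshow(prob_map, cmap="jet", alpha=0.35)  # colored overlay of class probability
plt.axis("off")
plt.savefig("probability_map.png", dpi=150)
```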
 
