
Intravoxel Incoherent Motion: Model-Free Determination of Tissue Type in Abdominal Organs Using Machine Learning.

Authors: Ciritsis A, Rossi C, Wurnig MC, Phi Van V, Boss A

Abstract

PURPOSE:
For diffusion data sets including low and high b-values, the intravoxel incoherent motion model is commonly applied to characterize tissue. The aim of the present study was to show that machine learning allows a model-free approach to determine tissue type without a priori assumptions on the underlying physiology.
MATERIALS AND METHODS:
In 8 healthy volunteers, diffusion data sets were acquired using an echo-planar imaging sequence with 16 b-values in the range between 0 and 1000 s/mm². Using the k-nearest neighbors technique, the machine learning algorithm was trained to distinguish abdominal organs (liver, kidney, spleen, muscle) using the signal intensities at different b-values as training features. Model complexity (the number of neighbors, K) was systematically varied, and performance was assessed by calculating the accuracy and the kappa coefficient (κ). The most important b-values for tissue discrimination were determined by principal component analysis.
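
A minimal scikit-learn sketch of this training and model-selection loop is given below. All data are synthetic stand-ins: the per-organ IVIM parameters (perfusion fraction f, diffusion coefficient D, pseudo-diffusion coefficient D*) are illustrative values, not the volunteers' measured signal decay curves.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
b_values = np.linspace(0, 1000, 16)  # 16 b-values, s/mm^2

# Synthetic stand-in for the measured decay curves: biexponential IVIM
# signal S(b)/S0 = (1 - f)*exp(-b*D) + f*exp(-b*D*); parameters illustrative.
organs = {"liver": (0.25, 1.1e-3, 20e-3), "kidney": (0.30, 1.9e-3, 25e-3),
          "spleen": (0.10, 0.8e-3, 15e-3), "muscle": (0.08, 1.5e-3, 30e-3)}
X, y = [], []
for organ, (f, D, D_star) in organs.items():
    for _ in range(300):
        signal = (1 - f) * np.exp(-b_values * D) + f * np.exp(-b_values * D_star)
        X.append(signal + rng.normal(0, 0.02, b_values.size))  # add noise
        y.append(organ)
X, y = np.asarray(X), np.asarray(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Systematically vary model complexity (number of neighbors K)
for k in range(1, 22, 2):
    pred = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).predict(X_test)
    print(f"K={k:2d}  accuracy={accuracy_score(y_test, pred):.3f}  "
          f"kappa={cohen_kappa_score(y_test, pred):.3f}")
```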

RESULTS:
The optimal trade-off between model complexity and overfitting was found in the range between K = 11 and K = 13. On "real-world" data not previously used to optimize the algorithm, the k-nearest neighbors algorithm was able to distinguish tissue types accurately, with a best accuracy of 94.5% and κ = 0.92 reached at intermediate model complexity (K = 11). The principal component analysis showed that the most important b-values are (in decreasing order of importance): b = 1000 s/mm², b = 970 s/mm², b = 750 s/mm², b = 20 s/mm², b = 620 s/mm², and b = 40 s/mm². With a reduced set of the 6 most important b-values, a similar accuracy was still achieved on the real-world data set, with an average accuracy of 93.7% and a κ coefficient of 0.91.
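
The abstract does not state how the PCA loadings were converted into a per-b-value ranking; one common choice, sketched below as a continuation of the synthetic X and b_values from the k-NN example above, is to weight the absolute loadings of each component by its explained variance ratio and sum per feature.

```python
# Continues from the synthetic X and b_values of the k-NN sketch above.
import numpy as np
from sklearn.decomposition import PCA

pca = PCA().fit(X)
# Weight the absolute loadings of each principal component by its
# explained variance ratio, then sum per feature (one common ranking scheme).
weights = np.abs(pca.components_).T @ pca.explained_variance_ratio_
for b, w in sorted(zip(b_values, weights), key=lambda t: -t[1])[:6]:
    print(f"b = {b:6.1f} s/mm^2   weight = {w:.4f}")
```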

CONCLUSIONS:
Machine learning allows for a model-free determination of tissue type using intravoxel incoherent motion signal decay curves as features. The technique may be useful for the segmentation of abdominal organs or the distinction between healthy and pathological tissues.


Determination of mammographic breast density using a deep convolutional neural network.

Authors: Ciritsis A, Rossi C, Vittoria De Martini I, Eberhard M, Marcon M, Becker AS, Berger N, Boss A

Abstract

OBJECTIVE:
High breast density is a risk factor for breast cancer. The aim of this study was to develop a deep convolutional neural network (dCNN) for the automatic classification of breast density based on the mammographic appearance of the tissue according to the American College of Radiology Breast Imaging Reporting and Data System (ACR BI-RADS) Atlas.

METHODS:
In this study, 20,578 single-view mammograms from 5221 different patients (aged 58.3 ± 11.5 years) were downloaded from the picture archiving and communication system of our institution and automatically sorted according to the ACR density (a-d) provided by the corresponding radiological reports. A dCNN with 11 convolutional layers and 3 fully connected layers was trained and validated on an augmented dataset. The model was then tested on two separate datasets against: i) the radiological reports and ii) the consensus decision of two human readers. Neither test dataset was part of the dataset used for training and validation of the algorithm.
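
A hedged Keras sketch of such an architecture (11 convolutional plus 3 fully connected layers) is given below; the filter counts, kernel sizes, pooling positions, and input resolution are assumptions, since the abstract does not specify them.

```python
from tensorflow.keras import layers, models

def build_density_dcnn(input_shape=(256, 256, 1), n_classes=4):
    """dCNN with 11 convolutional and 3 fully connected layers (ACR a-d)."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    conv_filters = [32, 32, 64, 64, 128, 128, 128, 256, 256, 256, 512]  # 11 conv layers
    for i, n_filters in enumerate(conv_filters):
        model.add(layers.Conv2D(n_filters, 3, padding="same", activation="relu"))
        if i % 2 == 1:
            model.add(layers.MaxPooling2D(2))  # downsample every second layer
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation="relu"))           # FC layer 1
    model.add(layers.Dense(256, activation="relu"))            # FC layer 2
    model.add(layers.Dense(n_classes, activation="softmax"))   # FC layer 3: ACR a-d
    return model

model = build_density_dcnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```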

RESULTS:
The optimal number of epochs was 91 for medio-lateral oblique (MLO) projections and 94 for cranio-caudal (CC) projections. The accuracy for MLO projections obtained on the validation dataset was 90.9% (CC: 90.1%). Tested on the first test dataset of mammograms (850 MLO and 880 CC), the algorithm showed an accordance with the corresponding radiological reports of 71.7% for MLO and 71.0% for CC. The agreement with the radiological reports improved in the differentiation between dense and fatty breasts for both projections (MLO: 88.6%; CC: 89.9%). In the second test dataset of 200 mammograms, good accordance was found between the consensus decision of the two readers and both the MLO model (92.2%) and the CC model (87.4%). In the differentiation between fatty (ACR A/B) and dense (ACR C/D) breasts, the agreement reached 99% for the MLO and 96% for the CC projections.

CONCLUSIONS:
The dCNN allows for accurate classification of breast density based on the ACR BI-RADS system. The proposed technique may allow accurate, standardized, and observer-independent breast density evaluation of mammograms.
ADVANCES IN KNOWLEDGE:
Standardized classification of mammograms by a dCNN could lead to a reduction of falsely classified breast densities, thereby allowing for a more accurate breast cancer risk assessment for the individual patient and a more reliable decision on whether additional ultrasound is recommended.


Automatic Classification of Ultrasound Breast Lesions Using a Deep Convolutional Neural Network Mimicking Human Decision-Making

Authors: Ciritsis A, Rossi C, Eberhard M, Marcon M, Becker AS, Boss A

Abstract

OBJECTIVES:
To evaluate a deep convolutional neural network (dCNN) for detection, highlighting, and classification of ultrasound (US) breast lesions mimicking human decision-making according to the Breast Imaging Reporting and Data System (BI-RADS).

METHODS AND MATERIALS:
One thousand nineteen breast ultrasound images from 582 patients (age 56.3 ± 11.5 years) were linked to the corresponding radiological reports. Lesions were categorized into the following classes: no tissue, normal breast tissue, BI-RADS 2 (cysts, lymph nodes), BI-RADS 3 (non-cystic mass), and BI-RADS 4-5 (suspicious). To test the accuracy of the dCNN, one internal test dataset (101 images) and one external test dataset (43 images) were evaluated by the dCNN and by two independent readers. Radiological reports, histopathological results, and follow-up examinations served as reference. The performance of the dCNN and of the human readers was quantified in terms of classification accuracies and receiver operating characteristic (ROC) curves.
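
The evaluation in terms of accuracy and ROC curves can be sketched as follows for one of the binary splits (BI-RADS 2-3 versus BI-RADS 4-5); truth and scores are synthetic placeholders standing in for the reference labels and the dCNN's probability of the suspicious class.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_curve, auc

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=144)  # 0 = BI-RADS 2-3, 1 = BI-RADS 4-5
# Synthetic classifier scores: suspicious lesions get higher probabilities
scores = np.clip(truth * 0.6 + rng.normal(0.3, 0.2, truth.size), 0, 1)

fpr, tpr, _ = roc_curve(truth, scores)
print(f"accuracy = {accuracy_score(truth, scores > 0.5):.3f}")
print(f"AUC      = {auc(fpr, tpr):.3f}")
```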

RESULTS:
In the internal test dataset, the classification accuracy of the dCNN in differentiating BI-RADS 2 from BI-RADS 3-5 lesions was 87.1% (external: 93.0%), compared with 79.2 ± 1.9% for the human readers (external: 95.3 ± 2.3%). For the classification of BI-RADS 2-3 versus BI-RADS 4-5, the dCNN reached a classification accuracy of 93.1% (external: 95.3%), whereas the human readers reached 91.6 ± 5.4% (external: 94.1 ± 1.2%). The AUC on the internal dataset was 83.8 (external: 96.7) for the dCNN and 84.6 ± 2.3 (external: 90.9 ± 2.9) for the human readers.
CONCLUSION:
dCNNs may be used to mimic human decision-making in the evaluation of single US images of breast lesions according to the BI-RADS catalog. The technique reaches high accuracies and may help standardize the highly observer-dependent US assessment.

KEY POINTS:
• Deep convolutional neural networks could be used to classify US breast lesions.
• The implemented dCNN with its sliding-window approach reaches high accuracies in the classification of US breast lesions (see the sketch below).
• Deep convolutional neural networks may serve for standardization in US BI-RADS classification.
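
A minimal sketch of the sliding-window idea from the key points: a trained patch classifier (here a dummy stand-in for the dCNN) is slid over the image so that lesions can be localized and highlighted. Patch size and stride are assumptions.

```python
import numpy as np

def sliding_window_map(image, classify_patch, patch=64, stride=16):
    """Apply a patch classifier over a grid of overlapping windows."""
    rows = (image.shape[0] - patch) // stride + 1
    cols = (image.shape[1] - patch) // stride + 1
    labels = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            labels[i, j] = classify_patch(image[y:y + patch, x:x + patch])
    return labels  # coarse class map; suspicious windows can be highlighted

# Usage with a dummy threshold classifier standing in for the trained dCNN:
demo_map = sliding_window_map(np.random.rand(256, 256),
                              lambda p: int(p.mean() > 0.5))
print(demo_map.shape)  # (13, 13)
```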


Automated pixel-wise brain tissue segmentation of diffusion-weighted images via machine learning

Authors: Ciritsis A, Rossi C, Eberhard M, Marcon M, Becker AS, Boss A

Abstract

The diffusion-weighted (DW) MR signal sampled over a wide range of b-values potentially allows for tissue differentiation in terms of cellularity, microstructure, perfusion, and T2 relaxivity. This study aimed to implement a machine learning algorithm for automatic brain tissue segmentation from DW-MRI datasets, and to determine the optimal subset of features for accurate segmentation. DWI was performed at 3 T in eight healthy volunteers using 15 b-values and 20 diffusion-encoding directions. The pixel-wise signal attenuation, as well as the trace and fractional anisotropy (FA) of the diffusion tensor, were used as features to train a support vector machine classifier for gray matter, white matter, and cerebrospinal fluid classes. The datasets of two volunteers were used for validation. For each subject, tissue classification was also performed on 3D T1-weighted datasets with a probabilistic framework. Confusion matrices were generated for quantitative assessment of image classification accuracy in comparison with the reference method. DWI-based tissue segmentation resulted in an accuracy of 82.1% on the validation dataset and of 82.2% on the training dataset, ruling out relevant model overfitting. A mean Dice similarity coefficient (DSC) of 0.79 ± 0.08 was found. About 50% of the classification performance was attributable to five features (the signals measured at b-values of 5, 10, 500, and 1200 s/mm², and the FA). This reduced feature set led to almost identical performance for the validation (82.2%) and training (81.4%) datasets (DSC = 0.79 ± 0.08). Machine learning techniques applied to DWI data allow for accurate brain tissue segmentation based on both morphological and functional information.
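
A rough sketch of this pixel-wise classification and its evaluation is given below, with synthetic placeholder data (15 b-value signals plus trace and FA per voxel, three tissue classes); the SVM kernel, train/test split, and feature layout are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_voxels = 3000
X = rng.normal(size=(n_voxels, 17))    # 15 b-value signals + trace + FA
y = rng.integers(0, 3, size=n_voxels)  # 0 = GM, 1 = WM, 2 = CSF
X[np.arange(n_voxels), y] += 2.0       # make the synthetic classes separable

svm = SVC(kernel="rbf").fit(X[:2000], y[:2000])
pred = svm.predict(X[2000:])
print(confusion_matrix(y[2000:], pred))  # rows: reference, columns: prediction

def dice(ref, seg, cls):
    """Dice similarity coefficient for one tissue class."""
    a, b = ref == cls, seg == cls
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print([round(dice(y[2000:], pred, c), 3) for c in range(3)])
```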
