Radiological images and machine learning: Trends, perspectives, and prospects
Radiological images and machine learning: Trends, perspectives, and prospects (2019)
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging, and positron emission tomography. In many applications, machine learning based systems have shown performance comparable to human decision-making. The applications of machine learning are the key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. In parallel, we briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.
Keywords: Deep learning | Machine learning | Imaging modalities | Deep neural network
State-of-the-Art Deep Learning in Cardiovascular Image Analysis
State-of-the-art deep learning in cardiovascular image analysis (2019)
Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. For medical professionals, it is important to keep track of these developments to ensure that deep learning can have meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of the state-of-the-art deep learning in cardiovascular imaging is provided. This review discusses >80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machine learning algorithms were used throughout these papers, with the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms on clinical practice, now and in the near future, are discussed. (J Am Coll Cardiol Img 2019;12:1549–65) © 2019 by the American College of Cardiology Foundation.
Identification and Quantification of Cardiovascular Structures From CCTA
Identification and quantification of cardiovascular structures from CCTA (2019)
OBJECTIVES This study designed and evaluated an end-to-end deep learning solution for cardiac segmentation and quantification. BACKGROUND Segmentation of cardiac structures from coronary computed tomography angiography (CCTA) images is laborious. We designed an end-to-end deep-learning solution. METHODS Scans were obtained from multicenter registries of 166 patients who underwent clinically indicated CCTA. Left ventricular volume (LVV) and right ventricular volume (RVV), left atrial volume (LAV) and right atrial volume (RAV), and left ventricular myocardial mass (LVM) were manually annotated as ground truth. A U-Net-inspired deep-learning model was trained, validated, and tested in a 70:20:10 split. RESULTS Mean age was 61.1 ± 8.4 years, and 49% were women. A combined overall median Dice score of 0.9246 (interquartile range: 0.8870 to 0.9475) was achieved. The median Dice scores for LVV, RVV, LAV, RAV, and LVM were 0.938 (interquartile range: 0.887 to 0.958), 0.927 (interquartile range: 0.916 to 0.946), 0.934 (interquartile range: 0.899 to 0.950), 0.915 (interquartile range: 0.890 to 0.920), and 0.920 (interquartile range: 0.811 to 0.944), respectively. Model prediction correlated and agreed well with manual annotation for LVV (r = 0.98), RVV (r = 0.97), LAV (r = 0.78), RAV (r = 0.97), and LVM (r = 0.94) (p < 0.05 for all). Mean difference and limits of agreement for LVV, RVV, LAV, RAV, and LVM were 1.20 ml (95% CI: −7.12 to 9.51), −0.78 ml (95% CI: −10.08 to 8.52), −3.75 ml (95% CI: −21.53 to 14.03), 0.97 ml (95% CI: −6.14 to 8.09), and 6.41 g (95% CI: −8.71 to 21.52), respectively. CONCLUSIONS A deep-learning model rapidly segmented and quantified cardiac structures. This was done with high accuracy on a pixel level, with good agreement with manual annotation, facilitating its expansion into areas of research and clinical import. (J Am Coll Cardiol Img 2019;-:-–-) © 2019 by the American College of Cardiology Foundation.
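The per-structure Dice scores reported above measure voxel overlap between the predicted and manually annotated segmentations. A minimal sketch of the metric (the toy masks below are hypothetical, not the study's data):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: a 4-voxel "manual" mask vs. a 2-voxel "predicted" mask
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 labeled voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:2] = True  # 2 labeled voxels
print(round(dice_score(a, b), 3))  # 2*2/(4+2) = 0.667
```

A score of 1.0 means perfect voxel-level agreement; the study's median of 0.9246 indicates near-complete overlap with the manual annotation.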
Detecting abnormal thyroid cartilages on CT using deep learning
Detecting abnormal thyroid cartilage on CT using deep learning (2019)
Purpose: The purpose of this study was to evaluate the performance of a deep learning algorithm in detecting abnormalities of thyroid cartilage from computed tomography (CT) examination. Materials and methods: A database of 515 harmonized thyroid CT examinations was used, of which information regarding cartilage abnormality was provided for 326. The process consisted of determining image abnormality and, from these preprocessed images, finding the best learning algorithm to appropriately characterize thyroid cartilage as normal or abnormal. CT images were cropped to be centered around the cartilage in order to focus on the relevant area. New images were generated from the originals by applying simple transformations in order to augment the database. Characterizations of cartilage abnormalities were made using transfer learning, by using the architecture of a pre-trained neural network called VGG16 and adapting the final layers to a binary classification problem. Results: The best algorithm yielded an area under the receiver operating characteristic curve (AUC) of 0.72 on a sample of 82 thyroid test images. The sensitivity and specificity of the abnormality detection were 83% and 64% at the best threshold, respectively. Applying the model on another independent sample of 189 new thyroid images resulted in an AUC of 0.70. Conclusion: This study demonstrates the feasibility of using a deep learning-based abnormality detection system to evaluate thyroid cartilage from CT examinations. However, despite promising results, the model is not yet able to match an expert's diagnosis.
KEYWORDS : Thyroid cartilage | Artificial intelligence (AI) | Deep learning | Post-mortem computed tomography (CT) | Larynx
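The augmentation step described above (generating new images from the originals by simple transformations) can be sketched as follows. The specific transforms chosen here (horizontal flip, 90° rotations) are illustrative assumptions, not necessarily the paper's exact set:

```python
import numpy as np

def augment(image):
    """Generate simple transformed copies of a cropped cartilage image
    to enlarge the training database (flip and rotations as examples)."""
    return [image,
            np.fliplr(image),      # horizontal mirror
            np.rot90(image, 1),    # 90-degree rotation
            np.rot90(image, 2)]    # 180-degree rotation

img = np.arange(12).reshape(3, 4)  # stand-in for a cropped CT patch
aug = augment(img)
print(len(aug))  # 4 images produced from one original
```

Each original image thus contributes several training samples, which helps a transfer-learned network such as the adapted VGG16 generalize from a limited database.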
Acceleration of spleen segmentation with end-to-end deep learning method and automated pipeline
Acceleration of spleen segmentation with an end-to-end deep learning method and automated pipeline (2019)
Delineation of Computed Tomography (CT) abdominal anatomical structure, specifically spleen segmentation, is useful not only for measuring tissue volume and biomarkers but also for monitoring interventions. Recently, segmentation algorithms using deep learning have been widely used to reduce the time humans spend labeling CT data. However, computerized segmentation has two major difficulties: managing intermediate results (e.g., resampled scans, 2D sliced images for deep learning), and setting up the system environments and packages for autonomous execution. To overcome these issues, we propose an automated pipeline for abdominal spleen segmentation. This pipeline provides an end-to-end synthesized process that allows users to avoid installing any packages and to deal with the intermediate results locally. The pipeline has three major stages: pre-processing of input data; segmentation of the spleen using deep learning; and 3D reconstruction with the generated labels by matching the segmentation results with the original image dimensions, which can then be used for display or demonstration. Given the same volume scan, the approach described here takes about 50 s on average, whereas manual segmentation takes about 30 min on average. Even including all subsidiary processes such as preprocessing and necessary setups, the whole pipeline requires on average 20 min from beginning to end.
Keywords: Clinical trial | Spleen segmentation | Deep learning | Docker | End-to-end automation | DICOM | Image processing
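The three pipeline stages can be sketched as a minimal skeleton. The thresholding segmenter below is only a stand-in for the trained deep-learning model, and all function names are hypothetical:

```python
import numpy as np

def preprocess(volume):
    """Stage 1: normalize intensities (a real pipeline also resamples
    and slices the scan for the 2D network)."""
    v = volume.astype(float)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def segment(volume, threshold=0.5):
    """Stage 2: stand-in for the deep-learning spleen segmenter;
    the actual pipeline runs a trained CNN here."""
    return (volume > threshold).astype(np.uint8)

def reconstruct(labels, original_shape):
    """Stage 3: map generated labels back to the original image
    dimensions for later display or demonstration."""
    assert labels.shape == original_shape
    return labels

def run_pipeline(volume):
    pre = preprocess(volume)
    labels = segment(pre)
    return reconstruct(labels, volume.shape)

mask = run_pipeline(np.random.rand(8, 8, 8))
print(mask.shape, mask.dtype)
```

Packaging these stages in a container (the paper's keywords mention Docker) is what lets users run the whole chain without installing packages themselves.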
A deep learning model for predicting chemical composition of gallstones with big data in medical Internet of Things
A deep learning model for predicting the chemical composition of gallstones with big data in the medical Internet of Things (2019)
In this paper, we present a deep learning model for potentially diagnosing gallbladder stones with big data from the medical Internet of Things. With the increasing trend of population aging and changes in people's living habits, gallbladder stones are increasingly common around the world. Specifically, gallstones can be classified into four types, i.e., cholesterol stones, bile pigment stones, mixed stones, and other rare stones, based on their chemical composition. Furthermore, the chemical composition directly determines the treatment options. Currently, the medical Internet of Things enables the collection of big medical data from massive ultrasonic images, computed tomography, and magnetic resonance imaging of gallstones. However, it is a challenging issue to determine the exact chemical composition of gallstones from the collected big medical data. To tackle this issue, this paper presents a convolutional neural network to potentially learn the features of the collected data. Furthermore, we describe an effective learning approach for training the developed convolutional neural network. Finally, we analyze the characteristics of different types of gallstones, which can help improve our presented model to potentially determine the chemical composition of gallstones. The presented model can potentially obtain smart medical data from the medical Internet of Things for assisted diagnosis and treatment recommendation of gallbladder stones, aiming to build a smart Internet of Things, especially smart health.
Keywords: Smart data | Medical Internet of Things | Deep learning | Gallbladder stones | Big data
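A CNN classifying the four gallstone types described above would end in a four-way softmax layer. A toy sketch with hypothetical final-layer logits (not from the paper's trained model):

```python
import numpy as np

CLASSES = ["cholesterol", "bile pigment", "mixed", "other"]

def softmax(logits):
    """Convert final-layer logits to class probabilities."""
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for one gallstone image
logits = np.array([2.1, 0.3, 1.0, -0.5])
probs = softmax(logits)
print(CLASSES[int(np.argmax(probs))])  # cholesterol
```

The predicted class here corresponds to the chemical composition, which, as the abstract notes, directly determines the treatment options.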
CT texture analysis for the prediction of KRAS mutation status in colorectal cancer via a machine learning approach
CT texture analysis for the prediction of KRAS mutation status in colorectal cancer via a machine learning approach (2019)
Purpose: This study aimed to investigate whether a machine learning-based computed tomography (CT) texture analysis could predict the mutation status of V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS) in colorectal cancer. Method: This retrospective study comprised 40 patients with pathologically confirmed colorectal cancer who underwent KRAS mutation testing, contrast-enhanced CT, and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) before treatment. Of the 40 patients, 20 had mutated KRAS genes, whereas 20 had wild-type KRAS genes. Fourteen CT texture parameters were extracted from portal venous phase CT images of primary tumors, and the maximum standard uptake values (SUVmax) on 18F-FDG PET images were recorded. Univariate logistic regression was used to develop predictive models for each CT texture parameter and SUVmax, and a machine learning method (multivariate support vector machine) was used to develop a model from the comprehensive set of CT texture parameters. The area under the receiver operating characteristic (ROC) curve (AUC) of each model was calculated using five-fold cross validation. In addition, the performance of the machine learning method with the CT texture parameters was compared with that of SUVmax. Results: In the univariate analyses, the AUC of each CT texture parameter ranged from 0.4 to 0.7, while the AUC of the SUVmax was 0.58. Comparatively, the multivariate support vector machine with comprehensive CT texture parameters yielded an AUC of 0.82, indicating superior prediction performance compared to the SUVmax. Conclusions: A machine learning-based CT texture analysis was superior to the SUVmax for predicting the KRAS mutation status of colorectal cancer.
Keywords: Colorectal cancer | CT texture analysis | Machine learning | KRAS mutation | Radiogenomics
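The multivariate SVM with five-fold cross-validated AUC can be sketched with scikit-learn. The data below are synthetic stand-ins mirroring the study's dimensions (40 patients × 14 texture parameters, 20 per class), not its actual values:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 14))          # 14 CT texture parameters per patient
y = np.array([0] * 20 + [1] * 20)      # 0 = wild-type, 1 = KRAS mutant
X[y == 1] += 0.8                       # inject a weak class signal (toy only)

# Standardize texture features, then fit an RBF-kernel SVM
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(round(aucs.mean(), 2))  # mean AUC over the five folds
```

As in the study, the per-fold AUCs are averaged to summarize how well the multivariate texture model separates mutant from wild-type tumors.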
Gender differences in the diagnostic performance of machine learning coronary CT angiography-derived fractional flow reserve -results from the MACHINE registry
Gender differences in the diagnostic performance of machine learning coronary CT angiography-derived fractional flow reserve: results from the MACHINE registry (2019)
Purpose: This study investigated the impact of gender differences on the diagnostic performance of machine-learning based coronary CT angiography (cCTA)-derived fractional flow reserve (CT-FFRML) for the detection of lesion-specific ischemia. Method: Five centers enrolled 351 patients (73.5% male) with 525 vessels in the MACHINE (Machine leArning Based CT angiograpHy derIved FFR: a Multi-ceNtEr) registry. CT-FFRML and invasive FFR≤0.80 were considered hemodynamically significant, whereas cCTA luminal stenosis ≥50% was considered obstructive. The diagnostic performance to assess lesion-specific ischemia in both men and women was assessed on a per-vessel basis. Results: In total, 398 vessels in men and 127 vessels in women were included. Compared to invasive FFR, CT-FFRML reached a sensitivity, specificity, positive predictive value, and negative predictive value of 78% (95%CI 72–84), 79% (95%CI 73–84), 75% (95%CI 69–79), and 82% (95%CI: 76–86) in men vs. 75% (95%CI 58–88), 81% (95%CI 72–89), 61% (95%CI 50–72), and 89% (95%CI 82–94) in women, respectively. CT-FFRML showed no statistically significant difference in the area under the receiver-operating characteristic curve (AUC) in men vs. women (AUC: 0.83 [95%CI 0.79–0.87] vs. 0.83 [95%CI 0.75–0.89], p=0.89). CT-FFRML was not superior to cCTA alone [AUC: 0.83 (95%CI: 0.75–0.89) vs. 0.74 (95%CI: 0.65–0.81), p=0.12] in women, but showed a statistically significant improvement in men [0.83 (95%CI: 0.79–0.87) vs. 0.76 (95%CI: 0.71–0.80), p=0.007]. Conclusions: Machine-learning based CT-FFR performs equally in men and women, with superior diagnostic performance over cCTA alone for the detection of lesion-specific ischemia.
Keywords: Coronary artery disease | Machine learning | Spiral computed tomography | Fractional flow reserve
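The per-vessel sensitivity, specificity, PPV, and NPV above derive from a standard 2×2 contingency table against the invasive FFR reference. A sketch with hypothetical counts (chosen only to land near the reported male sensitivity/specificity, not the registry's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic performance from a 2x2 table versus a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased
        "specificity": tn / (tn + fp),  # true negatives among non-diseased
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical per-vessel counts for illustration
m = diagnostic_metrics(tp=78, fp=21, fn=22, tn=79)
print({k: round(v, 2) for k, v in m.items()})
```

Note that PPV and NPV depend on disease prevalence in the cohort, which is one reason the male and female subgroups can show similar AUCs yet different predictive values.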
Anatomical context improves deep learning on the brain age estimation task
Anatomical context improves deep learning on the brain age estimation task (2019)
Deep learning has shown remarkable improvements in the analysis of medical images without the need for engineered features. In this work, we hypothesize that deep learning is complementary to traditional feature estimation. We propose a network design to include traditional structural imaging features alongside deep convolutional ones and illustrate this approach on the task of imaging-based age prediction in two separate contexts: T1-weighted brain magnetic resonance imaging (MRI) (N=5121, ages 4–96, healthy controls) and computed tomography (CT) of the head (N=1313, ages 1–97, healthy controls). In brain MRI, we can predict age with a mean absolute error of 4.08 years by combining raw images along with engineered structural features, compared to 5.00 years using image-derived features alone and 8.23 years using structural features alone. In head CT, we can predict age with a median absolute error of 9.99 years combining features, compared to 11.02 years with image-derived features alone and 13.28 years with structural features alone. These results show that we can complement traditional feature estimation using deep learning to improve prediction tasks. As the field of medical image processing continues to integrate deep learning, it will be important to use the new techniques to complement traditional imaging features instead of fully displacing them.
Keywords: Deep learning | Convolutional neural networks | Brain age | Medical image processing
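The network design above includes traditional structural features alongside deep convolutional ones; a common way to do this is to concatenate the two feature sets before the final regression layer. A shape-level numpy sketch (all dimensions and weights hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subject features for 5 subjects:
deep_feats = rng.normal(size=(5, 128))    # image-derived convolutional features
struct_feats = rng.normal(size=(5, 10))   # engineered structural features
                                          # (e.g., regional volumes)

# Concatenate both feature sets, then apply a linear regression head
# standing in for the network's final fully connected layer
combined = np.concatenate([deep_feats, struct_feats], axis=1)
w = rng.normal(size=(138,))
b = 50.0
predicted_age = combined @ w + b
print(combined.shape)  # (5, 138)
```

In training, the convolutional features and the regression head are learned jointly, so the network can use the engineered features to complement, rather than replace, what it extracts from the raw images.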
Deep learning for variational multimodality tumor segmentation in PET/CT
Deep learning for variational multimodality tumor segmentation in PET/CT (2019)
Positron emission tomography/computed tomography (PET/CT) imaging can simultaneously acquire functional metabolic information and anatomical information of the human body. How to rationally fuse the complementary information in PET/CT for accurate tumor segmentation is challenging. In this study, a novel deep learning based variational method was proposed to automatically fuse multimodality information for tumor segmentation in PET/CT. A 3D fully convolutional network (FCN) was first designed and trained to produce a probability map from the CT image. The learnt probability map describes the probability of each CT voxel belonging to the tumor or the background, and roughly distinguishes the tumor from its surrounding soft tissues. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image for an accurate multimodality tumor segmentation, where the probability map acted as a membership degree prior. A split Bregman algorithm was used to minimize the variational model. The proposed method was validated on a non-small cell lung cancer dataset with 84 PET/CT images.
Experimental results demonstrated that: (1) Only a few training samples were needed to train the designed network to produce the probability map; (2) The proposed method can be applied to small datasets, as commonly seen in clinical research; (3) The proposed method successfully fused the complementary information in PET/CT, and outperformed two existing deep learning-based multimodality segmentation methods as well as other multimodality segmentation methods using traditional fusion strategies (without deep learning); (4) The proposed method performed well for tumor segmentation, even for tumors with fluorodeoxyglucose (FDG) uptake inhomogeneity and blurred edges (two major challenges in PET single-modality segmentation) and complex surrounding soft tissues (one major challenge in CT single-modality segmentation), achieving an average Dice similarity index (DSI) of 0.86 ± 0.05, sensitivity (SE) of 0.86 ± 0.07, positive predictive value (PPV) of 0.87 ± 0.10, volume error (VE) of 0.16 ± 0.12, and classification error (CE) of 0.30 ± 0.12.
Keywords: Tumor segmentation | PET/CT images | Variational method | Deep learning | Information fusion
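The fuzzy variational model uses the CT-derived probability map as a membership-degree prior alongside the PET intensity image. The weighted average below is a deliberate simplification of the paper's split Bregman minimization, for illustration only (all values hypothetical):

```python
import numpy as np

def fuse_pet_prior(prob_map, pet, alpha=0.5):
    """Toy fusion of a CT-derived tumor probability map with normalized
    PET intensity. The actual method minimizes a fuzzy variational
    energy with a split Bregman solver; this weighted average only
    approximates the role of the membership-degree prior."""
    pet_norm = (pet - pet.min()) / (pet.max() - pet.min() + 1e-8)
    membership = alpha * prob_map + (1 - alpha) * pet_norm
    return (membership > 0.5).astype(np.uint8)

# Hypothetical 2x2 voxel neighborhoods
prob = np.array([[0.9, 0.2], [0.8, 0.1]])   # FCN probability map (from CT)
pet = np.array([[5.0, 1.0], [4.0, 0.5]])    # PET intensities (SUV-like)
seg = fuse_pet_prior(prob, pet)
print(seg)
```

Even in this crude form, voxels are labeled tumor only when the anatomical prior and the metabolic signal agree, which is the intuition behind fusing the two modalities.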