Medical image classification using synergic deep learning (2019)
The classification of medical images is an essential task in computer-aided diagnosis, medical image retrieval and mining. Although deep learning has shown proven advantages over traditional methods that rely on handcrafted features, it remains challenging due to the significant intra-class variation and inter-class similarity caused by the diversity of imaging modalities and clinical pathologies. In this paper, we propose a synergic deep learning (SDL) model to address this issue by using multiple deep convolutional neural networks (DCNNs) simultaneously and enabling them to mutually learn from each other. Each pair of DCNNs has their learned image representations concatenated as the input of a synergic network, which has a fully connected structure that predicts whether the pair of input images belongs to the same class. Thus, if one DCNN makes a correct classification, a mistake made by the other DCNN leads to a synergic error that serves as an extra force to update the model. This model can be trained end-to-end under the supervision of classification errors from the DCNNs and synergic errors from each pair of DCNNs. Our experimental results on the ImageCLEF-2015, ImageCLEF-2016, ISIC-2016, and ISIC-2017 datasets indicate that the proposed SDL model achieves state-of-the-art performance in these medical image classification tasks.
Keywords: Medical image classification | Intra-class variation | Inter-class similarity | Synergic deep learning model
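The synergic supervision described in the abstract can be sketched as a combined loss: each DCNN contributes its own classification error, and the synergic network contributes a binary error on whether the image pair shares a class. The sketch below is illustrative only; the function names, the toy cross-entropy formulation, and the trade-off factor `lam` are assumptions, not the paper's actual implementation.

```python
import math

def cross_entropy(probs, label):
    """Classification loss for one DCNN's softmax output (toy version)."""
    return -math.log(probs[label])

def synergic_label(label_a, label_b):
    """Target for the synergic network: 1 if the image pair shares a class."""
    return 1 if label_a == label_b else 0

def binary_cross_entropy(p_same, target):
    """Loss on the synergic network's same-class prediction."""
    return -(target * math.log(p_same) + (1 - target) * math.log(1 - p_same))

def sdl_loss(probs_a, probs_b, label_a, label_b, p_same, lam=1.0):
    """Total loss: both DCNNs' classification errors plus the synergic
    error, weighted by a hypothetical trade-off factor `lam`."""
    return (cross_entropy(probs_a, label_a)
            + cross_entropy(probs_b, label_b)
            + lam * binary_cross_entropy(p_same,
                                         synergic_label(label_a, label_b)))
```

When one DCNN is right and the other wrong, the concatenated representations disagree, the synergic network's same-class prediction degrades, and the extra term pushes both networks to update, which is the "extra force" the abstract refers to.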
TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set (2019)
We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it to the classification of healthy and cancer cell lines acquired by quantitative phase imaging. The proposed method, called transferring of pre-trained generative adversarial network (TOP-GAN), is a hybridization of transfer learning and generative adversarial networks (GANs). Healthy cells and cancer cells of different metastatic potential have been imaged by low-coherence off-axis holography. After the acquisition, the optical path delay maps of the cells are extracted and directly used as inputs to the networks. To cope with the small number of classified images, we use GANs to train on a large number of unclassified images from another cell type (sperm cells). After this preliminary training, we change the last layers of the network and design automatic classifiers for the correct cell type (healthy/primary cancer/metastatic cancer) with 90–99% accuracies, even with training sets as small as a few images. These results outperform other classic methods that address the same small-training-set problem. We believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling rapid, automatic and accurate classification in stain-free imaging flow cytometry. Furthermore, our approach is expected to be applicable to many other medical image classification tasks that suffer from a small training set.
Keywords: Holography | Quantitative phase imaging | Deep learning | Machine learning algorithms | Image classification | Biological cells
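The transfer step in TOP-GAN can be pictured schematically: a GAN is first trained on plentiful unlabeled images (sperm cells), the discriminator's learned feature layers are kept, and its final layers are replaced with a small classification head trained on the few labeled phase maps. The sketch below is purely structural; the class, layer names, and `to_classifier` helper are hypothetical placeholders, not the authors' code.

```python
class Discriminator:
    """Stand-in for a GAN discriminator after unsupervised pretraining."""
    def __init__(self):
        # Feature layers learned during GAN training on unlabeled images.
        self.feature_layers = ["conv1", "conv2", "conv3"]
        # Original GAN output: real vs. fake.
        self.head = "real_vs_fake"

def to_classifier(disc, n_classes):
    """Reuse the pretrained feature layers; swap in a fresh classification
    head for the target classes (healthy / primary / metastatic)."""
    disc.head = f"softmax_{n_classes}"
    return disc

pretrained = Discriminator()
clf = to_classifier(pretrained, n_classes=3)
```

The design rationale matches the abstract: the expensive representation learning happens on data that is easy to collect, so only the small head needs the scarce labeled images.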
Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study (2019)
Background Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software to develop medical image diagnostic classifiers by health-care professionals with no coding—and no deep learning—expertise. Methods We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000), and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3 and the National Institute of Health [NIH] dataset, respectively) to separately feed into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models. The discriminative performance was assessed using the area under the precision recall curve (AUPRC). In the case of the deep learning model developed on a subset of the HAM10000 dataset, we did external validation using the Edinburgh Dermofit Library dataset. Findings Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73·3–97·0%; specificity 67–100%; AUPRC 0·87–1·00). In the multiple classification tasks, the diagnostic properties ranged from 38% to 100% for sensitivity and from 67% to 100% for specificity. The discriminative performance in terms of AUPRC ranged from 0·57 to 1·00 in the five automated deep learning models. 
In an external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%. Interpretation All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms. The performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, constituted the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance their understanding of model development and evaluation. Although the derivation of classification models without requiring a deep understanding of the mathematical, statistical, and programming principles is attractive, comparable performance to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and causing harm. Future studies should compare several application programming interfaces on thoroughly curated datasets.
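The diagnostic properties reported above reduce to simple confusion-matrix ratios. A minimal sketch, assuming binary counts of true/false positives and negatives (the function names and example counts are illustrative, not from the study):

```python
def sensitivity(tp, fn):
    """Recall: fraction of diseased cases the model detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy cases correctly ruled out."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value (precision): fraction of positive
    calls that are truly diseased."""
    return tp / (tp + fp)

# Illustrative counts: 90 true positives, 10 missed cases,
# 80 true negatives, 30 false alarms.
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 30))
print(ppv(90, 30))          # 0.75
```

AUPRC, the study's discrimination measure, summarizes precision (PPV) against recall (sensitivity) across all decision thresholds, which is why it is preferred over ROC-based measures when the positive class is rare.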