Improving Workflow Efficiency for Mammography Using Machine Learning
(2019)
Objective: The aim of this study was to determine whether machine learning could reduce the number of mammograms the radiologist must read by using a machine-learning classifier to correctly identify normal mammograms and to select the uncertain and abnormal examinations for radiological interpretation. Methods: Mammograms in a research data set from over 7,000 women who were recalled for assessment at six UK National Health Service Breast Screening Program centers were used. A convolutional neural network in conjunction with multitask learning was used to extract imaging features from mammograms that mimic the radiological assessment provided by a radiologist, the patient’s nonimaging features, and pathology outcomes. A deep neural network was then used to concatenate and fuse multiple mammogram views to predict both a diagnosis and a recommendation of whether or not additional radiological assessment was needed. Results: Ten-fold cross-validation was used on 2,000 randomly selected patients from the data set; the remainder of the data set was used for convolutional neural network training. While maintaining an acceptable negative predictive value of 0.99, the proposed model was able to identify 34% (95% confidence interval: 25%-43%) and 91% (95% confidence interval: 88%-94%) of the negative mammograms for test sets with a cancer prevalence of 15% and 1%, respectively. Conclusion: Machine learning was leveraged to successfully reduce the number of normal mammograms that radiologists need to read without degrading diagnostic accuracy.
Key Words: Breast cancer | deep learning | machine learning | mammography | radiology
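The triage idea above hinges on choosing an operating point that preserves a high negative predictive value (NPV) while ruling out as many normal exams as possible. The paper does not describe how its operating point was selected; the sketch below is a hypothetical illustration of that trade-off, scanning candidate thresholds over model scores (all names and data are invented for illustration):

```python
from typing import List, Tuple

def max_ruleout_at_npv(scores: List[float], labels: List[int],
                       min_npv: float = 0.99) -> Tuple[float, float]:
    """Find the most aggressive score threshold (exams scored at or
    below it are auto-reported normal) whose negative predictive value
    still meets `min_npv`; return (threshold, fraction of truly
    negative exams ruled out)."""
    best = (0.0, 0.0)
    total_neg = labels.count(0)
    for t in sorted(set(scores)):
        below = [y for s, y in zip(scores, labels) if s <= t]
        npv = below.count(0) / len(below)
        if npv >= min_npv:
            frac = below.count(0) / total_neg
            if frac > best[1]:
                best = (t, frac)
    return best

# Hypothetical test set with 2% cancer prevalence:
scores = [0.1] * 98 + [0.2, 0.9]   # made-up model outputs
labels = [0] * 98 + [1, 1]         # 1 = cancer
print(max_ruleout_at_npv(scores, labels))  # (0.1, 1.0)
```

As in the paper, the achievable rule-out fraction depends strongly on the prevalence of cancer in the test set, since a handful of missed positives erodes the NPV much faster when positives are common.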
Deep learning facilitates the diagnosis of adult asthma
(2019)
Background: We explored whether using deep learning to model combinations of symptom-physical signs and objective tests, such as lung function tests and the bronchial challenge test, would improve model performance in predicting the initial diagnosis of adult asthma compared with conventional machine learning diagnostic methods. Methods: The data were obtained from the clinical records of a prospective study of 566 adult outpatients who visited Kindai University Hospital for the first time with complaints of non-specific respiratory symptoms. Asthma was comprehensively diagnosed by specialists based on symptom-physical signs and objective tests. Model performance was compared across logistic analysis, support vector machine (SVM) learning, and a deep neural network (DNN) model. Results: For the diagnosis of adult asthma based on symptom-physical signs alone, the accuracy of the DNN model was 0.68, whereas that of the SVM was 0.60 and that of the logistic analysis was 0.65. When adult asthma was diagnosed based on symptom-physical signs, biochemical findings, lung function tests, and the bronchial challenge test, the accuracy of the DNN model increased to 0.98 and was significantly higher than the 0.82 accuracy of the SVM and the 0.94 accuracy of the logistic analysis. Conclusions: The DNN model facilitates the diagnosis of adult asthma better than classical machine learning methods such as logistic analysis and SVM. Deep learning models based on symptom-physical signs and objective tests appear to improve performance in diagnosing adult asthma.
Keywords: Artificial intelligence | Asthma | Deep learning | Diagnosis | Support vector machine
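The comparison above reduces to measuring each model's agreement with the specialists' comprehensive diagnosis on incremental feature sets. A minimal illustration of that evaluation, with invented predictions (the numbers below are hypothetical, not the study's data):

```python
def accuracy(pred, truth):
    """Fraction of patients where the model's diagnosis matches the
    specialists' reference diagnosis."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Hypothetical held-out diagnoses for 10 patients (1 = asthma):
truth            = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
signs_only_model = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]  # symptom-physical signs alone
full_model       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # + lung function, challenge test

print(accuracy(signs_only_model, truth))  # 0.6
print(accuracy(full_model, truth))        # 0.9
```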
Classifying transportation mode and speed from trajectory data via deep multi-scale learning
(2019)
With the rapid development of the mobile Internet, the Internet of Things, and other new technologies, mobile devices are generating massive amounts of spatio-temporal trajectory data. This paper proposes a method that can automatically classify transportation mode and speed, helping people understand the mobility of moving objects and thus making daily life more convenient and traffic management easier. Although there have been some studies on trajectory classification, they either require manual feature selection or fail to fully consider the impact of time and space on classification results; none of them can extract features automatically and comprehensively. Hence, we propose a Deep Multi-Scale Learning Model and design a deep neural network to learn features under multi-scale time and space granularities automatically. The obtained features are fused to produce the final classification results. Our method is based on DenseNet, a recent image classification network structure, and incorporates an attention mechanism and residual learning. This model fully captures spatial features so as to enhance feature propagation and capture long-term dependence, while also reducing the number of network structure parameters. We have evaluated our Deep Multi-Scale Learning Model on two real datasets. The results show that our model is superior to current state-of-the-art models in top-1 accuracy, recall, and F1-score. Furthermore, the classification results from our model can help to understand mobility accurately.
Keywords: Trajectory data | Mobility | Deep multi-scale learning
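The multi-scale idea can be illustrated without the DenseNet machinery: compute the same motion feature from the trajectory at several temporal granularities, then fuse the per-scale results. A toy sketch (the window sizes and the mean-speed feature are illustrative assumptions, not the paper's architecture):

```python
def speeds(points, step):
    """Mean speed over windows of `step` consecutive fixes.  Each point
    is (t_seconds, x_metres, y_metres); straight-line distance is used
    as a simple proxy for travelled distance."""
    out = []
    for i in range(0, len(points) - step, step):
        (t0, x0, y0), (t1, x1, y1) = points[i], points[i + step]
        d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        out.append(d / (t1 - t0))
    return out

def multi_scale_features(points, steps=(1, 4, 16)):
    """One coarse feature (mean speed) per temporal scale; a real model
    would feed each scale's full sequence into its own branch and fuse
    the learned representations instead."""
    feats = []
    for s in steps:
        v = speeds(points, s)
        feats.append(sum(v) / len(v) if v else 0.0)
    return feats

# A straight trajectory at 2 m/s sampled once per second:
track = [(t, 2.0 * t, 0.0) for t in range(33)]
print(multi_scale_features(track))  # [2.0, 2.0, 2.0]
```

On real data the scales disagree (e.g. a bus's fine-scale speed fluctuates at stops while its coarse-scale speed stays high), and it is exactly that disagreement the classifier exploits.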
Times-series data augmentation and deep learning for construction equipment activity recognition
(2019)
Automated, real-time, and reliable equipment activity recognition on construction sites can help to minimize idle time, improve operational efficiency, and reduce emissions. Previous efforts in activity recognition of construction equipment have explored different classification algorithms using data from accelerometers and gyroscopes. These studies utilized pattern recognition approaches such as statistical models (e.g., hidden Markov models), shallow neural networks (e.g., artificial neural networks), and distance algorithms (e.g., K-nearest neighbor) to classify the time-series data collected from sensors mounted on the equipment. Such methods necessitate the segmentation of continuous operational data with fixed or dynamic windows to extract statistical features. This heuristic and manual feature extraction process is limited by human knowledge and can only extract human-specified shallow features. However, recent developments in deep neural networks, specifically the recurrent neural network (RNN), present new opportunities to classify sequential time-series data with recurrent lateral connections. An RNN can automatically learn high-level representative features through the network instead of relying on manually designed features, making it more suitable for complex activity recognition. However, the application of RNNs requires a large training dataset, which poses a practical challenge to obtain from real construction sites. Thus, this study presents a data-augmentation framework for generating synthetic time-series training data for an RNN-based deep learning network to accurately and reliably recognize equipment activities. The proposed methodology is validated by generating synthetic data from sample datasets that were collected from two real-world earthmoving operations. The synthetic data along with the collected data were used to train a long short-term memory (LSTM)-based RNN.
The trained model was evaluated by comparing its performance with traditionally used classification algorithms for construction equipment activity recognition. The deep learning framework presented in this study outperformed the traditionally used machine learning classification algorithms for activity recognition regarding model accuracy and generalization.
Keywords: Construction equipment activity recognition | Inertial measurement unit | Deep learning | Time-series data augmentation | LSTM network | Big data analytics
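Synthetic time-series generation of the kind described above typically perturbs real sensor traces while preserving their activity-defining shape. A minimal sketch using two standard augmentations, jittering (additive noise) and magnitude scaling; the paper's exact generation scheme may differ:

```python
import random

def augment(series, n_copies=3, jitter_sd=0.05,
            scale_range=(0.9, 1.1), seed=0):
    """Produce `n_copies` synthetic accelerometer-like traces from one
    real trace: each copy is magnitude-scaled by a random factor and
    perturbed with Gaussian jitter, leaving the activity's overall
    shape intact."""
    rng = random.Random(seed)   # seeded for reproducibility
    out = []
    for _ in range(n_copies):
        scale = rng.uniform(*scale_range)
        out.append([x * scale + rng.gauss(0.0, jitter_sd) for x in series])
    return out

original = [0.0, 1.0, 0.5, -0.5, 0.2]   # a made-up 5-sample trace
synthetic = augment(original)
print(len(synthetic), len(synthetic[0]))  # 3 5
```

Other common choices (window warping, permutation of sub-segments, rotation for tri-axial sensors) follow the same pattern: label-preserving distortions that multiply the effective size of the training set.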
Intelligent fault diagnosis for rotating machinery using deep Q-network based health state classification: A deep reinforcement learning approach
(2019)
Fault diagnosis methods for rotating machinery have always been a hot research topic, and artificial intelligence-based approaches have attracted increasing attention from both researchers and engineers. Among those related studies and methods, artificial neural networks, especially deep learning-based methods, are widely used to extract fault features or to classify fault features obtained by other signal processing techniques. Although such methods can solve the fault diagnosis problems of rotating machinery, two deficiencies remain. (1) Because they are unable to establish a direct linear or non-linear mapping between raw data and the corresponding fault modes, the performance of such fault diagnosis methods depends highly on the quality of the extracted features. (2) The optimization of neural network architecture and parameters, especially for deep neural networks, requires considerable manual modification and expert experience, which limits the applicability and generalization of such methods. As a remarkable breakthrough in artificial intelligence, AlphaGo, a representative achievement of deep reinforcement learning, provides inspiration and direction for addressing the aforementioned shortcomings. Combining the advantages of deep learning and reinforcement learning, deep reinforcement learning is able to build an end-to-end fault diagnosis architecture that directly maps raw fault data to the corresponding fault modes. Thus, based on deep reinforcement learning, a novel intelligent diagnosis method is proposed that is able to overcome the shortcomings of the aforementioned diagnosis methods. Validation tests of the proposed method are carried out using datasets of two types of rotating machinery, rolling bearings and hydraulic pumps, which contain a large number of measured raw vibration signals under different health states and working conditions.
The diagnosis results show that the proposed method is able to obtain intelligent fault diagnosis agents that can mine the relationships between the raw vibration signals and fault modes autonomously and effectively. Considering that the learning process of the proposed method depends only on the replayed memories of the agent and the overall rewards, which represent much weaker feedback than that obtained by the supervised learning-based method, the proposed method is promising in establishing a general fault diagnosis architecture for rotating machinery.
Keywords: Fault diagnosis | Rotating machinery | Deep reinforcement learning | Deep Q-network
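Framing classification as reinforcement learning, as described above, means treating each signal as a state, each candidate fault label as an action, and correctness as reward. The sketch below shows that framing with a tabular Q-function standing in for the paper's deep Q-network; the one-step reward scheme and the discretised states are assumptions made for illustration:

```python
import random

def train_diagnosis_agent(samples, n_actions, episodes=2000, seed=0):
    """One-step RL formulation of fault classification: state = a
    discretised signal pattern, action = a fault label, reward = +1
    for a correct diagnosis and -1 otherwise.  Q-values are updated
    toward the immediate reward (no successor state in one-step tasks)."""
    rng = random.Random(seed)
    q = {}
    alpha, eps = 0.1, 0.2
    for _ in range(episodes):
        state, label = rng.choice(samples)
        if rng.random() < eps:                       # explore
            action = rng.randrange(n_actions)
        else:                                        # exploit current Q
            action = max(range(n_actions),
                         key=lambda a: q.get((state, a), 0.0))
        reward = 1.0 if action == label else -1.0
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward - old)
    return q

# Two made-up vibration patterns, each mapping to one fault mode:
data = [("low_freq", 0), ("high_freq", 1)]
q = train_diagnosis_agent(data, n_actions=2)
policy = {s: max(range(2), key=lambda a: q.get((s, a), 0.0)) for s, _ in data}
print(policy)  # {'low_freq': 0, 'high_freq': 1}
```

The deep variant replaces the dictionary with a network mapping raw vibration signals to Q-values, plus experience replay, which is what lets the method learn end to end from raw data.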
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
(2019)
To improve the performance of Intensive Care Units (ICUs), the field of biostatistics has developed scores that try to predict the likelihood of negative outcomes. These help evaluate the effectiveness of treatments and clinical practice, and also help to identify patients with unexpected outcomes. However, several studies have shown that they offer sub-optimal performance. Alternatively, Deep Learning offers state-of-the-art capabilities in certain prediction tasks, and research suggests deep neural networks are able to outperform traditional techniques. Nevertheless, a main impediment to the adoption of Deep Learning in healthcare is its reduced interpretability, since in this field it is crucial to gain insight into the why of predictions, to ensure that models are actually learning relevant features instead of spurious correlations. To address this, we propose a deep multiscale convolutional architecture trained on the Medical Information Mart for Intensive Care III (MIMIC-III) for mortality prediction, together with the use of concepts from coalitional game theory to construct visual explanations that show how important the network deems its inputs. Results show our model attains a ROC AUC of 0.8735 (± 0.0025), which is competitive with the state of the art of Deep Learning mortality models trained on MIMIC-III data, while remaining interpretable. Supporting code can be found at https://github.com/williamcaicedo/ISeeU.
Keywords: Deep learning | MIMIC-III | ICU | Shapley Values
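The coalitional-game-theory explanations referenced above are built on Shapley values: a feature's importance is its average marginal contribution to the model's output over all feature orderings. The exact computation on a toy additive "risk model" (the two-feature game below is invented for illustration; real models need the sampling approximations used in practice):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature for a coalitional value
    function `value_fn(set_of_features)`: the weighted average of the
    feature's marginal contribution over all coalitions of the others."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                s = set(coal)
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy model: heart rate adds 0.3 and blood pressure 0.1 to predicted
# mortality risk, independently (a purely additive game).
risk = lambda coal: 0.3 * ("hr" in coal) + 0.1 * ("bp" in coal)
phi = shapley_values(["hr", "bp"], risk)
print(phi)
```

For an additive game the Shapley values recover each feature's own contribution exactly, and in general they satisfy the efficiency property: the values sum to the model output for the full feature set.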
Radiological images and machine learning: Trends, perspectives, and prospects
(2019)
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging, and positron emission tomography imaging. In many applications, machine learning based systems have shown comparable performance to human decision-making. The applications of machine learning are the key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. In parallel, we briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Keywords: Deep learning | Machine learning | Imaging modalities | Deep neural network
Automatic staging model of heart failure based on deep learning
(2019)
Heart failure (HF) is a disease that is harmful to human health. Recent advances in machine learning have yielded new techniques to train deep neural networks, which have resulted in highly successful applications in many pattern recognition tasks such as object detection and speech recognition. To improve the diagnostic accuracy of HF staging, this study evaluates the performance of deep learning-based models on combined features for its categorization. We propose a novel deep convolutional neural network-recurrent neural network (CNN-RNN) model for automatic, real-time, and dynamic staging of heart failure. We employed data segmentation and data augmentation to pre-process the dataset and improve the classification performance of the proposed architecture. Specifically, this paper uses a convolutional neural network (CNN) as a feature extractor, instead of training the entire network, to extract the characteristics of the electrocardiogram (ECG) signals and form a feature set. We combine this feature set with other clinical features, feed the combined features to an RNN for classification, and finally obtain five classification results. Experiments show that the CNN-RNN model proposed in this paper achieved an accuracy of 97.6%, sensitivity of 96.3%, specificity of 97.4%, and proportion of 97.1% for two-second ECG segments. We obtained an accuracy, sensitivity, specificity, and proportion of 96.2%, 96.9%, 95.7%, and 94.3%, respectively, for five seconds of ECG duration. The model can be used as an aid to help clinicians confirm their diagnosis.
Keywords: Heart failure | Staging model | Deep learning | Deep CNN-RNN model
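The data segmentation step above, cutting the continuous ECG into fixed two- or five-second windows before feature extraction, can be sketched directly; the 250 Hz sampling rate below is an assumed value, not stated in the abstract:

```python
def segment_ecg(signal, fs, window_s):
    """Cut a 1-D ECG trace into non-overlapping windows of `window_s`
    seconds at sampling rate `fs` Hz; a trailing partial window is
    dropped so every segment has identical length."""
    size = int(fs * window_s)
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

# 11 s of a placeholder 250 Hz ECG -> five full 2 s segments of 500 samples:
ecg = [0.0] * (11 * 250)
segments = segment_ecg(ecg, fs=250, window_s=2)
print(len(segments), len(segments[0]))  # 5 500
```

Fixed-length segments are what allow the CNN feature extractor to run over every window with a single input shape, and segmenting one long recording into many windows is itself a mild form of data augmentation.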
Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database
(2019)
Background: Ear and mastoid disease can easily be treated by early detection and appropriate medical care. However, a shortage of specialists and relatively low diagnostic accuracy call for a new diagnostic strategy, in which deep learning may play a significant role. The current study presents a machine learning model to automatically diagnose ear disease using a large database of otoendoscopic images acquired in the clinical environment. Methods: A total of 10,544 otoendoscopic images were used to train nine public convolution-based deep neural networks to classify eardrum and external auditory canal features into six categories of ear diseases, covering most ear diseases (Normal, Attic retraction, Tympanic perforation, Otitis externa±myringitis, Tumor). After evaluating several optimization schemes, the two best-performing models were selected to compose an ensemble classifier by combining the classification scores of each classifier. Findings: Based on accuracy and training time, transfer learning models based on Inception-V3 and ResNet101 were chosen, and the ensemble classifier using the two models yielded a significant improvement over each individual model, with an average accuracy of 93.67% under 5-fold cross-validation. Considering the substantial data-size dependency of classifier performance in transfer learning, evaluated in this study, the high accuracy of the current model is attributable to the large database. Interpretation: The current study is unprecedented in terms of both disease diversity and diagnostic accuracy, which is comparable to or even better than that of an average otolaryngologist. The classifier was trained with data acquired under various conditions, which suits the practical environment. This study shows the usefulness of a deep learning model for the early detection and treatment of ear disease in the clinical setting.
Fund: This research was supported by Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT(NRF-2017M3C7A1049051).
Keywords: Convolutional neural network | Deep learning | Otoendoscopy | Tympanic membrane | Ear disease | Ensemble learning
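Combining the classification scores of the two selected networks, as described above, is soft voting: average the per-class scores and take the argmax. A minimal sketch (the per-class scores below are hypothetical, not the study's outputs):

```python
def ensemble_predict(score_lists, classes):
    """Soft-voting ensemble: average the per-class scores from several
    classifiers and return (winning class, averaged scores)."""
    n = len(score_lists)
    avg = [sum(scores[i] for scores in score_lists) / n
           for i in range(len(classes))]
    winner = classes[max(range(len(classes)), key=lambda i: avg[i])]
    return winner, avg

classes = ["Normal", "Attic retraction", "Tympanic perforation"]
inception = [0.20, 0.50, 0.30]   # hypothetical Inception-V3 scores
resnet    = [0.10, 0.30, 0.60]   # hypothetical ResNet101 scores
label, avg = ensemble_predict([inception, resnet], classes)
print(label)  # Tympanic perforation
```

Averaging lets a confident, correct model outvote a hesitant, wrong one, which is one reason score-level ensembles often beat either member alone.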
Deep-learning model for predicting 30-day postoperative mortality
(2019)
Background: Postoperative mortality occurs in 1-2% of patients undergoing major inpatient surgery. The currently available prediction tools using summaries of intraoperative data are limited by their inability to reflect shifting risk associated with intraoperative physiological perturbations. We sought to compare similar benchmarks to a deep-learning algorithm predicting postoperative 30-day mortality. Methods: We constructed a multipath convolutional neural network model using patient characteristics, co-morbid conditions, preoperative laboratory values, and intraoperative numerical data from patients undergoing surgery with tracheal intubation at a single medical centre. Data for 60 min prior to a randomly selected time point were utilised. Model performance was compared with a deep neural network, a random forest, a support vector machine, and a logistic regression using predetermined summary statistics of intraoperative data. Results: Of 95,907 patients, 941 (1%) died within 30 days. The multipath convolutional neural network predicted postoperative 30-day mortality with an area under the receiver operating characteristic curve of 0.867 (95% confidence interval [CI]: 0.835-0.899). This was higher than that for the deep neural network (0.825; 95% CI: 0.790-0.860), random forest (0.848; 95% CI: 0.815-0.882), support vector machine (0.836; 95% CI: 0.802-0.870), and logistic regression (0.837; 95% CI: 0.803-0.871). Conclusions: A deep-learning time-series model improves prediction compared with models with simple summaries of intraoperative data. We have created a model that can be used in real time to detect dynamic changes in a patient’s risk for postoperative mortality.
Keywords: anaesthesiology | deep learning | machine learning | postoperative complications | risk prediction | surgery
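The AUC figures compared above have a direct probabilistic reading via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case (death within 30 days) is scored higher than a randomly chosen negative one, with ties counted as one half. A minimal implementation on made-up scores:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of (positive, negative) pairs where the positive
    receives the higher score, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for 5 patients (1 = died within 30 days):
scores = [0.9, 0.8, 0.3, 0.6, 0.1]
labels = [1,   1,   0,   0,   0]
print(roc_auc(scores, labels))  # 1.0
```

This pairwise view also explains why AUC is robust to the 1% class imbalance in the study: it compares only across classes, not the raw prevalence.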