Development of machine learning algorithms for prediction of mortality in spinal epidural abscess (2019)
BACKGROUND CONTEXT: In-hospital and short-term mortality in patients with spinal epidural abscess (SEA) remains unacceptably high despite diagnostic and therapeutic advancements. Forecasting this potentially avoidable outcome at the time of admission could improve patient management and counseling. Few studies exist to meet this need, and none have explored methodologies such as machine learning. PURPOSE: The purpose of this study was to develop machine learning algorithms for prediction of in-hospital and 90-day postdischarge mortality in SEA. STUDY DESIGN/SETTING: Retrospective, case-control study at two academic medical centers and three community hospitals from 1993 to 2016. PATIENT SAMPLE: Adult patients with an inpatient admission for a radiologically confirmed diagnosis of SEA. OUTCOME MEASURES: In-hospital and 90-day postdischarge mortality. METHODS: Five machine learning algorithms (elastic-net penalized logistic regression, random forest, stochastic gradient boosting, neural network, and support vector machine) were developed and assessed by discrimination, calibration, overall performance, and decision curve analysis. RESULTS: Overall, 1,053 SEA patients were identified in the study, with 134 (12.7%) experiencing in-hospital or 90-day postdischarge mortality. The stochastic gradient boosting model achieved the best performance across discrimination (c-statistic=0.89), calibration, and decision curve analysis. The variables used for prediction of 90-day mortality, ranked by importance, were age, albumin, platelet count, neutrophil-to-lymphocyte ratio, hemodialysis, active malignancy, and diabetes. The final algorithm was incorporated into a web application available here: https://sorg-apps.shinyapps.io/seamortality/. CONCLUSIONS: Machine learning algorithms show promise on internal validation for prediction of 90-day mortality in SEA. Future studies are needed to externally validate these algorithms in independent populations.
Keywords: Artificial intelligence | Healthcare | Machine learning | Mortality | Spinal epidural abscess | Spine surgery
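The decision curve analysis used to compare the models above reduces to a single quantity per risk threshold, the net benefit: true positives minus threshold-weighted false positives, per patient. A minimal stdlib sketch of that standard formula (illustrative only; this is not the authors' code, and the toy data are ours):

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of acting on patients whose predicted risk meets the
    threshold: TP/n - (FP/n) * threshold/(1 - threshold)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)
```

Plotting net benefit across thresholds, against the "treat all" and "treat none" strategies, is what distinguishes a clinically useful model from one that merely discriminates well.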
Improving Workflow Efficiency for Mammography Using Machine Learning (2019)
Objective: The aim of this study was to determine whether machine learning could reduce the number of mammograms the radiologist must read by using a machine-learning classifier to correctly identify normal mammograms and to select the uncertain and abnormal examinations for radiological interpretation. Methods: Mammograms in a research data set from over 7,000 women who were recalled for assessment at six UK National Health Service Breast Screening Program centers were used. A convolutional neural network in conjunction with multitask learning was used to extract imaging features from mammograms that mimic the radiological assessment provided by a radiologist, the patient's nonimaging features, and pathology outcomes. A deep neural network was then used to concatenate and fuse multiple mammogram views to predict both a diagnosis and a recommendation of whether or not additional radiological assessment was needed. Results: Ten-fold cross-validation was used on 2,000 randomly selected patients from the data set; the remainder of the data set was used for convolutional neural network training. While maintaining an acceptable negative predictive value of 0.99, the proposed model was able to identify 34% (95% confidence interval: 25%-43%) and 91% (95% confidence interval: 88%-94%) of the negative mammograms for test sets with a cancer prevalence of 15% and 1%, respectively. Conclusion: Machine learning was leveraged to successfully reduce the number of normal mammograms that radiologists need to read without degrading diagnostic accuracy.
Keywords: Breast cancer | deep learning | machine learning | mammography | radiology
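The triage idea above, auto-dismissing the lowest-scoring exams while holding negative predictive value at a target, can be captured in a few lines. A toy sketch, not the paper's pipeline; function and data are illustrative:

```python
def dismissable_fraction(y_true, y_prob, min_npv=0.99):
    """Largest fraction of exams (taken in ascending score order) that can be
    auto-reported as normal while the NPV among the dismissed exams, i.e. the
    fraction of them that are truly normal, stays at or above min_npv."""
    pairs = sorted(zip(y_prob, y_true))  # ascending model score
    best, normals = 0, 0
    for k, (_, y) in enumerate(pairs, start=1):
        normals += (y == 0)
        if normals / k >= min_npv:
            best = k
    return best / len(pairs)
```

The fraction returned is the workload reduction: exams the radiologist would no longer need to read at the chosen safety level.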
Machine Learning Prediction Models for In-Hospital Mortality After Transcatheter Aortic Valve Replacement (2019)
OBJECTIVES This study sought to develop and compare an array of machine learning methods to predict in-hospital mortality after transcatheter aortic valve replacement (TAVR) in the United States. BACKGROUND Existing risk prediction tools for in-hospital complications in patients undergoing TAVR have been designed using statistical modeling approaches and have certain limitations. METHODS Patient data were obtained from the National Inpatient Sample database from 2012 to 2015. The data were randomly divided into a development cohort (n = 7,615) and a validation cohort (n = 3,268). Logistic regression, artificial neural network, naive Bayes, and random forest machine learning algorithms were applied to obtain in-hospital mortality prediction models. RESULTS A total of 10,883 TAVRs were analyzed in our study. The overall in-hospital mortality was 3.6%. Overall, the prediction models' performance measured by area under the curve was good (>0.80). The best model was obtained by logistic regression (area under the curve: 0.92; 95% confidence interval: 0.89 to 0.95). Most obtained models plateaued after introducing 10 variables. Acute kidney injury was the main predictor of in-hospital mortality, ranked with the highest mean importance in all the models. The National Inpatient Sample TAVR score showed the best discrimination among available TAVR prediction scores. CONCLUSIONS Machine learning methods can generate robust models to predict in-hospital mortality for TAVR. The National Inpatient Sample TAVR score should be considered for prognosis and shared decision making in TAVR patients. (J Am Coll Cardiol Intv 2019;12:1328–38) © 2019 by the American College of Cardiology Foundation.
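The area under the curve used to rank these models is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A stdlib sketch of that standard definition (toy scores, not the study's data):

```python
def auc(y_true, y_score):
    """Area under the ROC curve via pairwise comparison: the fraction of
    (positive, negative) pairs where the positive scores higher (ties 0.5)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form makes the reported comparison concrete: an AUC of 0.92 means the logistic regression ranks a random death above a random survivor 92% of the time.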
HMM-based Supervised Machine Learning Framework for the Detection of ECG R Peak Locations (2019)
Objective: Fetal electrocardiogram (fECG) provides critical information on the well-being of the fetal heart in its developing stages in the mother's womb. The objective of this work is to extract the fECG, which is buried in a composite signal consisting of itself, the maternal ECG (mECG), and noise contributed from various unavoidable sources. In the past, the challenge of extracting fECG from the composite signal was dealt with by the stochastic Wiener filter, the model-based Kalman filter, and other adaptive filtering techniques. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, has shown an edge over the adaptive filtering techniques, as it does not require a reference signal. Recently, data-driven machine learning techniques, e.g., adaptive neural networks, adaptive neuro-fuzzy inference systems, and support vector machines (SVM), have also been applied. Method: This work pursues a hidden Markov model (HMM)-based supervised machine learning framework for determining the location of the fECG QRS complex from the composite abdominal signal. The HMM is used to model the underlying hidden states of the observable time series of the extracted and separated fECG data, with the QRS peak location as one of the hidden states. The state transition probabilities are estimated in the training phase using annotated data sets. Afterwards, using the estimated HMM networks, fQRS locations are detected in the testing phase. To evaluate the proposed technique, the correct detection of the QRS complex with respect to the annotated QRS complex locations is quantified by the sensitivity, probability of false alarm, and accuracy. Results: The best results achieved using the proposed method are: accuracy 97.1%, correct detection rate (sensitivity) 100%, and false alarm rate 2.89%.
Keywords: fECG | mECG | Machine learning | HMM | Accuracy | Sensitivity
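The testing-phase step, recovering the most likely hidden-state sequence (including the QRS-peak state) from a trained HMM, is classically done with the Viterbi algorithm. A minimal sketch with made-up two-state probabilities (the "peak"/"nonpeak" labels and all numbers are illustrative, not the paper's trained parameters):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence,
    by dynamic programming over (probability, path) pairs."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            # best predecessor state for s, given the previous layer
            prob, path = max((V[-1][r][0] * trans_p[r][s], V[-1][r][1])
                             for r in states)
            layer[s] = (prob * emit_p[s][o], path + [s])
        V.append(layer)
    return max(V[-1].values())[1]

states = ("nonpeak", "peak")
start = {"nonpeak": 0.9, "peak": 0.1}
trans = {"nonpeak": {"nonpeak": 0.8, "peak": 0.2},
         "peak": {"nonpeak": 0.9, "peak": 0.1}}
emit = {"nonpeak": {"high": 0.2, "low": 0.8},
        "peak": {"high": 0.9, "low": 0.1}}
```

With these toy parameters, an isolated high-amplitude sample between two low ones decodes as a single peak, mirroring how an R-peak state would fire once per beat.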
Towards Early Monitoring of Chemotherapy-induced Drug Resistance Based on Single Cell Metabolomics: Combining Single-probe Mass Spectrometry with Machine Learning (2019)
Despite the existence of methods for evaluating drug resistance during chemotherapy, techniques that allow monitoring the degree of drug resistance at an early chemotherapeutic stage in single cells within their native microenvironment are still absent. Herein, we report an analytical approach that combines single cell mass spectrometry (SCMS) based metabolomics with machine learning (ML) models to address these challenges. Metabolomic profiles of live cancer cells (HCT-116) with different levels (i.e., no, low, and high) of chemotherapy-induced drug resistance were measured using the Single-probe SCMS technique. A series of ML models, including random forest (RF), artificial neural network (ANN), and penalized logistic regression (LR), were constructed to predict the degree of drug resistance of individual cells. A systematic comparison of performance was conducted among the models, and the method was validated experimentally. Our results indicate that these ML models, especially the RF model constructed on the obtained SCMS datasets, can rapidly and accurately predict the degree of drug resistance of live single cells. With such rapid and reliable assessment of drug resistance demonstrated at the single cell level, our method can potentially be employed to evaluate chemotherapeutic efficacy in the clinic.
Keywords: Single Cell Mass Spectrometry | Metabolomics | Machine Learning | Drug Resistance | Single-probe
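As a rough illustration of ensemble classification in the spirit of the RF model (not the authors' implementation), here is a toy majority vote of one decision stump per feature over per-cell intensity vectors; all data and names are hypothetical:

```python
import statistics

def train_stump(xs, ys):
    """Best single-feature threshold split (polarity, threshold)
    chosen by training accuracy."""
    best = (0.0, 1, 0.0)  # (accuracy, polarity, threshold)
    for t in sorted(set(xs)):
        for pol in (1, -1):
            preds = [1 if pol * (x - t) >= 0 else 0 for x in xs]
            acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
            if acc > best[0]:
                best = (acc, pol, t)
    return best[1], best[2]

def stump_ensemble_predict(X, ys, x_new):
    """Majority vote of one stump per feature: a toy stand-in for
    the tree-ensemble idea behind a random forest."""
    votes = []
    for j in range(len(X[0])):
        pol, t = train_stump([row[j] for row in X], ys)
        votes.append(1 if pol * (x_new[j] - t) >= 0 else 0)
    return statistics.mode(votes)
```

A real random forest adds bootstrap sampling and deeper trees, but the vote-aggregation step that makes single-cell predictions robust is the same.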
Applying Neural-Network-Based Machine Learning to Additive Manufacturing: Current Applications, Challenges, and Future Perspectives (2019)
Additive manufacturing (AM), also known as three-dimensional printing, is gaining increasing attention from academia and industry due to the unique advantages it has in comparison with traditional subtractive manufacturing. However, AM processing parameters are difficult to tune, since they can exert a huge impact on the printed microstructure and on the performance of the subsequent products. It is a difficult task to build a process–structure–property–performance (PSPP) relationship for AM using traditional numerical and analytical models. Today, machine learning (ML) has been demonstrated to be a valid way to perform complex pattern recognition and regression analysis without an explicit need to construct and solve the underlying physical models. Among ML algorithms, the neural network (NN) is the most widely used model, due to the large datasets currently available, strong computational power, and sophisticated algorithm architectures. This paper overviews the progress of applying NN algorithms to several aspects of the whole AM chain, including model design, in situ monitoring, and quality evaluation. Current challenges in applying NNs to AM and potential solutions for these problems are then outlined. Finally, future trends are proposed in order to provide an overall discussion of this interdisciplinary area.
Keywords: Additive manufacturing | 3D printing | Neural network | Machine learning | Algorithm
Machine learning phase transition: An iterative proposal (2019)
We propose an iterative procedure to estimate critical points for statistical models, based on configurations, by combining machine-learning tools. Firstly, phase scenarios and preliminary phase boundaries are obtained by dimensionality-reduction techniques. This step not only provides labelled samples for the subsequent step but is also necessary for the application to novel statistical models. Secondly, using these samples as a training set, neural networks are employed to assign labels to the samples between the phase boundaries in an iterative manner. Newly labelled samples are added to the training set used in subsequent training, and the phase boundaries are updated as well. The average of the phase boundaries is expected to converge to the critical temperature in this proposal. As concrete examples, we implement this procedure to estimate the critical temperatures for two q-state Potts models with continuous and first-order phase transitions. Linear and manifold dimensionality-reduction techniques are employed in the first step. Both a convolutional neural network and a bidirectional recurrent neural network with long short-term memory units perform well for the two Potts models in the second step. The convergent behavior of the estimations reflects the type of phase transition, and the results indicate that our proposal may be used to explore phase transitions in new, general statistical models.
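The iterative labelling loop can be sketched in one dimension: seed labels from samples clearly below and above the preliminary boundaries, repeatedly relabel the in-between samples by nearest class centroid of an order parameter, and read the critical temperature off the final boundary midpoint. A toy stdlib sketch under those assumptions (a centroid rule stands in for the paper's neural networks; all data are illustrative):

```python
import statistics

def estimate_critical_temp(temps, order_param, t_low, t_high, rounds=5):
    """Self-training sketch: seed labels outside the preliminary boundaries,
    then iteratively relabel every sample by the nearer class centroid."""
    labels = {}
    for t in temps:
        if t <= t_low:
            labels[t] = "ordered"
        elif t >= t_high:
            labels[t] = "disordered"
    for _ in range(rounds):
        centroids = {
            s: statistics.mean(m for t, m in zip(temps, order_param)
                               if labels.get(t) == s)
            for s in ("ordered", "disordered")
        }
        for t, m in zip(temps, order_param):
            labels[t] = min(centroids, key=lambda s: abs(m - centroids[s]))
    hottest = max(t for t in temps if labels[t] == "ordered")
    coolest = min(t for t in temps if labels[t] == "disordered")
    return (hottest + coolest) / 2
```

In the paper's setting the classifier is a CNN or LSTM acting on full spin configurations, but the converge-by-relabelling structure is the same.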
Parallel score fusion of ECG and fingerprint for human authentication based on a convolutional neural network (2019)
Biometrics have been extensively used in the past decades in various security systems and have been deployed around the world. However, all unimodal biometrics have their own limitations and disadvantages (e.g., fingerprints suffer from spoof attacks). Most of these limitations can be addressed by designing a multimodal biometric system, which deploys more than one biometric modality to improve performance and make the system robust to spoof attacks. In this paper, we propose a secure multimodal biometric system that fuses electrocardiogram (ECG) and fingerprint based on a convolutional neural network (CNN). To the best of our knowledge, this is the first study to fuse ECG and fingerprint using a CNN for human authentication. Feature extraction for the individual modalities is performed using CNNs, and biometric templates are then generated from these features. After that, a cancelable biometric technique is applied to protect these templates. In the authentication stage, we propose a Q-Gaussian multi support vector machine (QG-MSVM) as the classifier to improve authentication performance. Dataset augmentation is used to further increase the authentication performance of the proposed system. Our system is tested on two databases: the PTB database from the PhysioNet bank for ECG, and the LivDet2015 database for fingerprints. Experimental results show that the proposed multimodal system is more efficient, robust, and reliable than existing multimodal authentication algorithms. Given these advantages, the proposed system can be deployed in real applications.
Keywords: Authentication | CNN | ECG | Fingerprint | Multimodal biometrics | MSVM
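Score-level fusion of two matchers is commonly done by normalizing each matcher's scores to a common range and taking a weighted sum. A minimal sketch of that generic step (the weight and data are illustrative, and this simple rule stands in for, rather than reproduces, the paper's CNN/QG-MSVM pipeline):

```python
def min_max(scores):
    """Rescale one matcher's scores to [0, 1] so modalities are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(ecg_scores, fp_scores, w=0.6):
    """Weighted-sum score-level fusion of ECG and fingerprint matcher scores."""
    e, f = min_max(ecg_scores), min_max(fp_scores)
    return [w * a + (1 - w) * b for a, b in zip(e, f)]
```

A decision threshold on the fused score then accepts or rejects the claimed identity; the weight w lets the more reliable modality dominate.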
A machine learning approach for traffic-noise annoyance assessment (2019)
In this study, models for predicting traffic-noise annoyance based on noise perception, noise exposure levels, and demographics were developed. By applying machine-learning techniques, in particular artificial neural networks (ANN), support vector machines (SVM), and multiple linear regression (MLR), the traffic-noise annoyance models were obtained and their error rates compared. A traffic noise map and an estimation of noise exposure for the case study area were developed. Although it is quite evident that subjective noise perception and predicted noise exposure levels strongly influence traffic-noise annoyance, traditional statistical models fail to produce accurate predictions. Therefore, a machine-learning approach was applied, which showed better performance in terms of error rates and the coefficient of determination (R2). The best results for predicting traffic-noise annoyance were obtained with the ANN model, with 42% and 35% error reduction in training subsets compared to the MLR and SVM models, respectively. For testing subsets, the error reductions were 24% and 19% for the corresponding models. The coefficient of determination R2 increased 3.8 and 2.3 times using ANN compared to the MLR and SVM models in training subsets, respectively, and 1.7 times (relative to both MLR and SVM) for testing subsets. In this way, the applied methodology can be used as a reliable and more accurate tool for determining the impact of transportation noise in an urban context, promoting the well-being of the population and the creation of suitable public policy.
Keywords: Noise annoyance | Traffic noise | Machine-learning | Artificial neural networks | Support vector machine
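The MLR baseline and the R2 comparison rest on standard definitions: least-squares coefficients and the coefficient of determination, 1 minus the ratio of residual to total sum of squares. A single-predictor stdlib sketch of those formulas (illustrative data, not the study's survey responses):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def r_squared(ys, preds):
    """Coefficient of determination: 1 - residual SS / total SS."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

The study's comparison amounts to computing r_squared for each model's predictions on the same held-out subset; the ANN wins because its nonlinear fit leaves a smaller residual sum of squares.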
Emergent Schrödinger equation in an introspective machine learning architecture (2019)
Can physical concepts and laws emerge in a neural network as it learns to predict the observation data of physical systems? As a benchmark and a proof-of-principle study of this possibility, here we show an introspective learning architecture that can automatically develop the concept of the quantum wave function and discover the Schrödinger equation from simulated experimental data of the potential-to-density mappings of a quantum particle. This introspective learning architecture contains a machine translator to perform the potential-to-density mapping, and a knowledge-distiller autoencoder to extract the essential information and its update law from the hidden states of the translator, which turn out to be the quantum wave function and the Schrödinger equation. We envision that our introspective learning architecture can enable machine learning to discover new physics in the future.
Keywords: Quantum physics | Machine learning | Potential-to-density mapping | Neural network | Recurrent autoencoder