Quantitative EEG reactivity and machine learning for prognostication in hypoxic-ischemic brain injury
(2019)
Objective: Electroencephalogram (EEG) reactivity is a robust predictor of neurological recovery after cardiac arrest; however, interrater agreement among electroencephalographers is limited. We sought to evaluate the performance of machine-learning methods using EEG reactivity data to predict good long-term outcomes in hypoxic-ischemic brain injury. Methods: We retrospectively reviewed clinical and EEG data of comatose cardiac arrest subjects. Electroencephalogram reactivity was tested within 72 h of cardiac arrest using sound and pain stimuli. A quantitative EEG (QEEG) reactivity method evaluated changes in QEEG features (EEG spectra, entropy, and frequency features) during the 10 s before and after each stimulation. Good outcome was defined as a Cerebral Performance Category of 1–2 at six months. The performance of a random forest classifier was compared against a penalized general linear model (GLM) and expert electroencephalographer review. Results: Fifty subjects were included, and sixteen (32%) had a good outcome. Both QEEG-based machine-learning methods had performance comparable to expert EEG reactivity assessment for good-outcome prediction (mean AUC 0.80 for random forest vs. 0.69 for GLM vs. 0.69 for expert review; differences non-significant). Conclusions: Machine-learning models utilizing quantitative EEG reactivity data can predict long-term outcome after cardiac arrest. Significance: A quantitative approach to EEG reactivity assessment may support prognostication in cardiac arrest.
Keywords: EEG reactivity | Quantitative EEG | Hypoxic-ischemic encephalopathy | Cardiac arrest | Machine learning
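As a rough illustration of the kind of feature such a QEEG reactivity method computes, the sketch below contrasts band power in simulated 10 s pre- and post-stimulus windows. The signal parameters, 100 Hz sampling rate, and alpha-band choice are assumptions for illustration, not details taken from the study:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in [lo, hi) Hz via the periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

def reactivity_feature(pre, post, fs, lo=8.0, hi=13.0):
    """Log power ratio of the post- vs. pre-stimulus window in one band."""
    return np.log(band_power(post, fs, lo, hi) / band_power(pre, fs, lo, hi))

rng = np.random.default_rng(0)
fs = 100
t = np.arange(0, 10, 1 / fs)                              # 10 s windows
pre = rng.normal(size=t.size)                             # background activity
post_reactive = pre + 2.0 * np.sin(2 * np.pi * 10 * t)    # stimulus-driven 10 Hz change
post_flat = rng.normal(size=t.size)                       # no stimulus response

f_reactive = reactivity_feature(pre, post_reactive, fs)
f_flat = reactivity_feature(pre, post_flat, fs)
```

Feature vectors of this kind, computed over several bands and stimuli, are what a classifier such as the study's random forest would consume.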
A machine-learning-based prediction model of fistula formation after interstitial brachytherapy for locally advanced gynecological malignancies
(2019)
PURPOSE: External beam radiotherapy combined with interstitial brachytherapy is commonly used to treat patients with bulky, advanced gynecologic cancer. However, the high radiation dose needed to control the tumor may result in fistula development. There is a clinical need to identify patients at high risk for fistula formation such that treatment may be managed to prevent this toxic side effect. This work aims to develop a fistula prediction model framework using machine learning based on patient, tumor, and treatment features. METHODS AND MATERIALS: This retrospective study included 35 patients treated at our institution using interstitial brachytherapy for various gynecological malignancies. Five patients developed rectovaginal fistula and two developed both rectovaginal and vesicovaginal fistula. For each patient, 31 clinical features of multiple data types were collected to develop a fistula prediction framework. A nonlinear support vector machine was used to build the prediction model. Sequential backward feature selection and sequential floating backward feature selection methods were used to determine optimal feature sets. To overcome data imbalance issues, the synthetic minority oversampling technique was used to generate synthetic fistula cases for model training. RESULTS: Seven mixed data features were selected by both sequential backward selection and sequential floating backward selection methods. Our prediction model using these features achieved high prediction accuracy: an area under the curve of 0.904, 97.1% sensitivity, and 88.5% specificity. CONCLUSIONS: A machine-learning-based prediction model of fistula formation has been developed for patients with advanced gynecological malignancies treated using interstitial brachytherapy. This model may be clinically impactful pending refinement and validation in a larger series.
Keywords: Machine learning | Support vector machine | Interstitial brachytherapy | Gynecologic cancer
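The class-imbalance handling described above can be sketched as follows. This is a minimal numpy re-implementation of the SMOTE idea (interpolating between a minority sample and one of its nearest minority-class neighbours), not the authors' pipeline; the sample counts and feature dimension are invented:

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    sample toward one of its k nearest minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                       # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)

rng = np.random.default_rng(1)
X_min = rng.normal(loc=2.0, size=(7, 4))         # e.g. 7 fistula cases, 4 features
X_syn = smote(X_min, n_new=21, rng=rng)          # oversample before SVM training
```

Each synthetic point lies on a segment between two real minority cases, so the oversampled set stays inside the observed minority-class region.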
Improving Workflow Efficiency for Mammography Using Machine Learning
(2019)
Objective: The aim of this study was to determine whether machine learning could reduce the number of mammograms the radiologist must read by using a machine-learning classifier to correctly identify normal mammograms and to select the uncertain and abnormal examinations for radiological interpretation. Methods: Mammograms in a research data set from over 7,000 women who were recalled for assessment at six UK National Health Service Breast Screening Program centers were used. A convolutional neural network in conjunction with multitask learning was used to extract imaging features from mammograms that mimic the radiological assessment provided by a radiologist, the patient’s nonimaging features, and pathology outcomes. A deep neural network was then used to concatenate and fuse multiple mammogram views to predict both a diagnosis and a recommendation of whether or not additional radiological assessment was needed. Results: Ten-fold cross-validation was used on 2,000 randomly selected patients from the data set; the remainder of the data set was used for convolutional neural network training. While maintaining an acceptable negative predictive value of 0.99, the proposed model was able to identify 34% (95% confidence interval, 25%-43%) and 91% (95% confidence interval, 88%-94%) of the negative mammograms for test sets with a cancer prevalence of 15% and 1%, respectively. Conclusion: Machine learning was leveraged to successfully reduce the number of normal mammograms that radiologists need to read without degrading diagnostic accuracy.
Key Words: Breast cancer | deep learning | machine learning | mammography | radiology
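The triage idea, choosing the largest classifier-score cutoff that still preserves a target negative predictive value so everything below it can skip radiologist review, can be sketched as below. The score distributions and the 1% prevalence are assumptions for illustration, not the study's model:

```python
import numpy as np

def triage_threshold(scores, labels, target_npv=0.99):
    """Largest cutoff t such that exams scoring below t can be reported as
    normal while keeping negative predictive value >= target_npv."""
    best = -np.inf
    for t in np.unique(scores):
        below = scores < t
        if below.any() and (labels[below] == 0).mean() >= target_npv:
            best = t
    return best

rng = np.random.default_rng(0)
n = 1000                                           # hypothetical screening set
labels = (rng.random(n) < 0.01).astype(int)        # ~1% cancer prevalence
scores = np.where(labels == 1,
                  rng.normal(0.8, 0.1, n),         # model scores for cancers
                  rng.normal(0.2, 0.1, n))         # model scores for normals

t = triage_threshold(scores, labels)
auto_read = (scores < t).mean()                    # fraction removed from the reading list
```

In practice the cutoff would be fixed on a validation set and then audited on held-out data, since NPV estimated and applied on the same sample is optimistic.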
The Application of Machine Learning to Quality Improvement Through the Lens of the Radiology Value Network
(2019)
Recent advances in machine learning and artificial intelligence offer promising applications to radiology quality improvement initiatives as they relate to the radiology value network. Challenges in coordinating the interlocking web of systems, events, and stakeholders in the radiology value network may be mitigated through standardization, automation, and a focus on workflow efficiency. In this article the authors present applications of these various strategies via use cases for quality improvement projects at different points in the radiology value network. In addition, the authors discuss opportunities for machine-learning applications in data aggregation as opposed to traditional applications in data extraction.
Key Words: Machine learning | artificial intelligence | radiology quality improvement | radiology value network | data aggregation
An Evaluation of Machine Learning Approaches for the Prediction of Essential Genes in Eukaryotes Using Protein Sequence-Derived Features
(2019)
The availability of whole-genome sequences and associated multi-omics data sets, combined with advances in gene knockout and knockdown methods, has enabled large-scale annotation and exploration of gene and protein functions in eukaryotes. Knowing which genes are essential for the survival of eukaryotic organisms is paramount for an understanding of the basic mechanisms of life, and could assist in identifying intervention targets in eukaryotic pathogens and cancer. Here, we studied essential gene orthologs among selected species of eukaryotes, and then employed a systematic machine-learning approach, using protein sequence-derived features and selection procedures, to investigate essential gene predictions within and among species. We showed that essential gene orthologs comprise small fractions of the total number of orthologs among the eukaryotic species studied. In addition, we demonstrated that machine-learning models trained with subsets of essentiality-related data performed better than random guessing of gene essentiality for a particular species. Consistent with our gene ortholog analysis, the prediction of essential genes among multiple (including distantly related) species is possible, yet challenging, suggesting that most essential genes are unique to a species. The present work provides a foundation for the expansion of genome-wide essentiality investigations in eukaryotes using machine-learning approaches.
Keywords: Machine-learning | Essential genes | Essentiality prediction | Eukaryotes
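One common protein sequence-derived feature of the kind alluded to above is fractional amino-acid composition, which turns a variable-length sequence into a fixed 20-dimensional vector suitable for a classifier. A minimal sketch (the example sequence is arbitrary):

```python
# The 20 standard amino acids, one-letter codes.
AA = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dim fractional amino-acid composition of a protein sequence."""
    seq = seq.upper()
    total = sum(seq.count(a) for a in AA)
    return [seq.count(a) / total for a in AA]

# Arbitrary example sequence; in practice one vector per protein would be
# stacked into a feature matrix and fed to a classifier.
feats = aa_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```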
Machine learning phase transition: An iterative proposal
(2019)
We propose an iterative procedure to estimate critical points of statistical models from configurations by combining machine-learning tools. First, phase scenarios and preliminary phase boundaries are obtained by dimensionality-reduction techniques; this step provides labelled samples for the subsequent step and is necessary for applying the method to novel statistical models. Second, using these samples as a training set, neural networks are employed to assign labels to the samples between the phase boundaries in an iterative manner. Newly labelled samples are added to the training set used in subsequent training, and the phase boundaries are updated as well. The average of the phase boundaries is expected to converge to the critical temperature. In concrete examples, we implement this proposal to estimate the critical temperatures of two q-state Potts models with continuous and first-order phase transitions. Linear and manifold dimensionality-reduction techniques are employed in the first step. Both a convolutional neural network and a bidirectional recurrent neural network with long short-term memory units perform well for the two Potts models in the second step. The convergent behavior of the estimates reflects the type of phase transition, and the results indicate that our proposal may be used to explore phase transitions in new statistical models.
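A minimal sketch of the iterative labelling loop, substituting a plain numpy logistic regression on a 1-D order-parameter-like observable for the neural networks used in the paper; the sigmoid form of the observable, the confidence threshold, and the initially labelled temperature ranges are all assumptions:

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, steps=2000):
    """Gradient-descent logistic regression on a 1-D feature."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
T_c = 2.27                                   # e.g. 2-D Ising critical temperature
T = np.linspace(1.0, 3.5, 200)
# Noisy order-parameter-like observable dropping from 1 to 0 across T_c.
m = 1.0 / (1.0 + np.exp(8 * (T - T_c))) + 0.05 * rng.normal(size=T.size)

# Only samples far from the transition start out labelled (0 = ordered).
labeled = (T < 1.6) | (T > 3.0)
labels = np.where(T > T_c, 1.0, 0.0)

# Iteratively: train, label confident samples near the boundary, retrain.
for _ in range(10):
    w, b = fit_logistic(m[labeled], labels[labeled])
    p = 1.0 / (1.0 + np.exp(-(w * m + b)))
    confident = (np.abs(p - 0.5) > 0.45) & ~labeled
    labels[confident] = (p[confident] > 0.5).astype(float)
    labeled |= confident

# The estimated critical point is where the classifier output crosses 0.5.
T_est = T[np.argmin(np.abs(p - 0.5))]
```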
A machine learning approach for traffic-noise annoyance assessment
(2019)
In this study, models for predicting traffic-noise annoyance based on noise perception, noise exposure levels, and demographics were developed. By applying machine-learning techniques, in particular artificial neural networks (ANN), support vector machines (SVM), and multiple linear regression (MLR), the traffic-noise annoyance models were obtained and their error rates compared. A traffic noise map and an estimation of noise exposure for the case-study area were developed. Although it is quite evident that subjective noise perception and predicted noise exposure levels strongly influence traffic-noise annoyance, traditional statistical models fail to produce accurate predictions. Therefore, a machine-learning approach was applied, which showed better performance in terms of error rates and the coefficient of determination (R2). The best results for predicting traffic-noise annoyance were obtained with the ANN model, with 42% and 35% error reduction in training subsets compared to the MLR and SVM models, respectively. For testing subsets, the error reductions were 24% and 19% for the corresponding models. The coefficient of determination R2 increased 3.8 and 2.3 times using ANN compared to the MLR and SVM models in training subsets, respectively, and 1.7 times (for both MLR and SVM) in testing subsets. In this way, the applied methodology can be used as a reliable and more accurate tool for determining the impact of transportation noise in urban contexts, promoting the well-being of the population and the creation of suitable public policy.
Keywords: Noise annoyance | Traffic noise | Machine-learning | Artificial neural networks | Support vector machine
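A minimal sketch of such a three-way comparison on synthetic data, using scikit-learn stand-ins for MLR, SVM, and ANN; the feature meanings, data sizes, and the nonlinear response are invented for illustration, not taken from the study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Two standardized predictors, e.g. noise exposure and noise sensitivity
# (hypothetical); annoyance responds nonlinearly to the first one.
X = rng.uniform([-2, -2], [2, 2], size=(300, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 300)

tr, te = slice(0, 200), slice(200, 300)          # simple train/test split
models = {
    "MLR": LinearRegression(),
    "SVM": SVR(kernel="rbf", C=10.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                        max_iter=5000, random_state=0),
}
r2 = {name: r2_score(y[te], m.fit(X[tr], y[tr]).predict(X[te]))
      for name, m in models.items()}
```

On a nonlinear response like this, the linear model's R2 trails the kernel and neural-network models, mirroring the pattern the abstract reports.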
Neurobiological divergence of the positive and negative schizophrenia subtypes identified upon a new factor-structure of psychopathology using non-negative factorization: An international machine-learning study
(2019)
Objective: Disentangling psychopathological heterogeneity in schizophrenia is challenging and previous results remain inconclusive. We employed advanced machine-learning to identify a stable and generalizable factorization of the “Positive and Negative Syndrome Scale (PANSS)”, and used it to identify psychopathological subtypes as well as their neurobiological differentiations. Methods: PANSS data from the Pharmacotherapy Monitoring and Outcome Survey cohort (1545 patients, 586 followed up after 1.35±0.70 years) were used for learning the factor-structure by an orthonormal projective non-negative factorization. An international sample, pooled from nine medical centers across Europe, USA, and Asia (490 patients), was used for validation. Patients were clustered into psychopathological subtypes based on the identified factor-structure, and the neurobiological divergence between the subtypes was assessed by classification analysis on functional MRI connectivity patterns. Results: A four-factor structure representing negative, positive, affective, and cognitive symptoms was identified as the most stable and generalizable representation of psychopathology. It showed higher internal consistency than the original PANSS subscales and previously proposed factor-models. Based on this representation, the positive-negative dichotomy was confirmed as the (only) robust psychopathological subtypes, and these subtypes were longitudinally stable in about 80% of the repeatedly assessed patients. Finally, the individual subtype could be predicted with good accuracy from functional connectivity profiles of the ventro-medial frontal cortex, temporoparietal junction, and precuneus. Conclusions: Machine-learning applied to multi-site data with cross-validation yielded a factorization generalizable across populations and medical systems. 
Together with the subtyping and the demonstrated ability to predict subtype membership from neuroimaging data, this work further disentangles the heterogeneity of schizophrenia.
Keywords: non-negative factorization | brain imaging | subtyping | machine learning | multivariate classification | schizophrenia
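A minimal sketch of the factorize-then-cluster pipeline, substituting scikit-learn's standard NMF for the orthonormal projective non-negative factorization used in the study; the PANSS-like matrix is synthetic, built from four latent factors so the structure is recoverable:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic symptom-rating matrix: 200 patients x 30 items, generated from
# 4 non-negative latent factors (a stand-in for real PANSS data).
W_true = rng.gamma(2.0, 1.0, size=(200, 4))
H_true = rng.gamma(2.0, 1.0, size=(4, 30))
X = W_true @ H_true

nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                     # per-patient factor loadings

# Cluster patients in factor space to obtain candidate subtypes.
subtype = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)
```

The study additionally validates the factor structure on an independent multi-site sample and tests the subtypes against fMRI connectivity, steps not reproduced here.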
Towards an integrated machine-learning framework for model evaluation and uncertainty quantification
(2019)
We introduce a new paradigm for treating and exploiting simulation data, serving in parallel as an alternative workflow for model evaluation and uncertainty quantification. Instead of reporting simulations of base-case and specific variation scenarios, databases covering a wide spectrum of operational conditions are built by means of machine-learning using sophisticated mathematical algorithms. While the approach works for all sorts of computer-aided engineering applications, the present contribution addresses the CFD/CMFD sub-branch, with application to a widely used benchmark of convective flow boiling. In addition to comparing simulation and experimental results on a case-by-case basis, machine-learning is used to create their respective (CFD and experiment) data-driven models (DDM), which will in a later stage serve for assessing the predictive performance of the CFD models over a wider range of experimental conditions, hence providing a high-level classification of their range of applicability.
Keywords: Fluid flow simulation | Wall boiling | Data analytics | Digital Twin | Machine-learning | Data-driven models (DDM)
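A minimal sketch of building such a data-driven model (DDM) from a simulation-results database and querying it at new operating conditions. The condition variables, the response formula, and the random-forest surrogate are all assumptions for illustration; the paper does not specify these details:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical CFD database: operating conditions -> predicted wall temperature.
# Columns (assumed): pressure, mass flux, heat flux, in arbitrary units.
conditions = rng.uniform([1, 500, 50], [20, 3000, 500], size=(400, 3))
# Invented smooth response standing in for the CFD output.
wall_T = 373 + 200.0 * conditions[:, 2] / conditions[:, 1] + rng.normal(0, 0.5, 400)

# The DDM is a regression surrogate fitted to the simulation database.
ddm = RandomForestRegressor(n_estimators=100, random_state=0).fit(conditions, wall_T)

# The surrogate can now be queried anywhere in the covered operating envelope.
query = np.array([[10.0, 1500.0, 200.0]])
pred = ddm.predict(query)
```

A second surrogate fitted to the experimental database would then allow CFD-vs-experiment comparison across the whole operating envelope rather than case by case.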
Comparison between two programs for image analysis, machine learning and subsequent classification
(2019)
In the early 1950s, flow cytometry was developed as the first method for automated quantitative cellular analysis. In the early 1990s, the first equipment for image cytometry (laser scanning cytometry, LSC) became commercially available. As flow cytometry was considered the gold standard, various studies verified that flow cytometry and LSC generated comparable results. One of the first programs for image analysis that included morphological parameters was ImageJ, published in 1997. One of the newer programs for image analysis that is not limited to fluorescence images is the free software CellProfiler. In 2008, the same group published a new software package, CellProfiler Analyst. One part of CellProfiler Analyst is a supervised machine-learning-based classifier that allows users to conduct imaging-based diagnoses, e.g., cellular diagnosis based on morphology. Another relatively new, free software package for image analysis is QuPath. The aim of the present study was to compare two free programs for conducting image analysis, CellProfiler and QuPath, and the subsequent classification based on machine learning. For this study, images of renal tissue were analyzed, and the identified objects were classified. The same images were loaded into both software programs. Advanced statistical analysis was used to compare the two methods. The Bland-Altman assay showed that all of the differences were within the mean ± 1.96 × standard deviation, i.e., the differences are normally distributed and the software programs are comparable. For the analyzed samples (renal tissue stained with HIF and TUNEL), the use of QuPath was easier because it offers image analysis without previous processing of the images (e.g., conversion to grayscale, inverted intensities) and an unsupervised machine-learning process.
Keywords: Image cytometry | Machine learning | Cellular diagnosis
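The Bland-Altman analysis used above to compare the two programs can be sketched in a few lines of numpy; the paired measurements here are synthetic stand-ins for per-image object counts from the two programs:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired
    measurements of the same quantity by two methods."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(0)
truth = rng.uniform(50, 150, 40)                 # e.g. true object count per image
cellprofiler = truth + rng.normal(0.0, 2.0, 40)  # hypothetical paired readings
qupath = truth + rng.normal(0.5, 2.0, 40)

bias, lo, hi = bland_altman(cellprofiler, qupath)
diff = cellprofiler - qupath
within = np.mean((diff >= lo) & (diff <= hi))    # fraction inside the limits
```

If nearly all differences fall inside the limits of agreement and the bias is small relative to the measurement scale, the two programs can be considered interchangeable for that measurement.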