Unsupervised classification of multi-omics data during cardiac remodeling using deep learning
Unsupervised classification of multi-omics data during cardiac remodeling using deep learning (2019)
Integration of multi-omics in cardiovascular diseases (CVDs) offers high potential for translational discoveries. By analyzing abundance levels of heterogeneous molecules over time, we may uncover biological interactions and networks that were previously unidentifiable. However, to effectively perform integrative analysis of temporal multi-omics, computational methods must account for the heterogeneity and complexity in the data. To this end, we performed unsupervised classification of proteins and metabolites in mice during cardiac remodeling using two innovative deep learning (DL) approaches. First, a long short-term memory (LSTM)-based variational autoencoder (LSTM-VAE) was trained on time-series numeric data. The low-dimensional embeddings extracted from the LSTM-VAE were then used for clustering. Second, deep convolutional embedded clustering (DCEC) was applied to images of temporal trends. Instead of a two-step procedure, DCEC performs a joint optimization for image reconstruction and cluster assignment. Additionally, we performed K-means clustering, partitioning around medoids (PAM), and hierarchical clustering. Pathway enrichment analysis using the Reactome knowledgebase demonstrated that the DL methods yielded higher numbers of significant biological pathways than conventional clustering algorithms. In particular, DCEC resulted in the highest number of enriched pathways, suggesting the strength of its unified framework based on visual similarities. Overall, unsupervised DL is shown to be a promising analytical approach for integrative analysis of temporal multi-omics.
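The two-step procedure (embed, then cluster) can be sketched in miniature. Here a PCA projection stands in for the LSTM-VAE encoder, since the point is the shape of the pipeline rather than the network itself; all names and dimensions are illustrative:

```python
import numpy as np

def pca_embed(X, dim=2):
    """Project rows of X onto the top principal components
    (a stand-in for the LSTM-VAE encoder, not the paper's model)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means on the embedded points (step two of the pipeline)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

With `X` shaped (molecules × time points), `kmeans(pca_embed(X), k)` mirrors the embed-then-cluster flow that the LSTM-VAE approach follows.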
Keywords: Cardiovascular | Clustering | Multi-omics Time-series | Unsupervised deep learning | Integrative analysis
Projecting Australia's forest cover dynamics and exploring influential factors using deep learning
Projecting Australia's forest cover dynamics and exploring influential factors using deep learning (2019)
This study presents the first application of deep learning techniques to capturing long-term, time-continuous forest cover dynamics at a continental scale. We developed a spatially explicit ensemble model for projecting Australia's forest cover change using Long Short-Term Memory (LSTM) deep learning neural networks applied to a multi-dimensional, high-resolution spatiotemporal dataset and run on a high-performance computing cluster. We further quantified the influence of explanatory variables on the spatiotemporal dynamics of continental forest cover. Deep learning greatly outperformed a state-of-the-art spatial-econometric model at continental, state, and grid-cell scales. For example, at the continental scale, the deep learning model improved projection performance over the spatial-econometric model by 44% (root-mean-square error) and 12% (pseudo R-squared). The results illustrate the robustness and effectiveness of the LSTM model. This work provides a reliable tool for projecting forest cover and agricultural production under given future scenarios, supporting decision-making in sustainable land development, management, and conservation.
Keywords: Long short-term memory | Deep learning | Forest cover change | Spatiotemporal data | Projections | Deforestation
A deep learning framework for automatic diagnosis of unipolar depression
A deep learning framework for automatic diagnosis of unipolar depression (2019)
Background and purpose: In recent years, the development of machine learning (ML) frameworks for automatic diagnosis of unipolar depression has advanced to deep learning frameworks. However, this idea needs further validation. Therefore, this paper proposed an electroencephalographic (EEG)-based deep learning framework that automatically discriminated between depressed patients and healthy controls and provided the diagnosis. Basic procedures: Two different deep learning architectures were proposed, utilizing a one-dimensional convolutional neural network (1DCNN) and a 1DCNN with long short-term memory (LSTM) architecture. The proposed architectures automatically learn patterns in the EEG data that are useful for classifying depressed patients and healthy controls. In addition, the proposed models were validated with resting-state EEG data obtained from 33 depressed patients and 30 healthy controls. Main findings: Significant differences were observed between the two groups. The classification results for the CNN model were accuracy = 98.32%, precision = 99.78%, recall = 98.34%, and F-score = 97.65%. For the 1DCNN with LSTM, the reported results were accuracy = 95.97%, precision = 99.23%, recall = 93.67%, and F-score = 95.14%. Conclusions: Deep learning frameworks could revolutionize clinical applications of EEG-based diagnosis of depression. Based on the results, it may be concluded that the deep learning framework could be used as an automatic method for diagnosing depression.
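The four reported figures follow the standard binary-classification definitions and can be computed directly from predicted and true labels (a generic sketch, not the authors' evaluation code):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F-score from binary labels --
    the four figures reported for the CNN and CNN-LSTM models."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score}
```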
Keywords: EEG-based deep learning for depression | EEG-based diagnosis of unipolar depression | Convolutional neural network for depression | Long short-term memory classifiers for depression | EEG-based machine learning methods for depression
Spatio-temporal deep learning method for ADHD fMRI classification
Spatio-temporal deep learning method for ADHD fMRI classification (2019)
Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder common in children. Due to the complexity of the pathological mechanism, objective diagnostic methods are still lacking. This paper proposes an automatic ADHD diagnostic method using resting-state functional magnetic resonance imaging (rs-fMRI) data with spatio-temporal deep learning models. Unlike traditional methods, this paper constructs a deep learning method called 4-D CNN based on granular computing, which is trained on derivative changes in entropy and can calculate granularity at a coarse level by stacking layers. Considering the structure of rs-fMRI as time-series 3-D frames, several models of spatial and temporal granular computing and fusion were proposed, including feature pooling, long short-term memory (LSTM), and spatio-temporal convolution. This paper also introduces an approach to augment the dataset by sampling one subject's rs-fMRI frames into several relatively short pieces with a fixed stride. The public dataset of the ADHD-200 Consortium was used to train and validate our method, and the evaluations showed that our method outperformed traditional methods on the dataset (accuracy: 71.3%, AUC: 0.80). Therefore, our 4-D CNN method can be used to build a more accurate automatic assistant diagnosis tool for ADHD.
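The fixed-stride augmentation described above amounts to cutting each subject's frame sequence into shorter overlapping clips. A minimal sketch, with window and stride values chosen for illustration only:

```python
def augment(frames, window, stride):
    """Split one subject's sequence of rs-fMRI frames into shorter
    fixed-length clips sampled with a fixed stride."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]
```

A sequence of 10 frames with `window=4` and `stride=2` yields 4 training clips from one subject instead of one.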
Keywords: Spatio-temporal | Deep learning | ADHD | fMRI classification | granular computing
Deep learning in exchange markets
Deep learning in exchange markets (2019)
We present the implementation of a short-term forecasting system of price movements in exchange markets using market depth data and a systematic procedure to enable a fully automated trading system. Three types of Deep Learning (DL) Neural Network (NN) methodologies are trained and tested: Deep NN Classifier (DNNC), Long Short-Term Memory (LSTM) and Convolutional NN (CNN). Although the LSTM is more suitable for multivariate time series analysis from a theoretical point of view, test results indicate that the CNN has on average the best predictive power in the case study under analysis, which is the UK to Win Horse Racing market during the pre-live stage in the world's most relevant betting exchange. Implications of the generalized use of automated trading systems in betting exchange markets are discussed.
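A common way to turn market-depth data into classification targets, assumed here since the abstract does not spell out the labeling, is to threshold the change in mid-price over a short horizon; function names and the threshold are illustrative:

```python
def mid_price(best_bid, best_ask):
    """Mid-price from the top of the order book."""
    return (best_bid + best_ask) / 2.0

def movement_label(mid_now, mid_future, threshold=0.0):
    """Classify the short-term price movement:
    1 = up, -1 = down, 0 = flat (within the threshold)."""
    change = mid_future - mid_now
    if change > threshold:
        return 1
    if change < -threshold:
        return -1
    return 0
```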
Keywords: Deep learning | Betting exchange | Market depth | Classification
An efficient hybrid deep learning approach for internet security
An efficient hybrid deep learning approach for internet security (2019)
Nowadays, the internet is the most widely used communication tool worldwide. However, a major problem of the internet is providing security. To provide internet security, many studies on information and network security have been proposed. The system commonly used against network attacks is the firewall. In this study, a novel firewall data classification approach is presented. This approach uses 10 cases to obtain numerical results. The proposed approach consists of data acquisition from a firewall, feature selection, and classification steps. Firstly, the firewall data were gathered from a firewall. Then, the redundant features were eliminated and the remaining features were normalized using min-max normalization. The obtained final feature sets were forwarded to classifiers. In the defined cases, Long Short-Term Memory (LSTM), Bi-directional Long Short-Term Memory (Bi-LSTM), and Support Vector Machine (SVM) were utilized as classifiers. The results show that the deep learning approaches are more successful than the SVM classifier, and the highest classification accuracy, 97.38%, was achieved using a Bi-LSTM-LSTM hybrid network. The proposed method has several advantages: (1) it achieved high success rates using hybrid deep learning approaches; (2) its training time is short; (3) an intelligent network security monitoring method is presented using basic methods and deep learning. In addition, a useful approach has been presented to achieve a high success rate with a faster training process than traditional machine learning methods. Briefly, an intelligent monitoring system is proposed for network security.
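The min-max normalization step mentioned above maps each feature column to [0, 1]; a small sketch (mapping constant columns to 0.0 is our convention, not necessarily the paper's):

```python
def min_max_normalize(columns):
    """Scale each feature column to [0, 1] via (v - min) / (max - min),
    as in the preprocessing step before classification."""
    normalized = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = hi - lo
        # Constant columns map to 0.0 to avoid division by zero (assumed).
        normalized.append([(v - lo) / span if span else 0.0 for v in col])
    return normalized
```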
Keywords: Deep learning | LSTM | Bi-LSTM | Network security | Classification | Big data
Multi-output bus travel time prediction with convolutional LSTM neural network
Multi-output bus travel time prediction with convolutional LSTM neural network (2019)
Accurate and reliable travel time predictions in public transport networks are essential for delivering an attractive service that is able to compete with other modes of transport in urban areas. The traditional application of this information, where arrival and departure predictions are displayed on digital boards, is highly visible in the city landscape of most modern metropolises. More recently, the same information has become critical as input for smart-phone trip planners in order to alert passengers about unreachable connections, alternative route choices and prolonged travel times. More sophisticated Intelligent Transport Systems (ITS) include the predictions of connection assurance, i.e. an expert system that will decide to hold services to enable passenger exchange, in case one of the services is delayed up to a certain level. In order to operate such systems, and to ensure the confidence of passengers in the systems, the information provided must be accurate and reliable. Traditional methods have trouble with this as congestion, and thus travel time variability, increases in cities, consequently making travel time predictions in urban areas a non-trivial task. This paper presents a system for bus travel time prediction that leverages the non-static spatio-temporal correlations present in urban bus networks, allowing the discovery of complex patterns not captured by traditional methods. The underlying model is a multi-output, multi-time-step, deep neural network that uses a combination of convolutional and long short-term memory (LSTM) layers. The method is empirically evaluated and compared to other popular approaches for link travel time prediction and currently available services, including the currently deployed model at Movia, the regional public transport authority in Greater Copenhagen. We find that the proposed model significantly outperforms all the other methods we compare with, and is able to detect small irregular peaks in bus travel times very quickly.
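The multi-output, multi-time-step setup can be sketched as a windowing transform over a (time × links) travel-time matrix: each sample pairs the last few steps for all links with the next few steps for all links. All sizes below are illustrative:

```python
def make_windows(series, n_in, n_out):
    """Frame a (time x links) travel-time matrix as supervised pairs:
    inputs are n_in past steps for all links; targets are the next
    n_out steps for all links (multi-output, multi-time-step)."""
    X, Y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t:t + n_in])
        Y.append(series[t + n_in:t + n_in + n_out])
    return X, Y
```

Inputs shaped this way feed the convolutional layers (across links) and LSTM layers (across time) described above.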
Keywords: Bus travel time prediction | Intelligent Transport Systems | Convolutional neural network (CNN) | Long short-term memory (LSTM) | Deep learning
Learning fashion compatibility across categories with deep multimodal neural networks
Learning fashion compatibility across categories with deep multimodal neural networks (2019)
Fashion compatibility is a subjective human sense of the relationships between fashion items, which is essential for fashion recommendation. Recently, it has attracted increasing attention and become a very active research topic. Learning fashion compatibility is a challenging task, since it needs to consider many factors about fashion items, such as color, texture, style and functionality. Unlike low-level visual compatibility (e.g., color, texture), high-level semantic compatibility (e.g., style, functionality) cannot be handled purely based on fashion images. In this paper, we propose a novel multimodal framework to learn fashion compatibility, which simultaneously integrates both semantic and visual embeddings into a unified deep learning model. For semantic embeddings, a multilayered Long Short-Term Memory (LSTM) is employed for discriminative semantic representation learning, while a deep Convolutional Neural Network (CNN) is used for visual embeddings. A fusion module is then constructed to combine semantic and visual information of fashion items, which equivalently transforms the semantic and visual spaces into a latent feature space. Furthermore, a new triplet ranking loss with compatible weights is introduced to measure fine-grained relationships between fashion items, which is more consistent with human feelings on fashion compatibility in reality. Extensive experiments conducted on the Amazon fashion dataset demonstrate the effectiveness of the proposed method for learning fashion compatibility, which outperforms state-of-the-art approaches.
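A plain hinge-style triplet ranking loss with a multiplicative weight conveys the idea: pull a compatible item closer to the anchor than an incompatible one by at least a margin. The paper's exact compatible-weight formulation is not reproduced here, so treat `weight` and `margin` as illustrative:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0, weight=1.0):
    """Hinge triplet ranking loss on squared distances; `weight` stands
    in for the compatibility weight described above (hypothetical form)."""
    return weight * max(0.0, euclidean(anchor, positive) ** 2
                        - euclidean(anchor, negative) ** 2 + margin)
```

The loss is zero whenever the incompatible item is already farther from the anchor than the compatible one by at least the margin.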
Keywords: Fashion compatibility | Deep learning | Neural networks | Multimodal
Automatic driver stress level classification using multimodal deep learning
Automatic driver stress level classification using multimodal deep learning (2019)
Stress has been identified as one of the contributing factors to vehicle crashes, which create a significant cost in terms of loss of life and productivity for governments and societies. Motivated by the need to address the significant costs of driver stress, it is essential to build a practical system that can detect drivers' stress levels in real time with high accuracy. A driver stress detection model often requires data from different modalities, including ECG signals, vehicle data (e.g., steering wheel, brake pedal) and contextual data (e.g., weather conditions and other ambient factors). Most current works use traditional machine learning techniques to fuse multimodal data at different levels (e.g., feature level) to classify drivers' stress levels. Although traditional multimodal fusion models are beneficial for driver stress detection, they inherently have some critical limitations (e.g., they ignore non-linear correlation across modalities) that may hinder the development of a reliable and accurate model. To overcome the limitations of traditional multimodal fusion, this paper proposes a framework based on deep learning techniques for driver stress classification from multimodal data. Specifically, we propose a multimodal fusion model based on convolutional neural networks (CNN) and long short-term memory (LSTM) to fuse the ECG, vehicle data and contextual data and to jointly learn the highly correlated representation across modalities, after learning each modality, with a single deep network. To validate the effectiveness of the proposed model, we perform experiments on our dataset collected using an advanced driving simulator. The results show that the proposed model outperforms models built using traditional machine learning techniques based on handcrafted features (average accuracy: 92.8%, sensitivity: 94.13%, specificity: 97.37% and precision: 95.00%).
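Feature-level fusion, the traditional approach the paper contrasts with its jointly learned representation, can be as simple as standardizing each modality and concatenating the vectors. This is a toy sketch of the baseline idea, not the proposed model:

```python
def zscore(v):
    """Standardize one modality's feature vector (mean 0, unit variance)."""
    m = sum(v) / len(v)
    sd = (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5
    # Constant vectors map to zeros to avoid division by zero (assumed).
    return [(x - m) / sd if sd else 0.0 for x in v]

def fuse(ecg_feats, vehicle_feats, context_feats):
    """Feature-level fusion: standardize each modality, then concatenate
    into one joint vector for a downstream classifier."""
    return zscore(ecg_feats) + zscore(vehicle_feats) + zscore(context_feats)
```

Such concatenation treats modalities independently, which is exactly the limitation (missed non-linear cross-modal correlation) the CNN-LSTM joint model aims to overcome.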
Keywords: Deep learning | Driver stress detection | Convolutional neural network | Long short term memory | ECG signal | Vehicle data
Improved scene identification and object detection on egocentric vision of daily activities
Scene identification and object detection in egocentric vision of daily activities (2017)
Article history: Received 16 December 2015; Revised 26 September 2016; Accepted 19 October 2016; Available online 21 October 2016.
This work investigates the relationship between scene and associated objects in daily activities under egocentric vision constraints. Daily activities are performed in prototypical scenes that share many visual appearances independent of where or by whom the video was recorded. The intrinsic characteristics of egocentric vision suggest that the location where the activity is conducted remains consistent throughout frames. This paper shows that egocentric scene identification is improved by taking the temporal context into consideration. Moreover, since most objects are typically associated with particular types of scenes, we show that a generic object detection method can also be improved by re-scoring the results of the object detection method according to the scene content. We first show the case where the scene identity is explicitly predicted to improve object detection, and then we show a framework using Long Short-Term Memory (LSTM) where no labeling of the scene type is needed. We performed experiments on the Activities of Daily Living (ADL) public dataset (Pirsiavash and Ramanan, 2012), which is a standard benchmark for egocentric vision.
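The scene-based re-scoring idea can be sketched as weighting each detector score by the probability of that object class under the predicted scene distribution. All labels, scores, and priors below are hypothetical, not values from the paper:

```python
def rescore(detections, scene_probs, object_scene_prior):
    """Re-score object detections using scene content: multiply each
    detector score by the expected P(object | scene) under the
    predicted scene distribution."""
    rescored = []
    for label, score in detections:
        # P(object | frame) = sum over scenes of P(object | scene) * P(scene)
        ctx = sum(object_scene_prior.get((label, scene), 0.0) * p
                  for scene, p in scene_probs.items())
        rescored.append((label, score * ctx))
    return rescored
```

For example, a "kettle" detection in a frame classified as a kitchen is boosted relative to a "tv" detection, even if the raw detector scores are equal.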
Keywords: Scene classification | Object detection | Scene understanding | First camera person vision