Transfer learning of deep neural network representations for fMRI decoding
(2019)
Background: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNNs) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. New method: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced Rank Regression with Ridge Regularisation we establish a multivariate link between imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, acquired with different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data acquired while subjects watched static images. Results: The fc7 features could be significantly reconstructed from the imaging data, and led to significant decoding performance. Comparison with existing methods: The decoding based on reconstructed fc7 outperformed the decoding based on imaging data alone. Conclusion: In this work we show how to improve fMRI-based decoding by benefiting from the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data into a space designed for visual object discrimination, which is more manageable from a dimensionality point of view.
Keywords: Deep learning | Convolutional Neural Network | Transfer learning | Brain decoding | fMRI | MultiVoxel Pattern Analysis
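The multivariate link described in the abstract, Reduced Rank Regression with Ridge Regularisation, can be sketched as follows. This is a minimal illustration of the general technique (ridge regression followed by a rank-constrained projection of the fitted values), not the authors' code; the function name, shapes, and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, alpha):
    """Illustrative reduced-rank ridge regression.

    X: (n_samples, n_voxels) fMRI data matrix
    Y: (n_samples, n_features) CNN fc7 activations
    Returns a coefficient matrix W of shape (n_voxels, n_features)
    with rank at most `rank`, so that X @ W approximates Y.
    """
    n, p = X.shape
    # Full-rank ridge solution: W_ridge = (X'X + alpha*I)^-1 X'Y
    W_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ Y)
    # Constrain the rank by projecting the fitted values onto the
    # top-`rank` right singular directions of X @ W_ridge
    _, _, Vt = np.linalg.svd(X @ W_ridge, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]  # rank-`rank` projector in feature space
    return W_ridge @ P
```

The rank constraint forces the regression to predict the fc7 features through a low-dimensional latent subspace, which acts as an additional regulariser on top of the ridge penalty when samples are scarce relative to voxels.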
DISL: Deep Isomorphic Substructure Learning for network representations
(2019)
The analysis of complex networks based on deep learning has drawn much attention recently. Due to the scale and complexity of modern networks, traditional methods are gradually losing analytic efficiency and effectiveness. It is therefore imperative to design a network analysis model that scales to massive amounts of data and learns more comprehensive information from networks. In this paper, we propose a novel model, the Deep Isomorphic Substructure Learning (DISL) model, which aims to learn network representations from patterns with isomorphic substructures. Specifically, in DISL, deep learning techniques are used to learn a better network representation for each vertex (node). We provide a method that embeds isomorphic units into vertex-based subgraphs whose explicit topologies are extracted from raw graph-structured data, and design a Probability-guided Random Walk (PRW) procedure to explore the set of substructures. Sequential samples yielded by PRW provide information about relational similarity, integrating the correlation and co-occurrence of vertices with the substructural isomorphism of subgraphs. We maximise the likelihood of the preserved relationships to learn this implicit similarity knowledge. The Convolutional Neural Network (CNN) architecture is redesigned to process the explicit and implicit features simultaneously and learn a more comprehensive representation of networks. The DISL model is applied to several vertex classification tasks on social networks. Our results show that DISL outperforms strong state-of-the-art Network Representation Learning (NRL) baselines by a significant margin in accuracy and weighted-F1 scores on the experimental datasets.
Keywords: Deep learning | Network representations | Isomorphic substructures | Probability-guided random walk | Convolutional neural networks
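The abstract does not specify the transition rule of the Probability-guided Random Walk (PRW), so the sketch below assumes a simple weight-proportional transition as a placeholder: at each step, the next vertex is drawn with probability proportional to the connecting edge weight, so walks preferentially traverse strongly connected substructures. All names here are illustrative, not the authors' implementation.

```python
import random

def probability_guided_walk(adj, start, length, rng=random):
    """Illustrative probability-guided random walk.

    adj maps each vertex to a dict {neighbour: weight}.
    Each step samples the next vertex with probability proportional
    to the edge weight, yielding a sequence of vertices whose
    co-occurrence statistics reflect the local substructure.
    """
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break  # dead end: no outgoing edges
        vertices = list(nbrs)
        weights = [nbrs[v] for v in vertices]
        walk.append(rng.choices(vertices, weights=weights, k=1)[0])
    return walk
```

Sequences produced this way can then be fed to a likelihood-maximising embedding objective, in the spirit of the relational-similarity samples the abstract describes.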