Download and view articles related to Deep neural network :: Page 1
Download the best ISI articles with Persian translation

Dear users: please note that for articles published before 2008, the translation is free; you can download it at no charge from the article details page.

Search results - Deep neural network

Number of articles found: 69
No. | Title | Type
1 A mesh-free method for interface problems using the deep learning approach
A mesh-free method for interface problems using the deep learning approach (2020)
In this paper, we propose a mesh-free method to solve interface problems using the deep learning approach. Two types of PDEs are considered: the first is an elliptic PDE with a discontinuous, high-contrast coefficient, while the second is a linear elasticity equation with a discontinuous stress tensor. In both cases, we represent the solutions of the PDEs using deep neural networks (DNNs) and formulate the PDEs as variational problems, which can be solved via the deep learning approach. To deal with inhomogeneous boundary conditions, we use a shallow neural network to approximate the boundary conditions. Unlike adaptive mesh refinement methods or specially designed basis functions or numerical schemes, the proposed method has the advantage of being easy to implement as well as mesh-free. Finally, we present numerical results to demonstrate the accuracy and efficiency of the proposed method for interface problems.
Keywords: Deep learning | Variational problems | Mesh-free method | Linear elasticity | High-contrast | Interface problems
English article
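To make the idea in this abstract concrete, here is a minimal PyTorch sketch of a mesh-free variational solver: a DNN represents the solution and the PDE is recast as an energy minimized over random collocation points. The network size, the Poisson-type model problem, the high-contrast coefficient, and the boundary cutoff trick are all illustrative assumptions, not the authors' exact formulation (the paper handles inhomogeneous boundary conditions with a separate shallow network).

```python
import torch
import torch.nn as nn

# Illustrative problem: -div(a(x) grad u) = f on (0,1)^2 with u = 0 on the
# boundary, solved by minimizing the Ritz energy (a/2)|grad u|^2 - f*u averaged
# over random interior samples. All choices below (sizes, a, f) are placeholders.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def a(x):
    # assumed high-contrast coefficient: 10 inside a disk of radius 0.25, else 1
    r2 = ((x - 0.5) ** 2).sum(dim=1, keepdim=True)
    return torch.where(r2 < 0.25 ** 2, 10.0 * torch.ones_like(r2), torch.ones_like(r2))

f = lambda x: torch.ones(x.shape[0], 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.rand(1024, 2, requires_grad=True)          # random collocation points
    # hard-enforce the zero boundary condition with a cutoff factor (one common trick)
    u = x[:, :1] * (1 - x[:, :1]) * x[:, 1:] * (1 - x[:, 1:]) * net(x)
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    energy = (0.5 * a(x) * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    opt.zero_grad(); energy.backward(); opt.step()
```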
2 Ensemble sparse supervised model for bearing fault diagnosis in smart manufacturing
Ensemble sparse supervised model for bearing fault diagnosis in smart manufacturing (2020)
Machinery fault diagnosis is of great significance for improving the reliability of smart manufacturing. Deep learning based fault diagnosis methods have achieved great success. However, the features extracted by different models may vary, resulting in ambiguous representations of the data, and time is often wasted manually selecting the optimal hyperparameters. To solve these problems, this paper proposes a new framework named the Ensemble Sparse Supervised Model (ESSM), in which a typical deep learning model is treated as two phases: feature learning and model learning. In the feature learning phase, the original data is represented as a feature matrix that is as non-redundant as possible by applying sparse filtering. The feature matrix is then fed into the model learning phase. Regularization, dropout and rectified linear units (ReLU) are used in the model's neurons and layers to build a sparse deep neural network. Finally, the output of the sparse deep neural network provides feedback to the first phase to obtain better sparse features. In the proposed method, the hyperparameters that need to be pre-specified are handled automatically with the Python library Talos. The proposed method is verified using the bearing data provided by Case Western Reserve University. The results demonstrate that the proposed method can capture the effective pattern of the data with the help of sparse constraints while providing convenience for operators with assured performance.
Keywords: Sparse representation | Deep learning | Fault diagnosis
English article
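As a rough illustration of the feature-learning phase described above, the following sketch implements the standard sparse-filtering objective (normalize each feature across samples, then each sample across features, then minimize the L1 norm). The data shapes, filter count, and optimizer settings are assumed, and the ESSM feedback loop between the two phases is omitted.

```python
import torch

# Minimal sketch of the sparse-filtering objective used for feature learning;
# shapes and optimizer settings below are assumptions, not the paper's setup.
def sparse_filtering_loss(W, X, eps=1e-8):
    F = torch.sqrt((X @ W.T) ** 2 + eps)          # soft absolute value of features
    F = F / (F.norm(dim=0, keepdim=True) + eps)   # normalize each feature across samples
    F = F / (F.norm(dim=1, keepdim=True) + eps)   # then normalize each sample's row
    return F.sum()                                # L1 sparsity (entries are nonnegative)

X = torch.randn(256, 400)                         # e.g. windowed vibration signals (assumed)
W = torch.randn(64, 400, requires_grad=True)      # 64 learned filters
opt = torch.optim.Adam([W], lr=1e-3)
for _ in range(200):
    loss = sparse_filtering_loss(W, X)
    opt.zero_grad(); loss.backward(); opt.step()
```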
3 Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning
Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning (2020)
Machine learning has been widely used in quantum chemistry, for example in data mining of quantum mechanics calculations and in representing potential energy surfaces with neural networks. In this study, we report our efforts to optimize ReaxFF parameters with machine learning frameworks. Although deep neural network potentials like High-Dimensional Neural Network Potentials (HDNNP) have achieved much success in applications such as materials modeling, factors like memory usage, training time, and accuracy remain problematic when the training data set is big. On the other hand, classical potentials like ReaxFF and REBO do not have these problems, so a combination of the two is an ideal solution. Machine learning has produced techniques such as automatic differentiation and backpropagation, with which we can optimize deep neural networks or complex interatomic potentials like ReaxFF. With the TensorFlow coding platform, we have constructed an Intelligent ReaxFF (I-ReaxFF) model in terms of matrix (or tensor) operations that can optimize ReaxFF parameters automatically with gradient-based optimizers like the adaptive moment solver (Adam) and backpropagation. As inherited from TensorFlow, one significant feature of our code is GPU acceleration: training can be five times faster with GPU acceleration than with pure CPU calculation. Another feature is that it can directly use ab initio molecular dynamics trajectories with surrounding periodic images as training data, allowing the data set to be prepared with ease.
Keywords: Neural network | Parameterization | ReaxFF | Materials modeling | Machine learning
English article
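The core idea here, treating force-field parameters as trainable tensors and fitting them to reference data by automatic differentiation, can be sketched in a few lines of TensorFlow. A Morse pair potential stands in for the actual ReaxFF energy terms, and the reference distances and energies are invented for illustration.

```python
import tensorflow as tf

# Toy illustration only: fit potential parameters to reference energies with
# Adam via autodiff. The Morse form and the data are placeholders, not ReaxFF.
De = tf.Variable(1.0)    # well depth
re = tf.Variable(1.5)    # equilibrium distance
al = tf.Variable(1.0)    # width parameter

def energy(r):           # Morse potential as a stand-in for ReaxFF energy terms
    return De * (1.0 - tf.exp(-al * (r - re))) ** 2

r_ref = tf.constant([1.0, 1.2, 1.5, 2.0, 2.5])   # assumed ab initio distances
e_ref = tf.constant([0.9, 0.3, 0.0, 0.4, 0.7])   # assumed reference energies

opt = tf.keras.optimizers.Adam(learning_rate=0.01)
for step in range(2000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((energy(r_ref) - e_ref) ** 2)
    grads = tape.gradient(loss, [De, re, al])
    opt.apply_gradients(zip(grads, [De, re, al]))
```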
4 Machine learning based system for managing energy efficiency of public sector as an approach towards smart cities
A machine learning based system for managing the energy efficiency of the public sector as an approach towards smart cities (2020)
Energy efficiency of the public sector is an important issue in the context of smart cities, because buildings are the largest energy consumers, especially public buildings such as educational, health, government and other public institutions with a high usage frequency. However, recent developments in machine learning within the Big Data environment have not been exploited enough in this domain. This paper aims to answer the question of how to incorporate a Big Data platform and machine learning into an intelligent system for managing the energy efficiency of the public sector as a substantial part of the smart city concept. Deep neural networks, Rpart regression trees and Random forests with variable reduction procedures were used to create prediction models of the specific energy consumption of Croatian public sector buildings. The most accurate model was produced by the Random forest method, and a comparison of the important predictors extracted by all three methods was conducted. The models could be implemented in the suggested intelligent system, named MERIDA, which integrates Big Data collection and predictive models of energy consumption for each energy source in public buildings, and combines them into a managing platform for improving the energy efficiency of the public sector within a Big Data environment. The paper also discusses the technological requirements for developing such a platform, which could be used by public administration to plan reconstruction measures for public buildings, to reduce energy consumption and cost, and to connect such smart public buildings as part of smart cities. Such a digital transformation of energy management can increase the energy efficiency of public administration, raise its quality of service, and support a healthier environment.
Keywords: Planning models | Energy efficiency | Machine learning | Public sector | Smart cities
English article
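As a hedged sketch of the reportedly most accurate model, the snippet below trains a scikit-learn Random forest regressor and extracts feature importances, the same kind of predictor comparison the abstract mentions. The synthetic features and target merely stand in for the Croatian public-building data, which is not available here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder data: e.g. floor area, building age, heating type, climate zone, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=1000)  # synthetic consumption

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
print("feature importances:", model.feature_importances_)  # basis for predictor comparison
```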
5 Fuzzy logic interpretation of quadratic networks
Fuzzy logic interpretation of quadratic networks (2020)
Over the past several years, deep learning has achieved huge successes in various applications. However, such a data-driven approach is often criticized for a lack of interpretability. Recently, we proposed artificial quadratic neural networks consisting of quadratic neurons in potentially many layers. At the cellular level, a quadratic function replaces the inner product in a traditional neuron and then undergoes a nonlinear activation. With a single quadratic neuron, any fuzzy logic operation, such as XOR, can be implemented. In this sense, any deep network constructed with quadratic neurons can be interpreted as a deep fuzzy logic system. Since traditional neural networks and their quadratic counterparts can represent each other, and fuzzy logic operations are naturally implemented in quadratic neural networks, it is plausible to explain how a deep neural network works with a quadratic network as the system model. In this paper, we generalize and categorize the fuzzy logic operations implementable with individual quadratic neurons, and then perform statistical/information-theoretic analyses of exemplary quadratic neural networks.
Keywords: Machine learning | Artificial neural network (ANN) | Quadratic network | Fuzzy logic
English article
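The XOR claim can be checked directly: on {0,1} inputs, XOR equals x1 + x2 - 2*x1*x2, which is a quadratic function, so one quadratic neuron suffices where a single linear neuron provably fails. The hand-picked weights below are only illustrative; the paper's quadratic neuron may be parameterized differently.

```python
import numpy as np

# A quadratic neuron replaces the inner product w.x + b with a quadratic form
# x^T W x + w.x + b before the activation. Weights are set by hand, not learned.
def quadratic_neuron(x, W, w, b, act=lambda z: z):
    return act(x @ W @ x + w @ x + b)

W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])   # x^T W x = -2*x1*x2 (both off-diagonal terms sum)
w = np.array([1.0, 1.0])      # contributes x1 + x2
b = 0.0

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    x = np.array(x, dtype=float)
    print(x, "->", quadratic_neuron(x, W, w, b))   # prints 0, 1, 1, 0: exact XOR
```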
6 Extract interpretability-accuracy balanced rules from artificial neural networks: A review
Extracting interpretability-accuracy balanced rules from artificial neural networks: a review (2020)
Artificial neural networks (ANN) have been widely used and have achieved remarkable results. However, neural networks with high accuracy and good performance often have extremely complex internal structures, as in deep neural networks (DNN). This shortcoming makes such networks as incomprehensible as a black box, which is unacceptable in some practical applications, yet pursuing excessive interpretability makes the performance of the model worse. Given this trade-off, we first summarize the mainstream methods for quantitatively evaluating the accuracy and interpretability of rule sets. We then review existing methods for extracting rules from Multilayer Perceptrons (MLP) and DNNs in three categories: the Decomposition Approach (extracting rules at the neuron level, for example by visualizing the structure of the network), the Pedagogical Approach (studying the correspondence between input and output, for example by computing gradients), and the Eclectic Approach (combining the two ideas above). Some potential research directions for extracting rules from DNNs are discussed at the end.
Keywords: Rule extraction | Accuracy | Interpretability | Multilayer Perceptron | Deep neural network
English article
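A minimal sketch of the pedagogical approach named in this review: treat the trained network as a black box and rank inputs by the gradient of the output with respect to them, a common starting point for forming rule antecedents. The tiny MLP and the random input below are placeholders, not a method from the paper.

```python
import torch
import torch.nn as nn

# Gradient-based input-output correspondence: large |gradient| marks an input
# feature as a candidate antecedent for an extracted rule.
mlp = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.rand(1, 4, requires_grad=True)   # one probe input (assumed)
y = mlp(x)
y.backward()                               # single-element output, so no grad arg needed
print("input sensitivities:", x.grad)
```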
7 Sparse low rank factorization for deep neural network compression
Sparse low rank factorization for deep neural network compression (2020)
Storing and processing millions of parameters in deep neural networks is highly challenging when deploying models in real-time applications on resource-constrained devices. The popular low-rank approximation approach, singular value decomposition (SVD), is generally applied to the weights of fully connected layers, where compact storage is achieved by keeping only the most prominent components of the decomposed matrices. Years of research on pruning-based neural network compression have revealed that the relative importance or contribution of neurons within a layer varies considerably. Recently, synapse pruning has also demonstrated that sparse matrices in the network architecture yield lower storage requirements and faster computation at inference time. We extend these arguments by proposing that the low-rank decomposition of weight matrices should also consider the significance of both the input and output neurons of a layer. Combining the ideas of sparsity and the unequal contributions of neurons towards the target, we propose the sparse low rank (SLR) method, which sparsifies the SVD matrices to achieve a better compression rate by keeping a lower rank for unimportant neurons. We demonstrate the effectiveness of our method in compressing well-known convolutional neural network based image recognition frameworks trained on popular datasets. Experimental results show that the proposed SLR approach outperforms vanilla truncated SVD and a pruning baseline, achieving better compression rates with minimal or no loss in accuracy. Code for the proposed approach is available at https://github.com/sridarah/slr .
Keywords: Low-rank approximation | Singular value decomposition | Sparse matrix | Deep neural networks | Convolutional neural networks
English article
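For context, here is the vanilla truncated-SVD baseline that SLR improves on: factor a dense layer's weight matrix and store two thin factors instead of the full matrix. SLR additionally sparsifies the factors, keeping a lower rank for neurons judged unimportant; this uniform-rank version is only the starting point, and the matrix size and rank below are assumed.

```python
import numpy as np

# Compress W (1024 x 512, assumed) as W ~= A @ B with rank k = 64.
rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 512))
k = 64

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]      # 1024 x 64: left factor absorbs the singular values
B = Vt[:k, :]             # 64 x 512

params_before = W.size
params_after = A.size + B.size
print("compression ratio:", params_before / params_after)  # ~5.3x for these sizes
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```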
8 Surrogate-Assisted Evolutionary Search of Spiking Neural Architectures in Liquid State Machines
Surrogate-assisted evolutionary search of spiking neural architectures in liquid state machines (2020)
Spiking neural networks (SNNs) are believed to be a powerful neural computation framework inspired by in vivo neurons. As a class of recurrent SNNs, liquid state machines (LSMs) are biologically more plausible models that imitate the architecture and functions of the human brain for information processing. However, few LSM models outperform conventional analogue neural networks on real-world classification or regression problems, which can mainly be attributed to the sensitivity of the training performance to the architecture of the reservoir and the parameters of the spiking neuron models. Recently, many algorithms have been proposed for automated machine learning, which aims to design the architecture and parameters of deep neural networks automatically, without much human intervention. Although automated machine learning and neural architecture search have been extremely successful for conventional neural networks, little research has been reported on searching for optimal architectures and hyperparameters of LSMs. This work proposes a surrogate-assisted evolutionary search method for optimizing the hyperparameters and the reservoir architecture of LSMs using the covariance matrix adaptation evolution strategy (CMA-ES). To reduce the search space, the architecture of the LSM is encoded by a connectivity probability together with the hyperparameters of the spiking neuron models. To enhance computational efficiency, a Gaussian process is adopted as a surrogate to assist the CMA-ES. The proposed GP-assisted CMA-ES is compared with the canonical CMA-ES and a Bayesian optimization algorithm on two popular datasets covering image and action recognition. Our results confirm that the proposed algorithm is efficient and effective in optimizing the parameters and architecture of LSMs.
Keywords: Spiking neural network | Liquid state machine | Parameter and architecture search | Surrogate-assisted evolutionary search | Evolution strategy | Bayesian optimization | Gaussian process
English article
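A hedged sketch of the surrogate-assisted loop: a Gaussian process fitted on all points evaluated so far pre-screens each generation, so only the most promising candidates pay for the expensive LSM evaluation. A plain Gaussian-mutation evolution strategy stands in for CMA-ES here (the covariance adaptation itself is omitted), and the objective is a placeholder for training and testing an LSM.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_eval(x):                 # placeholder for an LSM train/test run
    return np.sum((x - 0.3) ** 2)

dim, pop, keep = 5, 20, 5
mean, sigma = np.zeros(dim), 0.5
X_hist, y_hist = [], []

for gen in range(30):
    cand = mean + sigma * np.random.randn(pop, dim)
    if len(X_hist) >= 10:              # surrogate pre-screening once data exists
        gp = GaussianProcessRegressor().fit(np.array(X_hist), np.array(y_hist))
        pred = gp.predict(cand)
        cand = cand[np.argsort(pred)[:keep]]   # keep only the GP's best guesses
    fit = np.array([expensive_eval(x) for x in cand])
    X_hist += list(cand); y_hist += list(fit)
    mean = cand[np.argsort(fit)[:3]].mean(axis=0)   # recombine the best candidates

print("best found:", min(y_hist))
```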
9 Log-sum enhanced sparse deep neural network
Log-sum enhanced sparse deep neural network (2020)
How to design deep neural networks (DNNs) for the representation and analysis of high-dimensional but small-sample-size data is still a big challenge. One solution is to construct a sparse network. Many approaches currently achieve sparsity in DNNs by regularization, but most apply it only during pre-training, owing to the difficulty of deriving explicit formulae for the fine-tuning process. In this paper, a log-sum function is used as the regularization term for both the responses of the hidden neurons and the network connections in the loss function of the fine-tuning process. It approximates the L0-norm better than several commonly used norms. Based on the gradient formula of the loss function, the fine-tuning process can be executed more efficiently; in particular, the gradient calculation commonly used in deep learning platforms such as PyTorch or TensorFlow can be accelerated. Given the analytic formula for the gradients in any layer of the DNN, the error accumulated from successive numerical approximations in the differentiation process can be avoided. With the proposed log-sum enhanced sparse deep neural network (LSES-DNN), the sparsity of the responses and of the connections can be well controlled to improve the adaptivity of DNNs. The proposed model is applied to MRI data for both the diagnosis of schizophrenia and the study of brain development. Numerical experiments demonstrate its superior performance over several classical classifiers.
Keywords: Deep neural network | Log-sum enhanced sparsity | Back propagation algorithm | Concise gradient formula | Magnetic resonance imaging
English article
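A minimal PyTorch sketch of the log-sum penalty, sum(log(1 + |w|/eps)), applied to both hidden responses and connection weights during fine-tuning. The value of eps, the penalty weights, and the tiny network are assumed, and the paper's analytic gradient derivation is not reproduced (autograd is used instead).

```python
import torch
import torch.nn as nn

# Log-sum penalty: a tighter L0 surrogate than the L1 norm for small eps.
def log_sum(t, eps=0.1):
    return torch.log(1.0 + t.abs() / eps).sum()

net = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 100), torch.randint(0, 2, (64,))
opt = torch.optim.SGD(net.parameters(), lr=0.01)

for _ in range(100):
    h = net[1](net[0](x))                      # hidden neuron responses
    out = net[2](h)
    loss = (nn.functional.cross_entropy(out, y)
            + 1e-4 * sum(log_sum(p) for p in net.parameters())  # sparse connections
            + 1e-4 * log_sum(h))                                # sparse responses
    opt.zero_grad(); loss.backward(); opt.step()
```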
10 N3-CPL: Neuroplasticity-based Neuromorphic Network Cell Proliferation Learning
N3-CPL: Neuroplasticity-based neuromorphic network cell proliferation learning (2020)
Recent deep neural networks have brought tremendous changes and advancements in many areas. Neural networks are built mainly in black-box form, from complex combinations of continuous nonlinear functions of weighted input values. These combinations are called 'neural networks' because they are inspired by the biological structure of the human brain. The biggest feature of a neural network is that, by building a huge nonlinear function, it enables gradient-based optimization methods to be used for training. Recently, the construction of large-scale labeled datasets, the growth of computing resources, and the development of advanced regularization methods have given neural networks an excellent ability to model data that is otherwise difficult to analyze, driving huge progress in network performance.
English article