Download and view articles related to Artificial neural network :: Page 1
Download the best ISI articles with Persian translation
Search results - Artificial neural network

Number of articles found: 89
Row | Title | Type
1 A neural network-based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
Publication year: 2020 - English PDF pages: 16 - Persian (translated) DOC pages: 45
Workflow scheduling is a widely studied research topic in cloud computing: cloud resources are assigned to workflow tasks while satisfying the objectives specified in the QoS requirements. In this paper, we model the dynamic workflow scheduling problem as a dynamic multi-objective optimization problem (DMOP), in which the source of dynamism is both resource failures and the number of objectives, which may change over time. Software faults or hardware failures may give rise to the first kind of dynamism, while encountering real-life scenarios in cloud computing may change the number of objectives during workflow execution. In this study we propose a prediction-based dynamic multi-objective evolutionary algorithm, called NN-DNSGA-II, which incorporates an artificial neural network into the NSGA-II algorithm. In addition, five non-prediction-based dynamic algorithms from the literature are adapted for the dynamic workflow scheduling problem. Scheduling solutions are found with respect to six objectives: minimization of makespan, cost, energy and degree of imbalance, and maximization of reliability and utilization. Empirical studies based on real-world applications from the Pegasus workflow management system show that our NN-DNSGA-II algorithm significantly outperforms the alternative algorithms in most cases with respect to the metrics used for DMOPs with the true Pareto-optimal front, including the number of non-dominated solutions, Schott's spacing, and the Hypervolume indicator.
Translated article
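For readers who want to experiment with the idea behind NN-DNSGA-II, the sketch below illustrates it under stated assumptions: pymoo's NSGA2 stands in for the authors' NSGA-II implementation, a scikit-learn MLPRegressor stands in for the artificial neural network, and the six-objective problem, the solution encoding, and the prediction targets are toy placeholders rather than the paper's workflow model.

```python
# Hypothetical sketch of the NN-DNSGA-II idea: after an environment change,
# an ANN trained on previously good solutions predicts a seed population for NSGA-II.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

N_TASKS = 20  # toy workflow: one decision variable per task (resource choice in [0, 1])

class ToyWorkflowProblem(ElementwiseProblem):
    """Toy stand-in for the six-objective workflow scheduling problem."""
    def __init__(self, failed_fraction=0.0):
        self.failed_fraction = failed_fraction  # crude model of resource failures over time
        super().__init__(n_var=N_TASKS, n_obj=6, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        usable = x * (1.0 - self.failed_fraction)
        makespan, cost = np.max(usable), np.sum(usable)        # placeholder objective functions
        energy, imbalance = np.sum(usable ** 2), np.std(usable)
        reliability, utilization = np.mean(usable), np.mean(usable > 0.5)  # maximized -> negated
        out["F"] = [makespan, cost, energy, imbalance, -reliability, -utilization]

# 1) Solve the problem before the change and keep the non-dominated set.
res_old = minimize(ToyWorkflowProblem(0.0), NSGA2(pop_size=60), ("n_gen", 40), seed=1)

# 2) Train an ANN on pairs (old solution -> re-optimized solution) from past changes;
#    the targets here are fabricated just to show the data flow.
X_old = res_old.X
y_new = np.clip(X_old + 0.05, 0.0, 1.0)   # placeholder for "where good solutions moved last time"
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X_old, y_new)

# 3) After a resource failure, seed the next NSGA-II run with the ANN's predictions
#    instead of a random population (the core NN-DNSGA-II idea).
predicted_seed = np.clip(ann.predict(X_old), 0.0, 1.0)
res_new = minimize(ToyWorkflowProblem(failed_fraction=0.3),
                   NSGA2(pop_size=len(predicted_seed), sampling=predicted_seed),
                   ("n_gen", 40), seed=1)
print(len(res_new.X), "non-dominated schedules after the change")
```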
2 A survey on deep learning based face recognition
A survey on deep learning-based face recognition - 2019
Deep learning, in particular the deep convolutional neural networks, has received increasing interest in face recognition recently, and a number of deep learning methods have been proposed. This paper summarizes about 330 contributions in this area. It reviews major deep learning concepts pertinent to face image analysis and face recognition, and provides a concise overview of studies on specific face recognition problems, such as handling variations in pose, age, illumination, expression, and heterogeneous face matching. A summary of databases used for deep face recognition is given as well. Finally, some open challenges and directions are discussed for future research.
Keywords: Deep learning | Face recognition | Artificial Neural Network | Convolutional Neural Networks | Autoencoder | Generative Adversarial Networks
English article
3 Intelligent fault diagnosis of cooling radiator based on deep learning analysis of infrared thermal images
Intelligent fault diagnosis of a cooling radiator based on deep learning analysis of infrared thermal images - 2019
Detection of faults and intelligent monitoring of equipment operations are essential for modern industries. Cooling radiator condition is one of the factors that affects engine performance. This paper proposes a novel and accurate radiator condition monitoring and intelligent fault detection approach based on thermal images, using a deep convolutional neural network (CNN) with a specific configuration that combines the feature extraction and classification steps. The CNN model is built on the VGG-16 structure, followed by a batch normalization layer, a dropout layer, and a dense layer. The suggested CNN model directly uses infrared thermal images as input to classify six conditions of the radiator: normal, tube blockage, coolant leakage, cap failure, loose connections between fins & tubes, and fin blockage. Evaluation of the model demonstrates that it leads to results better than traditional computational intelligence methods, such as an artificial neural network, and can be employed with high performance and accuracy for fault diagnosis and condition monitoring of the cooling radiator under various working circumstances.
Keywords: Cooling radiator | Fault detection | Thermal image analysis | Deep learning | Convolutional neural network
English article
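The abstract above names a concrete configuration: VGG-16 followed by batch normalization, dropout, and a dense layer over six radiator conditions. Below is a minimal Keras sketch of that configuration; the input resolution, dropout rate, frozen backbone, and optimizer are assumptions, not details taken from the paper.

```python
# Minimal sketch: VGG-16 backbone + batch norm + dropout + dense softmax head (six classes).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # normal, tube blockage, coolant leakage, cap failure, loose fins/tubes, fin blockage

backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
backbone.trainable = False  # assumption: train only the new classification head at first

model = models.Sequential([
    backbone,
    layers.Flatten(),
    layers.BatchNormalization(),   # batch normalization layer from the abstract
    layers.Dropout(0.5),           # dropout layer (rate is an assumption)
    layers.Dense(NUM_CLASSES, activation="softmax"),  # dense classification layer
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```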
4 Development of a hybrid classification technique based on deep learning applied to MSG / SEVIRI multispectral data
Development of a hybrid classification technique based on deep learning applied to MSG / SEVIRI multispectral data - 2019
The approach developed in this paper for the classification of precipitation intensities is based on deep learning with neural networks. Multispectral data from the MSG satellite (Meteosat Second Generation), providing information about the cloud's physical and optical characteristics, are exploited and used as inputs to a deep neural network model. The model is a combination of a CNN (Convolutional Neural Network) and a DMLP (Deep Multi-Layer Perceptron), trained and validated by comparison with the corresponding radar data during the rainy seasons 2006/2007 and 2010/2011 respectively. The CNN extracts spatial characteristics from MSG multispectral images. Then, the set of spatial and multi-spectral information is used as input for the DMLP. The results show an improvement compared to three other classifiers (Random Forest, Support Vector Machine and Artificial Neural Network). The CNN-DMLP method was also compared to the technique combining the three classifiers (SAR). The results indicate a percentage correct (PC) of 97% and a probability of detection (POD) of 90% for the CNN-DMLP method, compared to 94% and 87% for the SAR technique, respectively. In terms of bias, the CNN-DMLP method gives 1.08 compared to 1.10 for the SAR technique.
Keywords: MSG satellite | Classification | Deep learning | CNN | MLP
English article
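A hedged Keras sketch of the CNN + DMLP combination described above follows: a small CNN extracts spatial features from an MSG multispectral patch, the features are concatenated with per-pixel spectral values, and a deep multi-layer perceptron classifies precipitation intensity. The patch size, channel count, layer sizes, and number of intensity classes are assumptions, not values from the paper.

```python
# Sketch of a two-branch CNN + DMLP classifier for MSG/SEVIRI-style multispectral data.
import tensorflow as tf
from tensorflow.keras import layers, Model

patch = layers.Input(shape=(32, 32, 7), name="msg_patch")         # assumed 7 SEVIRI channels
spectral = layers.Input(shape=(7,), name="pixel_spectral_values")

# CNN branch: spatial feature extraction from the multispectral patch
x = layers.Conv2D(32, 3, activation="relu")(patch)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# DMLP branch: spatial and spectral information used together
h = layers.Concatenate()([x, spectral])
for units in (128, 64, 32):                    # "deep" multi-layer perceptron
    h = layers.Dense(units, activation="relu")(h)
out = layers.Dense(4, activation="softmax")(h)  # assumed 4 precipitation-intensity classes

model = Model(inputs=[patch, spectral], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```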
5 A Survey and Taxonomy of FPGA-based Deep Learning Accelerators
A survey and taxonomy of FPGA-based deep learning accelerators - 2019
Deep learning, the fastest growing segment of Artificial Neural Networks (ANN), has led to the emergence of many machine learning applications and their implementation across multiple platforms such as CPUs, GPUs and reconfigurable hardware (Field-Programmable Gate Arrays, or FPGAs). However, inspired by the structure and function of ANNs, large-scale deep learning topologies require a considerable amount of parallel processing, memory resources, high throughput and significant processing power. Consequently, in the context of real-time hardware systems, it is crucial to find the right trade-off between performance, energy efficiency, fast development, and cost. Although FPGAs are limited in size and resources, several approaches have shown that they provide a good starting point for the development of future deep learning implementation architectures. In this paper, we briefly review recent work related to the implementation of deep learning algorithms on FPGAs. We analyze and compare the design requirements and features of existing topologies to finally propose development strategies and implementation architectures for better use of FPGA-based deep learning topologies. In this context, we examine the frameworks used in these studies, which allow many topologies to be tested in order to arrive at the best implementation alternatives in terms of performance and energy efficiency.
Keywords: Deep learning | Framework | Optimized implementation | FPGA
English article
6 Deep learning for waveform identification of resting needle electromyography signals
Deep learning for waveform identification of resting needle electromyography signals - 2019
Objective: Given the recent advent of machine learning and artificial intelligence in medical data analysis, we hypothesized that a deep learning algorithm can classify resting needle electromyography (n-EMG) discharges. Methods: Six clinically observed resting n-EMG signals were used as a dataset. The data were converted to Mel-spectrograms. Data augmentation was then applied to the training data. Deep learning algorithms were applied to assess the accuracy of correct classification, with or without the use of pre-trained weights for the deep-learning networks. Results: While the original data yielded an accuracy of up to 0.86 on the test dataset, data augmentation up to 200,000 training images showed a significant increase in accuracy, to 1.0. The use of pre-trained weights (fine tuning) showed greater accuracy than training from scratch. Conclusions: Resting n-EMG signals were successfully classified by the deep-learning algorithm, especially with the use of data augmentation and transfer learning techniques. Significance: Computer-aided signal identification for clinical n-EMG testing might be possible with deep-learning algorithms.
Keywords: Needle electromyography | Deep learning | Artificial neural network | Data augmentation | Resting discharge
English article
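The pipeline in this abstract (signal to Mel-spectrogram, then a fine-tuned pre-trained CNN) can be sketched as follows. This is not the authors' code: the sampling rate, spectrogram parameters, MobileNetV2 backbone, and six-class head are assumptions used only to show the data flow.

```python
# Minimal sketch: n-EMG trace -> Mel-spectrogram -> pre-trained CNN fine-tuned for 6 classes.
import numpy as np
import librosa
import tensorflow as tf

def emg_to_melspectrogram(signal, sr=44100):
    """Convert a 1-D n-EMG trace into a dB-scaled Mel-spectrogram image."""
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=128)
    return librosa.power_to_db(mel, ref=np.max)

# Pre-trained backbone ("fine tuning" rather than training from scratch).
# In practice each spectrogram would be resized to 128x128 and repeated across 3 channels.
backbone = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                             input_shape=(128, 128, 3))
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation="softmax"),  # six resting discharge types
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```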
7 Towards Early Monitoring of Chemotherapy-induced Drug Resistance Based on Single Cell Metabolomics: Combining Single-probe Mass Spectrometry with Machine Learning
Towards early monitoring of chemotherapy-induced drug resistance based on single-cell metabolomics: combining single-probe mass spectrometry with machine learning - 2019
Despite the presence of methods for evaluating drug resistance during chemotherapy, techniques that allow the degree of drug resistance to be monitored at an early chemotherapeutic stage from single cells in their native microenvironment are still absent. Herein, we report an analytical approach that combines single cell mass spectrometry (SCMS) based metabolomics with machine learning (ML) models to address the existing challenges. Metabolomic profiles of live cancer cells (HCT-116) with different levels (i.e., no, low, and high) of chemotherapy-induced drug resistance were measured using the Single-probe SCMS technique. A series of ML models, including random forest (RF), artificial neural network (ANN), and penalized logistic regression (LR), were constructed to predict the degree of drug resistance of individual cells. A systematic comparison of performance was conducted among the models, and the method was validated experimentally. Our results indicate that these ML models, especially the RF model constructed on the obtained SCMS datasets, can rapidly and accurately predict different degrees of drug resistance of live single cells. With such rapid and reliable assessment of drug resistance demonstrated at the single cell level, our method can potentially be employed to evaluate chemotherapeutic efficacy in the clinic.
Keywords: Single Cell Mass Spectrometry | Metabolomics | Machine Learning | Drug Resistance | The Single-probe
English article
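The model comparison described above can be reproduced in outline with scikit-learn. The sketch below trains a random forest, a small neural network, and L2-penalized logistic regression on placeholder feature vectors and compares them by cross-validated accuracy; the data shapes and hyperparameters are assumptions, not the authors' SCMS dataset.

```python
# Sketch: compare RF, ANN, and penalized LR on single-cell-style feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(300, 50)        # placeholder: 300 cells x 50 metabolite intensities
y = np.random.randint(0, 3, 300)   # placeholder labels: no / low / high drug resistance

models = {
    "random forest": RandomForestClassifier(n_estimators=200),
    "artificial neural network": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)),
    "penalized logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(penalty="l2", C=1.0, max_iter=1000)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```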
8 A machine learning approach for traffic-noise annoyance assessment
A machine learning approach for traffic-noise annoyance assessment - 2019
In this study, models for predicting traffic-noise annoyance based on noise perception, noise exposure levels, and demographics were developed. By applying machine-learning techniques, in particular artificial neural networks (ANN), support vector machines (SVM) and multiple linear regression (MLR), the traffic-noise annoyance models were obtained and their error rates compared. A traffic noise map and an estimation of noise exposure for the case study area were developed. Although it is quite evident that subjective noise perception and predicted noise exposure levels strongly influence traffic-noise annoyance, traditional statistical models fail to produce accurate predictions. Therefore, a machine-learning approach was applied, which showed better performance in terms of error rates and the coefficient of determination (R2). The best results for predicting traffic-noise annoyance were obtained with the ANN model, with 42% and 35% error reduction on training subsets compared to the MLR and SVM models, respectively. For testing subsets, the error reductions were 24% and 19% for the corresponding models. The coefficient of determination R2 increased 3.8 and 2.3 times using ANN compared to the MLR and SVM models on training subsets, respectively, and 1.7 times (for both MLR and SVM models) on testing subsets. In this way, the applied methodology can be used as a reliable and more accurate tool for determining the impact of transportation noise in urban contexts, promoting the well-being of the population and the creation of suitable public policy.
Keywords: Noise annoyance | Traffic noise | Machine-learning | Artificial neural networks | Support vector machine
English article
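A minimal scikit-learn sketch of the comparison reported above: an ANN, an SVM, and multiple linear regression are fitted to predict an annoyance score and compared by R2 on a held-out split. The synthetic predictors and response below stand in for the study's survey and exposure data.

```python
# Sketch: compare ANN, SVM, and MLR regressors by coefficient of determination (R2).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.random.rand(500, 6)   # placeholder predictors: perception, exposure level, demographics
y = X @ np.array([2.0, 1.5, 0.5, 0.3, 0.2, 0.1]) + np.random.randn(500) * 0.3  # synthetic annoyance score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000),
    "SVM": SVR(kernel="rbf"),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```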
9 Times-series data augmentation and deep learning for construction equipment activity recognition
Time-series data augmentation and deep learning for construction equipment activity recognition - 2019
Automated, real-time, and reliable equipment activity recognition on construction sites can help to minimize idle time, improve operational efficiency, and reduce emissions. Previous efforts in activity recognition of construction equipment have explored different classification algorithms and sensors such as accelerometers and gyroscopes. These studies utilized pattern recognition approaches such as statistical models (e.g., hidden-Markov models); shallow neural networks (e.g., Artificial Neural Networks); and distance algorithms (e.g., K-nearest neighbor) to classify the time-series data collected from sensors mounted on the equipment. Such methods necessitate the segmentation of continuous operational data with fixed or dynamic windows to extract statistical features. This heuristic and manual feature extraction process is limited by human knowledge and can only extract human-specified shallow features. However, recent developments in deep neural networks, specifically recurrent neural networks (RNN), present new opportunities to classify sequential time-series data with recurrent lateral connections. An RNN can automatically learn high-level representative features through the network instead of relying on manually designed features, making it more suitable for complex activity recognition. However, the application of RNNs requires a large training dataset, which poses a practical challenge to obtain from real construction sites. Thus, this study presents a data-augmentation framework for generating synthetic time-series training data for an RNN-based deep learning network to accurately and reliably recognize equipment activities. The proposed methodology is validated by generating synthetic data from sample datasets that were collected from two earthmoving operations in the real world. The synthetic data along with the collected data were used to train a long short-term memory (LSTM)-based RNN. The trained model was evaluated by comparing its performance with traditionally used classification algorithms for construction equipment activity recognition. The deep learning framework presented in this study outperformed the traditionally used machine learning classification algorithms for activity recognition regarding model accuracy and generalization.
Keywords: Construction equipment activity recognition | Inertial measurement unit | Deep learning | Time-series data augmentation | LSTM network | Big data analytics
English article
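To make the augmentation-plus-LSTM framework concrete, the sketch below generates synthetic IMU windows by jittering and per-channel scaling (one simple form of time-series augmentation, not necessarily the authors') and defines an LSTM classifier over equipment activities. Window length, channel count, activity labels, and augmentation parameters are assumptions.

```python
# Sketch: synthetic time-series augmentation plus an LSTM activity classifier.
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, N_ACTIVITIES = 128, 6, 4   # e.g. 3-axis accelerometer + 3-axis gyroscope

def augment(window, n_copies=10, jitter=0.05, scale=0.1):
    """Create synthetic training windows from one real IMU window."""
    copies = []
    for _ in range(n_copies):
        noise = np.random.normal(0.0, jitter, window.shape)           # jittering
        factor = np.random.normal(1.0, scale, (1, window.shape[1]))   # per-channel scaling
        copies.append(window * factor + noise)
    return np.stack(copies)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(64),                                   # recurrent layer learns features directly
    tf.keras.layers.Dense(N_ACTIVITIES, activation="softmax"),  # e.g. dig, swing, dump, idle
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Usage: real_window is one (128, 6) segment of sensor data with a known activity label.
real_window = np.random.randn(WINDOW, CHANNELS)
synthetic = augment(real_window)    # (10, 128, 6) synthetic training windows
print(synthetic.shape)
```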
10 Intelligent fault diagnosis for rotating machinery using deep Q-network based health state classification: A deep reinforcement learning approach
Intelligent fault diagnosis for rotating machinery using deep Q-network based health state classification: a deep reinforcement learning approach - 2019
Fault diagnosis methods for rotating machinery have always been a hot research topic, and artificial intelligence-based approaches have attracted increasing attention from both researchers and engineers. Among those related studies and methods, artificial neural networks, especially deep learning-based methods, are widely used to extract fault features or classify fault features obtained by other signal processing techniques. Although such methods could solve the fault diagnosis problems of rotating machinery, there are still two deficiencies. (1) Unable to establish direct linear or non-linear mapping between raw data and the corresponding fault modes, the performance of such fault diagnosis methods highly depends on the quality of the extracted features. (2) The optimization of neural network architecture and parameters, especially for deep neural networks, requires considerable manual modification and expert experience, which limits the applicability and generalization of such methods. As a remarkable breakthrough in artificial intelligence, AlphaGo, a representative achievement of deep reinforcement learning, provides inspiration and direction for the aforementioned shortcomings. Combining the advantages of deep learning and reinforcement learning, deep reinforcement learning is able to build an end-to-end fault diagnosis architecture that can directly map raw fault data to the corresponding fault modes. Thus, based on deep reinforcement learning, a novel intelligent diagnosis method is proposed that is able to overcome the shortcomings of the aforementioned diagnosis methods. Validation tests of the proposed method are carried out using datasets of two types of rotating machinery, rolling bearings and hydraulic pumps, which contain a large number of measured raw vibration signals under different health states and working conditions. The diagnosis results show that the proposed method is able to obtain intelligent fault diagnosis agents that can mine the relationships between the raw vibration signals and fault modes autonomously and effectively. Considering that the learning process of the proposed method depends only on the replayed memories of the agent and the overall rewards, which represent much weaker feedback than that obtained by the supervised learning-based method, the proposed method is promising in establishing a general fault diagnosis architecture for rotating machinery.
Keywords: Fault diagnosis | Rotating machinery | Deep reinforcement learning | Deep Q-network
English article
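The sketch below shows, in a deliberately reduced form, how a deep Q-network can act as a health-state classifier: each raw vibration segment is a state, each candidate diagnosis is an action, and the agent is rewarded for a correct diagnosis. Because each episode here is a single step, the Q-target is simply the reward, so experience replay and target networks are omitted; the segment length, class count, and reward values are assumptions, not the paper's settings.

```python
# Much-simplified sketch of DQN-style health state classification from raw vibration segments.
import numpy as np
import tensorflow as tf

SEGMENT_LEN, N_HEALTH_STATES = 1024, 4

q_net = tf.keras.Sequential([
    tf.keras.Input(shape=(SEGMENT_LEN,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_HEALTH_STATES),          # one Q-value per candidate diagnosis
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(segments, labels, epsilon=0.1):
    """One epsilon-greedy update on a batch of labelled vibration segments."""
    q_values = q_net(segments).numpy()
    actions = np.argmax(q_values, axis=1)                        # greedy diagnosis per segment
    explore = np.random.rand(len(actions)) < epsilon
    actions[explore] = np.random.randint(0, N_HEALTH_STATES, explore.sum())
    rewards = tf.constant((actions == labels).astype("float32") * 2.0 - 1.0)  # +1 correct, -1 wrong

    with tf.GradientTape() as tape:
        q = q_net(segments)
        q_taken = tf.gather(q, actions, batch_dims=1)            # Q-value of the chosen diagnosis
        loss = tf.reduce_mean(tf.square(rewards - q_taken))      # one-step episodes: target = reward
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return float(loss)

# Usage with placeholder data:
segments = np.random.randn(32, SEGMENT_LEN).astype("float32")
labels = np.random.randint(0, N_HEALTH_STATES, 32)
print(train_step(segments, labels))
```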