Download and view articles related to neural networks :: Page 1
Search results: neural network

Number of articles found: 157
Row | Title | Type
1 Wake modeling of wind turbines using machine learning
Wake modeling of wind turbines using machine learning - 2020
In this paper, a novel framework that employs machine learning and CFD (computational fluid dynamics) simulation to develop new wake velocity and turbulence models with high accuracy and good efficiency is proposed to improve turbine wake predictions. An ANN (artificial neural network) model based on the backpropagation (BP) algorithm is designed to build the underlying spatial relationship between the inflow conditions and the three-dimensional wake flows. To save computational cost, a reduced-order turbine model, ADM-R (actuator disk model with rotation), is incorporated into RANS (Reynolds-averaged Navier-Stokes) simulations coupled with a modified k-ε turbulence model to provide big datasets of wake flow for training, testing, and validation of the ANN model. The numerical framework of the RANS/ADM-R simulations is validated against a standalone Vestas V80 2MW wind turbine and the NTNU wind tunnel test of two aligned turbines. In the ANN-based wake model, the inflow wind speed and turbulence intensity at hub height are selected as input variables, while the spatial velocity deficit and added turbulence kinetic energy (TKE) in the wake field are taken as output variables. The ANN-based wake model is first deployed to a standalone turbine, and then the spatial wake characteristics and power generation of an aligned 8-turbine row, as a representation of the Horns Rev wind farm, are validated against Large Eddy Simulation (LES) and field measurements. The results of the ANN-based wake model show good agreement with the numerical simulations and measurement data, indicating that the ANN is capable of establishing the complex spatial relationship between inflow conditions and wake flows. Machine learning techniques can remarkably improve the accuracy and efficiency of wake predictions.
Keywords: Wind turbine wake | Wake model | Artificial neural network (ANN) | Machine learning | ADM-R (actuator-disk model with rotation) model | Computational fluid dynamics (CFD)
English article
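A minimal Python sketch of the kind of ANN wake model entry 1 describes: a feed-forward network maps the inflow conditions (hub-height wind speed and turbulence intensity) plus a spatial location to the local velocity deficit and added TKE. The synthetic training data, layer sizes, and coordinate ranges below are illustrative assumptions; the paper trains on RANS/ADM-R simulation output instead.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Inputs: hub-height wind speed [m/s], turbulence intensity [-],
# and downstream/lateral/vertical position (x, y, z) in rotor diameters.
X = rng.uniform([4.0, 0.04, 1.0, -2.0, -1.0],
                [25.0, 0.20, 10.0, 2.0, 1.0], size=(5000, 5))

# Outputs: velocity deficit and added TKE (a toy analytic surrogate
# standing in for the CFD training data).
deficit = np.exp(-X[:, 2] / 8.0) * np.exp(-(X[:, 3]**2 + X[:, 4]**2))
added_tke = X[:, 1] * deficit
y = np.column_stack([deficit, added_tke])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(scaler.transform(X), y)

# Query the trained wake model at a new point.
print(model.predict(scaler.transform([[10.0, 0.08, 5.0, 0.0, 0.0]])))
```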
2 Towards a real-time processing framework based on improved distributed recurrent neural network variants with fastText for social big data analytics
Towards a real-time processing framework based on improved distributed recurrent neural network variants with fastText for social big data analytics - 2020
Big data generated by social media stands for a valuable source of information that offers an excellent opportunity to mine valuable insights. In particular, user-generated content such as reviews, recommendations, and user behavior data is useful for supporting several marketing activities of many companies. Knowing what users are saying about the products they bought or the services they used through reviews in social media represents a key factor for making decisions. Sentiment analysis is one of the fundamental tasks in Natural Language Processing. Although deep learning for sentiment analysis has achieved great success and has allowed several firms to analyze and extract relevant information from their textual data, as the volume of data grows a model that runs in a traditional environment cannot be effective, which implies the importance of efficient distributed deep learning models for social big data analytics. Besides, it is known that social media analysis is a complex process that involves a set of complex tasks. Therefore, it is important to address the challenges and issues of social big data analytics and enhance the performance of deep learning techniques in terms of classification accuracy to obtain better decisions. In this paper, we propose an approach for sentiment analysis devoted to adopting fastText with recurrent neural network variants to represent textual data efficiently. It then employs the new representations to perform the classification task. Its main objective is to enhance the performance of well-known Recurrent Neural Network (RNN) variants in terms of classification accuracy and to handle large-scale data. In addition, we propose a distributed intelligent system for real-time social big data analytics. It is designed to ingest, store, process, index, and visualize the huge amount of information in real time. The proposed system adopts distributed machine learning with our proposed method for enhancing decision-making processes. Extensive experiments conducted on two benchmark data sets demonstrate that our proposal for sentiment analysis outperforms well-known distributed recurrent neural network variants (i.e., Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Gated Recurrent Unit (GRU)). Specifically, we tested the efficiency of our approach using the three different deep learning models. The results show that our proposed approach is able to enhance the performance of the three models. The current work can provide several benefits for researchers and practitioners who want to collect, handle, analyze, and visualize several sources of information in real time. Also, it can contribute to a better understanding of public opinion and user behavior using our proposed system with the improved variants of the most powerful distributed deep learning and machine learning algorithms. Furthermore, it is able to increase the classification accuracy of several existing works based on RNN models for sentiment analysis.
Keywords: Big data | FastText | Recurrent neural networks | LSTM | BiLSTM | GRU | Natural language processing | Sentiment analysis | Social big data analytics
English article
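A sketch of the pipeline entry 2 describes, under the assumption that a trainable Keras Embedding layer stands in for the pretrained fastText vectors the paper uses; the vocabulary size, sequence length, and toy data are invented for illustration.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB_DIM = 20000, 100, 300

model = models.Sequential([
    layers.Embedding(VOCAB, EMB_DIM),        # swap in fastText weights here
    layers.Bidirectional(layers.LSTM(128)),  # or plain LSTM / GRU variants
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Toy data: integer token ids and binary sentiment labels.
X = np.random.randint(0, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32)
```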
3 Neural network-based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
Publication year: 2020 - English PDF file: 16 pages - Persian doc file: 45 pages
Workflow scheduling is a research topic that has been studied extensively in cloud computing; it provisions cloud resources for workflow tasks while taking the specified QoS objectives into account. In this paper, we model the dynamic workflow scheduling problem as a dynamic multi-objective optimization problem (DMOP) in which the sources of dynamism are resource failures and the number of objectives, both of which may change over time. Software faults or hardware failures may cause the first kind of dynamism, while encountering real-life scenarios in cloud computing may change the number of objectives during workflow execution. In this study we present a prediction-based dynamic multi-objective evolutionary algorithm, called NN-DNSGA-II, which combines an artificial neural network with the NSGA-II algorithm. In addition, five non-prediction-based dynamic algorithms from the literature are adapted for the dynamic workflow scheduling problem. Scheduling solutions are found with respect to six objectives: minimization of makespan, cost, energy, and degree of imbalance, and maximization of reliability and utilization. Empirical studies based on real-world applications from the Pegasus workflow management system show that our NN-DNSGA-II algorithm significantly outperforms the alternative algorithms in most cases, with respect to metrics used for DMOPs whose true Pareto-optimal front is unknown, including the number of non-dominated solutions, Schott's spacing, and the Hypervolume indicator.
Translated article
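Entry 3 combines a neural-network predictor with NSGA-II. The sketch below shows only the prediction idea, under loose assumptions: an MLP is trained on archived population pairs from before and after past environment changes, then used to seed the next initial population after a change. The drift model and dimensions are toy stand-ins, and the NSGA-II loop itself is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
N_VARS, POP = 10, 50

# History: the population at time t and the re-optimized population at
# t+1 (stand-ins for archived NSGA-II outputs before/after a change).
pop_t = rng.random((POP, N_VARS))
pop_t1 = np.clip(pop_t + 0.1, 0.0, 1.0)  # toy drift of the Pareto set

predictor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=1).fit(pop_t, pop_t1)

# At the next change, predict where good solutions move and add a little
# noise to preserve diversity; this becomes NSGA-II's seed population.
seed = predictor.predict(pop_t1) + rng.normal(0.0, 0.01, (POP, N_VARS))
seed = np.clip(seed, 0.0, 1.0)
print(seed.shape)  # (50, 10)
```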
4 Neural network aided development of a semi-empirical interatomic potential for titanium
Neural network-aided development of a semi-empirical interatomic potential for titanium - 2020
Artificial neural networks, utilizing machine learning techniques to uncover subtle and complex patterns in big data problems, are able to condense large amounts of computationally expensive density functional theory and ab initio results into classical force field potentials. However, in order to produce a computationally efficient network with minimal network architecture, a structural fingerprint whose components are highly correlated to the per-atom energy is necessary. In this paper, we demonstrate the effectiveness of a structural fingerprint motivated by the highly successful MEAM formalism by creating an artificial neural network containing a single hidden layer of 20 nodes, which provides a semi-empirical force field potential for elemental titanium. This potential is suitable for dynamic calculations of α-, β-, and ω-titanium at a variety of temperatures. The potential achieves a number of results in agreement with DFT calculations that surpass classical potential formalisms with comparable computational performance.
Keywords: Machine learning | Neural networks | Titanium
English article
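A sketch mirroring the architecture stated in entry 4: a single hidden layer of 20 nodes maps a per-atom structural fingerprint to a per-atom energy. The fingerprint length, the synthetic energies, and the tanh activation are assumptions; the paper builds its fingerprint from the MEAM formalism and fits to DFT data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
N_ATOMS, FP_DIM = 2000, 16   # assumed fingerprint length

fingerprints = rng.random((N_ATOMS, FP_DIM))
energies = fingerprints @ rng.random(FP_DIM) - 4.85  # toy eV/atom target

# Single hidden layer of 20 nodes, as in the paper's network.
potential = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                         max_iter=5000, random_state=2)
potential.fit(fingerprints, energies)
print(potential.predict(fingerprints[:3]))
```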
5 Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning
Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning - 2020
Machine learning has been widely used in quantum chemistry, for example in data mining of quantum mechanics calculations and in representing potential energy surfaces with neural networks. In this study, we report our efforts on the optimization of the ReaxFF parameters with machine learning frameworks. Although deep neural network potentials like High-Dimensional Neural Network Potentials (HDNNP) have achieved much success in applications such as materials modeling, factors like memory usage, training time, and accuracy are still problems when the training data set is big. On the other hand, classical potentials like ReaxFF and REBO do not have these problems, and a combination of the two is an ideal solution. Machine learning has generated techniques such as automatic differentiation and backpropagation, with which we can optimize deep neural networks or complex interatomic potentials like ReaxFF. With the TensorFlow coding platform, we have constructed an Intelligent ReaxFF (I-ReaxFF) model in terms of matrix (or tensor) operations that can optimize ReaxFF parameters automatically with gradient-based optimizers like the adaptive moment solver (Adam) and backpropagation. As inherited from TensorFlow, one significant feature of our code is GPU acceleration: training can be five times faster with GPU acceleration than with pure CPU calculation. Another feature is that it can directly use ab initio molecular dynamics trajectories with surrounding periodic images as training data, therefore allowing the data set to be prepared with ease.
Keywords: Neural network | Parameterization | ReaxFF | Materials modeling | Machine learning
English article
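A minimal TensorFlow sketch of the idea behind I-ReaxFF as entry 5 presents it: write an empirical energy as differentiable tensor operations and let automatic differentiation plus Adam fit its parameters to reference energies. A three-parameter Morse potential stands in for the actual ReaxFF functional form, and the "ab initio" targets are generated analytically.

```python
import numpy as np
import tensorflow as tf

# Reference data: pair distances and target energies (toy stand-in for
# ab initio molecular dynamics snapshots).
r = tf.constant(np.linspace(1.5, 4.0, 100), dtype=tf.float32)
e_ref = 0.8 * (1.0 - tf.exp(-1.7 * (r - 2.2)))**2 - 0.8

d_e = tf.Variable(1.0)    # well depth (parameter to fit)
alpha = tf.Variable(1.0)  # width
r0 = tf.Variable(2.0)     # equilibrium distance

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(500):
    with tf.GradientTape() as tape:
        e = d_e * (1.0 - tf.exp(-alpha * (r - r0)))**2 - d_e
        loss = tf.reduce_mean((e - e_ref)**2)
    grads = tape.gradient(loss, [d_e, alpha, r0])
    opt.apply_gradients(zip(grads, [d_e, alpha, r0]))

print(d_e.numpy(), alpha.numpy(), r0.numpy())  # approaches (0.8, 1.7, 2.2)
```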
6 Identification of animal individuals using deep learning: A case study of giant panda
Identification of animal individuals using deep learning: a case study of the giant panda - 2020
The giant panda (Ailuropoda melanoleuca) is an iconic conservation species. However, long-term monitoring of wild giant pandas has been a challenge, largely due to the lack of an appropriate method for identifying target panda individuals. Although there are some traditional methods, such as the distance-bamboo stem fragments method, molecular biology methods, and manual visual identification, they all have limitations that restrict their application. Therefore, it is urgent to explore a reliable and efficient approach to identify giant panda individuals. Here, we applied deep learning technology and developed a novel face-identification model based on a convolutional neural network to identify giant panda individuals. The model was able to identify 95% of giant panda individuals in the validation dataset. In all simulated field situations where the quality of photo data was degraded, the model still accurately identified more than 90% of panda individuals. The identification accuracy of our model is robust to brightness, small rotation, and cleanness of photos, although a large rotation angle (> 20°) of photos has a significant influence on the identification accuracy of the model (P < 0.01). Our model can be applied in future studies of the giant panda, such as long-term monitoring and big data analysis of behavior, and can be adapted for individual identification of other wildlife species.
Keywords: Deep learning | convolutional neural network | Individual identification | Giant panda
English article
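A toy Keras CNN in the spirit of the face-identification model in entry 6: convolutional feature extraction followed by a softmax over known individuals. The input resolution, layer sizes, and number of individuals are assumptions, and random arrays stand in for cropped panda face photos.

```python
import numpy as np
from tensorflow.keras import layers, models

N_INDIVIDUALS = 25  # assumed number of known pandas

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_INDIVIDUALS, activation="softmax"),  # one class per panda
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-ins for cropped face photos and identity labels.
X = np.random.rand(64, 128, 128, 3).astype("float32")
y = np.random.randint(0, N_INDIVIDUALS, size=(64,))
model.fit(X, y, epochs=1, batch_size=16)
```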
7 A hybrid deep learning model for efficient intrusion detection in big data environment
A hybrid deep learning model for efficient intrusion detection in a big data environment - 2020
The volume of network and Internet traffic is expanding daily, with data being created at the zettabyte to petabyte scale at an exceptionally high rate. These can be characterized as big data, because they are large in volume, variety, velocity, and veracity. Security threats to networks, the Internet, websites, and organizations are growing alongside this growth in usage. Detecting intrusions in such a big data environment is difficult. Various intrusion-detection systems (IDSs) using artificial intelligence or machine learning have been proposed for different types of network attacks, but most of these systems either cannot recognize unknown attacks or cannot respond to such attacks in real time. Deep learning models, recently applied to large-scale big data analysis, have shown remarkable performance in general but have not been examined for detection of intrusions in a big data environment. This paper proposes a hybrid deep learning model to efficiently detect network intrusions based on a convolutional neural network (CNN) and a weight-dropped long short-term memory (WDLSTM) network. We use the deep CNN to extract meaningful features from IDS big data and the WDLSTM to retain long-term dependencies among extracted features while preventing overfitting on recurrent connections. The proposed hybrid method was compared with traditional approaches in terms of performance on a publicly available dataset, demonstrating its satisfactory performance.
Keywords: Big data | Intrusion detection | Deep learning | Convolutional neural network | Weight-dropped long short-term memory network
English article
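A Keras sketch of the hybrid in entry 7: a 1-D CNN extracts features from a window of traffic records and an LSTM retains dependencies across the window. Keras has no built-in weight-dropped LSTM, so recurrent_dropout is used here as a rough stand-in for the WDLSTM regularization; the window length, feature count, and class count are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW, N_FEATURES, N_CLASSES = 10, 41, 5  # e.g. 41 features, KDD-style data

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, 3, padding="same", activation="relu"),
    layers.Conv1D(128, 3, padding="same", activation="relu"),
    layers.LSTM(100, recurrent_dropout=0.2),  # stand-in for weight-dropped LSTM
    layers.Dense(N_CLASSES, activation="softmax"),  # normal + attack classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-ins for windows of flow records and attack labels.
X = np.random.rand(128, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(128,))
model.fit(X, y, epochs=1, batch_size=32)
```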
8 Neural network-based seismic response prediction model for building structures using artificial earthquakes
Neural network-based seismic response prediction model for building structures using artificial earthquakes - 2020
In this paper, a new model for predicting seismic responses of buildings based on the correlation of ground motion (GM) and the structure is presented by simulating numerous artificial earthquakes (AEQs). In the model, neural network (NN) configurations representing the relationships between GM characteristics and seismic responses of a structure are developed to predict responses of the structure using only GM data measured by a monitoring system in future seismic events. To extract the GM characteristics, multiple AEQs corresponding to the design response spectrum are generated based on probabilistic vibration theory, instead of using historical earthquakes. In the presented NN configurations, GM characteristics including mean and predominant period, significant duration, and peak ground acceleration constitute the input layer, and the maximum interstory drift ratio and maximum displacement constitute the output layer. In addition, a new parameter called the resonance area is proposed to represent the relationship between a GM and a target structure in the frequency domain and is utilized in the NN input layer. By employing the new parameter, dynamic characteristics of the structure are considered in the model's response estimation in relation to the GM. The model is applied to seismic response prediction for four multi-degree-of-freedom (MDOF) structures with different natural periods using 2700 AEQs. The validity of the presented NN models is confirmed by investigating the performance of response prediction. The effectiveness of the resonance-area parameter in the NN for predicting the seismic responses is assessed and discussed. Furthermore, the effects of the constitution of the NNs and their computational costs on estimation are investigated. Finally, the presented model is employed for prediction of seismic responses for a structural model of a planar reinforced concrete building structure.
Keywords: Structural health monitoring | Seismic response prediction | Neural network | Artificial earthquake
English article
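A sketch of the NN configuration entry 8 describes: ground-motion characteristics (mean period, predominant period, significant duration, PGA) plus the proposed resonance-area parameter as inputs, and peak responses as outputs. All data below is synthetic, including the toy response relations and units; the paper trains on simulated responses of MDOF structures to 2700 artificial earthquakes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
N = 2700  # one sample per artificial earthquake, as in the paper

# Inputs: mean period [s], predominant period [s], significant
# duration [s], PGA [g], resonance area (assumed normalization).
X = rng.uniform([0.2, 0.1, 5.0, 0.05, 0.0],
                [1.5, 2.0, 40.0, 0.60, 1.0], size=(N, 5))

# Outputs: max interstory drift ratio [-] and max displacement [m] (toy).
y = np.column_stack([0.002 * X[:, 3] * (1 + X[:, 4]),
                     0.05 * X[:, 3] * (1 + X[:, 4])])

scaler = StandardScaler().fit(X)
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                  random_state=3).fit(scaler.transform(X), y)
print(nn.predict(scaler.transform(X[:2])))
```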
9 Detection of flood disaster system based on IoT, big data and convolutional deep neural network
Detection of flood disaster system based on IoT, big data, and a convolutional deep neural network - 2020
Natural disasters can be defined as a blend of natural risks and vulnerabilities. Each year, natural as well as human-instigated disasters bring about infrastructural damage, distress, revenue losses, and injuries, in addition to a huge death toll. Researchers around the globe are trying to find a unique solution to gather, store, and analyse Big Data (BD) in order to predict outcomes for flood-based prediction systems. This paper proposes ideas and methods for the detection of flood disaster based on IoT, BD, and a convolutional deep neural network (CDNN) to overcome such difficulties. First, the input data is taken from the flood BD. Next, duplicate records are removed using HDFS map-reduce. After removal of the repeated data, the data are pre-processed using missing value imputation and a normalization function. Then, based on the pre-processed data, rules are generated using a combination-of-attributes method. In the last stage, the generated rules are provided as input to the CDNN classifier, which classifies each case as (a) a chance of flood occurrence or (b) no chance of flood occurrence. The outcomes obtained from the proposed CDNN method are compared on parameters like Sensitivity, Specificity, Accuracy, Precision, Recall, and F-score. Moreover, when the outcomes are compared with other existing algorithms, such as the Artificial Neural Network (ANN) and Deep Learning Neural Network (DNN), the proposed system gives more accurate results than the other methods.
Keywords: Hadoop distributed file system (HDFS) | Convolutional deep neural network (CDNN) | Normalization | Rule generation | Missing value imputation
English article
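A sketch of the imputation-normalization-classification portion of the pipeline in entry 9, assuming invented IoT sensor features; the HDFS map-reduce deduplication and the rule-generation stage are omitted, and a plain MLP classifier stands in for the CDNN.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Toy IoT sensor records: rainfall, river level, soil moisture, temperature.
X = rng.random((1000, 4))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)        # 1 = flood likely (toy rule)
X[rng.random(X.shape) < 0.05] = np.nan           # simulate missing readings

clf = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # missing value imputation
    ("scale", MinMaxScaler()),                   # normalization function
    ("cdnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                           random_state=4)),     # stand-in for the CDNN
])
clf.fit(X, y)
print(clf.predict(X[:5]))  # 1 = chance of flood, 0 = no chance
```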
10 Machine-learning based error prediction approach for coarse-grid Computational Fluid Dynamics (CG-CFD)
Machine learning-based error prediction approach for coarse-grid Computational Fluid Dynamics (CG-CFD) - 2020
Computational Fluid Dynamics (CFD) is one of the modeling approaches essential to identifying the parameters that affect Containment Thermal Hydraulics (CTH) phenomena. While the CFD approach can capture the multidimensional behavior of CTH phenomena, its computational cost is high when modeling complex accident scenarios. To mitigate this expense, we propose reliance on coarse-grid CFD (CG-CFD). Coarsening the computational grid increases the grid-induced error, thus requiring a novel approach that will produce a surrogate model predicting the distribution of the CG-CFD local error and correcting the fluid-flow variables. Given sufficiently fine-mesh simulations, a surrogate model can be trained to predict the CG-CFD local errors as a function of the coarse-grid local flow features. The surrogate model is constructed using Machine Learning (ML) regression algorithms. Two widely used ML regression algorithms were tested: Artificial Neural Network (ANN) and Random Forest (RF). The proposed CG-CFD method is illustrated with a three-dimensional turbulent flow inside a lid-driven cavity. We studied a set of scenarios to investigate the capability of the surrogate model to interpolate and extrapolate outside the training data range. The proposed method has proven capable of correcting the coarse-grid results and obtaining reasonable predictions for new cases (of different Reynolds numbers, different grid sizes, or larger geometries). Based on the investigated cases, we found that this novel method maximizes the benefit of the available data and shows potential for a good predictive capability.
Keywords: Coarse grid (mesh) | CFD | Machine learning | Discretization error | Big data | Artificial neural network | Random forest | Data-driven
English article
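A sketch of the CG-CFD surrogate idea from entry 10 using the Random Forest option: learn the local grid-induced error from coarse-grid flow features, then add the predicted error back to the coarse solution. The features and the "fine-grid" truth are synthetic stand-ins for paired coarse/fine lid-driven-cavity simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Coarse-grid local features per cell, e.g. velocity components,
# gradients, cell size (all synthetic here).
features = rng.random((10000, 5))
u_coarse = rng.random(10000)
u_fine = u_coarse + 0.1 * features[:, 4] * (1 - features[:, 0])  # toy truth
error = u_fine - u_coarse  # grid-induced error: the regression target

rf = RandomForestRegressor(n_estimators=100, random_state=5)
rf.fit(features, error)

# Correct the coarse solution with the predicted local error.
u_corrected = u_coarse + rf.predict(features)
print(float(np.mean(np.abs(u_fine - u_corrected))))
```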