1.
High-Performance Reservoir Computing With Fluctuations in Linear Networks
(2022)
Reservoir computing has emerged as a powerful
machine learning paradigm for harvesting nontrivial information
processing out of disordered physical systems driven by sequential inputs. To this end, the system observables must become
nonlinear functions of the input history. We show that encoding
the input to quantum or classical fluctuations of a network of
interacting harmonic oscillators can lead to a high performance
comparable to that of a standard echo state network in several
nonlinear benchmark tasks. This equivalence in performance
holds even with a linear Hamiltonian and a readout linear in the
system observables. Furthermore, we find that the performance of
the network of harmonic oscillators in nonlinear tasks is robust to
errors both in input and reservoir observables caused by external
noise. For any reservoir computing system with a linear readout,
the magnitude of trained weights can either amplify or suppress
noise added to reservoir observables. We use this general result to
explain why the oscillators are robust to noise and why having
precise control over reservoir memory is important for noise
robustness in general. Our results pave the way toward reservoir
computing harnessing fluctuations in disordered linear systems.
Index Terms: Dynamical systems | machine learning | quantum mechanics | recurrent neural networks | reservoir computing | supervised learning. |
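The mechanism this abstract relies on, a fixed driven dynamical system whose observables are combined by a trained linear readout, can be illustrated with a minimal echo state network sketch. The toy task, network sizes, and hyperparameters below are our own assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: predict y(t) = sin(u(t-1)) from a scalar input stream.
T, n_res = 2000, 100
u = rng.uniform(-1, 1, T)
y = np.sin(np.roll(u, 1))

# Random recurrent weights, rescaled so the spectral radius is below 1
# (a common sufficient condition for the echo state property).
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

# Drive the reservoir; tanh makes the observables nonlinear in the input history.
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Linear readout trained by ridge regression on the collected states.
washout = 100
X, Y = states[washout:], y[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
pred = X @ w_out
print(np.mean((pred - Y) ** 2))
```

Only `w_out` is trained; the recurrent dynamics stay fixed, which is what makes the paper's linear-Hamiltonian, linear-readout result notable.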
Type: English article

2.
Towards optimal control of air handling units using deep reinforcement learning and recurrent neural network
(2020)
A new generation of smart stormwater systems promises to reduce the need for new construction by enhancing the performance of the existing infrastructure through real-time control. Smart stormwater systems dynamically adapt their response to individual storms by controlling distributed assets, such as valves, gates, and pumps. This paper introduces a real-time control approach based on Reinforcement Learning (RL), which has emerged as a state-of-the-art methodology for autonomous control in the artificial intelligence community. Using a Deep Neural Network, an RL-based controller learns a control strategy by interacting with the system it controls, effectively trying various control strategies until converging on those that achieve a desired objective. This paper formulates and implements an RL algorithm for the real-time control of urban stormwater systems. This algorithm trains an RL agent to control valves in a distributed stormwater system across thousands of simulated storm scenarios, seeking to achieve water level and flow set-points in the system. The algorithm is first evaluated for the control of an individual stormwater basin, after which it is adapted to the control of multiple basins in a larger watershed (4 km²). The results indicate that RL can very effectively control individual sites. Performance is highly sensitive to the reward formulation of the RL agent. Generally, more explicit guidance led to better control performance and more rapid and stable convergence of the learning process. While the control of multiple distributed sites also shows promise in reducing flooding and peak flows, the complexity of controlling larger systems comes with a number of caveats.
The RL controller's performance is very sensitive to the formulation of the Deep Neural Network and requires a significant amount of computational resources to achieve a reasonable performance enhancement. Overall, the controlled system significantly outperforms the uncontrolled system, especially across storms of high intensity and duration. A frank discussion is provided, which should allow the benefits and drawbacks of RL to be considered when implementing it for the real-time control of stormwater systems. An open-source implementation of the full simulation environment and control algorithms is also provided. Keywords: Real-time control | Reinforcement learning | Smart stormwater systems |
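As a hedged illustration of the control loop described above, here is tabular Q-learning on an invented one-basin toy model (discrete water levels, three valve positions, reward equal to negative distance from a set-point). The paper itself uses a Deep Neural Network and a full stormwater simulator, so everything below is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy basin: the level rises with random inflow and drains
# through the valve. States: discretised water level; actions: valve
# closed / half open / fully open.
n_levels, actions = 21, (0.0, 0.5, 1.0)
set_point = 10  # target level index

def step(level, a):
    inflow = rng.integers(0, 3)           # random storm inflow (0..2)
    outflow = int(round(4 * actions[a]))  # drainage through the valve
    nxt = int(np.clip(level + inflow - outflow, 0, n_levels - 1))
    reward = -abs(nxt - set_point)        # explicit set-point guidance
    return nxt, reward

# Tabular Q-learning: try actions, reinforce those meeting the set-point.
Q = np.zeros((n_levels, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
level = 0
for _ in range(20000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[level]))
    nxt, r = step(level, a)
    Q[level, a] += alpha * (r + gamma * Q[nxt].max() - Q[level, a])
    level = nxt

# Greedy rollout: how far does the learned policy drift from the set-point?
lvl, devs = set_point, []
for _ in range(1000):
    lvl, _ = step(lvl, int(np.argmax(Q[lvl])))
    devs.append(abs(lvl - set_point))
print(np.mean(devs))
```

The reward here encodes the explicit set-point guidance that the abstract reports as decisive for control performance.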
Type: English article

3.
Looking in the Right Place for Anomalies: Explainable AI Through Automatic Location Learning
(2020)
Deep learning has now become the de facto approach to the
recognition of anomalies in medical imaging. The 'black
box' way these models classify medical images into anomaly labels
poses problems for their acceptance, particularly with clinicians.
Current explainable AI methods offer justifications
through visualizations such as heat maps but cannot guarantee
that the network is focusing on the relevant image region fully
containing the anomaly. In this paper we develop an approach
to explainable AI in which the anomaly is assured to overlap
the expected location when present. This is made
possible by automatically extracting location-specific labels
from textual reports and learning the association of expected locations
to labels using a hybrid combination of Bi-Directional
Long Short-Term Memory Recurrent Neural Networks (Bi-
LSTM) and DenseNet-121. Use of this expected location to
bias the subsequent attention-guided inference network based
on ResNet101 results in the isolation of the anomaly at the
expected location when present. The method is evaluated on a
large chest X-ray dataset. |
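The label-mining step, reading the report text forwards and backwards and scoring candidate locations, can be sketched as below. A toy Elman RNN stands in for the paper's Bi-LSTM + DenseNet-121 combination, and the vocabulary, label set, and weights are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of the bidirectional pass used to mine location
# labels from report text: a forward and a backward recurrent read are
# concatenated, then a linear layer scores candidate location labels.
vocab = {"opacity": 0, "in": 1, "left": 2, "lower": 3, "lobe": 4}
locations = ["left-lower-lobe", "right-upper-lobe", "cardiac-region"]
dim = 8
E = rng.normal(0, 0.3, (len(vocab), dim))        # token embeddings
W_f, W_b = rng.normal(0, 0.3, (2, dim, dim))     # forward / backward recurrences
W_out = rng.normal(0, 0.3, (len(locations), 2 * dim))

def rnn(tokens, W):
    """One directional pass of a minimal Elman RNN over token embeddings."""
    h = np.zeros(dim)
    for t in tokens:
        h = np.tanh(W @ h + E[t])
    return h

tokens = [vocab[w] for w in "opacity in left lower lobe".split()]
h_fwd = rnn(tokens, W_f)
h_bwd = rnn(tokens[::-1], W_b)                   # backward read
scores = W_out @ np.concatenate([h_fwd, h_bwd])
predicted = locations[int(np.argmax(scores))]
```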
Type: English article

4.
Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks
(2020)
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown distinct advantages,
e.g., solving memory-dependent tasks and meta-learning. However, little effort has been spent on improving
RNN architectures and on understanding the underlying neural mechanisms for performance
gain. In this paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical results
show that the network can autonomously learn to abstract sub-goals and can self-develop an action
hierarchy using internal dynamics in a challenging continuous control task. Furthermore, we show
that the self-developed compositionality of the network enables faster re-learning when adapting to a
new task that re-composes previously learned sub-goals than when starting from scratch. We
also found that improved performance can be achieved when neural activities are subject to stochastic
rather than deterministic dynamics. Keywords: Recurrent neural network | Reinforcement learning | Partially observable Markov decision process | Multiple timescale | Compositionality |
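A minimal sketch of a multiple-timescale, stochastic recurrent update follows. This is our own leaky-integrator formulation with per-unit time constants and additive Gaussian noise; the paper's actual architecture and training loop are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Units have per-unit time constants tau: small tau -> fast dynamics
# (low-level actions), large tau -> slow dynamics (sub-goal abstraction).
n_fast, n_slow, n_in = 8, 4, 2
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
W = rng.normal(0, 0.3, (n, n))
W_in = rng.normal(0, 0.3, (n, n_in))

def step(h, u, sigma=0.05):
    """Leaky (continuous-time-style) update; additive Gaussian noise
    plays the role of the stochastic neural dynamics."""
    pre = W @ np.tanh(h) + W_in @ u
    noise = sigma * rng.normal(size=n)
    return h + (1.0 / tau) * (-h + pre) + noise

h = np.zeros(n)
for _ in range(100):
    h = step(h, rng.normal(size=n_in))  # drive with random observations
```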
Type: English article

5.
Random recurrent neural networks with delays
(2020)
An infinite lattice model of a recurrent neural network with random connection strengths between neurons is developed and analyzed. To incorporate the presence of various types of delays in neural networks, both discrete and distributed time-varying delays are considered in the model. For the existence of random pullback attractors and periodic attractors, the nonlinear terms of the resulting system are not expected to be Lipschitz continuous, but only to satisfy a weaker continuity assumption along with growth conditions, under which the uniqueness of the underlying Cauchy problem may not hold. Then, after extending the concept and theory of monotone multi-valued semiflows to the random context, the structure of random pullback attractors with or without periodicity is investigated. In particular, the existence and stability properties of extremal random complete trajectories are studied. Keywords: Random recurrent neural network | Random attractor | Multi-valued non-compact random dynamical system | Extremal random complete trajectory | Variable delay | Monotone |
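As a hedged sketch (a common prototype of such lattice systems, not necessarily the paper's exact equations), a random RNN combining a discrete variable delay and a distributed delay can be written as:

```latex
\frac{\mathrm{d}u_i}{\mathrm{d}t}(t)
  = -\lambda_i u_i(t)
  + \sum_{j} a_{ij}\, f\bigl(u_j(t - \rho_j(t))\bigr)
  + \sum_{j} b_{ij} \int_{0}^{\infty} \kappa(s)\, g\bigl(u_j(t - s)\bigr)\,\mathrm{d}s
  + h_i(\theta_t \omega),
  \qquad i \in \mathbb{Z},
```

where $\rho_j(t)$ is the discrete time-varying delay, $\kappa$ the distributed-delay kernel, $h_i(\theta_t\omega)$ the random forcing, and $f, g$ are continuous with growth conditions but not assumed Lipschitz, matching the weak continuity assumption in the abstract.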
Type: English article

6.
Towards a real-time processing framework based on improved distributed recurrent neural network variants with fastText for social big data analytics
(2020)
Big data generated by social media represents a valuable source of information, which offers an
excellent opportunity to mine valuable insights. In particular, user-generated content such as
reviews, recommendations, and user behavior data is useful for supporting several marketing
activities of many companies. Knowing what users are saying about the products they bought or
the services they used through reviews in social media represents a key factor for making decisions.
Sentiment analysis is one of the fundamental tasks in Natural Language Processing.
Although deep learning for sentiment analysis has achieved great success and allowed several
firms to analyze and extract relevant information from their textual data, as the volume of
data grows, a model that runs in a traditional environment cannot be effective, which implies the
importance of efficient distributed deep learning models for social Big Data analytics. Besides, it
is known that social media analysis is a complex process, which involves a set of complex tasks.
Therefore, it is important to address the challenges and issues of social big data analytics and
enhance the performance of deep learning techniques in terms of classification accuracy to obtain
better decisions.
In this paper, we propose an approach for sentiment analysis, which is devoted to adopting
fastText with Recurrent neural network variants to represent textual data efficiently. Then, it
employs the new representations to perform the classification task. Its main objective is to enhance
the performance of well-known Recurrent Neural Network (RNN) variants in terms of
classification accuracy and handle large scale data. In addition, we propose a distributed intelligent
system for real-time social big data analytics. It is designed to ingest, store, process,
index, and visualize the huge amount of information in real-time. The proposed system adopts
distributed machine learning with our proposed method for enhancing decision-making processes.
Extensive experiments conducted on two benchmark data sets demonstrate that our
proposal for sentiment analysis outperforms well-known distributed recurrent neural network
variants (i.e., Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory
(BiLSTM), and Gated Recurrent Unit (GRU)). Specifically, we tested the efficiency of our approach
using the three different deep learning models. The results show that our proposed approach
is able to enhance the performance of the three models. The current work can provide
several benefits for researchers and practitioners who want to collect, handle, analyze and visualize
several sources of information in real-time. Also, it can contribute to a better understanding
of public opinion and user behaviors using our proposed system with the improved
variants of the most powerful distributed deep learning and machine learning algorithms.
Furthermore, it is able to increase the classification accuracy of several existing works based on
RNN models for sentiment analysis. Keywords: Big data | FastText | Recurrent neural networks | LSTM | BiLSTM | GRU | Natural language processing | Sentiment analysis | Social big data analytics |
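The representation step described above, fastText-style subword vectors feeding an RNN variant, can be caricatured as follows. Hashed character trigrams and a tiny untrained Elman RNN are our own simplifications; the paper uses trained fastText embeddings with LSTM, BiLSTM, and GRU models on a distributed platform:

```python
import numpy as np

rng = np.random.default_rng(0)

# fastText idea: a word vector is the mean of its character n-gram
# vectors, so rare and unseen words still get a representation.
dim, buckets = 16, 1000
grams_table = rng.normal(0, 0.1, (buckets, dim))  # shared n-gram embeddings

def word_vector(word, n=3):
    padded = f"<{word}>"                           # fastText boundary markers
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    idx = [hash(g) % buckets for g in grams]       # hashing trick
    return grams_table[idx].mean(axis=0)

# Minimal Elman RNN forward pass over the word vectors (weights untrained).
W_h = rng.normal(0, 0.3, (dim, dim))
W_x = rng.normal(0, 0.3, (dim, dim))
w_out = rng.normal(0, 0.3, dim)

def sentiment_score(sentence):
    h = np.zeros(dim)
    for w in sentence.lower().split():
        h = np.tanh(W_h @ h + W_x @ word_vector(w))
    return 1.0 / (1.0 + np.exp(-w_out @ h))        # sigmoid score in (0, 1)

score = sentiment_score("the service was surprisingly good")
```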
Type: English article

7.
Benchmarking the Performance and Energy Efficiency of AI Accelerators for AI Training
(2020)
Deep learning has become widely used in complex AI
applications. Yet, training a deep neural network (DNN) model
requires a considerable amount of calculations, long running time,
and much energy. Nowadays, many-core AI accelerators (e.g.,
GPUs and TPUs) are designed to improve the performance of
AI training. However, processors from different vendors perform
dissimilarly in terms of performance and energy consumption.
To investigate the differences among several popular off-the-shelf
processors (i.e., Intel CPU, NVIDIA GPU, AMD GPU, and
Google TPU) in training DNNs, we carry out a comprehensive
empirical study on the performance and energy efficiency of
these processors by benchmarking a representative set of deep
learning workloads, including computation-intensive operations,
classical convolutional neural networks (CNNs), recurrent neural
networks (LSTM), Deep Speech 2, and Transformer. Different
from existing end-to-end benchmarks, which only present the
training time, we try to investigate the impact of hardware,
vendor’s software library, and deep learning framework on
the performance and energy consumption of AI training. Our
evaluation methods and results not only provide an informative
guide for end users to select proper AI accelerators, but also
expose some opportunities for the hardware vendors to improve
their software library. Index Terms: AI Accelerator | Deep Learning | CPU | GPU | TPU | Computation-intensive Operations | Convolution Neural Networks | Recurrent Neural Networks | Transformer | Deep Speech 2 |
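In the spirit of the study's computation-intensive-operation benchmarks, a micro-benchmark looks like the sketch below (a CPU matmul timed with NumPy; the paper's harness, workloads, and energy measurements are far more elaborate):

```python
import time
import numpy as np

# Hypothetical micro-benchmark: time a computation-intensive operation
# and report achieved throughput. Sizes and repetition count are arbitrary.
n, reps = 512, 20
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b                              # warm-up pass, excluded from timing
t0 = time.perf_counter()
for _ in range(reps):
    c = a @ b
elapsed = time.perf_counter() - t0

flops = 2 * n ** 3 * reps          # multiply-adds across all repetitions
gflops = flops / elapsed / 1e9
print(f"{gflops:.1f} GFLOP/s")
```

Repeating this per operation, per framework, and per device, while also sampling power draw, is essentially what separates an end-to-end timing from the per-factor analysis the abstract describes.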
Type: English article

8.
Solder joint reliability risk estimation by AI modeling
(2020)
This paper studies AI modeling for the solder joint
fatigue risk estimation under the thermal cycle loading of
redistributed wafer level packaging. The artificial neural
network (ANN), recurrent neural network (RNN) and
vectorized-gate network long short-term memory (VNLSTM)
architectures have been trained on the same dataset
to investigate their performance for this task. The learning
accuracy criterion, the implementation of each neural
network architecture, the learning results, and the result
analysis are covered.
Because of the time/temperature-dependent
nonlinear material characteristics involved, it is
recommended that more than three hidden layers and a
proper neural network architecture capable of
sequential data processing be considered in order
to guarantee the required accuracy and a satisfactory
convergence speed. Keywords: Solder joint fatigue risk estimation | Time/temperature-dependent nonlinearity | ANN | RNN | LSTM | machine learning |
Type: English article

9.
Structured pruning of recurrent neural networks through neuron selection
(2020)
Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of
applications. However, the huge sizes and computational burden of these models make it difficult for
their deployment on edge devices. A practically effective approach is to reduce the overall storage and
computation costs of RNNs by network pruning techniques. Despite their successful applications,
pruning methods based on Lasso produce irregular sparse patterns in weight matrices, which
are not helpful for practical speedup. To address these issues, we propose a structured pruning method
through neuron selection, which can remove independent neurons of RNNs. More specifically, we
introduce two sets of binary random variables, which can be interpreted as gates or switches to
the input neurons and the hidden neurons, respectively. We demonstrate that the corresponding
optimization problem can be addressed by minimizing the L0 norm of the weight matrix. Finally,
experimental results on language modeling and machine reading comprehension tasks have indicated
the advantages of the proposed method in comparison with state-of-the-art pruning competitors. In
particular, nearly 20× practical speedup during inference was achieved without losing performance
for the language model on the Penn TreeBank dataset, indicating the promising performance of the
proposed method. Keywords: Feature selection | Recurrent neural networks | Learning sparse models | Model compression |
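The gating idea can be shown in a few lines: a binary gate per hidden neuron, whose sum is the L0 penalty being minimised, and whose zeros delete whole rows and columns at once, yielding a genuinely smaller dense matrix. The gate values and sizes below are invented; the paper learns the gates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary gates z select which hidden neurons survive. Gating a neuron
# zeroes an entire row AND column of the recurrent matrix, so pruned
# weights form whole blocks (unlike element-wise Lasso sparsity).
n_hidden = 8
W_hh = rng.normal(size=(n_hidden, n_hidden))
z = np.array([1, 1, 0, 1, 0, 1, 1, 0])  # hypothetical learned keep/drop gates

# The L0 "norm" being minimised is simply the number of kept neurons.
l0 = int(z.sum())

# Structured pruning: drop gated neurons entirely -> smaller dense matrix,
# which is what enables real inference speedup rather than masked compute.
keep = np.flatnonzero(z)
W_pruned = W_hh[np.ix_(keep, keep)]
```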
Type: English article

10.
Medi-Care AI: Predicting medications from billing codes via robust recurrent neural networks
(2020)
In this paper, we present an effective deep prediction framework based on robust recurrent neural
networks (RNNs) to predict the likely therapeutic classes of medications a patient is taking, given
a sequence of diagnostic billing codes in their record. Accurately capturing the list of medications
currently taken by a given patient is extremely challenging due to undefined errors and omissions.
We present a general robust framework that explicitly models the possible contamination through
a decay-over-time mechanism on the input billing codes and noise injection into the recurrent hidden
states, respectively. By doing this, billing codes are reformulated into their temporal patterns with decay
rates on each medical variable, and the hidden states of RNNs are regularized by random noise which
serves as dropout to improve RNN robustness towards data variability in terms of missing values and
multiple errors. The proposed method is extensively evaluated on real health care data to demonstrate
its effectiveness in suggesting medication orders from contaminated values. Keywords: Billing codes | Robust recurrent neural networks | Health care data | Medication prediction |
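The two robustness mechanisms named above can be sketched as follows: a minimal stand-in with invented sizes and rates, applying exponential decay to the last observed billing-code features and injecting Gaussian noise into the hidden state during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) exponential time decay applied to each billing-code feature, and
# (2) Gaussian noise injected into the recurrent hidden state, a dropout-
#     like regulariser against missing values and coding errors.
n_codes, n_hidden = 6, 16
gamma = rng.uniform(0.1, 1.0, n_codes)   # hypothetical per-code decay rates
W_h = rng.normal(0, 0.3, (n_hidden, n_hidden))
W_x = rng.normal(0, 0.3, (n_hidden, n_codes))

def step(h, x_last, dt, sigma=0.05, train=True):
    x_decayed = np.exp(-gamma * dt) * x_last     # older codes count less
    h = np.tanh(W_h @ h + W_x @ x_decayed)
    if train:                                    # noise only during training
        h = h + sigma * rng.normal(size=n_hidden)
    return h

h = np.zeros(n_hidden)
for dt in (0.0, 3.0, 30.0):                      # days since each code event
    h = step(h, rng.integers(0, 2, n_codes).astype(float), dt)
```

A readout layer on `h` (not shown) would then score the candidate therapeutic classes.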
Type: English article