Energy-aware resource management for uplink non-orthogonal multiple access: Multi-agent deep reinforcement learning (2020)
Non-orthogonal multiple access (NOMA) is one of the promising technologies to meet the huge access demand and the high data rate requirements of next-generation networks. In this paper, we investigate the joint subchannel assignment and power allocation problem in an uplink multi-user NOMA system to maximize the energy efficiency (EE) while ensuring the quality-of-service (QoS) of all users. Different from conventional model-based resource allocation methods, we propose two deep reinforcement learning (DRL) based frameworks to solve this non-convex and dynamic optimization problem, referred to as the discrete DRL-based resource allocation (DDRA) framework and the continuous DRL-based resource allocation (CDRA) framework. Specifically, for the DDRA framework, we use a deep Q network (DQN) to output the optimum subchannel assignment policy, and design a distributed and discretized multi-DQN based network to allocate the corresponding transmit power of all users. For the CDRA framework, we design a joint DQN and deep deterministic policy gradient (DDPG) based network to generate the optimal subchannel assignment and power allocation policy. The entire resource allocation policies of these two frameworks are adjusted by updating the weights of their neural networks according to feedback from the system. Numerical results show that the proposed DRL-based resource allocation frameworks can significantly improve the EE of the whole NOMA system compared with other approaches, and can maintain good performance in scenarios with various moving speeds by adjusting the learning parameters.
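The feedback loop described above, where discretized power and subchannel decisions are reinforced by an EE reward, can be illustrated with a minimal tabular sketch (assumption: two subchannels, three power levels, and toy channel gains; the paper uses DQNs rather than a lookup table):

```python
import math
import random

# Toy analogue of the DDRA idea: epsilon-greedy selection over discrete
# (subchannel, power) actions, reinforced by an energy-efficiency reward.
SUBCHANNELS = [0, 1]
POWER_LEVELS = [0.1, 0.5, 1.0]           # watts (illustrative)
CIRCUIT_POWER = 0.2                      # fixed circuit power (illustrative)
GAIN = {0: 2.0, 1: 0.8}                  # per-subchannel channel gain

def energy_efficiency(sc, p):
    """Achievable rate divided by total consumed power (toy units)."""
    rate = math.log2(1.0 + GAIN[sc] * p)
    return rate / (p + CIRCUIT_POWER)

actions = [(sc, p) for sc in SUBCHANNELS for p in POWER_LEVELS]
q = {a: 0.0 for a in actions}
random.seed(0)
for _ in range(2000):
    # explore occasionally, otherwise exploit the current best estimate
    a = random.choice(actions) if random.random() < 0.1 else max(q, key=q.get)
    r = energy_efficiency(*a)
    q[a] += 0.1 * (r - q[a])             # running estimate of the EE reward

best = max(q, key=q.get)
```

The same trade-off the paper optimizes is visible even at this scale: the highest power level does not win, because the extra rate no longer pays for the extra transmit power.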
Keywords: Non-orthogonal multiple access | Resource allocation | Energy efficiency | Deep reinforcement learning | Deep deterministic policy gradient
Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle (2020)
The optimization and training processes of a deep reinforcement learning (DRL) based energy management strategy (EMS) can be very slow and resource-intensive. In this paper, an improved energy management framework that embeds expert knowledge into the deep deterministic policy gradient (DDPG) is proposed. Incorporating the battery characteristics and the optimal brake specific fuel consumption (BSFC) curve of hybrid electric vehicles (HEVs), we address the multi-objective energy management optimization problem with a large space of control variables. By incorporating this prior knowledge, the proposed framework not only accelerates the learning process but also achieves better fuel economy, making the energy management system relatively stable. The experimental results show that the proposed EMS outperforms the one without prior knowledge as well as other state-of-the-art deep reinforcement learning approaches. In addition, the proposed approach can easily be generalized to other types of HEV EMSs.
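The "rule interposing" idea above can be sketched as a filter between the agent and the plant: expert knowledge clamps the raw DDPG action before it reaches the powertrain (assumption: the function name, SOC thresholds, and power band below are illustrative, not the paper's actual rules):

```python
# Expert-knowledge filter applied to the agent's continuous action
# (assumed thresholds; the paper derives its band from the BSFC curve).
def interpose(engine_power_cmd, soc, p_min=5.0, p_max=60.0):
    """Clamp the agent's engine-power command (kW) to an efficient BSFC
    band, and override it at battery state-of-charge (SOC) extremes."""
    if soc < 0.3:                  # battery low: keep the engine loaded
        return max(engine_power_cmd, p_min + 10.0)
    if soc > 0.8:                  # battery high: cap engine output
        return min(engine_power_cmd, p_min)
    return min(max(engine_power_cmd, p_min), p_max)
```

Because the override happens outside the learner, the DDPG update rule itself is unchanged; the agent simply experiences a safer, narrower action space, which is what accelerates training.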
Keywords: Energy management strategy | Hybrid electric vehicle | Expert knowledge | Deep deterministic policy gradient | Continuous action space
Deep reinforcement learning based energy management for a hybrid electric vehicle (2020)
This research proposes a reinforcement learning-based algorithm and a deep reinforcement learning-based algorithm for energy management of a series hybrid electric tracked vehicle. Firstly, the powertrain model of the series hybrid electric tracked vehicle (SHETV) is constructed, and the corresponding energy management formulation is established. Subsequently, a new variant of the reinforcement learning (RL) method Dyna, namely Dyna-H, is developed by combining a heuristic planning step with the Dyna agent, and is applied to energy management control for the SHETV. Its rapidity and optimality are validated by comparison with dynamic programming (DP) and the conventional Dyna method. Facing the "curse of dimensionality" in reinforcement learning, a novel deep reinforcement learning algorithm, deep Q-learning (DQL), is designed for energy management control, which uses a new optimization method (AMSGrad) to update the weights of the neural network. The proposed deep reinforcement learning control system is then trained and verified on a realistic, high-precision driving condition, and is compared with the benchmark DP method and the traditional DQL method. Results show that the proposed deep reinforcement learning method achieves faster training and lower fuel consumption than the traditional DQL policy does, and its fuel economy closely approaches the global optimum. Furthermore, the adaptability of the proposed method is confirmed on another driving schedule.
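The core of Dyna-style methods, which the abstract builds on, is that each real transition is stored in a learned model and replayed as extra planning updates. A tabular sketch on a toy 1-D chain MDP (assumption: the paper's Dyna-H and SHETV model are far richer; here the "heuristic" simply replays remembered transitions closest to the goal first):

```python
import random

# Dyna-with-heuristic sketch: Q-learning plus prioritized model replay.
N, GOAL = 6, 5
ACTIONS = (1, -1)                        # move right / left along the chain

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(1)
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
model = {}                               # learned deterministic model
for _ in range(30):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda x: q[(s, x)])
        s2, r = step(s, a)
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        model[(s, a)] = (s2, r)
        s = s2
        # heuristic planning step: replay transitions nearest the goal first
        for ps, pa in sorted(model, key=lambda k: GOAL - k[0])[:5]:
            ms, mr = model[(ps, pa)]
            q[(ps, pa)] += 0.5 * (mr + 0.9 * max(q[(ms, b)] for b in ACTIONS)
                                  - q[(ps, pa)])
```

The heuristic ordering is what buys the "rapidity" claimed above: value information propagates back from the goal without waiting for fresh real experience.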
Keywords: Hybrid electric tracked vehicle | Energy management | Dyna-H | Deep reinforcement learning | AMSGrad optimizer
Optimal policy for structure maintenance: A deep reinforcement learning framework (2020)
The cost-effective management of aged infrastructure is an issue of worldwide concern. Markov decision process (MDP) models have been used in developing structural maintenance policies. Recent advances in the artificial intelligence (AI) community have shown that deep reinforcement learning (DRL) has the potential to solve large MDP optimization tasks. This paper proposes a novel automated DRL framework to obtain an optimized structural maintenance policy. The DRL framework contains a decision maker (AI agent) and the structure that needs to be maintained (AI task environment). The agent outputs maintenance policies and chooses maintenance actions, and the task environment determines the state transition of the structure and returns rewards to the agent under given maintenance actions. The advantages of the DRL framework include: (1) a deep neural network (DNN) is employed to learn the state-action Q value (defined as the predicted discounted expectation of the return for consequences under a given state-action pair), either based on simulations or historical data, and the policy is then obtained from the Q value; (2) optimization of the learning process is sample-based so that it can learn directly from real historical data collected from multiple bridges (i.e., big data from a large number of bridges); and (3) a general framework is used for different structure maintenance tasks with minimal changes to the neural network architecture. Case studies for a simple bridge deck with seven components and a long-span cable-stayed bridge with 263 components are performed to demonstrate the proposed procedure. The results show that the DRL is efficient at finding the optimal policy for maintenance tasks for both simple and complex structures.
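At toy scale, the maintenance MDP described above can be solved exactly, which clarifies what the DNN in the framework is approximating. A sketch with an assumed four-state deterioration model (states, costs, and probabilities are illustrative, not from the paper's case studies):

```python
# Toy bridge-component MDP solved by value iteration; the paper's DQN
# approximates these Q values when the state space is too large for this.
GAMMA = 0.9
P_WORSEN = 0.3                     # chance the condition degrades this step

def q_value(v, s, a):
    """Q(s, a) for condition states 0 (intact) .. 3 (failed)."""
    if a == "repair":
        return -5.0 + GAMMA * v[0]              # pay repair cost, reset to intact
    cost = -20.0 if s == 3 else -1.0 * s        # failure is expensive
    s2 = min(s + 1, 3)
    return cost + GAMMA * ((1 - P_WORSEN) * v[s] + P_WORSEN * v[s2])

v = [0.0] * 4
for _ in range(200):               # value iteration to near-convergence
    v = [max(q_value(v, s, a) for a in ("wait", "repair")) for s in range(4)]

policy = [max(("wait", "repair"), key=lambda a, s=s: q_value(v, s, a))
          for s in range(4)]
```

The resulting policy repairs only once deterioration makes waiting more expensive than the repair cost, which is exactly the trade-off the DRL agent must learn from simulations or historical bridge data.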
Keywords: Bridge maintenance policy | Deep reinforcement learning (DRL) | Markov decision process (MDP) | Deep Q-network (DQN) | Convolutional neural network (CNN)
Automated vehicle’s behavior decision making using deep reinforcement learning and high-fidelity simulation environment (2019)
Automated vehicles (AVs) are deemed to be the key element of the intelligent transportation system of the future. Many studies have been conducted to improve AVs’ ability of environment recognition and vehicle control, while the attention paid to decision making is not enough and the existing decision algorithms are very preliminary. Therefore, a framework for decision-making training and learning is put forward in this paper. It consists of two parts: the deep reinforcement learning (DRL) training program and the high-fidelity virtual simulation environment. The basic microscopic behavior, car-following (CF), is then trained within this framework. In addition, theoretical analysis and experiments were conducted to evaluate the proposed reward functions for accelerating training with DRL. The results show that, on the premise of driving comfort, the efficiency of the trained AV increases by 7.9% and 3.8%, respectively, compared with two classical adaptive cruise control models, the intelligent driver model and the constant-time-headway policy. Moreover, on a more complex three-lane section, we trained an integrated model combining both CF and lane-changing behavior, with the average speed growing by a further 2.4%. This indicates that our framework is effective for an AV’s decision-making learning.
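A reward function balancing efficiency and comfort, of the kind evaluated above, might look as follows (assumption: the target headway, weights, and penalty terms are illustrative, not the paper's actual reward design):

```python
import math

# Sketch of a car-following reward: efficiency rewards tracking a target
# time headway, comfort penalizes jerk, safety penalizes near-collisions.
def cf_reward(speed, gap, accel, prev_accel, target_headway=1.5):
    headway = gap / max(speed, 0.1)                # time headway, seconds
    efficiency = math.exp(-abs(headway - target_headway))
    comfort = -0.1 * (accel - prev_accel) ** 2     # squared-jerk penalty
    safety = -10.0 if headway < 0.5 else 0.0
    return efficiency + comfort + safety
```

Shaping of this kind matters for training speed: a smooth efficiency term gives the agent a gradient toward the target headway long before it ever experiences a near-collision.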
Keywords: Automated vehicle | Decision making | Deep reinforcement learning | Reward function
(Deep) Reinforcement learning for electric power system control and related problems: A short review and perspectives (2019)
This paper reviews existing works on (deep) reinforcement learning considerations in electric power system control. The works are reviewed as they relate to electric power system operating states (normal, preventive, emergency, restorative) and control levels (local, household, microgrid, subsystem, wide-area). Due attention is paid to control-related problem considerations (cyber-security, big data analysis, short-term load forecasting, and composite load modelling). Observations from the reviewed literature are drawn and perspectives discussed. In order to keep the text compact and as easy as possible to read, the focus is only on works published (or “in press”) in journals and books, while conference publications are not included. Exceptions are several works available in open repositories that are likely to become journal publications in the near future. Hopefully this paper can serve as a good source of information for all those interested in solving similar problems.
Keywords: Electric power system | Reinforcement learning | Deep reinforcement learning | Control | Control-related problems
Intelligent fault diagnosis for rotating machinery using deep Q-network based health state classification: A deep reinforcement learning approach (2019)
Fault diagnosis methods for rotating machinery have always been a hot research topic, and artificial intelligence-based approaches have attracted increasing attention from both researchers and engineers. Among those related studies and methods, artificial neural networks, especially deep learning-based methods, are widely used to extract fault features or classify fault features obtained by other signal processing techniques. Although such methods could solve the fault diagnosis problems of rotating machinery, there are still two deficiencies. (1) Unable to establish a direct linear or non-linear mapping between raw data and the corresponding fault modes, the performance of such fault diagnosis methods highly depends on the quality of the extracted features. (2) The optimization of neural network architecture and parameters, especially for deep neural networks, requires considerable manual modification and expert experience, which limits the applicability and generalization of such methods. As a remarkable breakthrough in artificial intelligence, AlphaGo, a representative achievement of deep reinforcement learning, provides inspiration and direction for addressing the aforementioned shortcomings. Combining the advantages of deep learning and reinforcement learning, deep reinforcement learning is able to build an end-to-end fault diagnosis architecture that directly maps raw fault data to the corresponding fault modes. Thus, based on deep reinforcement learning, a novel intelligent diagnosis method is proposed that is able to overcome the shortcomings of the aforementioned diagnosis methods. Validation tests of the proposed method are carried out using datasets of two types of rotating machinery, rolling bearings and hydraulic pumps, which contain a large number of measured raw vibration signals under different health states and working conditions.
The diagnosis results show that the proposed method is able to obtain intelligent fault diagnosis agents that can mine the relationships between the raw vibration signals and fault modes autonomously and effectively. Considering that the learning process of the proposed method depends only on the replayed memories of the agent and the overall rewards, which represent much weaker feedback than that obtained by the supervised learning-based method, the proposed method is promising in establishing a general fault diagnosis architecture for rotating machinery.
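The "weaker feedback" point above, learning labels from rewards alone rather than from supervised targets, can be shown with a tiny tabular analogue (assumption: the paper's agent is a DQN over raw vibration windows; here a one-step agent sees a crude scalar feature and gets +1 for a correct label, -1 otherwise):

```python
import random

# Diagnosis-as-reinforcement-learning: reward-only supervision of labels.
random.seed(0)
FAULTS = ["normal", "inner_race", "outer_race"]

def observe(fault):
    """Crude scalar feature standing in for a raw vibration signal."""
    base = {"normal": 1.0, "inner_race": 2.0, "outer_race": 3.0}[fault]
    return round(base + random.uniform(-0.3, 0.3))

q = {}                                     # Q[(feature, label)]
for _ in range(3000):
    true_fault = random.choice(FAULTS)
    s = observe(true_fault)
    if random.random() < 0.1:              # epsilon-greedy exploration
        a = random.choice(FAULTS)
    else:
        a = max(FAULTS, key=lambda f: q.get((s, f), 0.0))
    r = 1.0 if a == true_fault else -1.0   # reward-only supervision
    q[(s, a)] = q.get((s, a), 0.0) + 0.1 * (r - q.get((s, a), 0.0))

def predict(feature):
    return max(FAULTS, key=lambda f: q.get((feature, f), 0.0))
```

No (input, label) pairs are ever handed to the learner; the correct mapping emerges purely from the reward signal, which is the property the abstract argues makes the architecture general.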
Keywords: Fault diagnosis | Rotating machinery | Deep reinforcement learning | Deep Q-network
Decentralized network level adaptive signal control by multi-agent deep reinforcement learning (2019)
Adaptive traffic signal control systems are deployed to accommodate real-time traffic conditions. Yet travel demand and the behavior of individual vehicles might be overlooked by their model-based control algorithms and aggregated input data. Recent developments in artificial intelligence, especially the success of deep learning, make it possible to utilize information from individual vehicles to control traffic signals. Several pioneering studies developed model-free control algorithms using deep reinforcement learning. However, those studies are limited to isolated intersections, and their effectiveness was only evaluated in ideal simulated traffic conditions against hypothetical benchmarks. To fill the gap, this study proposes a network-level decentralized adaptive signal control algorithm using one of the well-known deep reinforcement learning methods, the double dueling deep Q network, within the multi-agent reinforcement learning framework. The proposed algorithm was evaluated against real-world coordinated actuated signals in a simulated suburban traffic corridor which emulates real-field traffic conditions. The evaluation results showed that the proposed deep-reinforcement-learning-based algorithm outperforms the benchmark, reducing travel time by 10.27% and total delay by 46.46%.
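The two ingredients of a "double dueling deep Q network" can be shown numerically (assumption: toy numbers; in the paper V, A, and both Q estimates come from neural networks, one per signal agent):

```python
# Dueling head: split the Q estimate into a state value and advantages.
def dueling_q(value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), the standard
    identifiability fix for the dueling architecture."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

# Double-DQN target: the online net picks the action, the target net rates it.
def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    a_star = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[a_star]
```

Subtracting the mean advantage pins down the otherwise underdetermined V/A split, and decoupling action selection from action evaluation is what curbs the overestimation bias of plain DQN.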
Keywords: Deep reinforcement learning | Multi-agent reinforcement learning | Adaptive signal control
Deep reinforcement learning-based controller for path following of an unmanned surface vehicle (2019)
In this paper, a deep reinforcement learning (DRL)-based controller for path following of an unmanned surface vehicle (USV) is proposed. The proposed controller can self-develop a vehicle’s path following capability by interacting with the nearby environment. A deep deterministic policy gradient (DDPG) algorithm, which is an actor-critic-based reinforcement learning algorithm, was adapted to capture the USV’s experience during the path-following trials. A Markov decision process model, which includes the state, action, and reward formulation, specially designed for the USV path-following problem is suggested. The control policy was trained with repeated trials of path-following simulation. The proposed method’s path-following and self-learning capabilities were validated through USV simulation and a free-running test of the full-scale USV.
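The reward formulation in the MDP design mentioned above might be shaped along these lines (assumption: the gains and terms are illustrative, not the authors' exact formulation):

```python
import math

# Path-following reward sketch: highest when the USV sits on the reference
# path heading along it; decays with cross-track error, drops with heading
# error, and goes negative when the vehicle points away from the path.
def path_reward(cross_track_err, heading_err, k_e=0.5):
    return math.exp(-k_e * abs(cross_track_err)) * math.cos(heading_err)
```

A smooth, dense reward of this shape suits DDPG's continuous-action setting, since the critic can propagate a usable gradient from every trial step of the path-following simulation.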
Keywords: Deep reinforcement learning | Path following | Unmanned surface vehicle | Learning-based control | Artificial intelligence
Deep reinforcement learning with its application for lung cancer detection in medical Internet of Things (2019)
Recently, deep reinforcement learning has achieved great success by integrating deep learning models into reinforcement learning algorithms in various applications such as computer games and robotics. In particular, combining deep reinforcement learning with the medical big data generated and collected from the medical Internet of Things is promising for computer-aided diagnosis and treatment. In this paper, we focus on the potential of deep reinforcement learning for lung cancer detection, as many people suffer from lung tumors and about 1.8 million patients died from lung cancer in 2018. Early detection and diagnosis of lung tumors can significantly improve the treatment effect and prolong survival. In this work, we present several representative deep reinforcement learning models that could potentially be used for lung cancer detection. Furthermore, we summarize the common types of lung cancer and the main characteristics of each type. Finally, we point out the open challenges and possible future research directions of applying deep reinforcement learning to lung cancer detection, which is expected to promote the evolution of smart medicine with the medical Internet of Things.
Keywords: Smart medicine | Medical Internet of Things | Deep reinforcement learning | Lung cancer