Category:
Reinforcement Learning
Year of publication:
2020
English title of the article:
Study on deep reinforcement learning techniques for building energy consumption forecasting
Persian translation of the article title (rendered in English):
A study of deep reinforcement learning techniques for forecasting building energy consumption
Source:
ScienceDirect - Elsevier - Energy & Buildings, 208 (2020) 109675. doi:10.1016/j.enbuild.2019.109675
Authors:
Tao Liu, Zehan Tan, Chengliang Xu, Huanxin Chen*, Zhengfei Li
English abstract:
Reliable and accurate building energy consumption prediction is becoming increasingly pivotal in building energy management. Currently, data-driven approaches have shown promising performance and gained much research attention due to their efficiency and flexibility. As a combination of reinforcement learning and deep learning, deep reinforcement learning (DRL) techniques are expected to solve nonlinear and complex problems. However, very little is known about DRL techniques in forecasting building energy consumption. Therefore, this paper presents a case study of an office building using three commonly used DRL techniques to forecast building energy consumption, namely Asynchronous Advantage Actor-Critic (A3C), Deep Deterministic Policy Gradient (DDPG), and Recurrent Deterministic Policy Gradient (RDPG). The objective is to investigate the potential of DRL techniques in the field of building energy consumption prediction. A comprehensive comparison between DRL models and common supervised models is also provided. The results demonstrate that the proposed DDPG and RDPG models have clear advantages in forecasting building energy consumption compared to common supervised models, while requiring more computation time for model training. Their prediction performance, measured by mean absolute error (MAE), can be improved by 16%-24% for single-step-ahead prediction and 19%-32% for multi-step-ahead prediction. The results also indicate that A3C delivers poor prediction accuracy and converges much more slowly than DDPG and RDPG. However, A3C is still the most computationally efficient of the three DRL methods. The findings are enlightening, and the proposed DRL methodologies can readily be extended to other prediction problems, e.g., wind speed prediction and electricity load prediction.
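The abstract reports prediction quality as a relative improvement in mean absolute error (MAE) over supervised baselines. As a minimal sketch of how such a comparison is computed (the numbers below are illustrative placeholders, not data from the paper):

```python
def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical hourly energy readings (kWh) and two single-step forecasts;
# these values are made up for illustration only.
actual    = [120.0, 131.0, 128.0, 140.0, 135.0]
baseline  = [115.0, 138.0, 122.0, 148.0, 129.0]  # e.g. a supervised model
drl_model = [118.0, 134.0, 126.0, 143.0, 133.0]  # e.g. a DDPG/RDPG-style model

mae_base = mae(actual, baseline)          # 6.4
mae_drl  = mae(actual, drl_model)         # 2.4
improvement = (mae_base - mae_drl) / mae_base * 100  # relative MAE reduction, %
```

A paper-style statement like "MAE improved by 16%-24%" corresponds to this relative-reduction formula evaluated on the test set.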
Keywords: Energy consumption prediction | Ground source heat pump | Deep reinforcement learning | Asynchronous Advantage Actor-Critic | Deep Deterministic Policy Gradient | Recurrent Deterministic Policy Gradient
Price: Free
Additional notes:
Number of comments: 0