Free English-article download: Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning - 2020
  • Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning

    Year of publication:

    2020


    English title:

    Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning


    Persian translation of the title:

    کنترل سوپرهیت چرخه آلی رانکین تحت منبع حرارتی گذرا بر اساس یادگیری تقویتی عمیق


    Source:

    ScienceDirect - Elsevier - Applied Energy 278 (2020) 115637. doi:10.1016/j.apenergy.2020.115637


    Authors:

    Xuan Wang, Rui Wang, Ming Jin, Gequn Shu, Hua Tian, Jiaying Pan


    English abstract:

    The organic Rankine cycle (ORC) is a promising technology for engine waste heat recovery. During real-world operation, the engine working condition varies frequently to satisfy the power demand; thus, the transient nature of engine waste heat presents significant control challenges for the ORC. To control the superheat of the ORC precisely under a transient heat source, several optimal control methods have been used, such as model predictive control and dynamic programming. However, most of them depend strongly on the accurate prediction of future disturbances. Deep reinforcement learning (DRL) is an artificial-intelligence algorithm that can overcome this disadvantage, but the potential of DRL for the control of thermodynamic systems has not yet been investigated. Thus, this paper proposes two DRL-based methods for controlling the superheat of the ORC under a transient heat source. One directly uses the DRL agent to learn the control strategy (DRL control), and the other uses the DRL agent to optimize the parameters of the proportional–integral–derivative (PID) controller (DRL-based PID control). Additionally, a switching mechanism between different DRL controllers is proposed to improve training efficiency and enlarge the operation range of the controller. The results of this study indicate that the DRL agent can satisfactorily perform the control task and optimize the traditional controller under both trained and untrained transient heat sources. Specifically, the DRL control can track the reference superheat with an average error of only 0.19 K, whereas that of the traditional PID control is 2.16 K. Furthermore, the proposed switching DRL control exhibits excellent tracking performance, with an average error of only 0.21 K, and robustness over a wide range of operating conditions. The successful application of DRL demonstrates its considerable potential for the control of thermodynamic systems, providing a useful reference and motivation for its application to other thermodynamic systems.
    Keywords: Organic Rankine cycle | Deep reinforcement learning | Superheat control | Artificial intelligence | Internal combustion engine
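The abstract's second approach ("DRL-based PID control") has the agent tune the PID gains rather than act on the plant directly. The division of labor can be sketched as follows; this is a minimal illustration, not code from the paper: the first-order plant standing in for the evaporator superheat dynamics, the `simulate` helper, and all gain values are assumptions made for the example.

```python
class PID:
    """Textbook discrete PID controller; an outside agent supplies the gains."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(gains, steps=500, dt=0.1, setpoint=10.0):
    """Run the loop on a toy first-order plant; return mean |tracking error| in K.

    In the paper's setting, a DRL agent would output (Kp, Ki, Kd) as its
    action and receive something like the negative of this error as reward.
    """
    pid = PID(*gains, dt=dt)
    superheat = 0.0  # K, toy initial condition
    errors = []
    for _ in range(steps):
        u = pid.step(setpoint, superheat)          # e.g. a pump-speed command
        superheat += dt * (-0.5 * superheat + u)   # assumed first-order lag
        errors.append(abs(setpoint - superheat))
    return sum(errors) / len(errors)


# Comparing two candidate gain sets, as an agent implicitly does during training:
weak = simulate((0.2, 0.05, 0.0))
strong = simulate((2.0, 0.5, 0.1))
print(f"mean error, weak gains:   {weak:.3f} K")
print(f"mean error, strong gains: {strong:.3f} K")
```

In the full DRL-based PID scheme described in the abstract, the gain selection would be made by a trained agent conditioned on the (transient) heat-source state rather than fixed in advance.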


    Level: intermediate
    Number of pages (English PDF): 12
    File size: 3454 KB

    Price: free


    Additional notes:



