Free English article download: Petri-net-based dynamic scheduling of flexible manufacturing system via deep reinforcement learning with graph convolutional network - 2020
  • Petri-net-based dynamic scheduling of flexible manufacturing system via deep reinforcement learning with graph convolutional network

    Publication year:

    2020


    English article title:

    Petri-net-based dynamic scheduling of flexible manufacturing system via deep reinforcement learning with graph convolutional network


    Persian translation of the article title:

    زمانبندی پویا مبتنی بر Petri-Net سیستم تولید انعطاف پذیر از طریق یادگیری تقویتی عمیق با شبکه کانولوشن نمودار


    Source:

    ScienceDirect - Elsevier - Journal of Manufacturing Systems, 55 (2020) 1-14. doi:10.1016/j.jmsy.2020.02.004


    Authors:

    Liang Hu, Zhenyu Liu*, Weifei Hu, Yueyang Wang, Jianrong Tan, Fei Wu


    English abstract:

    To benefit from the accurate simulation and high-throughput data contributed by advanced digital twin technologies in modern smart plants, the deep reinforcement learning (DRL) method is an appropriate choice to generate a self-optimizing scheduling policy. This study employs the deep Q-network (DQN), a successful DRL method, to solve the dynamic scheduling problem of flexible manufacturing systems (FMSs) involving shared resources, route flexibility, and stochastic arrivals of raw products. To model the system in consideration of both manufacturing efficiency and deadlock avoidance, we use a class of Petri nets combining timed-place Petri nets and a system of simple sequential processes with resources (S3PR), termed the timed S3PR. The dynamic scheduling problem of the timed S3PR is defined as a Markov decision process (MDP) that can be solved by the DQN. To construct deep neural networks that approximate the DQN action-value function mapping timed S3PR states to scheduling rewards, we innovatively employ a graph convolutional network (GCN) as the timed S3PR state approximator by proposing a novel graph convolution layer called a Petri-net convolution (PNC) layer. The PNC layer uses the input and output matrices of the timed S3PR to compute the propagation of features from places to transitions and from transitions to places, thereby reducing the number of parameters to be trained and ensuring robust convergence of the learning process. Experimental results verify that the proposed DQN with a PNC network provides better solutions for dynamic scheduling problems in terms of manufacturing performance, computational efficiency, and adaptability compared with heuristic methods and a DQN with basic multilayer perceptrons.
    Keywords: Dynamic scheduling | Petri nets | Deep reinforcement learning | Graph convolutional networks | Digital twin
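    As the abstract describes, the PNC layer propagates features from places to transitions via the net's input (pre-incidence) matrix and back to places via its output (post-incidence) matrix. A minimal sketch of that two-hop propagation follows; the toy net, feature dimensions, weight sharing, and ReLU choice are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    np.random.seed(0)

    # Hypothetical toy net: 4 places, 3 transitions.
    # W_in[p, t] = 1 if an arc runs from place p to transition t.
    W_in = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [1, 0, 1],
                     [0, 0, 1]], dtype=float)
    # W_out[p, t] = 1 if an arc runs from transition t to place p.
    W_out = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [0, 1, 0],
                      [1, 0, 0]], dtype=float)

    def pnc_layer(place_feats, w_in, w_out, weight):
        """One Petri-net-convolution step (sketch): place features flow
        to transitions along input arcs, then back to places along
        output arcs, followed by a shared linear map and ReLU."""
        trans_feats = w_in.T @ place_feats      # (|T| x d): aggregate input places
        new_place_feats = w_out @ trans_feats   # (|P| x d): redistribute to output places
        return np.maximum(new_place_feats @ weight, 0.0)

    # Example: 2-dim place features (e.g., marking and token age),
    # mapped to 5 output channels by one trainable weight matrix
    # shared across all places (this is what keeps parameters few).
    X = np.random.rand(4, 2)
    W = np.random.rand(2, 5)
    out = pnc_layer(X, W_in, W_out, W)
    print(out.shape)  # (4, 5)
    ```

    Because the only trainable tensor is the shared weight matrix, the parameter count depends on feature dimensions rather than net size, which is consistent with the parameter reduction the abstract claims.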


    Level: Intermediate
    English PDF page count: 14
    File size: 7066 KB

    Price: Free


    Additional notes:



