Category:
Internet of Things
Year of publication:
2022
English title:
Attention-based model and deep reinforcement learning for distribution of event processing tasks
Persian translation of the title:
مدل مبتنی بر توجه و یادگیری تقویتی عمیق برای توزیع وظایف پردازش رویداد
Source:
ScienceDirect, Elsevier, Internet of Things, 19 (2022) 100563. doi: 10.1016/j.iot.2022.100563
Author:
Andriy Mazayev
English abstract:
Event processing is the cornerstone of the dynamic and responsive Internet of Things (IoT).
Recent approaches in this area are based on representational state transfer (REST) principles,
which allow event processing tasks to be placed at any device that follows the same principles.
However, the tasks should be properly distributed among edge devices to ensure fair resource
utilization and guarantee seamless execution. This article investigates the use of deep learning
to fairly distribute the tasks. An attention-based neural network model is proposed to generate
efficient load balancing solutions under different scenarios. The proposed model is based on
the Transformer and Pointer Network architectures, and is trained by an advantage actor-critic
reinforcement learning algorithm. The model is designed to scale to the number of
event processing tasks and the number of edge devices, with no need for hyperparameter
re-tuning or even retraining. Extensive experimental results show that the proposed model
outperforms conventional heuristics in many key performance indicators. The generic design
and the obtained results show that the proposed model can potentially be applied to several
other load balancing problem variations, which makes the proposal an attractive option to be
used in real-world scenarios due to its scalability and efficiency.
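To make the pointer-network idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's actual model) of how scaled dot-product attention can score task-to-device placements; because the score matrix is built from embeddings alone, the same parameters work for any number of tasks and devices, which is the scalability property the abstract describes. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative pointer-style attention for task-to-device assignment.
# Embeddings would come from a Transformer encoder in the real model;
# here they are random placeholders.

rng = np.random.default_rng(0)
d = 8                        # embedding dimension (assumed)
n_tasks, n_devices = 5, 3    # arbitrary counts: no shape-specific parameters

tasks = rng.normal(size=(n_tasks, d))      # one embedding per event processing task
devices = rng.normal(size=(n_devices, d))  # one embedding per edge device

# Scaled dot-product attention: each task gets a score over all devices.
scores = tasks @ devices.T / np.sqrt(d)

# Softmax over devices yields a placement distribution per task (the "pointer").
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Greedy decoding: pick the highest-probability device for each task.
assignment = probs.argmax(axis=1)
print(assignment.shape)  # one device index per task
```

In training, the per-task distribution `probs` would be sampled from rather than argmax-decoded, and the advantage actor-critic update would reinforce assignments that improve the load-balancing reward.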
keywords: Web of Things (WoT) | Representational state transfer (REST) | Application programming interfaces (APIs) | Edge computing | Load balancing | Resource placement | Deep reinforcement learning | Transformer model | Pointer networks | Actor-critic
Price: Free
Additional notes: