No. | Title | Type |
---|---|---|
1 |
Attention-based model and deep reinforcement learning for distribution of event processing tasks
Publication year: 2022. Event processing is the cornerstone of the dynamic and responsive Internet of Things (IoT).
Recent approaches in this area are based on representational state transfer (REST) principles,
which allow event processing tasks to be placed at any device that follows the same principles.
However, the tasks should be properly distributed among edge devices to ensure fair resource
utilization and guarantee seamless execution. This article investigates the use of deep learning
to fairly distribute the tasks. An attention-based neural network model is proposed to generate
efficient load balancing solutions under different scenarios. The proposed model is based on
the Transformer and Pointer Network architectures, and is trained by an advantage actor-critic reinforcement learning algorithm. The model is designed to scale to the number of
event processing tasks and the number of edge devices, with no need for hyperparameter
re-tuning or even retraining. Extensive experimental results show that the proposed model
outperforms conventional heuristics in many key performance indicators. The generic design
and the obtained results show that the proposed model can potentially be applied to several
other load balancing problem variations, which makes the proposal an attractive option to be
used in real-world scenarios due to its scalability and efficiency.
keywords: Web of Things (WoT) | Representational state transfer (REST) | Application programming interfaces (APIs) | Edge computing | Load balancing | Resource placement | Deep reinforcement learning | Transformer model | Pointer networks | Actor-critic |
English article |
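The pointer-style assignment described in the abstract above can be sketched in miniature. This pure-Python fragment is a toy under stated assumptions: hand-picked 2-D embeddings and plain dot-product attention stand in for the paper's trained Transformer/Pointer Network.

```python
import math

def attention_scores(task_vec, device_vecs):
    """Softmax over dot-product scores of a task embedding against device
    embeddings; a hand-rolled stand-in for the paper's trained scoring
    (the embeddings below are made up)."""
    logits = [sum(t * d for t, d in zip(task_vec, dv)) for dv in device_vecs]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def point_to_device(task_vec, device_vecs):
    """Greedy 'pointer': choose the device with the highest attention weight."""
    weights = attention_scores(task_vec, device_vecs)
    return max(range(len(weights)), key=weights.__getitem__)

# A task whose embedding aligns with device 1 should be pointed at device 1.
devices = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(point_to_device([0.1, 0.9], devices))  # → 1
```

In the paper the scores come from learned encoders and the pointer is trained with advantage actor-critic; only the softmax-then-point step is shown here.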
2 |
Inventory management in blood supply chain considering fuzzy supply/demand uncertainties and lateral transshipment
Publication year: 2021. Supply and demand uncertainties, combined with the very short lifetime of blood platelets, have led to significant wastage of the total blood collected from donors. Conversely, severe platelet shortages may arise due to the limited number of donors and emergency demands. Therefore, it is of utmost importance to develop an appropriate inventory management model to simultaneously minimize both shortage and wastage along the blood supply chain. To achieve this purpose, this paper presents an Inventory Management model for Age-differentiated platelets under supply/demand Uncertainties (IMAU) for Blood Supply Chains with Lateral Transshipment (BSCLT), resulting in a new model named IMAU-BSCLT. The proposed model is solved using the whale optimization algorithm, considering the costs of ordering from blood centers and lateral transshipment, transportation, inventory holding, shortage, and wastage. In order to validate the proposed methodology, a case study of a blood supply chain is used to show the usability of the proposed model and demonstrate its benefits over existing models. Simulation results demonstrate that lateral transshipment between different demand nodes has a major impact on load balancing and simultaneously reduces both shortage and wastage costs. According to the obtained results, the shortage rate (total shortage per total demand) and wastage rate (total wastage per total supply) of the proposed method are 3.4% and 4.8%, respectively. Keywords: Blood supply chain | Inventory management | Blood platelet | Lateral transshipment | Fuzzy uncertainty |
English article |
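The lateral transshipment idea in the abstract above can be illustrated with a greedy sketch; it is a hypothetical simplification of IMAU-BSCLT that ignores costs, platelet ages, and fuzzy uncertainty, and uses made-up node names.

```python
def lateral_transshipment(inventory, demand):
    """Greedy lateral-transshipment sketch: nodes holding more platelets than
    their demand ship the surplus to nodes facing a shortage, cutting both
    wastage (at the donor-rich node) and shortage (at the demand-heavy node)."""
    surplus = {n: inventory[n] - demand[n] for n in inventory if inventory[n] > demand[n]}
    shortage = {n: demand[n] - inventory[n] for n in demand if inventory[n] < demand[n]}
    moves = []  # (from_node, to_node, units)
    for src, qty_avail in surplus.items():
        for dst in list(shortage):
            if qty_avail == 0:
                break
            qty = min(qty_avail, shortage[dst])
            moves.append((src, dst, qty))
            qty_avail -= qty
            shortage[dst] -= qty
            if shortage[dst] == 0:
                del shortage[dst]
    return moves

# Hospital A holds a surplus of 6 units while B is 4 units short: ship 4 units.
moves = lateral_transshipment({'A': 10, 'B': 2}, {'A': 4, 'B': 6})
print(moves)  # → [('A', 'B', 4)]
```

The paper optimizes these flows with the whale optimization algorithm under cost terms; the greedy loop only shows why moving surplus sideways attacks shortage and wastage at once.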
3 |
Novel Four-Layered Software Defined 5G Architecture for AI-based Load Balancing and QoS Provisioning
Publication year: 2020. Software defined 5G network (SD-5G) is an evolving
networking technology. The integration of SDN and 5G brings
scalability and efficiency. However, Quality of Service (QoS)
provision is still challenging in SD-5G due to improper load
balancing, traffic unawareness, and so on. To overcome these
issues, this paper designs a novel load balancing scheme using
Artificial Intelligence (AI) techniques. First, a novel four-layered
SD-5G network is designed with a user plane, smart data
plane, load balancing plane, and distributed control plane. In
the context of 5G, the data transmission rate must satisfy the
QoS constraints based on the traffic type such as text, audio,
video etc. Thus, the data from the user plane is classified by
the Smart Traffic Analyzer in the data plane. For traffic analysis,
an Enriched Neuro-Fuzzy (ENF) classifier is proposed. In the load
balancing plane, a Primary Load Balancer and a Secondary Load
Balancer are deployed. This plane is responsible for balancing
the load among controllers. For controller load balancing,
switch migration is presented. An overloaded controller is
predicted by an entropy function. Then the decision for migration is
made by a Fitness-based Reinforcement Learning (F-RL)
algorithm. Finally, the four-layered SD-5G network is modeled
in NS-3.26. The observations show that the proposed work
improves the SD-5G network in terms of Loss Rate, Packet
Delivery Rate, Delay, and round trip time. Keywords: QoS | software defined 5G network | Artificial intelligence | distributed control plane |
English article |
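The entropy-based overload prediction mentioned in the abstract above can be sketched as a normalized Shannon entropy over controller load shares; this is an illustrative stand-in, not the paper's exact entropy function, and the thresholding policy is left to the caller.

```python
import math

def load_entropy(loads):
    """Normalized Shannon entropy of the controllers' load shares: 1.0 means a
    perfectly even split, values near 0 mean one controller carries almost
    everything, which flags an imbalance worth triggering switch migration."""
    if len(loads) <= 1:
        return 0.0
    total = sum(loads)
    shares = [l / total for l in loads if l > 0]
    h = -sum(p * math.log(p) for p in shares)
    return h / math.log(len(loads))

print(load_entropy([1, 1, 1, 1]))  # → 1.0 (balanced)
```

A skewed distribution such as `[97, 1, 1, 1]` yields a value near 0.12, so a simple rule like "migrate when entropy < 0.5" would fire.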
4 |
Mobility-aware Load Balancing for Reliable Self-Organization Networks: Multi-agent Deep Reinforcement Learning
Publication year: 2020. Self-Organizing Networks (SON) is a collection of functions for automatic configuration, optimization, and
healing of networks and mobility optimization is one of the main functions of self-organized cellular networks.
State-of-the-art Mobility Robustness Optimization (MRO) schemes have relied on rule-based recommender
systems to search the parameter space; yet it is unwieldy to design rules for all possible mobility patterns in any
network. In this regard, we present a Deep Reinforcement Learning-based MRO solution (DRL-MRO), which learns
appropriate values of the required parameters for each mobility pattern in individual cells. The optimal setting of
handover parameters also depends on the distribution of users and their velocities in the network. In this framework,
an effective mobility-aware load balancing approach is applied to autonomously configure the
parameters in accordance with the mobility patterns, so that approximately the same quality level is provided
for each subscriber. The simulation results show that the function of mobility robustness optimization not only
learns to optimize HO performance, but also it learns how to distribute excess load throughout the network. The
experimental results prove that this solution minimizes the number of unsatisfied subscribers (Nus) and can
also guarantee a more balanced network through cell load sharing, in addition to increasing cell throughput beyond
the current schemes. Keywords: Distributed Learning Automata | Self-Optimization Networking | Mobility | Management | Cognitive Cellular Networks | Load Balancing |
English article |
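The load-driven handover tuning described in the abstract above can be illustrated with a fixed rule; this is an assumption-laden toy (the rule, step size, and dB bounds are invented), not the learned DRL-MRO policy, which adapts per mobility pattern.

```python
def adjust_cio(load_serving, load_neighbor, cio, step=1.0, lo=-6.0, hi=6.0):
    """Illustrative load-driven handover-offset rule: raise the Cell Individual
    Offset toward a lightly loaded neighbor so cell-edge users hand over
    earlier, shedding load from the busy serving cell."""
    if load_serving > load_neighbor:
        cio += step      # push users toward the neighbor
    elif load_serving < load_neighbor:
        cio -= step      # keep users on the serving cell
    return max(lo, min(hi, cio))

# A 90%-loaded cell next to a 40%-loaded neighbor nudges its offset up.
print(adjust_cio(0.9, 0.4, cio=0.0))  # → 1.0
```

The learned approach replaces this hand-written rule with a policy trained against handover failures and user satisfaction.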
5 |
A cloud resource management framework for multiple online scientific workflows using cooperative reinforcement learning agents
Publication year: 2020. The cloud is a common distributed environment for sharing powerful, available resources to increase the efficiency of complex and heavy computations. In return for the cost paid by cloud users, a variety of services are provided for them, whose quality is guaranteed and whose underlying resources are reliably supplied by cloud service providers. Due to the heterogeneity of resources and their many shared applications, efficient scheduling can increase the productivity of cloud resources. This reduces users' costs and energy consumption while maintaining the quality of service provided to them. Cloud resource management can be conducted to achieve several objectives: reducing user costs, reducing energy consumption, load balancing of resources, enhancing resource utilization, and improving availability and security are some of the key objectives in this area. Several methods have been proposed for cloud resource management, most of which focus on one or more of these objectives. This paper introduces a new framework consisting of multiple cooperative agents, in which all phases of task scheduling and resource provisioning are considered and the quality of service provided to the user is controlled. The proposed integrated model covers all task scheduling and resource provisioning processes, and its various parts serve the management of user applications and more efficient use of cloud resources. The framework works well on dependent simultaneous tasks, which have a complicated scheduling process because of the dependence of their sub-tasks. The results of the experiments show the better performance of the proposed model in comparison with other cloud resource management methods.
Keywords: Cloud computing | Resource management | Dependent tasks | Reinforcement learning | Cooperative agents | Markov game |
English article |
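The reinforcement learning core behind each cooperative agent in the framework above is, at its simplest, a Q-value update; the sketch below shows one tabular step under invented state/action names, and omits the multi-agent Markov-game coordination entirely.

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move the estimate for (state, action)
    toward reward plus the discounted best value of the next state."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

q = defaultdict(float)
actions = ['vm0', 'vm1']
# Reward a scheduler agent for placing a task on the lightly loaded VM.
q_update(q, 'high_load', 'vm1', 1.0, 'low_load', actions)
print(q[('high_load', 'vm1')])  # → 0.1
```

In the paper, several such agents share the scheduling and provisioning phases and learn jointly; here a single agent takes one update.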
6 |
MARVEL: Enabling controller load balancing in software-defined networks with multi-agent reinforcement learning
Publication year: 2020. The control plane plays a significant role in Software-Defined Networking (SDN). A large SDN usually implements its control plane with several distributed controllers, each controlling a subset of switches and synchronizing with other controllers to maintain a consistent network view. Under fluctuating network traffic, a static controller-switch mapping relationship can lead to imbalanced workload allocation. Controllers may get overloaded and reject new requests, eventually reducing the control plane's request processing ability. Most existing schemes have relied heavily on iterative optimization algorithms to manipulate the mapping relationship between controllers and switches, which are either time-consuming or less than satisfactory in terms of performance. In this paper, we propose a dynamic controller workload balancing scheme, termed MARVEL, based on multi-agent reinforcement learning for the generation of switch migration actions. MARVEL works in two phases: offline training and online decision making. In the training phase, each agent learns how to migrate switches through interacting with the network. In the online phase, MARVEL is deployed to make decisions on migrating switches. Experimental results show that MARVEL outperforms competing existing schemes by improving the control plane's request processing ability by at least 27.3% while using 25% less processing time. Index Terms: Multi-agent reinforcement learning | Neural networks | Software-defined networking | Switch migration |
English article |
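A switch migration action of the kind MARVEL generates can be mimicked with a plain greedy heuristic; the sketch below is exactly the sort of baseline such learned policies are compared against, with all controller and switch names and loads made up.

```python
def migrate_switch(controller_loads, switch_loads, mapping, threshold):
    """Greedy one-step switch migration (a heuristic baseline, not the MARVEL
    policy): if the busiest controller exceeds `threshold`, move its heaviest
    switch to the least loaded controller and update the mapping in place."""
    src = max(controller_loads, key=controller_loads.get)
    if controller_loads[src] <= threshold:
        return None  # no controller is overloaded
    dst = min(controller_loads, key=controller_loads.get)
    sw = max((s for s, c in mapping.items() if c == src), key=switch_loads.get)
    mapping[sw] = dst
    controller_loads[src] -= switch_loads[sw]
    controller_loads[dst] += switch_loads[sw]
    return (sw, src, dst)

mapping = {'s1': 'c1', 's2': 'c1', 's3': 'c2'}
loads = {'c1': 90, 'c2': 20}
move = migrate_switch(loads, {'s1': 50, 's2': 40, 's3': 20}, mapping, threshold=70)
print(move, loads)  # → ('s1', 'c1', 'c2') {'c1': 40, 'c2': 70}
```

A learned policy differs precisely in replacing the two `max`/`min` picks with decisions trained against long-term request-processing ability.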
7 |
Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning
Publication year: 2020. With the maturity of 5G technology and the popularity of intelligent terminal devices, the traditional
cloud computing service model cannot quickly handle the explosive growth of business data.
Therefore, the purpose of mobile edge computing (MEC) is to effectively solve problems such as
latency and network load. In this paper, deep reinforcement learning (DRL) is first proposed to solve
the offloading problem of multiple service nodes for the cluster and multiple dependencies for mobile
tasks in large-scale heterogeneous MEC. Then the paper uses the LSTM network layer and the candidate
network set to improve the DQN algorithm in combination with the actual environment of the MEC.
Finally, the task offloading problem is simulated by using iFogSim and Google Cluster Trace. The
simulation results show that the offloading strategy based on the improved IDRQN algorithm has
better performance in energy consumption, load balancing, latency and average execution time than
other algorithms. Keywords: Mobile edge computing | Task offloading | Deep reinforcement learning | LSTM network | Candidate network |
English article |
8 |
Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement learning approach
Publication year: 2020. Due to the rapid growth of intelligent devices and Internet of Things (IoT) applications in recent
years, the volume of data that is generated by these devices is increasing ceaselessly. Hence, moving
all of these data to cloud datacenters would be impossible and would lead to more bandwidth usage,
latency, cost, and energy consumption. In such cases, the fog layer would be the best place for data
processing. In the fog layer, the computing equipment dedicates parts of its limited resources to process
the IoT application tasks. Therefore, efficient utilization of computing resources is of great importance
and requires an optimal and intelligent strategy for task scheduling. In this paper, we have focused
on the task scheduling of fog-based IoT applications with the aim of minimizing long-term service
delay and computation cost under the resource and deadline constraints. To address this problem,
we have used the reinforcement learning approach and have proposed a Double Deep Q-Learning
(DDQL)-based scheduling algorithm using the target network and experience replay techniques. The
evaluation results reveal that our proposed algorithm outperforms some baseline algorithms in terms
of service delay, computation cost, energy consumption and task accomplishment and also handles
the Single Point of Failure (SPoF) and load balancing challenges. Keywords: Fog computing | Task scheduling | Deep reinforcement learning | Double Q-Learning | Service delay | Computation cost |
English article |
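The Double Q-Learning idea behind the DDQL scheduler above can be shown in its tabular form: one table selects the greedy next action, the other evaluates it, which damps the overestimation bias that plain Q-learning suffers from. The deep version in the paper replaces these tables with online/target networks plus experience replay; the state/action names below are invented.

```python
import random
from collections import defaultdict

def double_q_update(qa, qb, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Double Q-Learning step: a coin flip picks which table to
    update; that table selects the greedy next action while the other table
    evaluates it."""
    sel, ev = (qa, qb) if random.random() < 0.5 else (qb, qa)
    best = max(actions, key=lambda a: sel[(next_state, a)])
    sel[(state, action)] += alpha * (
        reward + gamma * ev[(next_state, best)] - sel[(state, action)])

qa, qb = defaultdict(float), defaultdict(float)
# Reward offloading a task to a fog node once; exactly one table is updated.
double_q_update(qa, qb, 'queued', 'offload', 1.0, 'done', ['offload', 'local'])
```

After one step, whichever table was chosen holds 0.1 for `('queued', 'offload')` and the other still holds 0.0, regardless of the coin flip.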
9 |
A new MapReduce solution for associative classification to handle scalability and skewness in vertical data structure
Publication year: 2020. Associative classification is a promising methodology in information mining that uses the association
rule discovery procedures to build the classifier. However, such methods have some limitations: they are not able
to handle big data owing to memory constraints, high time complexity, load imbalance, and data
skewness. Data skewness invariably occurs when big data analytics comes into the picture and affects the
efficiency of an approach. This paper presents the MapReduce solution for associative classification
with respect to a vertical data layout. To handle these problems, we have proposed two algorithms, MR-MCAR-F
(MapReduce-Multi Class Associative Classifier fast algorithm) and MR-MCAR-L
(MapReduce-Multi Class Associative Classifier Load parallel frequent pattern growth algorithm). Also
in this paper, a MapReduce solution for the Tid list and database coverage has been proposed. We have used
three types of pruning techniques, viz. database coverage, global, and distributed pruning. The proposed
approaches have been compared with the latest approaches from the literature in terms of accuracy,
computation time, and data skewness. The existing scalable approaches cannot handle skewness, while
our proposed method handles it in a very effective manner. All the experiments have been performed
on six datasets extracted from UCI repositories, on the Hadoop framework. The proposed
algorithms are scalable solutions for associative classification to handle big data and data skewness. Keywords: Associative classification | Scalability | Data skewness | Load balancing | Big data | Hadoop |
English article |
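The vertical (Tid-list) layout the abstract above builds on makes support counting a set intersection, which is the per-itemset operation a MapReduce job would distribute across workers. The toy data below is made up for illustration.

```python
def support(tidlists, itemset, n_transactions):
    """Vertical-layout support counting: intersect the Tid lists (transaction
    id sets) of the items in an itemset and divide by the transaction count."""
    tids = set(tidlists[itemset[0]])
    for item in itemset[1:]:
        tids &= tidlists[item]
    return len(tids) / n_transactions

# 'bread' and 'milk' co-occur in transactions 2 and 3 out of 5 total.
tidlists = {'bread': {1, 2, 3, 5}, 'milk': {2, 3, 4}}
print(support(tidlists, ['bread', 'milk'], 5))  # → 0.4
```

Skew arises when one item's Tid list is vastly longer than the others, overloading the worker assigned to it; that is the imbalance the proposed algorithms address.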
10 |
Load Balancing in Cloud Computing: A Big Picture
Publication year: 2020 - English PDF pages: 10 - Persian translation (doc) pages: 27. Scheduling, or the allocation of user requests (tasks), in a cloud environment is an NP-hard optimization problem. Depending on the cloud infrastructure and user requests, the cloud system is assigned some load (which may be underload, overload, or balanced load). Conditions such as underload or overload cause system failures related to power consumption, execution time, machine breakdown, and so on. Therefore, load balancing is required to overcome all of the problems mentioned above. This balancing of the load of tasks (which may be dependent or independent) over virtual machines (VMs) is an important aspect of task scheduling in clouds. Different types of load exist in a cloud network, such as memory load, computation (CPU) load, network load, and so on. Load balancing is the mechanism of identifying overloaded and underloaded nodes and then balancing the load among them. Researchers have proposed various load balancing methods in cloud computing to optimize different performance parameters. We present a taxonomy of load balancing algorithms in the cloud. A brief explanation of the performance parameters in the literature and their effects is provided in this paper. To analyze the performance of heuristic-based algorithms, simulations have been carried out in the CloudSim simulator and the results are presented in full.
Keywords: Cloud computing | Energy consumption | Load balancing | Virtualization | Virtual machine | Task allocation |
Translated article |
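The overloaded/underloaded node identification described in the abstract above is, in its simplest form, a threshold test over node utilizations; the sketch below uses illustrative thresholds and VM names, not values from any surveyed algorithm.

```python
def classify_nodes(loads, low=0.3, high=0.8):
    """Threshold-based node classification, the first step of many cloud load
    balancers: utilization above `high` marks a node overloaded, below `low`
    underloaded, and anything in between balanced."""
    over = [n for n, u in loads.items() if u > high]
    under = [n for n, u in loads.items() if u < low]
    balanced = [n for n, u in loads.items() if low <= u <= high]
    return over, under, balanced

over, under, balanced = classify_nodes({'vm1': 0.9, 'vm2': 0.1, 'vm3': 0.5})
print(over, under, balanced)  # → ['vm1'] ['vm2'] ['vm3']
```

The surveyed algorithms differ mainly in what happens next: which tasks migrate from the `over` set to the `under` set, and at what cost.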