No. | Title | Type |
---|---|---|
1 |
SmartFCT: Improving power-efficiency for data center networks with deep reinforcement learning
(2020) Reducing the power consumption of Data Center Networks (DCNs) and guaranteeing the Flow Completion Time (FCT) of applications in DCNs are two major concerns for data center operators. However, existing works cannot achieve both goals together because of two issues: (1) the dynamic traffic pattern in DCNs is hard to model accurately; (2) an optimal flow scheduling scheme is computationally expensive. In this paper, we propose SmartFCT, which employs Deep Reinforcement Learning (DRL) coupled with Software-Defined Networking (SDN) to improve the power efficiency of DCNs and guarantee FCT. SmartFCT dynamically collects traffic distributions from switches to train its DRL model. The well-trained DRL agent of SmartFCT can quickly analyze complicated traffic characteristics using neural networks and adaptively generate an action for scheduling flows and deliberately configuring margins for different links. Following the generated action, flows are consolidated onto a few active links and switches to save power, while fine-grained margin configuration for the active links avoids FCT violations from unexpected flow bursts. Simulation results show that SmartFCT can guarantee FCT and save up to 12.2% power consumption compared with state-of-the-art solutions. Keywords: Data center networks | Software-Defined networking | Power efficiency | Flow completion time | Deep reinforcement learning |
English article |
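The scheduling action described in the SmartFCT abstract (consolidate flows onto a few already-active links while reserving a per-link burst margin) can be sketched as a hand-coded heuristic. The paper's DRL agent learns such a policy with a neural network; every link name and number below is an illustrative assumption, not a detail from the paper.

```python
# Sketch of a flow-consolidation action: prefer already-active links with
# enough headroom (so idle links/switches can stay powered off), and keep a
# safety margin per link so bursts do not violate FCT.
# All names and figures are illustrative assumptions.

def pick_link(links, flow_demand, margin=0.1):
    """links: dict link_id -> (utilization in [0,1], active flag).
    Returns the link to place the flow on, or None if nothing fits."""
    best = None
    for lid, (util, active) in links.items():
        # headroom left after reserving the burst margin
        headroom = 1.0 - margin - util
        if headroom < flow_demand:
            continue  # placing the flow here would eat into the margin
        # rank: active links first, then the most utilized (tightest packing)
        key = (active, util)
        if best is None or key > best[0]:
            best = (key, lid)
    return None if best is None else best[1]

links = {
    "L1": (0.55, True),   # active, moderately loaded
    "L2": (0.00, False),  # idle: waking it costs extra power
    "L3": (0.80, True),   # active but too close to the margin
}
print(pick_link(links, flow_demand=0.2))  # -> L1
```

A learned policy would additionally tune `margin` per link from observed traffic, which is the "fine-grained margin configuration" the abstract mentions.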
2 |
AI-Assisted Knowledge-Defined Network Orchestration for Energy-Efficient Data Center Networks
(2020) In this article, we discuss the design and implementation of a novel DCN system, which utilizes a knowledge-defined NO-M to operate an HOE-DCN cost-effectively and energy-efficiently. The motivation behind the proposed HOE-DCN system is the urgent need to address the scalability, energy, and manageability issues in existing DCN systems. To realize the knowledge-defined NO-M, we follow the principle of predictive analytics in the human brain to design three artificial intelligence modules based on deep learning and make them operate collaboratively. The proposed HOE-DCN system is implemented on a network testbed, and we conduct experiments involving both control-plane and data-plane operations to demonstrate its advantages. The experimental results show that the HOE-DCN simultaneously achieves high-performance service provisioning and improved energy efficiency. Furthermore, by analyzing the pros and cons of the HOE-DCN system, we also point out several directions for future work. |
English article |
3 |
An automatic algorithm of identifying vulnerable spots of internet data center power systems based on reinforcement learning
(2020) The Internet data center (IDC) power system provides the power guarantee for cloud computing and other information services, so its importance is self-evident. However, the occurrence time of destructive events such as lightning strikes, operational errors, and cyber-attacks is unpredictable, although the loss can be minimized by formulating coping strategies in advance. Identifying the vulnerable spots of the IDC power system is therefore the key to guaranteeing the normal operation of information systems. Generally, the IDC power network can be modelled as a graph G, and methods for finding node centrality can then be applied to analyse its vulnerability; in our experience, however, this is not the best approach. Unlike previous approaches, we do not treat the issue as a traditional graph problem. Instead, we fully utilize the characteristics of the IDC power network and apply reinforcement learning techniques to identify its vulnerability. To the best of our knowledge, this is the first application of artificial intelligence to the traditional IDC power network. In this article, we propose PFEM, a parallel fault evolution model for the IDC power network, which can accelerate the process of electrical fault evolution. Moreover, we design an algorithm that can automatically find the vulnerable spots of the IDC power network. Experiments on a real IDC power network demonstrate that the post-failure impact of the vulnerable devices identified by our proposed algorithm is about 5% higher than that of other algorithms, and that tripping only a single-digit number of electrical devices selected by our algorithm leads to the loss of all loads. Keywords: Internet data center | Power system | Vulnerability | Reinforcement learning | Maintenance |
English article |
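The graph-centrality baseline that this abstract argues against can be sketched in a few lines: model the power network as a graph G and rank nodes by degree centrality as a first guess at vulnerable spots. The tiny topology below is invented for illustration; the paper's actual RL-based algorithm and the PFEM model are not reproduced here.

```python
# Baseline mentioned in the abstract: rank nodes of the IDC power network
# graph by degree centrality. The toy topology is a made-up example.

def degree_centrality(adj):
    """adj: dict node -> set of neighbours.
    Returns node names sorted by normalized degree, highest first."""
    n = len(adj) - 1 or 1  # normalize by the max possible degree
    scores = {v: len(nbrs) / n for v, nbrs in adj.items()}
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical power network: one transformer feeding three PDUs
adj = {
    "transformer": {"pdu1", "pdu2", "pdu3"},
    "pdu1": {"transformer"},
    "pdu2": {"transformer"},
    "pdu3": {"transformer"},
}
print(degree_centrality(adj)[0])  # -> transformer (the hub looks vulnerable)
```

The paper's point is that such purely topological rankings ignore electrical fault evolution, which is why it replaces them with reinforcement learning over PFEM.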
4 |
QoS provisioning for various types of deadline-constrained bulk data transfers between data centers
(2020) An increasing number of applications in scientific and other domains have moved, or are actively transitioning, to clouds, and the demand for big data transfers between geographically distributed cloud-based data centers is growing rapidly. Many modern backbone networks leverage logically centralized controllers based on software-defined networking (SDN) to provide advance bandwidth reservation for data transfer requests. How to fully utilize the bandwidth resources of the links connecting data centers while guaranteeing the quality of service of each user request is an important problem for cloud service providers. Most existing work focuses on bandwidth scheduling for a single data transfer request, or for multiple requests using the same service model. In this work, we construct rigorous cost models to quantify user satisfaction degree, and formulate a generic problem of bandwidth scheduling for multiple deadline-constrained data transfer requests of different types, maximizing the request scheduling success ratio while minimizing the data transfer completion time of each request. We prove this problem to be not only NP-complete but also non-approximable, and hence design a heuristic algorithm. For performance evaluation, we establish a proof-of-concept emulated SDN testbed and also generate large-scale simulation networks. Both experimental and simulation results show that the proposed scheduling scheme significantly outperforms existing methods in terms of user satisfaction degree and scheduling success ratio. Keywords: Big data | Data center | High-performance networks | Software-defined networking | Bandwidth scheduling |
English article |
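The flavor of deadline-constrained bandwidth scheduling in the abstract above can be illustrated with a simple earliest-deadline-first admission sketch under an assumed single-link, slotted-time model. This is not the paper's heuristic, only a minimal stand-in; all request figures are invented.

```python
# Sketch (not the paper's algorithm): earliest-deadline-first admission for
# deadline-constrained transfer requests on one reserved link, slotted time.
# capacity = data volume the link can carry per slot; figures are invented.

def edf_schedule(requests, capacity, horizon):
    """requests: list of (name, volume, deadline_slot). A request is admitted
    iff its volume fits in the residual capacity before its deadline."""
    residual = [capacity] * horizon
    admitted = []
    for name, volume, deadline in sorted(requests, key=lambda r: r[2]):
        if sum(residual[:deadline]) < volume:
            continue  # deadline cannot be met: reject, keep capacity for others
        for t in range(deadline):  # fill earliest slots first -> earliest finish
            take = min(residual[t], volume)
            residual[t] -= take
            volume -= take
            if volume == 0:
                break
        admitted.append(name)
    return admitted

reqs = [("A", 15, 2), ("B", 10, 3), ("C", 20, 3)]
print(edf_schedule(reqs, capacity=10, horizon=3))  # -> ['A', 'B'] (C rejected)
```

Filling the earliest slots first is what drives down each admitted request's completion time, the second objective in the abstract.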
5 |
Data Transfer Scheduling for Maximizing Throughput of Big-Data Computing in Cloud Systems
(2018) Many big-data computing applications have been deployed in cloud platforms. These applications normally demand concurrent data transfers among computing nodes for parallel processing. It is important to find the best transfer schedule leading to the least data retrieval time, in other words the maximum throughput. However, existing methods cannot achieve this because they ignore link bandwidths and the diversity of data replicas and paths. In this paper, we aim to develop a max-throughput data transfer schedule that minimizes the data retrieval time of applications. Specifically, the problem is formulated as a mixed integer program, and an approximation algorithm is proposed, with its approximation ratio analyzed. Extensive simulations demonstrate that our algorithm can obtain near-optimal solutions.
Index Terms: Data transfer scheduling, big-data computing, throughput maximization, data center |
English article |
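A minimal sketch of the replica choice the abstract says existing methods ignore: greedy per-block replica selection under an assumed star topology where each storage node has one bottleneck link to the compute node. The paper formulates the full problem as a mixed integer program; this greedy stand-in only illustrates how replica diversity and bandwidth sharing determine retrieval time.

```python
# Sketch: pick a replica per data block, then compute retrieval time when
# blocks fetched from the same node share that node's link bandwidth.
# Topology (star) and all figures are illustrative assumptions.

def retrieval_time(blocks, replicas, bandwidth):
    """blocks: dict block -> size; replicas: dict block -> hosting nodes;
    bandwidth: dict node -> link bandwidth toward the compute node."""
    # greedy: fetch each block from its highest-bandwidth replica
    assign = {b: max(replicas[b], key=lambda n: bandwidth[n]) for b in blocks}
    # parallel fetch: a node's link is shared by all blocks assigned to it
    finish = 0.0
    for node in set(assign.values()):
        total = sum(blocks[b] for b in blocks if assign[b] == node)
        finish = max(finish, total / bandwidth[node])
    return finish

blocks = {"b1": 100, "b2": 50}
replicas = {"b1": ["n1", "n2"], "b2": ["n2"]}
bandwidth = {"n1": 10, "n2": 5}
print(retrieval_time(blocks, replicas, bandwidth))  # -> 10.0
```

Because greedy choices can overload one node's link, the optimal joint assignment generally differs, which is exactly why the paper resorts to mixed integer programming.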
6 |
Migration-Based Online CPSCN Big Data Analysis in Data Centers
(2018) It is critical to schedule online data-intensive jobs effectively for various applications, including cyber-physical systems and social network systems, and doing so supports timely decision making and better prediction. In this paper, we investigate the online job scheduling problem with data migration for reducing global job execution time. We first establish a time model based on real experimental results, and propose an online job placement algorithm that takes into account the benefits of both instantaneity and locality for the jobs. We then introduce data migration into the job placement algorithm. The core idea is to make a tradeoff between the migration cost and the remote access cost. The simulation results demonstrate that our algorithm achieves a significant improvement over FIFO, and that data migration is effective in reducing global job execution time. Our algorithms also provide acceptable fairness for jobs.
INDEX TERMS: Big data analysis, CPSCN, data center, data placement, online job scheduling |
English article |
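The migration-versus-remote-access tradeoff at the core of the abstract reduces to a one-line cost comparison. The cost model below (transfer time plus a fixed migration overhead, repeated remote reads of the full data) is an assumption for illustration, not the paper's time model.

```python
# Sketch of the core tradeoff: migrate the data once, or pay a remote-access
# cost on every read. The cost model and units are illustrative assumptions.

def should_migrate(data_size, remote_accesses, net_bw, migrate_overhead=1.0):
    """True iff moving the data next to the job is cheaper than reading it
    remotely remote_accesses times over a link of bandwidth net_bw."""
    migration_cost = data_size / net_bw + migrate_overhead  # one bulk transfer
    remote_cost = remote_accesses * (data_size / net_bw)    # repeated reads
    return migration_cost < remote_cost

print(should_migrate(100, remote_accesses=3, net_bw=10))  # -> True
print(should_migrate(100, remote_accesses=1, net_bw=10))  # -> False
```

With a single access, migration never pays off under this model; the break-even point shifts with the overhead term, which a real scheduler would calibrate from measurements as the paper does with its experimental time model.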
7 |
Renewable Energy-Aware Big Data Analytics in Geo-distributed Data Centers with Reinforcement Learning
(2018) In the age of big data, companies tend to deploy their services in data centers rather than on their own servers. The demands of big data analytics have grown significantly, leading to extremely high electricity consumption at data centers. In this paper, we investigate the cost minimization problem of big data analytics on geo-distributed data centers connected to renewable energy sources with unpredictable capacity. To solve this problem, we propose a Reinforcement Learning (RL) based job scheduling algorithm that combines RL with a neural network (NN). Moreover, two techniques are developed to enhance the performance of our proposal. Specifically, Random Pool Sampling (RPS) is proposed to retrain the NN from accumulated training data, and a novel Unidirectional Bridge Network (UBN) structure is designed to further enhance the training speed by using the historical knowledge stored in the trained NN. Experimental results on real Google cluster traces and electricity prices from the Energy Information Administration show that our approach reduces the data centers' cost significantly compared with other benchmark algorithms.
Keywords: Big data, load balancing, reinforcement learning, data center |
English article |
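The reward signal of the abstract's RL scheduler can be illustrated with a one-state tabular Q-learning bandit that learns which data center minimizes the billed brown (non-renewable) energy. The paper uses a neural network with RPS and UBN, none of which is reproduced here; the per-DC price and renewable-fraction callables are invented.

```python
import random

def q_scheduler(prices, renewables, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """One-state Q-learning sketch: prices[i]()/renewables[i]() sample the
    grid price and renewable supply fraction of data center i per episode.
    The learned cost is the grid price paid for the share of the job that
    renewables cannot cover. Returns the index of the cheapest-looking DC."""
    rng = random.Random(seed)
    q = [0.0] * len(prices)  # running cost estimate per data center
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current cheapest estimate
        a = (rng.randrange(len(q)) if rng.random() < eps
             else min(range(len(q)), key=q.__getitem__))
        cost = prices[a]() * max(0.0, 1.0 - renewables[a]())
        q[a] += alpha * (cost - q[a])  # incremental cost update
    return min(range(len(q)), key=q.__getitem__)

# DC0: cheap grid power, no renewables; DC1: pricier grid, 80% renewable
best = q_scheduler([lambda: 10.0, lambda: 12.0], [lambda: 0.0, lambda: 0.8])
print(best)  # -> 1 (renewables make DC1's effective cost lower)
```

With unpredictable renewable capacity the callables would return noisy samples, and the tabular estimate would be replaced by the paper's NN to generalize across states.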
8 |
Analysing and improving convergence of quantized congestion notification in Data Center Ethernet
(2018) Quantized Congestion Notification (QCN) has been approved as the standard congestion management mechanism for the Data Center Ethernet (DCE). However, many works have pointed out that QCN suffers from unfairness among different flows. In this paper, we find that QCN can achieve fairness, but its convergence time to fairness is quite long. Thus, we build a convergence time model to investigate the reasons for QCN's slow convergence process. We validate the precision of our model by comparison with experimental data on the NetFPGA platform. The results show that the proposed model accurately characterizes QCN's convergence time to fairness. Based on the model, the impact of QCN parameters, network parameters, and QCN variants on the convergence time is analysed in detail. The results indicate that the convergence time of QCN can be decreased if sources have the same rate increase probability, or if the rate increase step becomes larger at steady state. Enlightened by this analysis, we propose a mechanism called QCN-T, which replaces the original Byte Counter and Timer at sources with a single modified Timer to reduce the convergence time. Finally, evaluations show great improvements of QCN-T in both convergence and stability.
Keywords: Data Center Ethernet | Quantized Congestion Notification | Convergence | Modeling |
English article |
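The convergence property the abstract highlights (sources that share the same rate-increase behaviour converge to fairness, with the initial rate gap shrinking at each congestion event) can be seen in a toy AIMD simulation standing in for QCN's decrease/recover cycle. Parameters are illustrative assumptions, not QCN's actual Byte Counter/Timer machinery.

```python
# Toy AIMD stand-in for QCN's rate decrease/recovery: on congestion every
# source cuts multiplicatively; otherwise every source adds the same step
# (the equal-increase condition the abstract links to faster fairness).
# beta, step, and the starting rates are illustrative assumptions.

def aimd(rates, capacity, rounds=200, beta=0.5, step=1.0):
    """Simulate synchronized AIMD sources on one link; returns final rates."""
    rates = list(rates)
    for _ in range(rounds):
        if sum(rates) > capacity:              # congestion feedback to all
            rates = [r * beta for r in rates]  # multiplicative decrease
        else:
            rates = [r + step for r in rates]  # equal additive increase
    return rates

final = aimd([80.0, 20.0], capacity=100.0)
print(abs(final[0] - final[1]))  # gap shrinks from 60 toward 0 (fairness)
```

Each multiplicative decrease halves the rate gap while equal additive increases preserve it, so the gap decays geometrically; QCN's slow convergence arises when the increase behaviour of sources is not equal, which is what QCN-T's single modified Timer addresses.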
9 |
Energy cost minimization with job security guarantee in Internet data center
(2017) With the proliferation of various big data applications and the resource demands of Internet data centers (IDCs), energy costs have been skyrocketing, attracting a great deal of attention and raising many energy optimization and management issues. However, the security problem for a wide range of applications, which has been overlooked, is another critical concern, even ranked as the greatest challenge in IDCs. In this paper, we propose an energy cost minimization (ECM) algorithm with a job security guarantee for IDCs in deregulated electricity markets. Randomly arriving jobs are routed to a FIFO queue, and a heuristic algorithm is devised to select security levels that guarantee the job risk probability constraint. Then, the energy optimization problem is formulated by taking the temporal diversity of electricity prices into account. Finally, an online energy cost minimization algorithm is designed to solve the problem within the Lyapunov optimization framework, which offers provable energy cost optimization and delay guarantees. This algorithm can aggressively and adaptively seize periods of low electricity prices to process workloads, and defer the execution of delay-tolerant workloads when prices are high. Based on real-life electricity prices, simulation results prove the feasibility and effectiveness of the proposed algorithm.
Keywords: Internet data center | Security service | Risk probability constraint | Energy cost minimization | Deregulated electricity markets |
English article |
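The Lyapunov drift-plus-penalty decision at the heart of the abstract reduces to a queue-versus-weighted-price threshold: serve the backlog only when queue pressure outweighs V times the current electricity price, so work drifts to cheap-price slots. The sketch below, with an assumed per-slot service capacity `mu` and tradeoff parameter `V`, is not the paper's full ECM algorithm (security-level selection is omitted); all numbers are illustrative.

```python
# Drift-plus-penalty sketch: defer delay-tolerant work while electricity is
# expensive, serve when queue backlog q exceeds V * price. V trades energy
# cost against delay; mu is service capacity per slot. Figures are invented.

def lyapunov_schedule(arrivals, prices, V=2.0, mu=8.0):
    """arrivals/prices: per-slot workload and electricity price.
    Returns (total energy cost, leftover backlog, per-slot service)."""
    q, cost, served = 0.0, 0.0, []
    for a, p in zip(arrivals, prices):
        q += a                                   # new work joins the queue
        serve = min(mu, q) if q > V * p else 0.0  # threshold decision
        q -= serve
        cost += serve * p                        # pay current price for work done
        served.append(serve)
    return cost, q, served

# expensive slot then cheap slot: all work is deferred to the cheap slot
print(lyapunov_schedule([4.0, 4.0], [3.0, 1.0]))  # -> (8.0, 0.0, [0.0, 8.0])
```

Serving immediately would have cost 4*3 + 4*1 = 16; deferral cuts this to 8 at the price of one slot of delay, the cost/delay tradeoff that V controls in the Lyapunov framework.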
10 |
A game theory-based dynamic resource allocation strategy in Geo-distributed Datacenter Clouds
(2017) The Geo-distributed Datacenter Cloud is an effective solution for storing, processing, and transferring the big data produced by the Internet-of-Things (IoT). A key challenge in this distributed system is how to allocate bandwidth resources among the cloud's geo-distributed datacenters efficiently. This paper aims to address this challenge by optimizing the transfer bandwidth resources among different geo-distributed datacenters. To this end, we first analyze the interaction between the traffic of physical networks and the data flows of Geo-distributed Datacenter Clouds, and then establish a game theory-based model for cloud resource allocation. Based on this model, a dynamic resource allocation strategy and its corresponding algorithm, both adaptable to Internet conditions, are proposed. Since the background traffic and capacity limits of the physical networks, as well as the flows and resource demands of the geo-distributed datacenters, are taken into account, this new strategy can effectively achieve load balance in the physical networks and content transfer among different geo-distributed datacenters. Real-world trace data is adopted to validate the effectiveness and efficiency of the proposed resource allocation strategy. Compared with existing strategies, the evaluation results demonstrate that our proposed strategy can balance the workloads of the physical networks, reduce the response delay of cloud applications, and possesses excellent adaptability.
Keywords: Geo-distributed Datacenters | Cloud computing | Resource allocation | Game theory |
English article |
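One round of the contention resolution underlying the abstract's game-theoretic model can be sketched as proportional sharing of a congested physical link among competing datacenters. The full best-response dynamics and background-traffic modelling of the paper are omitted, and the demand figures below are invented.

```python
# Sketch: split a congested link's capacity among geo-distributed datacenters
# in proportion to their demands (a simple proportional-sharing rule, not the
# paper's game-theoretic algorithm). All figures are invented.

def proportional_allocation(demands, capacity):
    """demands: dict datacenter -> requested bandwidth on one physical link.
    Returns the per-datacenter allocation; full demand when uncontended."""
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)  # no contention: everyone gets what they asked
    return {dc: capacity * d / total for dc, d in demands.items()}

print(proportional_allocation({"dc1": 60, "dc2": 40}, capacity=50))
# -> {'dc1': 30.0, 'dc2': 20.0}
```

Iterating such an allocation as datacenters adjust their demands to observed congestion is the "dynamic, adaptable to Internet conditions" behaviour the abstract describes, with the paper's game model governing how demands are updated.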