No. | Title | Type |
---|---|---|
1 |
Integrating reinforcement learning and skyline computing for adaptive service composition
2020. In service computing, combining multiple services through service composition to address complex user requirements has become a popular research topic. QoS-aware service composition aims to find the optimal composition scheme whose QoS attributes best match user requirements. However, certain QoS attributes may change continuously in a dynamic service environment, so service composition methods need to be adaptive. Furthermore, the large number of candidate services poses a key challenge for service composition, and existing composition approaches based on reinforcement learning (RL) suffer from low efficiency. To deal with these problems, this paper proposes a new service composition approach that combines RL with skyline computing, where the latter reduces the search space and computational complexity. A WSC-MDP model is proposed to solve large-scale service composition within a dynamically changing environment. To verify the proposed method, a series of comparative experiments are conducted, and the experimental results demonstrate the effectiveness, scalability and adaptability of the proposed approach. Keywords: Service composition | QoS | Reinforcement learning | Skyline computing | Adaptability |
English article |
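The skyline-computing step the abstract describes can be pictured as a Pareto filter over candidate services: any service that is worse-or-equal on every QoS attribute than some other candidate is pruned before RL ever sees it. A minimal sketch, with hypothetical services represented as (latency, cost) tuples where lower is better on both attributes:

```python
# Skyline (Pareto) filter over candidate services, illustrating how the
# search space handed to the RL agent can be reduced. The QoS attributes
# and values below are hypothetical, not taken from the paper.

def dominates(a, b):
    """a dominates b: at least as good on every attribute, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Return the Pareto-optimal (skyline) subset of candidate services."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

# Hypothetical candidates as (latency, cost); (15, 9) is dominated by (10, 5).
candidates = [(10, 5), (8, 7), (12, 4), (9, 6), (15, 9)]
print(skyline(candidates))  # → [(10, 5), (8, 7), (12, 4), (9, 6)]
```

Only the skyline set is passed to the composition algorithm, which is where the claimed reduction in computational complexity comes from.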
2 |
Autonomic computation offloading in mobile edge for IoT applications
Publication year: 2019 - English PDF pages: 9 - Persian DOC pages: 22. Computation offloading is a prominent solution for resource-constrained mobile devices when a task demands high computational capability. The mobile cloud is a well-known offloading platform, typically deployed as a far-end network solution, used to augment the computing capacity of resource-constrained mobile devices. Because of this far-end network solution, user devices experience high network latency, which negatively affects real-time mobile Internet of Things (IoT) applications. This paper therefore proposes a near-end network solution for computation offloading in mobile fog/edge. The mobility, heterogeneity and geographical distribution of mobile devices raise several challenges for computation offloading in mobile fog/edge. To meet the computational resource demand of a large number of mobile devices, an autonomic management framework based on deep Q-learning is proposed. A distributed fog/edge controller (FOC), which manages the available fog/edge resources (e.g., processing, memory, network), enables the fog/edge computation service. The randomness of resource availability, together with the countless options for allocating those resources to offloaded computations, makes the problem well suited to modeling as a Markov decision process (MDP) and solving with reinforcement learning. The proposed model is simulated with respect to the varying resource requirements and mobility of end-user devices. The proposed deep Q-learning method significantly improves offloading performance by minimizing service computation latency. Total power consumption under different offloading decisions is also studied for comparison, showing the proposed approach to be energy-efficient relative to state-of-the-art computation offloading solutions.
Keywords: Computation offloading | Autonomic computing | Mobile fog/edge computing | Deep Q-learning |
Translated article |
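The MDP-plus-reinforcement-learning formulation the abstract describes can be illustrated with a toy tabular Q-learning loop for the offload-or-not decision. Everything below is a hypothetical illustration (states, rewards, and transitions are invented for the sketch), not the paper's actual deep Q-learning model:

```python
import random

random.seed(0)  # reproducible toy run

# Toy MDP: states are coarse device-load levels, actions are
# 0 = execute locally, 1 = offload to the fog/edge.
STATES, ACTIONS = 3, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def reward(state, action):
    # Hypothetical latency-based reward: offloading pays off under high load,
    # local execution pays off under low load.
    return float(state if action == 1 else STATES - 1 - state)

def step(state, action):
    # Load level varies randomly between decision epochs.
    return random.randrange(STATES)

Q = [[0.0] * ACTIONS for _ in range(STATES)]
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(ACTIONS)
    else:
        action = max(range(ACTIONS), key=lambda a: Q[state][a])
    r, nxt = reward(state, action), step(state, action)
    # one-step Q-learning update
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

# The learned policy offloads in the highest-load state and stays local
# in the lowest-load state.
print(max(range(ACTIONS), key=lambda a: Q[STATES - 1][a]),
      max(range(ACTIONS), key=lambda a: Q[0][a]))
```

The paper replaces the Q table with a deep network so the approach scales to the much larger state space produced by many mobile devices and fluctuating fog/edge resources.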
3 |
Chiminey: Connecting Scientists to HPC, Cloud and Big Data
2017. The enabling of scientific experiments increasingly includes data, software, computational and simulation elements, often embarrassingly parallel, long-running and data-intensive. Frequently, such experiments are run in a cloud environment or on high-end clusters and supercomputers. Many disciplines in science and engineering (and outside computer science) find the requisite computational skills attractive on the one hand but distracting from their science domain. We developed Chiminey under direction by quantum physicists and molecular biologists, to ease the steep learning curve in data management and software platforms required for the complex computational target systems. Chiminey is a smart connector that mediates running specialist algorithms developed for workstations with moderately large data sets and relatively small computational grunt. This connector allows the domain scientists to choose the target platform and then manages it automatically; it accepts all the necessary parameters to run many instances of their program regardless of whether this runs on a peak supercomputer, a commercial cloud like Amazon EC2 or (in Australia) the national federated university cloud system NeCTAR. Chiminey negotiates with target system schedulers, dashboards and databases, and provides an easy-to-use dashboard interface to the running jobs, regardless of the specific target platform. The smart connector encapsulates and virtualises a number of further aspects that the domain scientists directing our effort found necessary or desirable.
In this article we present Chiminey and guide the reader through a hands-on tutorial of this open-source platform. The only requirement is that the reader has access to one of the supported cloud or cluster platforms, and very likely there is a matching one. The tutorial stages range in difficulty from requiring no or little technical background through to advanced sections, such as programming your own domain-specific extension on top of Chiminey application programmer interfaces.
The exercises we demonstrate include: installing the Docker deployment environment and the Chiminey system; registering resources for file stores, Hadoop MapReduce and cloud virtual machines; activating the hrmclite and wordcount smart connectors, two demonstrators; running a smart connector and investigating the resulting output files; and building a new smart connector. We also discuss briefly where to find more detailed information on, and what is involved in, contributing to the Chiminey open-source code base.
Keywords: Big data | Cloud | e-science | High performance computing | Parallel processing | Scientific computing | Service computing | Simulation |
English article |