Download and view articles related to scientific workflow :: Page 1

Search results: scientific workflow

Number of articles found: 16
No. | Title | Type
1 A cloud resource management framework for multiple online scientific workflows using cooperative reinforcement learning agents (2020)
Cloud is a common distributed environment for sharing powerful, readily available resources to increase the efficiency of complex and heavy computations. In return for the cost paid by cloud users, a variety of services are provided for them, whose quality is guaranteed and whose underlying resources are supplied by cloud service providers. Due to the heterogeneity of resources and their many shared applications, efficient scheduling can increase the productivity of cloud resources. This reduces users' costs and energy consumption while maintaining the quality of service provided to them. Cloud resource management can pursue several objectives: reducing user costs, reducing energy consumption, load balancing of resources, enhancing resource utilization, and improving availability and security are some of the key goals in this area. Several methods have been proposed for cloud resource management, most of which focus on one or more of these objectives. This paper introduces a new framework consisting of multiple cooperative agents, in which all phases of task scheduling and resource provisioning are considered and the quality of service provided to the user is controlled. The proposed integrated model covers all task scheduling and resource provisioning processes, and its various parts serve the management of user applications and the more efficient use of cloud resources. The framework works well on sets of simultaneous dependent tasks, whose scheduling is complicated by the dependencies among their sub-tasks. Experimental results show the better performance of the proposed model in comparison with other cloud resource management methods.
Keywords: Cloud computing | Resource management | Dependent tasks | Reinforcement learning | Cooperative agents | Markov game
English article
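The reinforcement-learning scheduling described above can be sketched in miniature: a single Q-learning agent that picks a VM for each ready task and updates its value estimates from a reward signal. All names, the state encoding, and the reward shape are illustrative assumptions, not the paper's actual cooperative multi-agent model.

```python
import random
from collections import defaultdict

class SchedulerAgent:
    """Toy Q-learning agent that assigns workflow tasks to VMs.
    State encoding and reward shape are hypothetical."""

    def __init__(self, vm_ids, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.vm_ids = vm_ids
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)  # (state, vm) -> estimated value

    def choose_vm(self, state):
        # epsilon-greedy selection over candidate VMs
        if random.random() < self.epsilon:
            return random.choice(self.vm_ids)
        return max(self.vm_ids, key=lambda vm: self.q[(state, vm)])

    def update(self, state, vm, reward, next_state):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[(next_state, v)] for v in self.vm_ids)
        td_target = reward + self.gamma * best_next
        self.q[(state, vm)] += self.alpha * (td_target - self.q[(state, vm)])
```

In the paper's setting several such agents would cooperate (a Markov game); here a single agent suffices to show the update rule.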
2 Multi-objective scheduling of extreme data scientific workflows in Fog (2020)
The concept of ‘‘extreme data’’ is a recent re-incarnation of the ‘‘big data’’ problem, distinguished by the massive amounts of information that must be analyzed under strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed. Such geographical distribution increases offloading latency, making them unsuitable for processing workflows with strict latency requirements, as the data transfer times can be very high. Fog computing has emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing in the Fog significantly reduces data transfer latency, making it possible to meet the workflows' strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of the Fog for scheduling extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in the Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological, and astronomy workflows representing examples of extreme data applications with strict latency requirements.
Keywords: Scheduling | Scientific workflows | Fog computing | Task offloading | Monte-Carlo simulation | Multi-objective optimization
English article
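The Pareto-based selection at the heart of an approach like MOWO rests on a dominance test over the three objectives, here all expressed as minimized quantities (response time, unreliability, cost). A minimal sketch with hypothetical helper names, not the paper's implementation:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b. Both are tuples of
    minimized objectives, e.g. (time, 1 - reliability, cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

An offloading decision would then pick one placement from the surviving front according to user preferences.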
3 Applying big data paradigms to a large scale scientific workflow: Lessons learned and future directions (2018)
The increasing amounts of data related to the execution of scientific workflows have raised awareness of their shift towards parallel data-intensive problems. In this paper, we deliver our experience combining the traditional high-performance computing and grid-based approaches with Big Data analytics paradigms, in the context of scientific ensemble workflows. Our goal was to assess and discuss the suitability of such data-oriented mechanisms for production-ready workflows, especially in terms of scalability. We focused on two key elements in the Big Data ecosystem: the data-centric programming model, and the underlying infrastructure that integrates storage and computation in each node. We experimented with a representative MPI-based iterative workflow from the hydrology domain, EnKF HGS, which we re-implemented using the Spark data analysis framework. We conducted experiments on a local cluster, a private cloud running OpenNebula, and the Amazon Elastic Compute Cloud (Amazon EC2). The results we obtained were analysed to synthesize the lessons we learned from this experience, while discussing promising directions for further research.
Keywords: Scientific workflows | Big data | Cloud computing | Apache Spark | Hydrology
English article
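The data-centric re-implementation the paper describes can be illustrated by one iteration of an ensemble step: map each ensemble member through the model, then reduce the forecasts into an analysis state. Plain Python stands in for Spark here to keep the sketch dependency-free; `simulate` and `combine` are hypothetical placeholders, not EnKF HGS code.

```python
from functools import reduce

def run_ensemble_step(members, simulate, combine):
    """One iteration of a data-centric ensemble workflow.
    In Spark this would be rdd.map(simulate).reduce(combine)."""
    forecasts = [simulate(m) for m in members]  # map phase
    analysis = reduce(combine, forecasts)       # reduce phase
    return analysis

# toy usage: members are numbers, the "model" doubles them,
# and the analysis is their sum
result = run_ensemble_step([1, 2, 3], lambda m: m * 2, lambda a, b: a + b)
```

The point of the data-centric form is that the map phase parallelizes per member and per partition, which is where the scalability comparison in the paper comes from.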
4 Workflow scheduling and resource provisioning in clouds using an augmented shuffled frog leaping algorithm (2017)
Publication year: 2017 | English PDF: 10 pages | Persian translation (doc): 35 pages
The on-demand availability and provisioning of resources in cloud computing make it an ideal platform for executing scientific workflow applications. Execution can start with a minimal number of resources, and the number of resources can be increased whenever needed. However, workflow scheduling is an NP-hard problem, so metaheuristic-based solutions have been widely applied to it. This paper presents a technique based on an augmented shuffled frog leaping algorithm (ASFLA) for workflow scheduling and resource provisioning in Infrastructure-as-a-Service (IaaS) cloud environments. The performance of ASFLA is compared with state-of-the-art PSO and SFLA algorithms. Its effectiveness is evaluated on several well-known scientific workflows of different sizes using a custom Java-based simulator. The simulation results show a considerable improvement in the performance metrics, achieving minimal execution costs while meeting scheduling deadlines.
Keywords: Cloud computing | Resource provisioning | Scheduling | Scientific workflow | Shuffled frog leaping algorithm
Translated article
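The shuffled frog leaping family of algorithms that ASFLA extends repeatedly moves the worst solution in each memeplex a random fraction of the way toward the local best. A minimal sketch of that single leap, assuming real-valued positions and box bounds (not the paper's ASFLA variant):

```python
import random

def sfla_leap(worst, best, lower, upper):
    """One leap of the shuffled frog-leaping algorithm: the worst frog
    in a memeplex moves a random fraction of the way toward the local
    best, clamped to the search bounds. Positions are lists of floats."""
    r = random.random()
    new = [w + r * (b - w) for w, b in zip(worst, best)]
    return [min(max(x, lower), upper) for x in new]
```

In a full scheduler each position would encode a task-to-VM mapping, and leaps that fail to improve the worst frog trigger a leap toward the global best or a random reset.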
5 A workflow runtime environment for manycore parallel architectures (2017)
We introduce a new Manycore Workflow Runtime Environment (MWRE) to efficiently enact traditional scientific workflows on modern manycore computing architectures. MWRE is compiler-based and translates workflows specified in the XML-based Interoperable Workflow Intermediate Representation (IWIR) into an equivalent C++-based program. This program efficiently enacts the workflow as a stand-alone executable by means of a new callback mechanism that resolves dependencies, transfers data, and handles composite activities. Furthermore, a core feature of MWRE is explicit support for full-ahead scheduling and enactment. Experimental results on a number of real-world workflows demonstrate that MWRE clearly outperforms existing Java-based workflow engines designed for distributed (Grid or Cloud) computing infrastructures in terms of enactment time, is generally better than an existing script-based engine for manycore architectures (Swift), and sometimes even comes close to an artificial baseline implementation of the workflows in the standard OpenMP language for shared-memory systems. Experimental results also show that full-ahead scheduling with MWRE using a state-of-the-art heuristic can improve workflow performance by up to 40%.
Keywords: Scientific workflows | Manycores | Workflow execution plan | Full-ahead scheduling
English article
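The callback-driven enactment that MWRE compiles to can be approximated in a few lines: each task completion releases its successors once their dependency counts reach zero. This is a simplified analogue with hypothetical names, not IWIR-generated C++.

```python
from collections import defaultdict

def enact(tasks, deps, run):
    """Enact a workflow DAG: run each task once all of its dependencies
    have completed, using a completion callback to release successors.
    `deps` maps a task to the tasks it depends on; `run` executes one task."""
    remaining = {t: len(deps.get(t, [])) for t in tasks}
    successors = defaultdict(list)
    for t, ds in deps.items():
        for d in ds:
            successors[d].append(t)
    done = []

    def on_complete(task):
        done.append(task)
        for s in successors[task]:
            remaining[s] -= 1
            if remaining[s] == 0:
                run(s)
                on_complete(s)

    for t in tasks:
        if remaining[t] == 0 and t not in done:
            run(t)
            on_complete(t)
    return done
```

A real engine would dispatch ready tasks to worker threads instead of recursing; the dependency-count bookkeeping is the shared idea.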
6 MidHDC: Advanced topics on middleware services for heterogeneous distributed computing: Part 2 (2017)
Distributed systems currently support different computing paradigms, such as cluster computing, grid computing, peer-to-peer computing, and cloud computing, all involving elements of heterogeneity. These distributed computing systems are often characterized by a variety of resources that may or may not be coupled with specific platforms or environments. All these topics challenge today's researchers, due to the strongly dynamic behavior of the user communities and of the resource collections they use. The second part of this special issue presents advances in allocation algorithms, service selection, VM consolidation and mobility policies, scheduling of multiple virtual environments and scientific workflows, optimization of the scheduling process, energy-aware scheduling models, failure recovery in shared big data processing systems, distributed transaction processing middleware, data storage, trust evaluation, information diffusion, mobile systems, and the integration of robots into cloud systems.
Keywords: Middleware services | Resource management | Mobile computing | Cloud computing | HPC | Heterogeneous distributed systems
English article
7 ClowdFlows: Online workflows for distributed big data mining (2017)
The paper presents a platform for distributed computing, developed using the latest software technologies and computing paradigms to enable big data mining. The platform, called ClowdFlows, is implemented as a cloud-based web application with a graphical user interface that supports the construction and execution of data mining workflows, including web services used as workflow components. As a web application, the ClowdFlows platform imposes no software requirements and can be used from any modern browser, including on mobile devices. The constructed workflows can be declared either private or public, which enables sharing the developed solutions, data, and results on the web and in scientific publications. The server-side software of ClowdFlows can be replicated and distributed to any number of computing nodes. From a developer's perspective, the platform is easy to extend and supports distributed development with packages. The paper focuses on big data processing in batch and real-time processing modes. Big data analytics is provided through several algorithms, including novel ensemble techniques, implemented using the map-reduce paradigm, and a special stream mining module for continuous parallel workflow execution. The batch and real-time processing modes are demonstrated with practical use cases. Performance analysis shows the benefit of using all available data for learning in distributed mode compared to using only subsets of data in non-distributed mode. The ability of ClowdFlows to handle big data sets and its nearly perfect linear speedup are demonstrated.
Keywords: Data mining platform | Cloud computing | Scientific workflows | Batch processing | Map-reduce | Big data
English article
8 Deriving scientific workflows from algebraic experiment lines: A practical approach (2017)
The exploratory nature of a scientific computational experiment involves executing variations of the same workflow with different approaches, programs, and parameters. However, current approaches do not systematize the derivation process from the experiment definition to the concrete workflows, and do not track the experiment provenance down to the workflow executions. Therefore, the composition, execution, and analysis of the entire experiment become a complex task. To address this issue, we propose the Algebraic Experiment Line (AEL). AEL uses a data-centric workflow algebra, which enriches the experiment representation by introducing a uniform data model and its corresponding operators. This representation and the AEL provenance model map concepts from the workflow execution data to the AEL-derived workflows with their corresponding abstract experiment definitions. We show how AEL has improved the understanding of a real experiment in the bioinformatics area. By combining provenance data from the experiment and its corresponding executions, AEL provenance queries navigate from experiment concepts defined at a high abstraction level to derived workflows and their execution data. It also shows a direct way of querying results from different trials involving activity variations and optionalities, present only at the experiment level of abstraction.
Keywords: Scientific workflows | Software product line | Workflow algebra | Workflow derivation
English article
9 MemEFS: A network-aware elastic in-memory runtime distributed file system (2017)
Scientific domains such as astronomy or bioinformatics produce increasingly large amounts of data that need to be analyzed. Such analyses are modeled as scientific workflows: applications composed of many individual tasks that exhibit data dependencies. Typically, these applications suffer from significant variability in the interplay between achieved parallelism and data footprint. To efficiently tackle the data deluge, cost-effective solutions need to be deployed by extending private computing infrastructures with public cloud resources. To achieve this, two key features of such systems need to be addressed: elasticity and network adaptability. The former improves compute resource utilization efficiency, while the latter improves network utilization efficiency, since public clouds suffer from significant bandwidth variability. This paper extends our previous work on MemEFS, an elastic in-memory distributed file system, by adding network adaptability. Our results show that MemEFS' elasticity increases resource utilization efficiency by up to 65%. Regarding the network adaptation policy, MemEFS achieves up to 50% speedup compared to its network-agnostic counterpart.
Keywords: In-memory file system | Distributed hashing | Elasticity | Scalable computing | Network variability | Network adaptation | High-performance I/O | Large-scale scientific computing | Big data and HPC systems | Big data for e-Science | Large-scale systems for computational sciences
English article
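The distributed hashing listed in the keywords is commonly realized as a consistent-hash ring, which keeps most file-to-node placements stable as storage nodes join or leave, the kind of property an elastic file system needs when scaling out to cloud resources. The sketch below is illustrative, not MemEFS internals.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring mapping file keys to storage nodes."""

    def __init__(self, nodes=()):
        self._ring = []  # sorted list of (hash, node)
        for n in nodes:
            self.add(n)

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # insert the node at its hash position on the ring
        bisect.insort(self._ring, (self._h(node), node))

    def node_for(self, key):
        # first node clockwise from the key's hash owns the key
        h = self._h(key)
        idx = bisect.bisect(self._ring, (h, ""))
        return self._ring[idx % len(self._ring)][1]
```

Production systems add virtual nodes per physical node to smooth the load distribution; that refinement is omitted here for brevity.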
10 A Critical Path File Location (CPFL) algorithm for data-aware multiworkflow scheduling on HPC clusters (2017)
A representative set of workflows found in bioinformatics pipelines must deal with large data sets. Most scientific workflows are defined as Directed Acyclic Graphs (DAGs). Although DAGs are useful for understanding dependence relationships, they do not provide any information about input, output, and temporary data files. This information about the location of files in data-intensive applications helps to avoid performance issues. This paper presents a multiworkflow store-aware scheduler for cluster environments, called the Critical Path File Location (CPFL) policy, in which access time to disk is more relevant than the network, as an extension of classical list scheduling policies. Our purpose is to find the best location for data files in a hierarchical storage system. The resulting algorithm is tested on an HPC cluster and in a simulated cluster scenario with synthetic bioinformatics workflows and widely used benchmarks such as Montage and Epigenomics. The simulator is tuned and validated with the first test results from the real infrastructure. The evaluation of our proposal shows promising improvements of up to 70% on benchmarks in a real HPC cluster using 128 cores, and up to 69% makespan improvement on simulated 512-core clusters, with a deviation between 0.9% and 3% with respect to the real HPC cluster.
Keywords: Multiworkflows | Cluster | Scheduler | Simulation | Critical path | Data processing
English article
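The critical path that gives CPFL its name is the longest-cost chain of dependent tasks in the workflow DAG; tasks and files on that chain are the ones worth placing in the fastest storage tier. A minimal sketch of computing its length, with hypothetical `deps` and `cost` maps:

```python
def critical_path_length(tasks, deps, cost):
    """Longest (critical) path through a task DAG.
    `deps` maps a task to its prerequisites; `cost` maps a task
    to its runtime. Illustrative only, not the CPFL policy itself."""
    memo = {}

    def finish(t):
        # earliest finish time of t = its cost plus the latest
        # finish time among its prerequisites
        if t not in memo:
            memo[t] = cost[t] + max((finish(d) for d in deps.get(t, [])),
                                    default=0)
        return memo[t]

    return max(finish(t) for t in tasks)
```

A list scheduler built on this would rank tasks by how close they sit to the critical path and assign their files to storage levels accordingly.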