No. | Title | Type |
---|---|---|
1 |
Processing big-data with Memristive Technologies: Splitting the Hyperplane Efficiently (2018)
An important cornerstone of data processing is the ability to efficiently capture structure in data. This entails treating the input space as a hyperplane that needs partitioning. We argue that several modern electronic systems can be understood as carrying out such partitionings: from standard logic gates to Artificial Neural Networks (ANNs). More recently, memristive technologies have equipped such systems with the benefit of continuous tuneability directly in hardware, rendering them reconfigurable in a power- and space-efficient manner. Here, we demonstrate several proof-of-concept examples where memristors enable circuits optimised to carry out different flavours of the fundamental task of splitting the hyperplane. These include threshold logic and receptive-field-based classifiers, presented within the context of a unified perspective.
Keywords: memristor, Metal Oxide RRAM, Artificial Neural Networks, Threshold Logic Gates, Template Pixel, Texel, Clusterer, Fuzzy Gate |
English article |
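The threshold-logic view in the abstract above can be sketched in software: a single linear threshold unit fires when w·x + b ≥ 0, so the hyperplane w·x + b = 0 is the partition of the input space. This is an illustrative sketch with hand-chosen weights, not the paper's memristive circuits.

```python
import numpy as np

def threshold_gate(w, b, x):
    """Linear threshold unit: fires iff w.x + b >= 0.
    The hyperplane w.x + b = 0 is the partition boundary."""
    return int(np.dot(w, x) + b >= 0)

# A 3-input majority gate realised as one threshold unit:
# it outputs 1 exactly when at least two inputs are 1.
w = np.array([1.0, 1.0, 1.0])
b = -1.5

print(threshold_gate(w, b, [1, 1, 0]))  # two inputs high -> 1
print(threshold_gate(w, b, [1, 0, 0]))  # one input high  -> 0
```

An ANN layer is, in this view, just many such units in parallel; memristive hardware makes the weights tunable in place.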
2 |
Cost optimization for deadline-aware scheduling of big-data processing jobs on clouds (2018)
Cloud computing has been widely regarded as a capable solution for big data processing. Nowadays, cloud service providers usually offer users virtual machines with various combinations of configurations and prices. As this new service scheme emerges, the problem of choosing the cost-minimized combination under a deadline constraint is becoming more complex for users. The complexity of determining the cost-minimized combination may result from different causes: the characteristics of user applications, and providers' settings for virtual machine configurations and pricing. In this paper, we propose a variety of algorithms to help users schedule their big data processing workflow applications on clouds so that cost is minimized and deadline constraints are satisfied. The proposed algorithms were evaluated by extensive simulation experiments with diverse experimental settings.
Keywords: Big-data, Scheduling, Cost-efficient, Cloud computing |
English article |
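The selection problem this abstract describes can be illustrated with a toy single-task version: given candidate VM types, each with a speed and an hourly price, pick the cheapest one that still meets the deadline. The paper's algorithms handle whole workflows; the VM names, speeds, and prices below are invented for illustration.

```python
import math

def cheapest_vm(vms, work_units, deadline_h):
    """vms: list of (name, speed in work-units/hour, price per hour).
    Return (name, cost) of the cheapest VM meeting the deadline, or None."""
    best = None
    for name, speed, price in vms:
        runtime = work_units / speed
        if runtime > deadline_h:
            continue                       # this VM type misses the deadline
        cost = price * math.ceil(runtime)  # hourly billing, partial hours rounded up
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

vms = [("small", 10, 1.0), ("medium", 25, 2.5), ("large", 60, 6.0)]
print(cheapest_vm(vms, 100, 5))  # "medium": 4 h at 2.5/h = 10.0
```

Note the interplay the abstract points at: the fastest VM ("large") meets the deadline but is not the cheapest choice, and tightening the deadline can change or eliminate the answer.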
3 |
Improving the Effectiveness of Burst Buffers for Big Data Processing in HPC Systems with Eley (2018)
Burst Buffer is an effective solution for reducing the data transfer time and the I/O interference in HPC systems. Extending Burst Buffers (BBs) to handle Big Data applications is challenging because BBs must account for the large data inputs of Big Data applications and the Quality-of-Service (QoS) of HPC applications, which are considered first-class citizens in HPC systems. Existing BBs focus only on the intermediate data of Big Data applications and incur high performance degradation of both Big Data and HPC applications. We present Eley, a burst buffer solution that helps accelerate the performance of Big Data applications while guaranteeing the QoS of HPC applications. To achieve this goal, Eley embraces an interference-aware prefetching technique that makes reading data inputs faster while introducing low interference for HPC applications. Evaluations using a wide range of Big Data and HPC applications demonstrate that Eley improves the performance of Big Data applications by up to 30% compared to existing BBs while maintaining the QoS of HPC applications.
Keywords: HPC, MapReduce, Big data, Parallel file systems, Burst buffers, Interference, Prefetch |
English article |
4 |
BDEv 3.0: Energy efficiency and microarchitectural characterization of Big Data processing frameworks (2018)
As the size of Big Data workloads keeps increasing, the evaluation of distributed frameworks becomes a crucial task in order to identify potential performance bottlenecks that may delay the processing of large datasets. While most existing works generally focus only on execution time and resource utilization, analyzing other important metrics is key to fully understanding the behavior of these frameworks. For example, microarchitecture-level events can bring meaningful insights to characterize the interaction between frameworks and hardware. Moreover, energy consumption is also gaining increasing attention as systems scale to thousands of cores. This work discusses the current state of the art in evaluating distributed processing frameworks, while extending our Big Data Evaluator tool (BDEv) to extract energy efficiency and microarchitecture-level metrics from the execution of representative Big Data workloads. An experimental evaluation using BDEv demonstrates its usefulness in extracting meaningful information from popular frameworks such as Hadoop, Spark and Flink.
Keywords: Big Data processing, performance evaluation, energy efficiency, microarchitectural characterization |
English article |
5 |
A distributed evolutionary multivariate discretizer for Big Data processing on Apache Spark (2018)
Nowadays the phenomenon of Big Data is overwhelming our capacity to extract relevant knowledge through classical machine learning techniques. Discretization (as part of data reduction) is presented as a real solution to reduce this complexity. However, standard discretizers are not designed to perform well with such amounts of data. This paper proposes a distributed discretization algorithm for Big Data analytics based on evolutionary optimization. After comparing with a distributed discretizer based on the Minimum Description Length Principle, we have found that our solution yields more accurate and simpler solutions in reasonable time.
Keywords: Discretization, Evolutionary computation, Big Data, Data Mining, Apache Spark |
English article |
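The paper's evolutionary multivariate discretizer is not reproduced here; as a minimal illustration of what any discretizer produces, the sketch below computes equal-frequency cut points, which is exactly the kind of cut-point vector an evolutionary search would tune for accuracy and simplicity.

```python
def equal_frequency_cuts(values, k):
    """Return k-1 cut points splitting `values` into k bins of roughly
    equal count. A real discretizer (evolutionary or MDLP-based) would
    search for better-placed cuts instead of this fixed heuristic."""
    vs = sorted(values)
    n = len(vs)
    return [vs[(i * n) // k] for i in range(1, k)]

def discretize(value, cuts):
    """Map a continuous value to its bin index given sorted cut points."""
    return sum(value >= c for c in cuts)

cuts = equal_frequency_cuts(range(1, 13), 3)  # values 1..12 into 3 bins
print(cuts)                 # [5, 9]
print(discretize(7, cuts))  # falls in the middle bin -> 1
```

In the distributed setting, computing candidate cuts and evaluating them per partition is what makes the approach fit Apache Spark's model.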
6 |
داده و دانش کاوی با داده های بزرگ برای تولید هوشمند
سال انتشار: 2018 - تعداد صفحات فایل pdf انگلیسی: 13 - تعداد صفحات فایل doc فارسی: 45 مطابق با پیشرفت نوآورانه فناوری اطلاعات و ارتباطات (ICT) و کاربرد آنها در صنعت تولید، دوران بزرگ داده های تولیدی مطابق با انها است و تکنیک های داده کاوی (DMTs)، راه را برای دستیابی به اهداف تولید هوشمند با کنترل زمان واقعی، پویا، خود سازگار و دقیق فراهم می سازد. با این حال، بسیاری از عوامل در محیط همیشه در حال تغییر در صنعت تولید هستند، از جمله، فرآیندهای تولید پیچیده، مقیاس بزرگ و عدم اطمینان، محدودیت پیچیده تر، ترکیب عملکردی عملیات، و غیره، که تولید مدیریت را با چالش های بزرگی همراه می سازد. ورودی پویا تعداد زیادی از داده های خام که از مکان های تولید فیزیکی جمع آوری شده یا تولید شده است؛ در سیستم های مختلف مربوط به اطلاعات، موجب شد تا مشکلات سنگین اطلاعات بیش از حد فراهم شود. در واقع، بسیاری از DMT های سنتی هنوز به اندازه کافی برای پردازش داده های بزرگ در تولید مدیریت هوشمند نیستند. بنابراین، در این مقاله، توسعه DMT ها در دوران بزگی از داده ها را مورد بررسی قرار می دهیم و از سال 2010 با انتخاب و تجزیه و تحلیل مقالات مربوطه در مورد کاربرد DMT ها در مدیریت تولید بحث می کنیم. در عین حال، در این مقاله ما محدودیت ها را مطرح می کنیم و برخی از پیشنهادات را در مورد هوشمند بودن و کاربرد بیشتر DMT ها که در مدیریت تولید به کار می رود را ارائه می دهیم.
کلمات کلیدی: داده های بزرگ | تکنیک های داده کاوی (DMTs) | مدیریت تولید | تولید هوشمند | تجزیه و تحلیل آماری | کشف دانش |
Translated article |
7 |
A multi-factor monitoring fault tolerance model based on a GPU cluster for big data processing (2018)
High-performance computing clusters are widely used in large-scale data mining applications, have higher requirements for persistence, stability and real-time use, and are therefore computationally intensive. To support large-scale data processing, we design a multi-factor real-time monitoring fault tolerance (MRMFT) model based on a GPU cluster. However, the higher clock frequency of GPU chips results in excessively high energy consumption in computing systems. Moreover, the ability to support long-lasting high-temperature operation varies greatly between different GPUs owing to individual differences between the chips. In this paper, we design a GPU cluster energy consumption monitoring system based on wireless sensor networks (WSNs) and propose energy consumption aware checkpointing (ECAC) for high energy consumption problems, with the following two advantages: the system sets checkpoints according to actual energy consumption and device temperature to improve the utilization of checkpoints and reduce time cost; and it exploits the parallel computing features of the CPU and GPU to hide the CPU detection overhead in GPU parallel computation, further reducing the time and energy consumption overhead in the fault tolerance phase. Using ECAC as the constraint and aiming for persistent and reliable operation, a dynamic task migration mechanism is designed, and the reliability of the cluster is greatly improved. Theoretical analysis and experimental results show that the model improves the persistence and stability of the computing system while reducing checkpoint overhead.
Keywords: Big data processing, GPU cluster, Persistence computing, Energy consumption, Fault tolerance, Energy consumption aware checkpointing, Task migration |
English article |
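The checkpoint-placement idea in the abstract can be sketched as a simple rule over telemetry: take a checkpoint when the energy accumulated since the last checkpoint crosses a budget, or when the chip temperature crosses a limit. This is a hedged illustration of the ECAC idea only, not the paper's model; the thresholds and telemetry samples are invented.

```python
def plan_checkpoints(samples, energy_budget_j, temp_limit_c):
    """samples: list of (energy_since_last_sample_J, temperature_C) readings.
    Return the sample indices at which a checkpoint is taken."""
    checkpoints = []
    acc_energy = 0.0
    for i, (energy, temp) in enumerate(samples):
        acc_energy += energy
        if acc_energy >= energy_budget_j or temp >= temp_limit_c:
            checkpoints.append(i)  # save state, then reset the energy counter
            acc_energy = 0.0
    return checkpoints

telemetry = [(30, 60), (30, 62), (50, 65), (10, 85)]
print(plan_checkpoints(telemetry, energy_budget_j=100, temp_limit_c=80))  # [2, 3]
```

The third sample trips the energy budget and the fourth trips the temperature limit, showing how both factors independently trigger a checkpoint.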
8 |
Optimal Decision Making for Big Data Processing at Edge-Cloud Environment: An SDN Perspective (2018)
With the evolution of the Internet and the extensive usage of smart devices for computing and storage, cloud computing has become popular. It provides seamless services such as e-commerce, e-health, e-banking, etc., to end users. These services are hosted on massive geo-distributed data centers (DCs), which may be managed by different service providers. For faster response times, such a data explosion creates the need to expand DCs. So, to ease the load on DCs, some applications may be executed on edge devices in the proximity of the end users. However, such a multi-edge-cloud environment involves huge data migrations across the underlying network infrastructure, which may generate long migration delays and costs. Hence, in this paper, an efficient workload slicing scheme is proposed for handling data-intensive applications in a multi-edge-cloud environment using software-defined networks (SDN). To handle the inter-DC migrations efficiently, an SDN-based control scheme is presented, which provides energy-aware network traffic flow scheduling. Finally, a multi-leader multi-follower Stackelberg game is proposed to provide cost-effective inter-DC migrations. The efficacy of the proposed scheme is evaluated on Google workload traces using various parameters. The results obtained show the effectiveness of the proposed scheme.
Index Terms: Cloud data centers, edge computing, energy efficiency, software-defined networks (SDNs), Stackelberg game |
English article |
9 |
FASTEN: An FPGA Based Secure System for Big Data Processing (2018)
In the cloud computing framework, data security and protection is one of the most important aspects for optimization and concrete implementation. This paper proposes a reliable yet efficient FPGA-based security system via crypto engines and Physical Unclonable Functions (PUFs) for big data applications. Considering that FPGA- or GPU-based accelerators are popular in data centers, we believe the proposed approach is a very practical and effective method for data security in cloud computing.
Keywords: FPGA, Security, Big Data, Cloud Computing, Hadoop MapReduce |
English article |
10 |
pipsCloud: High performance cloud computing for remote sensing big data management and processing (2018)
Massive, large-region coverage, multi-temporal, multi-spectral remote sensing (RS) datasets are widely employed due to the increasing requirements for accurate and up-to-date information about resources and the environment for regional and global monitoring. In general, RS data processing involves a complex multi-stage processing sequence, which comprises several independent processing steps according to the type of RS application. RS data processing for regional environmental and disaster monitoring is recognized as being computationally intensive and data intensive.
We propose pipsCloud to address these issues in an efficient manner, combining recent cloud computing and HPC techniques to obtain a large-scale RS data processing system suitable for on-demand real-time services. Due to the ubiquity, elasticity, and high-level transparency of the cloud computing model, massive RS data management and data processing for dynamic environmental monitoring can all be performed on the cloud via Web interfaces. A Hilbert-R+-based data indexing method is employed for optimal querying and access of RS images, RS data products, and interim data. In the core platform beneath the cloud services, we provide a parallel file system for massive high-dimensional RS data, as well as interfaces for accessing irregular RS data to improve data locality and optimize I/O performance. Moreover, we use an adaptive RS data analysis workflow management system for on-demand workflow construction and the collaborative processing of a distributed complex chain of RS data, e.g., for forest fire detection, mineral resources detection, and coastline monitoring. Our experimental analysis demonstrated the efficiency of the pipsCloud platform.
Keywords: Big data, Cloud computing, Data-intensive computing, High performance computing, Remote sensing |
English article |
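Hilbert-curve indexing, the building block behind the Hilbert-R+ index mentioned above, maps 2-D grid coordinates to a 1-D key so that nearby cells tend to receive nearby keys, which is what makes range queries over image tiles efficient. The sketch below is the standard iterative xy-to-distance conversion for a 2^k x 2^k grid, not pipsCloud's actual index structure.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) on an n x n grid (n a power of two) to its
    position along the Hilbert curve (standard iterative conversion)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:           # rotate/flip the quadrant so the curve
            if rx == 1:       # orientation matches the parent cell
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# First-order curve visits (0,0), (0,1), (1,1), (1,0) in order:
print([hilbert_index(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])
```

Sorting tiles by this key clusters spatial neighbours together on disk, which is the locality property an R+-tree built over Hilbert keys exploits.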