Toward modeling and optimization of features selection in Big Data based social Internet of Things (2018)
The growing gap between users and Big Data analytics requires innovative tools that address the challenges posed by big data volume, variety, and velocity: analyzing and selecting features from such massive volumes of data is computationally inefficient. Moreover, advances in Big Data applications and data science pose additional challenges, in which selecting appropriate features and finding a High-Performance Computing (HPC) solution have become key issues that have attracted attention in recent years. There is therefore a need for a system that can efficiently select features and analyze streams of Big Data within their requirements. This paper presents a system architecture that selects features using the Artificial Bee Colony (ABC) algorithm. A Kalman filter is used within the Hadoop ecosystem to remove noise, and traditional MapReduce is combined with ABC to enhance processing efficiency. A complete four-tier architecture is also proposed that efficiently aggregates the data, eliminates unnecessary data, and analyzes the data with the proposed Hadoop-based ABC algorithm. To assess the efficiency of the algorithms exploited in the proposed architecture, we implemented the system using Hadoop and MapReduce together with the ABC algorithm: ABC selects the features, while MapReduce is supported by a parallel algorithm that efficiently processes huge data sets. The system is implemented with MapReduce on top of Hadoop parallel nodes in near real time. The proposed system is compared with swarm approaches and evaluated for efficiency, accuracy, and throughput on ten different data sets. The results show that the proposed system is more scalable and efficient in selecting features.
Keywords: SIoT, Big Data, ABC algorithm, Feature selection
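The paper does not give its ABC pseudocode, but the feature-selection idea can be illustrated with a minimal binary ABC sketch: food sources are feature subsets, and employed/onlooker bees try single-bit flips while scouts abandon stagnant sources. The fitness function here is a hypothetical toy (the real system would score subsets by classifier performance over MapReduce), and all parameter values are illustrative assumptions.

```python
import random

def abc_feature_selection(n_features, fitness, n_sources=10, n_iters=50, limit=5, seed=0):
    """Binary Artificial Bee Colony search over feature subsets.

    `fitness` maps a tuple of 0/1 flags (one per feature) to a score
    to maximize.  Returns the best subset found and its score.
    """
    rng = random.Random(seed)

    def new_source():
        return tuple(rng.randint(0, 1) for _ in range(n_features))

    sources = [new_source() for _ in range(n_sources)]
    scores = [fitness(s) for s in sources]
    trials = [0] * n_sources
    best, best_score = max(zip(sources, scores), key=lambda p: p[1])

    for _ in range(n_iters):
        # employed/onlooker phases collapsed: greedy single-bit flips
        for i in range(n_sources):
            j = rng.randrange(n_features)  # flip one random feature flag
            cand = sources[i][:j] + (1 - sources[i][j],) + sources[i][j + 1:]
            cs = fitness(cand)
            if cs > scores[i]:
                sources[i], scores[i], trials[i] = cand, cs, 0
            else:
                trials[i] += 1
            if scores[i] > best_score:
                best, best_score = sources[i], scores[i]
        # scout phase: abandon sources that stopped improving
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i], trials[i] = new_source(), 0
                scores[i] = fitness(sources[i])
    return best, best_score

# toy fitness: features 0 and 2 are informative, every selected feature costs 0.1
relevant = {0, 2}
fit = lambda s: sum(1.0 for i, b in enumerate(s) if b and i in relevant) - 0.1 * sum(s)
subset, score = abc_feature_selection(6, fit)
```

In the paper's setting the fitness evaluation would be the expensive part, which is why it is distributed across MapReduce workers.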
A new architecture of Internet of Things and big data ecosystem for secured smart healthcare monitoring and alerting system (2018)
Wearable medical devices with sensors continuously generate enormous amounts of data, often called big data, comprising both structured and unstructured data. Due to its complexity, such big data is difficult to process and analyze to find valuable information for decision making. Data security is also a key requirement in healthcare big data systems. To overcome these issues, this paper proposes a new architecture for the implementation of IoT to store and process scalable sensor data (big data) for healthcare applications. The proposed architecture consists of two main sub-architectures, namely Meta Fog-Redirection (MF-R) and Grouping and Choosing (GC). The MF-R architecture uses big data technologies such as Apache Pig and Apache HBase to collect and store the sensor data (big data) generated by different sensor devices. The proposed GC architecture secures the integration of fog computing with cloud computing. It also uses a key management service and a data categorization function (Sensitive, Critical, and Normal) to provide security services. The framework further uses a MapReduce-based prediction model to predict heart disease. Performance evaluation parameters such as throughput, sensitivity, accuracy, and F-measure are calculated to prove the efficiency of the proposed architecture as well as the prediction model.
Keywords: Wireless sensor networks, Internet of Things, Big data analytics, Cloud computing, Health care
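The Sensitive/Critical/Normal categorization step described above can be sketched as a simple field-level tagger. The field names and category sets below are hypothetical placeholders, not the paper's actual schema; the point is that downstream security services (e.g. per-category encryption keys) can act on the tags.

```python
# hypothetical field-to-category sets; the real GC architecture would
# derive these from a healthcare data policy
SENSITIVE = {"patient_id", "ssn", "diagnosis"}
CRITICAL = {"heart_rate", "spo2", "blood_pressure"}

def categorize(record):
    """Tag each field of a sensor record as Sensitive, Critical or Normal,
    so that security services can be applied selectively per category."""
    tagged = {}
    for field, value in record.items():
        if field in SENSITIVE:
            tagged[field] = ("Sensitive", value)
        elif field in CRITICAL:
            tagged[field] = ("Critical", value)
        else:
            tagged[field] = ("Normal", value)
    return tagged

record = {"patient_id": "P-102", "heart_rate": 118, "room": "4B"}
tags = categorize(record)
```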
High-order possibilistic c-means algorithms based on tensor decompositions for big data in IoT (2018)
Internet of Things (IoT) connects the physical world and the cyber world to offer intelligent services through data mining over big data. Each big data sample typically involves a large number of attributes, posing a remarkable challenge to the high-order possibilistic c-means algorithm (HOPCM). Specifically, HOPCM requires high-performance servers with large-scale memory and powerful computing units to cluster big samples, limiting its applicability in IoT systems built on low-end devices, such as portable computing units and embedded devices, which have only limited memory space and computing power. In this paper, we propose two high-order possibilistic c-means algorithms based on the canonical polyadic decomposition (CP-HOPCM) and the tensor-train network (TT-HOPCM) for clustering big data. In detail, we use the canonical polyadic decomposition and the tensor-train network to compress the attributes of each big data sample. To evaluate the performance of our algorithms, we conduct experiments on two representative big data datasets, NUS-WIDE-14 and SNAE2, comparing against the conventional high-order possibilistic c-means algorithm in terms of attribute reduction, execution time, memory usage, and clustering accuracy. The results imply that CP-HOPCM and TT-HOPCM are promising for big data clustering in IoT systems with low-end devices, since they achieve a high compression rate for heterogeneous samples and save memory space significantly without a significant drop in clustering accuracy.
Keywords: Big data, IoT, Possibilistic c-means clustering, Canonical polyadic decomposition, Tensor-train network
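The memory savings behind both decompositions follow from simple storage counts: a rank-R canonical polyadic model keeps one I_n x R factor matrix per mode, while a tensor-train keeps one r_(k-1) x I_k x r_k core per mode. A small sketch with assumed dimensions and ranks (not the paper's experimental settings) shows the compression:

```python
from math import prod

def cp_storage(dims, rank):
    # rank-R CP model: one (I_n x R) factor matrix per mode
    return rank * sum(dims)

def tt_storage(dims, ranks):
    # tensor-train: one (r_{k-1} x I_k x r_k) core per mode,
    # with boundary ranks r_0 = r_N = 1
    rs = [1] + list(ranks) + [1]
    return sum(rs[k] * dims[k] * rs[k + 1] for k in range(len(dims)))

dims = (64, 64, 3)                   # e.g. a 64x64 RGB image sample (assumed)
dense = prod(dims)                   # raw attribute count: 12288
cp = cp_storage(dims, rank=8)        # 8 * (64 + 64 + 3) = 1048
tt = tt_storage(dims, ranks=(8, 8))  # 1*64*8 + 8*64*8 + 8*3*1 = 4632
```

With these assumed ranks, CP stores roughly 12x fewer values than the dense sample, which is the kind of saving that makes clustering feasible on low-end devices.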
BDWatchdog: Real-time monitoring and profiling of Big Data applications and frameworks (2018)
Current Big Data applications are characterized by heavy use of system resources (e.g., CPU, disk), generally distributed across a cluster. Effectively improving their performance requires an accurate analysis of both Big Data workloads and frameworks: fully understanding how the system resources are being used in order to identify potential bottlenecks, from resource bottlenecks to code bottlenecks. This paper presents BDWatchdog, a novel framework that allows real-time and scalable analysis of Big Data applications by combining time series for resource monitoring with flame graphs for code profiling, focusing on the processes that make up the workload rather than the underlying instances on which they are executed. This shift from traditional system-based monitoring to process-based analysis is interesting for new paradigms such as software containers or serverless computing, where the focus is put on applications rather than instances. BDWatchdog has been evaluated on a Big Data cloud-based service deployed at the CESGA supercomputing center. The experimental results show that process-based analysis allows for more effective visualization and overall improves the understanding of Big Data workloads. BDWatchdog is publicly available at http://bdwatchdog.dec.udc.es.
Keywords: Big data, Monitoring, Profiling, Time series, Flame graphs, Process-based analysis
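Flame graphs are typically built from sampled call stacks collapsed into "folded" lines, one per unique stack with its sample count. This is not BDWatchdog's internal code, just a minimal sketch of that aggregation step on made-up stack samples:

```python
from collections import Counter

def fold_stacks(samples):
    """Aggregate sampled call stacks (outermost frame first) into the
    folded 'frame;frame;frame count' lines that flame-graph renderers
    consume; the width of each box is proportional to its count."""
    counts = Counter(";".join(stack) for stack in samples)
    return sorted(f"{stack} {n}" for stack, n in counts.items())

samples = [
    ["main", "read_input", "parse"],
    ["main", "read_input", "parse"],
    ["main", "compute"],
]
lines = fold_stacks(samples)
```

In a real profiler the samples would come from a periodic stack sampler attached to the workload's processes, matching the paper's process-based focus.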
HEPart: A balanced hypergraph partitioning algorithm for big data applications (2018)
Minimizing the query cost across multiple hosts is important for data processing in big data applications. Hypergraphs are good at modeling the data and data relationships of complex networks, the typical big data applications, by representing multi-way relationships or interactions as hyperedges. Hypergraph partitioning (HP) helps to partition the query loads over several hosts, enabling the horizontal scaling of large-scale networks. Existing heuristic HP algorithms generally perform vertex hypergraph partitioning, designed to minimize the number of cut hyperedges while satisfying balance requirements on part weights with respect to vertices. However, since workloads are mainly produced by group operations, minimizing the query costs landing on hyperedges and balancing the workloads should be the objectives in horizontal scaling. We thus propose a heuristic hyperedge partitioning algorithm, HEPart. Specifically, HEPart directly partitions the hypergraph into K sub-hypergraphs with a minimum cutsize over vertices, while satisfying a balance constraint on hyperedge weights, based on effective moves of hyperedges. The performance of HEPart is evaluated on several complex network datasets modeled as undirected hypergraphs, under different cutsize metrics. The partitioning quality of HEPart is then compared with alternative hyperedge partitioners and vertex hypergraph partitioning algorithms. The experimental findings demonstrate the utility of HEPart, e.g., low cut cost while keeping the load balanced as required, especially on scale-free networks.
Keywords: Hypergraph partitioning, Hyperedge partitioning, Load balancing, Big data
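To make the hyperedge-partitioning objective concrete, here is a simplified greedy sketch (not HEPart's actual move-based algorithm): whole hyperedges are assigned to parts, hyperedge weights are balanced, and the cut cost is the number of vertices replicated across parts.

```python
def greedy_hyperedge_partition(hyperedges, k, weights=None):
    """Assign whole hyperedges to k parts: for each hyperedge, pick the
    part sharing the most vertices with it, breaking ties toward the
    least-loaded part.  Cut cost = vertices appearing in several parts."""
    weights = weights or [1] * len(hyperedges)
    load = [0] * k                      # hyperedge weight per part
    vertices_in = [set() for _ in range(k)]
    assignment = []
    for he, w in zip(hyperedges, weights):
        he = set(he)
        best = max(range(k), key=lambda p: (len(he & vertices_in[p]), -load[p]))
        assignment.append(best)
        vertices_in[best] |= he
        load[best] += w
    all_vertices = set().union(*vertices_in)
    cut = sum(sum(v in vs for vs in vertices_in) > 1 for v in all_vertices)
    return assignment, load, cut

hes = [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f")]
assignment, load, cut = greedy_hyperedge_partition(hes, k=2)
```

On this toy hypergraph the two connected groups land on separate parts with balanced load and zero replicated vertices; HEPart additionally refines such an assignment with hyperedge moves under an explicit balance constraint.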
A unique feature extraction using MRDWT for automatic classification of abnormal heartbeat from ECG big data with Multilayered Probabilistic Neural Network classifier (2018)
This paper employs a novel adaptive feature extraction technique for electrocardiogram (ECG) signals to detect cardiac arrhythmias, using the multiresolution discrete wavelet transform on ECG big data. Five types of ECG beats, including normal beats, are classified. The MIT-BIH database of 48 patient records is used for the detection and analysis of cardiac arrhythmias. The proposed feature extraction uses the Daubechies wavelet function and extracts 21 feature points that cover the QRS complex of the ECG signal. The Multilayered Probabilistic Neural Network (MPNN) classifier is proposed as the classifier best suited to the proposed features. A total of 1700 ECG beats were tested using the MPNN classifier and compared with three other classifiers: Back Propagation Neural Network (BPNN), Multilayered Perceptron (MLP), and Support Vector Machine (SVM). System efficiency and performance were evaluated using seven criteria: precision (PR), F-score, positive predictivity (PP), sensitivity (SE), classification error rate (CER), specificity (SP), and overall accuracy. The overall system accuracy obtained with the MPNN technique and the proposed features is 99.53%, whereas BPNN, MLP, and SVM provide 97.94%, 98.53%, and 99%, respectively. The processing time with the MPNN classifier is only 3 s, which shows that the proposed technique is not only very accurate and efficient but also very fast.
Keywords: Signal processing, Artificial intelligence, Pattern recognition, Soft computing, Wavelet transform
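The multiresolution DWT idea is to repeatedly split a signal into a coarse approximation band and a detail band, then decompose the approximation again. The paper uses a Daubechies wavelet; the sketch below uses the Haar wavelet (the simplest member of that family) on a made-up signal, purely to show the multiresolution structure:

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    pairwise averages (approximation) and pairwise half-differences (detail)."""
    a = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return a, d

def multiresolution(signal, levels):
    """Repeatedly decompose the approximation band, collecting the
    detail coefficients of every level (finest first)."""
    details = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    return a, details

sig = [1.0, 3.0, 2.0, 2.0, 5.0, 7.0, 6.0, 4.0]
approx, details = multiresolution(sig, levels=2)
```

In the paper's pipeline, coefficients from bands like these around the QRS complex would form the 21-point feature vector fed to the MPNN classifier.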
Improving the Effectiveness of Burst Buffers for Big Data Processing in HPC Systems with Eley (2018)
Burst Buffers are an effective solution for reducing data transfer time and I/O interference in HPC systems. Extending Burst Buffers (BBs) to handle Big Data applications is challenging because BBs must account for the large data inputs of Big Data applications and for the Quality-of-Service (QoS) of HPC applications, which are considered first-class citizens in HPC systems. Existing BBs focus only on the intermediate data of Big Data applications and incur a high performance degradation for both Big Data and HPC applications. We present Eley, a burst buffer solution that helps accelerate the performance of Big Data applications while guaranteeing the QoS of HPC applications. To achieve this goal, Eley embraces an interference-aware prefetching technique that makes reading data inputs faster while introducing low interference for HPC applications. Evaluations using a wide range of Big Data and HPC applications demonstrate that Eley improves the performance of Big Data applications by up to 30% compared with existing BBs while maintaining the QoS of HPC applications.
Keywords: HPC, MapReduce, Big data, Parallel file systems, Burst buffers, Interference, Prefetch
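The core of interference-aware prefetching is to stage Big Data inputs only with the bandwidth left over after the first-class HPC traffic. The toy model below is an assumption-laden sketch, not Eley's actual scheduler: per-slot HPC I/O load and a fixed capacity are given, and the prefetcher fills only the spare bandwidth.

```python
def prefetch_plan(chunks, hpc_io_load, capacity):
    """Decide how many input chunks to prefetch in each time slot,
    using only the bandwidth the (first-class) HPC traffic leaves free,
    so prefetching never pushes total I/O above `capacity`."""
    schedule, remaining = [], chunks
    for load in hpc_io_load:
        spare = max(0, capacity - load)   # bandwidth left after HPC I/O
        fetched = min(remaining, spare)
        schedule.append(fetched)
        remaining -= fetched
    return schedule, remaining

# 10 chunks to stage; per-slot HPC I/O load against a capacity of 10 (all assumed)
schedule, left = prefetch_plan(chunks=10, hpc_io_load=[9, 4, 10, 2], capacity=10)
```

Note how the fully loaded third slot gets no prefetching at all, which is exactly the QoS-preserving behavior the paper targets.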
From big data to smart energy services: An application for intelligent energy management (2018)
Big data is an ascendant technological concept that enables smart energy services such as intelligent energy management, energy consumption prediction, and the exploitation of Internet of Things (IoT) solutions. As a result, big data technologies will have a significant impact on the energy sector. This paper proposes a high-level architecture for a big data platform that can support the creation, development, maintenance, and exploitation of smart energy services through the use of cross-domain data. The proposed platform simplifies the gathering of information from multiple sources and turns it into actionable recommendations and meaningful operational insights for city authorities and local administrations, energy managers and consultants, energy service companies, utilities, and energy providers. A web-based Decision Support System (DSS) has been developed according to the proposed architecture, exploiting multi-sourced data within a smart city context to create energy management action plans. The pilot application of the developed DSS in three European cities is presented and discussed. This data-driven DSS can support energy managers and city authorities in managing the energy performance of their building facilities.
Keywords: Big Data; Decision Support System; Energy Services; Intelligent Management; Smart Cities.
Simulation methodology and performance analysis of network coding based transport protocol in wireless big data networks (2018)
Multi-Path, Multi-Hop (MPMH) communications have been extensively used in wireless networks and are especially suitable for big data transmissions due to their high throughput. To provide congestion control and end-to-end reliability, two types of transport layer protocols have been proposed in the literature: TCP-based protocols and rateless-coding-based protocols. However, the former are too conservative to exploit the capacity of MPMH networks, while the latter are too aggressive in filling up the communication capacity and perform poorly when dealing with congestion. To overcome these drawbacks, we previously proposed a novel network coding scheme, Adjustable Batching Coding (ABC), which uses redundancy coding to overcome random loss and uses retransmissions and window-size shrinking to relieve congestion. This stratified congestion control strategy makes the ABC scheme especially suitable for big data transmissions. However, no simulation platform built so far can accurately test the performance of network-coding-based transport protocols. We have built a modular, easy-to-customize simulation system based on an event-based programming method, which can simulate ABC-based MPMH transport layer behavior. Using the proposed simulator, the optimal parameters of the protocol can be fine-tuned, and its performance is superior to other transport layer protocols under the same settings. Furthermore, the proposed simulation methodology can easily be extended to other variants of MPMH communication systems by adjusting the ABC parameters.
Keywords: Network simulator, Wireless big data networks, Multi-path multi-hop communications, Transport layer, Network coding
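The redundancy-coding idea behind batch coding can be illustrated with the simplest possible code: one XOR parity packet per batch, letting the receiver rebuild any single lost data packet without a retransmission. This is a minimal stand-in for ABC's adjustable redundancy, not the scheme's actual code construction:

```python
def encode_batch(packets):
    """Append one XOR parity packet to a batch of equal-length packets,
    so any single lost data packet can be rebuilt from the survivors."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return list(packets) + [parity]

def recover(batch, lost_index):
    """Rebuild the data packet at `lost_index` by XOR-ing all the other
    data packets and the parity packet (the last element of the batch)."""
    out = bytes(len(batch[0]))
    for i, p in enumerate(batch[:-1]):   # surviving data packets
        if i != lost_index:
            out = bytes(a ^ b for a, b in zip(out, p))
    return bytes(a ^ b for a, b in zip(out, batch[-1]))  # fold in parity

data = [b"pkt0", b"pkt1", b"pkt2"]
batch = encode_batch(data)
rebuilt = recover(batch, lost_index=1)
```

ABC generalizes this by adjusting how much redundancy each batch carries, trading bandwidth for resilience to random loss, and falling back to retransmissions when congestion is detected.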
BDEv 3.0: Energy efficiency and microarchitectural characterization of Big Data processing frameworks (2018)
As the size of Big Data workloads keeps increasing, the evaluation of distributed frameworks becomes a crucial task in order to identify potential performance bottlenecks that may delay the processing of large datasets. While most of the existing works generally focus only on execution time and resource utilization, analyzing other important metrics is key to fully understanding the behavior of these frameworks. For example, microarchitecture-level events can bring meaningful insights to characterize the interaction between frameworks and hardware. Moreover, energy consumption is also gaining increasing attention as systems scale to thousands of cores. This work discusses the current state of the art in evaluating distributed processing frameworks, while extending our Big Data Evaluator tool (BDEv) to extract energy efficiency and microarchitecture-level metrics from the execution of representative Big Data workloads. An experimental evaluation using BDEv demonstrates its usefulness to bring meaningful information from popular frameworks such as Hadoop, Spark and Flink.
Keywords: Big Data processing, performance evaluation, energy efficiency, microarchitectural characterization
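A tool like the one described combines runtime with power samples into energy-efficiency summaries. The sketch below shows the basic arithmetic on invented sample values (real measurements would come from hardware counters or power meters, which BDEv integrates):

```python
def energy_metrics(runtime_s, power_samples_w, records):
    """Summarize an execution: average power, total energy (E = P_avg * t)
    and work done per joule, given power samples in watts."""
    avg_power = sum(power_samples_w) / len(power_samples_w)
    energy_j = avg_power * runtime_s
    return {
        "avg_power_w": avg_power,
        "energy_j": energy_j,
        "records_per_joule": records / energy_j,
    }

# illustrative numbers only: a 120 s run, three power samples, 6M records
m = energy_metrics(runtime_s=120.0,
                   power_samples_w=[180.0, 220.0, 200.0],
                   records=6_000_000)
```

Metrics like records-per-joule let frameworks such as Hadoop, Spark, and Flink be compared on energy efficiency rather than execution time alone.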