Data management techniques for Internet of Things
Data management techniques for Internet of Things - 2020
Internet of Things (IoT) is a network paradigm in which physical, digital, and virtual objects are equipped with identification, detection, networking, and processing functions to communicate with each other and with other devices and services on the Internet in order to perform the users’ required tasks. Many IoT applications are provided to bring comfort and facilitate human life. In addition, the application of IoT technologies in the automotive industry has given rise to the concept of the Industrial Internet of Things (IIoT), which facilitates the use of Cyber-Physical Systems, in which machines and humans interact. Due to the diversity, heterogeneity, and large volume of data generated by these entities, the use of traditional database management systems is generally unsuitable. Many distinctive principles should be considered in the design of IoT data management systems, and these principles have led to several proposed approaches for IoT data management. Some middleware or architecture-oriented solutions facilitate the integration of generated data. Other available solutions provide efficient storage and indexing of structured and unstructured data, as well as support for NoSQL languages. Thus, this paper identifies the most relevant concepts of data management in IoT, surveys the current solutions proposed for IoT data management, discusses the most promising solutions, and identifies relevant open research issues on the topic, providing guidelines for further contributions.
Keywords: Data management | Internet of Things | IoT applications | Big data | Industrial Internet of Things
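As a toy illustration of the schemaless storage such IoT data management systems need (not any specific system from the survey), the sketch below keeps heterogeneous device payloads as JSON in a single indexed table; the `store`/`query` helpers and field names are invented for illustration.

```python
import json
import sqlite3

# Minimal sketch of schemaless storage for heterogeneous IoT readings:
# each device type may report different fields, so the payload is kept
# as JSON and only common metadata (device id, timestamp) is indexed.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (device_id TEXT, ts INTEGER, payload TEXT)")
db.execute("CREATE INDEX idx_dev_ts ON readings (device_id, ts)")

def store(device_id: str, ts: int, payload: dict) -> None:
    """Persist a reading without a fixed schema for the payload."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (device_id, ts, json.dumps(payload)))

def query(device_id: str, since: int) -> list:
    """Return decoded payloads for one device from a given timestamp on."""
    rows = db.execute(
        "SELECT payload FROM readings WHERE device_id = ? AND ts >= ?",
        (device_id, since))
    return [json.loads(p) for (p,) in rows]

# Heterogeneous payloads from different device types coexist in one table.
store("thermo-1", 100, {"temp_c": 21.5})
store("cam-7", 101, {"motion": True, "frame": "f-0042"})
store("thermo-1", 102, {"temp_c": 21.9, "battery": 0.87})
```

A document-oriented NoSQL store would offer the same flexibility natively; the JSON-in-a-relational-column pattern above is just the smallest self-contained stand-in.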
TUORIS: A middleware for visualizing dynamic graphics in scalable resolution display environments
TUORIS: A middleware for visualizing dynamic graphics in scalable resolution display environments - 2020
In the era of big data, large-scale information visualization has become an important challenge. Scalable resolution display environments (SRDEs) have emerged as a technological solution for building high-resolution display systems by tiling lower resolution screens. These systems bring serious advantages, including lower construction cost and better maintainability compared to other alternatives. However, they require not only specialized software but also purpose-built content to suit the inherently complex underlying systems. This creates several challenges when designing big-data visualizations that can be reused across several SRDEs of varying dimensions. This is not yet a common practice but is becoming increasingly popular among those who engage in collaborative visual analytics in data observatories. In this paper, we define three key requirements for systems suitable for such environments, point out limitations of existing frameworks, and introduce Tuoris, a novel open-source middleware for visualizing dynamic graphics in SRDEs. Tuoris manages the complexity of distributing and synchronizing the information among different components of the system, eliminating the need for purpose-built content. This makes it possible for users to seamlessly port existing graphical content developed using standard web technologies, and simplifies the process of developing advanced, dynamic and interactive web applications for large-scale information visualization. Tuoris is designed to work with Scalable Vector Graphics (SVG), reducing bandwidth consumption and achieving high frame rates in visualizations with dynamic animations. It scales independently of the display wall resolution, in contrast with other frameworks that transmit visual information as blocks of images.
Keywords: distributed visualization | large-scale visualization | SVG
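The bandwidth argument behind vector-based distribution can be sketched as follows: instead of streaming rendered pixels, a server broadcasts compact SVG attribute mutations, and every tile applies them to its own copy of the scene. All names here (`Tile`, `Wall`, `broadcast`) are illustrative and not Tuoris's actual API.

```python
# Toy model of synchronizing an SVG scene across a tiled display wall.
class Tile:
    """One screen of the tiled display, holding a local SVG scene graph."""
    def __init__(self):
        self.scene = {}  # element id -> attribute dict

    def apply(self, mutation):
        elem, attr, value = mutation
        self.scene.setdefault(elem, {})[attr] = value

class Wall:
    """Keeps all tiles of the display wall in sync."""
    def __init__(self, n_tiles):
        self.tiles = [Tile() for _ in range(n_tiles)]

    def broadcast(self, mutation):
        # A mutation is a few bytes regardless of wall resolution --
        # the core reason such an approach scales independently of
        # the total pixel count.
        for tile in self.tiles:
            tile.apply(mutation)

wall = Wall(n_tiles=4)
wall.broadcast(("circle-1", "cx", 120))
wall.broadcast(("circle-1", "fill", "#ff0000"))
```

In a real SRDE each tile would additionally clip the shared scene to its own viewport; that step is omitted here for brevity.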
Industrial blockchain based framework for product lifecycle management in industry 4.0
Industrial blockchain based framework for product lifecycle management in Industry 4.0 - 2019
Product lifecycle management (PLM) aims to seamlessly manage all products and the information and knowledge generated throughout the product lifecycle for achieving business competitiveness. Conventionally, PLM is implemented on standalone, centralized systems provided by software vendors. PLM information is therefore hard to integrate and share among cooperating parties, and it is difficult to meet the requirements of openness, interoperability, and decentralization of the Industry 4.0 era. To address these challenges, this paper proposes an industrial blockchain-based PLM framework to facilitate data exchange and service sharing in the product lifecycle. Firstly, we propose the concept of the industrial blockchain as the use of blockchain technology in industry with the integration of IoT, M2M, and efficient consensus algorithms. It provides an open but secured information storage and exchange platform for multiple stakeholders to achieve openness, interoperability, and decentralization in the era of Industry 4.0. Secondly, we propose and develop a customized blockchain information service to establish the connection between a single node and the blockchain network. As a middleware, it can not only process the multi-source and heterogeneous data from various stages of the product lifecycle, but also broadcast the processed data to the blockchain network. Moreover, smart contracts are used to automate the alert services in the product lifecycle. Finally, we illustrate the blockchain-based application between cooperating partners in four emerging product lifecycle stages, including co-design and co-creation, quick and accurate tracking and tracing, proactive maintenance, and regulated recycling. A simulation experiment demonstrated the effectiveness and efficiency of the proposed framework. The results showed that the proposed framework is scalable and efficient, and hence feasible for adoption in industry.
With the successful development of the proposed platform, it is promising to provide an effective PLM for improving interoperability and cooperation between stakeholders in the entire product lifecycle.
Keywords: Product lifecycle management | Industrial blockchain | Smart contract | Industry 4.0
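The tamper-evidence property that a blockchain brings to lifecycle records can be shown with a minimal hash-linked chain: each block hashes its predecessor, so altering any earlier event invalidates every later link. This is a sketch of the general principle only; the paper's framework additionally involves consensus algorithms and smart contracts, and the event fields below are invented.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, event: dict) -> None:
    """Link a new lifecycle event to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "event": event})

def verify(chain: list) -> bool:
    """Recompute every link; any upstream mutation breaks a later 'prev'."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_event(ledger, {"stage": "design", "part": "P-100", "actor": "OEM"})
append_event(ledger, {"stage": "manufacture", "part": "P-100", "actor": "supplier"})
append_event(ledger, {"stage": "recycle", "part": "P-100", "actor": "recycler"})
```

Running `verify(ledger)` succeeds on the intact chain, while editing any earlier event makes it fail, which is precisely what lets multiple stakeholders share records without trusting a central PLM vendor.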
Enhancing middleware-based IoT applications through a relocatable run-time QoS management mechanism for an M2M-compliant IoT middleware
Publication year: 2018 - English PDF pages: 9 - Persian translation (doc) pages: 22
In recent years, telecommunications and computer networks have seen new concepts and technologies emerge through Network Function Virtualization (NFV) and Software-Defined Networking (SDN). SDN allows applications to control the network, and NFV enables the deployment of network functions in virtualized environments; these are two examples increasingly used for the Internet of Things (IoT). The IoT promises to connect billions of devices in the coming years, raising numerous scientific challenges, particularly with respect to satisfying the Quality of Service (QoS) required by IoT applications. To address this problem, we have identified two challenges related to QoS: the traversed networks and the intermediary entities that allow applications to communicate with IoT devices. In this paper, we first present an innovative vision of a "network function" with respect to its development and deployment environment. We then describe the general approach of a solution consisting of the dynamic, autonomous, and seamless deployment of QoS management mechanisms, and describe the provisions for implementing such an approach. Finally, we present a steering mechanism, implemented as a network function, that enables seamless control of the data path of a given middleware traffic flow. This mechanism is evaluated through a use case related to vehicular transport.
Keywords: Internet of Things | Quality of Service | Middleware | Prototype framework | Dynamic deployment | Network function | Autonomic computing
[Translated article]
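One concrete QoS management mechanism that could be deployed on the data path as a network function is a token bucket, which decides whether each middleware message is forwarded immediately or held back. The sketch below is purely illustrative; the class and parameter names are not taken from the paper.

```python
class TokenBucket:
    """Rate limiter: 'rate' tokens per second, burst capacity 'capacity'."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A burst of three messages exhausts the bucket; after a pause it refills.
bucket = TokenBucket(rate=2.0, capacity=2.0)
decisions = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)]  # -> [True, True, False, True]
```

Deploying such a mechanism as a virtualized network function, rather than baking it into the middleware, is what allows it to be inserted, moved, or removed at run time as the paper's approach envisions.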
Context-Aware Computing, Learning, and Big Data in Internet of Things: A Survey
Context-aware computing, learning, and big data in the Internet of Things: a survey - 2018
Internet of Things (IoT) has been growing rapidly due to recent advancements in communications and sensor technologies. Meanwhile, with this revolutionary transformation, researchers, implementers, deployers, and users are faced with many challenges. IoT is a complicated, crowded, and complex field; there are various types of devices, protocols, communication channels, architectures, middleware, and more. Standardization efforts are plenty, and this chaos will continue for quite some time. What is clear, on the other hand, is that IoT deployments are increasing with accelerating speed, and this trend will not stop in the near future. As the field grows in numbers and heterogeneity, “intelligence” becomes a focal point in IoT. Since data now becomes “big data,” understanding, learning, and reasoning with big data is paramount for the future success of IoT. One of the major problems on the path to intelligent IoT is understanding “context,” or making sense of the environment, situation, or status using data from sensors, and then acting accordingly in autonomous ways. This is called “context-aware computing,” and it now requires both sensing and, increasingly, learning, as IoT systems acquire more data and learn more effectively from this big data. In this survey, we review the field, first, from a historical perspective, covering ubiquitous and pervasive computing, ambient intelligence, and wireless sensor networks, and then move to context-aware computing studies. Finally, we review learning and big data studies related to IoT. We also identify the open issues and provide insight into future study areas for IoT researchers.
Index Terms: Big data in Internet of Things (IoT), context awareness, data management and analytics, machine learning in IoT
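The context-aware loop the survey describes (sense, infer a symbolic context, act autonomously) can be made concrete with a tiny rule-based sketch. The sensor fields, thresholds, and context labels below are invented for illustration; real systems increasingly learn such rules from big data instead of hand-coding them.

```python
def infer_context(readings: dict) -> str:
    """Fuse raw sensor values into a coarse situation label."""
    if readings["lux"] < 10 and readings["motion"] == 0:
        return "room-empty-dark"
    if readings["lux"] < 10 and readings["motion"] == 1:
        return "occupant-in-dark"
    return "room-active"

def act(context: str) -> str:
    """Choose an autonomous action for the inferred context."""
    return {"occupant-in-dark": "lights-on",
            "room-empty-dark": "standby"}.get(context, "no-op")

# Sense -> infer -> act: motion detected in a dark room turns lights on.
action = act(infer_context({"lux": 5, "motion": 1}))
```

Replacing `infer_context` with a classifier trained on historical sensor logs is exactly the shift from hand-crafted context rules to learning that the survey traces.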
Advancing distributed data management for the HydroShare hydrologic information system
Advancing distributed data management for the HydroShare hydrologic information system - 2018
HydroShare (https://www.hydroshare.org) is an online collaborative system to support the open sharing of hydrologic data, analytical tools, and computer models. Hydrologic data and models are often large, extending to multi-gigabyte or terabyte scale, and as a result, the scalability of centralized data management poses challenges for a system such as HydroShare. A distributed data management framework that enables distributed physical data storage and management in multiple locations thus becomes a necessity. We use the iRODS (Integrated Rule-Oriented Data System) data grid middleware as the distributed data storage and management back end in HydroShare. iRODS provides a unified virtual file system over distributed physical storage in multiple locations and enables data federation across geographically dispersed institutions around the world. In this paper, we describe the iRODS-based distributed data management approaches implemented in HydroShare to provide a practical demonstration of a production system for supporting big data in the environmental sciences.
Keywords: Distributed data management | Big data | Data sharing | Hydrologic information systems | Collaborative environment | iRODS
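The core idea of a data-grid virtual file system, one logical namespace whose entries resolve to physical replicas at different institutions, can be sketched in a few lines. The class and path names are illustrative; iRODS's real catalog and rule engine are far richer.

```python
class VirtualFS:
    """Maps one logical namespace onto distributed physical storage."""
    def __init__(self):
        self.catalog = {}  # logical path -> list of physical replicas

    def register(self, logical: str, physical: str) -> None:
        """Record one more physical replica for a logical path."""
        self.catalog.setdefault(logical, []).append(physical)

    def resolve(self, logical: str) -> str:
        """Return one physical replica for a logical path (first wins here;
        a real grid would choose by policy, locality, or load)."""
        return self.catalog[logical][0]

vfs = VirtualFS()
# The same dataset is federated across two geographically separate sites,
# but users only ever see the single logical path.
vfs.register("/hydroshare/basin42/flow.nc", "server-a:/data/flow.nc")
vfs.register("/hydroshare/basin42/flow.nc", "server-b:/mirror/flow.nc")
```

The decoupling shown here, where clients address logical paths while the catalog tracks where bytes actually live, is what lets storage scale out across institutions without changing user-facing paths.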
MIFIM—Middleware solution for service centric anomaly in future internet models
MIFIM: a middleware solution for service-centric anomaly in future Internet models - 2017
The Internet is evolving at a rapid pace into a trend called the ‘‘Future Internet (FI)’’. It can be defined as the union and cooperation of paradigms such as the Internet of Things (IoT), the Internet of Services (IoS), and the Internet of Content (IoC). In these paradigms, the role of Service-oriented Computing (SoC) deserves special attention. FI can be defined as an association of web services encompassing innovative services such as converged services, intelligent services, and related smart services for overcoming the structural limitations of the current Internet. Among the many key concerns in the services environment, service discovery and optimal service selection are considered vital. Service discovery enables the client to get access to the right service at the right time to complete the requested tasks, while service selection determines the feasible service composition that fulfils a set of conditions while maintaining a rich Quality of User Experience (QoUE) and good Quality of Service (QoS). This paper proposes the FI middleware named MIFIM (MIddleware for Future Internet Models), incorporating an Aspect Oriented Module (AOM) for addressing the challenges related in particular to the unknown topology and missing data estimation present in IoT service discovery, and an optimal service selection routine named the Composite Service Selection Module (CSSM) for deriving the best service composition in the IoS paradigm. The AOM is evaluated against the MUSIC pervasive computing middleware, while the CSSM is compared with other optimality approaches. Experimental results were found encouraging, and the proposed components performed reasonably well compared to similar solutions.
Keywords: Future Internet (FI) | Middleware | Internet of Things (IoT) | QoS (Quality of Service) | QoUE (Quality of User Experience) | PSO (Particle Swarm Optimization)
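The objective behind composite service selection can be made explicit with a toy scorer: each composition is rated by weighted QoS attributes (latency is additive across the chain, reliability multiplicative), and the best-scoring one is kept. The paper's CSSM searches this space with particle swarm optimization; the exhaustive search below is only viable for tiny candidate sets, and all weights and service names are invented.

```python
import itertools

def utility(composition, w_latency=0.5, w_reliability=0.5):
    """Weighted QoS score: reward reliability, penalize total latency (ms)."""
    latency = sum(s["latency"] for s in composition)       # additive
    reliability = 1.0
    for s in composition:                                  # multiplicative
        reliability *= s["reliability"]
    return w_reliability * reliability - w_latency * latency / 100.0

def select(candidates_per_task):
    """Pick one candidate per task, maximizing the composition's utility."""
    return max(itertools.product(*candidates_per_task), key=utility)

tasks = [
    [{"name": "geo-a", "latency": 20, "reliability": 0.99},
     {"name": "geo-b", "latency": 5, "reliability": 0.90}],
    [{"name": "pay-a", "latency": 10, "reliability": 0.95}],
]
best = select(tasks)  # trades a little reliability for much lower latency
```

Swapping `max` over `itertools.product` for a metaheuristic such as PSO changes only how the space is explored, not the utility being optimized, which is why defining the objective cleanly matters.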
Optimized task allocation on private cloud for hybrid simulation of large-scale critical systems
Optimized task allocation on private cloud for hybrid simulation of large-scale critical systems - 2017
Simulation represents a powerful technique for the analysis of dependability and performance aspects of distributed systems. For large-scale critical systems, simulation demands complex experimentation environments and the integration of different tools, in turn requiring sophisticated modeling skills. Moreover, the criticality of the involved systems implies the set-up of expensive testbeds on private infrastructures. This paper presents a middleware for performing hybrid simulation of large-scale critical systems. The services offered by the middleware allow the integration and interoperability of simulated and emulated subsystems, compliant with the reference interoperability standards, which can provide greater realism of the scenario under test. The hybrid simulation of complex critical systems is a research challenge due to the interoperability issues of emulated and simulated subsystems and to the cost associated with the scenarios to set up, which involve a large number of entities and expensive long running simulations. Therefore, a multi-objective optimization approach is proposed to optimize the simulation task allocation on a private cloud.
Keywords: Large-scale critical systems | Hybrid simulation | Resource optimization | Middleware | Cloud computing
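The flavor of the multi-objective allocation problem can be sketched with a greedy heuristic that places simulation tasks on private-cloud hosts while minimizing a weighted sum of cost and load. This stands in for the paper's actual optimizer only as a definition of the trade-off; the hosts, weights, and scoring rule are invented.

```python
def allocate(tasks, hosts, w_cost=0.5, w_load=0.5):
    """Greedy: give each task to the host with the lowest marginal score,
    balancing monetary cost against accumulated load."""
    load = {h["name"]: 0.0 for h in hosts}
    placement = {}
    for task in sorted(tasks, key=lambda t: -t["cpu"]):  # big tasks first
        def score(h):
            return (w_cost * h["cost"] * task["cpu"]          # cost objective
                    + w_load * (load[h["name"]] + task["cpu"]))  # load objective
        best = min(hosts, key=score)
        placement[task["name"]] = best["name"]
        load[best["name"]] += task["cpu"]
    return placement

hosts = [{"name": "h1", "cost": 1.0}, {"name": "h2", "cost": 1.5}]
tasks = [{"name": "sim-a", "cpu": 4}, {"name": "sim-b", "cpu": 4},
         {"name": "sim-c", "cpu": 1}]
plan = allocate(tasks, hosts)
```

A genuine multi-objective formulation would keep the whole Pareto front rather than scalarizing with fixed weights, but the weighted sum makes the tension between the two objectives visible in a few lines.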
Improving the gossiping effectiveness with distributed strategic learning (Invited paper)
Improving the gossiping effectiveness with distributed strategic learning (invited paper) - 2017
Gossiping is a widely known and successful approach to reliable communications, tolerating packet losses and link crashes. It has been extensively used in several kinds of middleware, such as event notification services, and in application domains like infrastructures for air traffic management, power grid control, and health information exchange, to cite just a few. Despite achieving high degrees of loss-tolerance and scalability, gossiping suffers from degraded performance and heavy traffic loads on the network. For this reason, it may not be optimal in applications where reliability must be provided jointly with timeliness and/or in congestion-prone networks. The crucial aspect in improving a gossiping scheme is deciding which nodes should receive a gossiping message, and our driving idea is to adopt a distributed strategic learning logic to determine such nodes in an efficient manner. This resolves gossiping’s weaknesses and achieves better performance and reduced traffic loads. This paper describes how to introduce strategic learning into a gossip scheme so as to determine the best set of nodes to which gossip messages should be sent and to optimize their utility. This solution has been experimentally assessed through a set of simulations demonstrating the effectiveness of the proposal.
Keywords: Event-based communications | Gossiping | Reliable multicasting | Strategic learning | Game theory | Reinforcement learning
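The idea of replacing blind random fanout with a learned choice of gossip targets can be sketched with a simple epsilon-greedy bandit: keep a per-peer estimate of delivery success and usually pick the best peer, occasionally exploring. This illustrates the general reinforcement-learning flavor only; the paper's game-theoretic scheme is more elaborate, and all names here are invented.

```python
import random

class LearningGossiper:
    """Epsilon-greedy selection of gossip targets from delivery feedback."""
    def __init__(self, peers, epsilon=0.1, seed=0):
        self.values = {p: 0.0 for p in peers}   # running success estimate
        self.counts = {p: 0 for p in peers}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def pick_target(self):
        if self.rng.random() < self.epsilon:                 # explore
            return self.rng.choice(sorted(self.values))
        return max(self.values, key=self.values.get)         # exploit

    def feedback(self, peer, delivered: bool):
        """Incremental-mean update of the peer's delivery success rate."""
        self.counts[peer] += 1
        self.values[peer] += (float(delivered) - self.values[peer]) / self.counts[peer]

g = LearningGossiper(["n1", "n2", "n3"])
for _ in range(20):
    g.feedback("n2", True)    # n2 reliably relays gossip
    g.feedback("n1", False)   # n1 drops messages
```

After this feedback, the gossiper concentrates traffic on the reliable relay instead of spraying messages uniformly, which is exactly the mechanism by which learning reduces redundant gossip load.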
MidHDC: Advanced topics on middleware services for heterogeneous distributed computing: Part 2
MidHDC: Advanced topics on middleware services for heterogeneous distributed computing: Part 2-2017
Currently, distributed systems support different computing paradigms like Cluster Computing, Grid Computing, Peer-to-Peer Computing, and Cloud Computing, all involving elements of heterogeneity. These distributed computing systems are often characterized by a variety of resources that may or may not be coupled with specific platforms or environments. All these topics challenge today's researchers, due to the strongly dynamic behavior of the user communities and of the resource collections they use. The second part of this special issue presents advances in allocation algorithms, service selection, VM consolidation and mobility policies, scheduling of multiple virtual environments and scientific workflows, optimization of the scheduling process, energy-aware scheduling models, failure recovery in shared big data processing systems, distributed transaction processing middleware, data storage, trust evaluation, information diffusion, mobile systems, and the integration of robots into Cloud systems.
Keywords: Middleware services | Resource management | Mobile computing | Cloud computing | HPC | Heterogeneous distributed systems