Constructing an efficient Connected Dominating Set with maximum lifetime in cognitive radio networks
Publication year: 2019 | English PDF: 7 pages | Persian translation (DOC): 15 pages
A Connected Dominating Set (CDS) implements the virtual backbone in wireless networks and plays an important role in wireless applications such as broadcasting, routing, and so on. In cognitive radio networks (CRNs), because of the random activities of primary users (PUs), lifetime and efficiency are two important metrics for evaluating CDS algorithms. However, the best existing algorithms for constructing a CDS in CRNs ignore execution efficiency in favor of lifetime. In this paper, a four-phase distributed algorithm is presented that maximizes the CDS lifetime while guaranteeing the efficiency of the algorithm. The proposed algorithm terminates in O(N³ log N) time, which is more efficient than O(N⁴).
Keywords: Connected Dominating Set | Cognitive Radio Networks | Maximum Lifetime | Distributed Algorithm
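The virtual-backbone idea behind a CDS can be illustrated with a minimal construction: the internal (non-leaf) nodes of any spanning tree of a connected graph dominate every node and induce a connected subgraph. The sketch below uses a BFS tree; it only illustrates the concept and is not the paper's four-phase, lifetime-aware CRN algorithm.

```python
from collections import deque

def spanning_tree_cds(adj, root):
    """Connected Dominating Set via a BFS spanning tree: the internal
    (non-leaf) tree nodes dominate every node and induce a connected set.
    Sketch of the 'virtual backbone' idea only; lifetime and PU activity
    are not modeled. adj maps each node to the set of its neighbors."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    # Internal nodes are exactly the nodes that are some node's parent.
    return set(parent.values()) - {None}
```

On a path a–b–c–d rooted at a, this returns {a, b, c}: every node is in the set or adjacent to it, and the set is connected.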
Parallel algorithms for flexible pattern matching on big graphs (2018)
Strong simulation is a state-of-the-art approximate scheme in graph pattern matching. This scheme consistently finds high-quality results compared to other schemes. However, as the Web and social networks are increasingly used in human lives, the scale of the data grows extremely large. As a result, such big graphs are often stored in distributed environments in order to be managed efficiently. Unfortunately, the current distributed algorithm for strong simulation is not efficient and cannot be applied to real applications. In this paper, we propose efficient parallel algorithms for strong simulation in the distributed setting. The contributions include: (1) We convert the calculation of strong simulation into calculating a relatively small set of partial results for each partition of the pattern, suitable for a distributed system. (2) We develop a method to reduce data shipment and the time complexity of local computation in the distributed setting. (3) We split the distributed calculation of strong simulation into an offline redistribution algorithm and an online matching algorithm. The major data shipment is involved in the offline algorithm, while the online algorithm is highly parallel and much more efficient than current algorithms. (4) Through experiments on both real and synthetic data, we verify the efficiency of our distributed algorithms and the effectiveness of our scheme without large intermediate results.
Keywords: Graph query | Graph simulation | Distributed algorithms | Strong simulation
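Strong simulation is built on top of plain graph simulation, which can be computed by a simple fixpoint refinement: start from label-compatible candidates and repeatedly discard data nodes that cannot match some pattern edge. A minimal sketch follows; the locality and duality conditions that distinguish strong simulation, and the distributed partitioning, are not modeled here.

```python
def graph_simulation(pattern, data, plabel, dlabel):
    """Basic graph-simulation fixpoint (the building block that strong
    simulation refines). pattern/data: adjacency dicts mapping each node
    to the set of its successors; plabel/dlabel: node -> label."""
    # Initialize candidate matches by label compatibility.
    sim = {p: {u for u in data if dlabel[u] == plabel[p]} for p in pattern}
    changed = True
    while changed:
        changed = False
        for p in pattern:
            for succ in pattern[p]:
                # u can simulate p only if some successor of u simulates succ.
                keep = {u for u in sim[p] if data[u] & sim[succ]}
                if keep != sim[p]:
                    sim[p] = keep
                    changed = True
    return sim
```

For a pattern edge A→B with labels a→b, a data node labeled `a` survives only if it has a successor labeled `b`.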
A tutorial on modeling and analysis of dynamic social networks: Part I (2017)
In recent years, we have observed a significant trend towards filling the gap between social network analysis and control. This trend was enabled by the introduction of new mathematical models describing the dynamics of social groups, advances in complex networks theory and multi-agent systems, and the development of modern computational tools for big data analysis. The aim of this tutorial is to bring a novel chapter of control theory, dealing with applications to social systems, to the attention of the broad research community. This paper is the first part of the tutorial, and it is focused on the most classical models of social dynamics and on their relations to recent achievements in multi-agent systems.
Keywords: Social network | Opinion dynamics | Multi-agent systems | Distributed algorithms
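Among the classical opinion-dynamics models such a tutorial surveys, the DeGroot averaging model is the simplest: each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions, given by a row-stochastic influence matrix W. A minimal sketch:

```python
def degroot_step(x, W):
    """One round of the classical DeGroot model: agent i's new opinion is
    the W-weighted average of all current opinions. W must be
    row-stochastic (each row sums to 1)."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

def degroot(x, W, rounds=100):
    """Iterate the averaging map for a fixed number of rounds."""
    for _ in range(rounds):
        x = degroot_step(x, W)
    return x
```

Under mild connectivity assumptions on W, repeated averaging drives all opinions to a common consensus value.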
Coordinated multi-aircraft 4D trajectories planning considering buffer safety distance and fuel consumption optimization via pure-strategy game (2017)
In this paper, we consider a coordinated multi-aircraft 4D (3D space plus time) trajectories planning problem, illustrated by planning 4D trajectories for aircraft traversing an Air Traffic Control (ATC) sector. The planned 4D trajectories need to specify each aircraft's position at any time, ensuring conflict-freeness and reducing fuel and delay costs, with possible aircraft maneuvers such as speed adjustment and flight level change. Unlike most existing literature, the impact of buffer safety distance is also taken into consideration, and conflict-freeness is guaranteed at any given time (not only at discrete time instances). The problem is formulated as a pure-strategy game with aircraft as players and all possible 4D trajectories as strategies. An efficient maximum-improvement distributed algorithm is developed to find an equilibrium at which no aircraft can unilaterally improve further, without enumerating all possible 4D trajectories in advance. Proofs of the existence of the equilibrium and of the convergence of the algorithm are given. A case study based on real air traffic data shows that the algorithm is able to solve 4D trajectories for online application with an estimated 16.7% reduction in monetary costs, and to allocate abundant buffer safety distance at the minimum separation point. Scalability of the algorithm is verified by computational experiments.
Keywords: 4D trajectory | Conflict-free | Pure-strategy game | Air traffic management
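The maximum-improvement idea can be sketched for a generic finite game: at each round, the player with the largest unilateral gain switches strategy, and the process stops at a profile no player can improve, i.e., a pure Nash equilibrium. The sketch below enumerates a small strategy set for illustration only; the paper's algorithm specifically avoids enumerating all 4D trajectories in advance, and the `utility` function here is a placeholder, not the paper's cost model.

```python
def max_improvement_dynamics(strategies, utility, profile, max_rounds=1000):
    """Maximum-improvement dynamics for a finite pure-strategy game:
    each round, the player with the largest unilateral gain switches.
    Stops at a profile where no player can improve (a pure Nash
    equilibrium; guaranteed to exist e.g. in potential games)."""
    for _ in range(max_rounds):
        best = None  # (gain, player, new_strategy)
        for i, opts in enumerate(strategies):
            base = utility(i, profile)
            for s in opts:
                trial = profile[:i] + [s] + profile[i + 1:]
                gain = utility(i, trial) - base
                if gain > 1e-12 and (best is None or gain > best[0]):
                    best = (gain, i, s)
        if best is None:
            return profile  # equilibrium reached
        _, i, s = best
        profile = profile[:i] + [s] + profile[i + 1:]
    return profile
```

In a two-player coordination game (payoff 1 when both choose the same strategy), the dynamics converge to a matching profile in one switch.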
Forming time-stable homogeneous groups into Online Social Networks (2017)
In this work we investigate the time-stability of homogeneity – in terms of mutual users' similarity within groups – in real Online Social Networks, taking into account users' behavioral information in the form of personal interests. To this purpose, we introduce a conceptual framework to represent the time evolution of group formation in an OSN. The framework includes a specific experimental approach adopted along with a flexible, distributed algorithm (U2G) designed to drive group formation by weighting two different measures, mutual trust relationships and similarity, combined into a score denoted by compactness. An experimental campaign has been carried out on datasets extracted from two social networks, CIAO and EPINIONS, and the results show that the time-stability of the similarity measure for groups formed by the U2G algorithm based on the sole similarity criterion is lower than that of groups formed by considering similarity and trust together, even when the weight assigned to the trust component is small.
Keywords: Online Social Network | Similarity | Homogeneity | Reputation | Trust
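As an illustration of how trust and similarity might be weighted into a single compactness score, consider a convex combination; the weight `alpha` and the exact combination rule are assumptions for illustration, not taken from the paper.

```python
def compactness(trust, similarity, alpha):
    """Illustrative compactness score: a convex combination of mutual
    trust and interest similarity, with weight alpha on trust.
    (Assumed form; U2G's actual combination rule may differ.)"""
    return alpha * trust + (1 - alpha) * similarity

def best_group(group_scores, alpha):
    """Pick the candidate group whose (trust, similarity) pair
    maximizes the compactness score."""
    return max(group_scores, key=lambda g: compactness(*group_scores[g], alpha))
```

With a trust-heavy weight the high-trust group wins; with a similarity-heavy weight the high-similarity group wins, mirroring the trade-off the experiments study.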
DC programming and DCA for enhancing physical layer security via cooperative jamming (2017)
The explosive development of computational tools these days is threatening the security of cryptographic algorithms, which are regarded as the primary traditional methods for ensuring information security. The physical layer security approach is introduced as a method both for improving confidentiality of the secret key distribution in cryptography and for enabling data transmission without relying on higher-layer encryption. In this paper, the cooperative jamming paradigm - one of the techniques used at the physical layer - is studied, and the resulting power allocation problem, with the aim of maximizing the sum of secrecy rates subject to power constraints, is formulated as a nonconvex optimization problem. The objective function is a so-called DC (Difference of Convex functions) function, and some constraints are coupling. We propose a new DC formulation and develop an efficient DCA (DC Algorithm) to deal with this nonconvex program. DCA introduces the elegant concept of approximating the original nonconvex program by a sequence of convex ones: each iteration of DCA requires the solution of a convex subproblem. The main advantage of the proposed approach is that it leads to strongly convex quadratic subproblems with separate variables in the objective function, which can be tackled by both distributed and centralized methods. One of the major contributions of the paper is to develop a highly efficient distributed algorithm to solve the convex subproblem. We adopt the dual decomposition method, which results in iteratively computing the projection of points onto a very simple structural set that can be determined by an inexpensive procedure. The numerical results show the efficiency and superiority of the new DCA-based algorithm compared with existing approaches.
Keywords: Physical layer security | Cooperative jamming | Resource allocation | DC programming and DCA
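The core DCA loop for minimizing f = g − h (both convex) is short: take a subgradient y of h at the current point, then solve the convex subproblem min over x of g(x) − y·x. The one-dimensional toy instance below (f(x) = x² − 2|x|, so g(x) = x² and h(x) = 2|x|) only illustrates the scheme; the paper's secrecy-rate objective and dual-decomposition subproblem solver are not modeled here.

```python
def dca_step(x, grad_h, argmin_g_linear):
    """One DCA iteration for f = g - h: linearize h at x via a
    subgradient y, then solve the convex subproblem min_x g(x) - y*x."""
    y = grad_h(x)
    return argmin_g_linear(y)

def dca(x0, grad_h, argmin_g_linear, iters=50):
    """Run DCA until the iteration budget is exhausted; the objective
    value is non-increasing along the iterates."""
    x = x0
    for _ in range(iters):
        x = dca_step(x, grad_h, argmin_g_linear)
    return x

# Toy instance: f(x) = x**2 - 2*abs(x), i.e. g(x) = x**2, h(x) = 2*abs(x).
# A subgradient of h is 2*sign(x); the minimizer of x**2 - y*x is y/2.
```

Starting from any positive point, the iteration jumps to x = 1 (a global minimizer with f = −1) and stays there; starting negative, it settles at x = −1.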
A Distributed Decision Tree Algorithm and Its Implementation on Big Data Platforms (2016)
Decision tree algorithms are very popular in the field of data mining. This paper proposes a distributed decision tree algorithm and shows examples of its implementation on big data platforms. The major contribution of this paper is the novel KS-Tree algorithm, which builds a decision tree in a distributed environment. KS-Tree is applied to some real-world data mining problems and compared with state-of-the-art decision tree techniques implemented in R and Apache Spark. The results show that KS-Tree can achieve better results, especially with large data sets. Furthermore, we demonstrate that KS-Tree can be applied to various data mining tasks, such as variable selection.
Keywords: Big data | CHAID | Data Mining | Decision Tree | Distributed Algorithm | KS-Tree
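The abstract does not spell out KS-Tree, but a KS-style split criterion can be sketched: for a numeric feature in a binary classification task, choose the threshold that maximizes the Kolmogorov–Smirnov statistic, i.e., the largest gap between the two class-conditional empirical CDFs. This only illustrates the criterion, not the distributed tree-building algorithm itself.

```python
def ks_best_split(values, labels):
    """Pick the split threshold for one numeric feature that maximizes
    the Kolmogorov-Smirnov statistic |F1(t) - F0(t)| between the
    class-conditional empirical CDFs. labels are 0/1.
    Illustrative sketch of a KS-based split criterion."""
    pairs = sorted(zip(values, labels))
    n1 = sum(labels)
    n0 = len(labels) - n1
    c0 = c1 = 0
    best_ks, best_t = -1.0, None
    for v, y in pairs:
        if y:
            c1 += 1
        else:
            c0 += 1
        ks = abs(c1 / n1 - c0 / n0)
        if ks > best_ks:
            best_ks, best_t = ks, v
    return best_t, best_ks
```

On perfectly separated data the KS statistic reaches 1.0 exactly at the boundary value.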
Extracting Kernel Dataset from Big Sensory Data in Wireless Sensor Networks (2016)
The amount of sensory data exhibits explosive growth due to the increasing popularity of Wireless Sensor Networks (WSNs). The scale of sensory data in many applications has already exceeded several petabytes annually, which is beyond the computation and transmission capabilities of conventional WSNs. On the other hand, the information carried by big sensory data is highly redundant because of the strong correlation among sensory data. In this paper, we introduce the novel concept of the ϵ-Kernel Dataset, which is only a small data subset yet can represent the vast information carried by big sensory data with an information loss rate of less than ϵ, where ϵ can be arbitrarily small. We prove that finding the minimum ϵ-Kernel Dataset is polynomial-time solvable and provide a centralized algorithm with O(n³) time complexity. Furthermore, a distributed algorithm with constant complexity O(1) is designed. It is shown that the result returned by the distributed algorithm can satisfy the ϵ requirement with a near-optimal size. Furthermore, two distributed algorithms for maintaining the correlation coefficients among sensor nodes are developed. Finally, extensive real-experiment and simulation results are presented, indicating that all the proposed algorithms achieve high performance in terms of accuracy and energy efficiency.
Index Terms: Big Sensory Data | Kernel Dataset | Wireless Sensor Networks
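The redundancy argument can be illustrated with a greedy correlation filter: keep a sensor's series only when no already-kept series is correlated with it above a threshold, so every dropped series is represented by a kept one. This merely illustrates exploiting correlation; the paper's ϵ-Kernel algorithms come with formal information-loss guarantees and complexity bounds that this sketch does not reproduce.

```python
def correlation_kernel(series, rho):
    """Greedy redundancy filter over sensor time series: keep series i
    only if every already-kept series has |Pearson correlation| < rho
    with it. Returns the indices of the kept (representative) series.
    Illustration only; not the paper's epsilon-Kernel construction."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    kept = []
    for i, s in enumerate(series):
        if all(abs(pearson(s, series[j])) < rho for j in kept):
            kept.append(i)
    return kept
```

A series that is an exact multiple of a kept one is dropped, while weakly correlated series are retained.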
On Traffic-Aware Partition and Aggregation in MapReduce for Big Data Applications (2016)
The MapReduce programming model simplifies large-scale data processing on commodity clusters by exploiting parallel map tasks and reduce tasks. Although many efforts have been made to improve the performance of MapReduce jobs, they ignore the network traffic generated in the shuffle phase, which plays a critical role in performance enhancement. Traditionally, a hash function is used to partition intermediate data among reduce tasks, which, however, is not traffic-efficient because the network topology and the data size associated with each key are not taken into consideration. In this paper, we study how to reduce the network traffic cost of a MapReduce job by designing a novel intermediate-data partition scheme. Furthermore, we jointly consider the aggregator placement problem, where each aggregator can reduce the merged traffic from multiple map tasks. A decomposition-based distributed algorithm is proposed to deal with the large-scale optimization problem for big data applications, and an online algorithm is also designed to adjust data partition and aggregation in a dynamic manner. Finally, extensive simulation results demonstrate that our proposals can significantly reduce the network traffic cost in both offline and online cases.
Index Terms: MapReduce | partition | aggregation | big data | Lagrangian decomposition
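The gap between hash partitioning and traffic-aware partitioning can be shown with a toy cost model: given per-mapper data volumes for each key and a per-byte mapper-to-node cost, a hash partitioner ignores topology, while a greedy assignment places each key's reducer on its cheapest node. The cost model and greedy rule below are illustrative assumptions, a drastic simplification of the paper's joint partition/aggregation optimization.

```python
def traffic_aware_partition(key_volumes, cost):
    """Contrast hash partitioning with a traffic-aware assignment.
    key_volumes[k][m]: bytes mapper m emits for key k.
    cost[m][n]: per-byte network cost from mapper m to a reducer on node n.
    Returns (greedy assignment, hash assignment) of keys to nodes."""
    n_nodes = len(cost[0])

    def shipment(k, n):
        # Total weighted traffic if key k's reducer is placed on node n.
        return sum(v * cost[m][n] for m, v in key_volumes[k].items())

    greedy = {k: min(range(n_nodes), key=lambda n: shipment(k, n))
              for k in key_volumes}
    hashed = {k: hash(k) % n_nodes for k in key_volumes}
    return greedy, hashed
```

With two mappers local to two nodes, the greedy rule co-locates each key's reducer with the mapper that produces most of its data, while hashing may ship the bulk of the data across the network.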
Overlay live video streaming with heterogeneous bitrate requirements (2014)
We study a streaming cloud formed by distributed proxies providing live video service to diverse users (e.g., smart TVs, PCs, tablets, mobile phones, etc.). The proxies form a push-based overlay network, with each proxy serving a certain video bitrate for users to join. To form a proxy overlay serving heterogeneous bitrates, we consider that the video is encoded into multiple MDC (Multiple-Description Coding) streams, with the serving bitrate of proxy i being k_i description streams. In order to effectively mitigate stream disruption due to node churns, proxy i also joins an additional r_i redundant MDC streams (r_i ≥ 0) in such a way that all the (k_i + r_i) streams are supplied by distinct parents. For live streaming, the critical issue is how to construct the parent-disjoint trees minimizing the assembly delay of the proxies. We present a realistic delay model capturing important system parameters and delay components, formulate the optimization problem, and show that it is NP-hard. We propose a centralized algorithm which is useful for a centrally-managed network and serves as a benchmark for comparison (PADTrees-Centralized). For large networks, we propose a simple distributed algorithm which continuously reduces delay through overlay adaptation (PADTrees-Distributed). Through extensive simulation on real Internet topologies, we show that high stream continuity can be achieved with push-based trees in the presence of node churns. Our algorithms are simple and effective, achieving low loss and low delay.
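The parent-disjoint requirement, i.e., that each of a proxy's (k_i + r_i) streams comes from a distinct parent, can be sketched as a greedy matching that serves the most constrained streams first. The candidate sets and the greedy rule are illustrative assumptions; the paper's PADTrees algorithms additionally minimize assembly delay, which this sketch ignores.

```python
def assign_distinct_parents(candidates, demand):
    """Greedy matching of a proxy's required streams to distinct parents.
    candidates[s]: parents that carry stream s; demand: streams needed.
    Streams with the fewest candidate parents are assigned first.
    Sketch only; delay minimization (PADTrees) is not modeled."""
    used, assignment = set(), {}
    for s in sorted(demand, key=lambda s: len(candidates[s])):
        for p in candidates[s]:
            if p not in used:
                used.add(p)
                assignment[s] = p
                break
    return assignment
```

Serving the most constrained stream first lets the unique supplier of stream 1 be reserved before more flexible streams consume it.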