No. | Title | Type |
---|---|---|
1 |
Intelligent context-aware fog node discovery
(2022) Fog computing has been proposed as a mechanism to address certain issues in
cloud computing such as latency, storage, network bandwidth, etc. Fog computing brings the processing, storage, and networking to the edge of the network
near the edge devices, which we call fog consumers. This reduces latency,
network bandwidth usage, and response time. Discovering the most relevant fog node,
the nearest one to the fog consumers, is a critical challenge that has yet to be addressed by research. In this study, we present the Intelligent and Distributed
Fog node Discovery mechanism (IDFD), an intelligent approach that enables fog consumers to discover appropriate fog nodes in a context-aware manner.
The proposed approach is based on the distributed fog registries between fog consumers and fog nodes that can facilitate the discovery process of fog nodes. In
this study, the KNN, K-d tree, and brute force algorithms are used to discover
fog nodes based on the context-aware criteria of fog nodes and fog consumers.
The proposed framework is simulated using OMNET++, and the performance of
the proposed algorithms is compared based on performance metrics and execution
time. The accuracy and execution time are the major points of consideration in
the selection of an optimal fog search algorithm. The experiment results show
that the KNN and K-d tree algorithms achieve the same accuracy of 95%.
However, the K-d tree method takes less time to find the nearest fog nodes than
KNN and brute force. Thus, the K-d tree is selected as the fog search algorithm
in the IDFD to discover the nearest fog nodes very efficiently and quickly.
keywords: Fog node | Discovery | Context-aware | Intelligent | Fog node discovery |
English article |
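The abstract compares KNN, K-d tree, and brute-force search over context-aware criteria. As a rough illustration of the idea (not the paper's IDFD implementation), a brute-force KNN over hypothetical normalised context vectors (latency, load, distance) might look like:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two context vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_discover(consumer_ctx, fog_nodes, k=1):
    """Return the k fog nodes whose context vectors are closest
    to the consumer's context (brute-force KNN)."""
    ranked = sorted(fog_nodes.items(),
                    key=lambda kv: euclidean(consumer_ctx, kv[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical context vectors: (normalised latency, load, distance)
fog_nodes = {
    "fog-A": (0.2, 0.5, 0.1),
    "fog-B": (0.8, 0.1, 0.9),
    "fog-C": (0.3, 0.4, 0.3),
}
consumer = (0.25, 0.45, 0.15)
print(knn_discover(consumer, fog_nodes, k=2))  # → ['fog-A', 'fog-C']
```

A K-d tree (as selected in the paper) returns the same neighbours but avoids scanning every node, which is what makes it faster than brute force at scale.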
2 |
Deep convolutional neural networks-based Hardware–Software on-chip system for computer vision application
(2022) Embedded vision systems are the best solutions for high-performance and lightning-fast inspection tasks. As everyday life evolves, it becomes almost imperative to harness artificial
intelligence (AI) in vision applications that make these systems intelligent and able to make
decisions close to or similar to humans. In this context, integrating AI into embedded
systems poses many challenges, given that AI performance depends on the volume and
quality of the data it assimilates to learn and improve. This comes back to the energy-consumption and
cost constraints of FPGA-SoCs, which have limited processing, memory, and communication
capacity. Despite this, implementing AI algorithms on embedded systems can drastically
reduce energy consumption and processing times, while reducing the costs and risks associated
with data transmission. Therefore, its efficiency and reliability always depend on the designed
prototypes. Within this range, this work proposes two different designs for the Traffic Sign
Recognition (TSR) application based on the convolutional neural network (CNN) model,
followed by three implementations on the PYNQ-Z1. Firstly, we propose to implement the CNN-based
TSR application on the PYNQ-Z1 processor. Considering its runtime result of around 3.55 s,
there is room for improvement using programmable logic (PL) and processing system (PS) in a
hybrid architecture. Therefore, we propose a streaming architecture, in which the CNN layers
will be accelerated to provide a hardware accelerator for each layer where direct memory
access (DMA) interface is used. With this we observed efficient power consumption, reduced
hardware cost, and an execution time improved to 2.13 s, but there was still room for design
optimizations. Finally, we propose a second co-design, in which the CNN is accelerated
as a single computation engine using a BRAM interface. The implementation results
show that our proposed embedded TSR design achieves the best performance compared to the
first two architectures, with an execution time of about 0.03 s, a computation roof of
about 36.6 GFLOPS, and a bandwidth roof of about 3.2 GByte/s.
keywords: CNN | FPGA | Acceleration | Co-design | PYNQ-Z1 |
English article |
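The reported computation roof (36.6 GFLOPS) and bandwidth roof (3.2 GByte/s) are the two ceilings of a roofline model. A minimal sketch of how these two numbers bound attainable performance (the roof figures come from the abstract; the arithmetic-intensity inputs are hypothetical):

```python
def attainable_gflops(intensity_flops_per_byte, comp_roof=36.6, bw_roof=3.2):
    """Roofline model: performance is capped either by the compute
    roof or by memory bandwidth times arithmetic intensity."""
    return min(comp_roof, bw_roof * intensity_flops_per_byte)

# Ridge point: the arithmetic intensity at which a kernel stops being
# memory-bound and becomes compute-bound.
ridge = 36.6 / 3.2
print(f"ridge point ≈ {ridge:.2f} FLOP/byte")
print(attainable_gflops(4.0))   # memory-bound: 3.2 * 4.0 = 12.8 GFLOPS
print(attainable_gflops(20.0))  # compute-bound: capped at 36.6 GFLOPS
```

Kernels (such as CNN layers) with intensity below the ridge point are limited by the BRAM/DMA bandwidth, not by the compute engine.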
3 |
Lossy Data Compression for IoT Sensors: A Review
(2022) The Internet of Things (IoT) can be considered a suitable platform for industrial applications,
enabling large systems that connect a huge number of intelligent sensors and subsequent data
collection for analytical applications. This factor is responsible for the substantial increase in
the current volume of data generated by IoT devices. The large volume of data generated by
IoT sensors can lead to unusual demands on cloud storage and data transmission bandwidths.
A suitable way to address these issues is data compression. This
article presents a systematic review of the literature on lossy data compression algorithms that
allow systems to reduce the volume of data sensed by IoT devices. Lossy algorithms achieve a good
compression ratio while preserving data quality and minimizing compression error. A taxonomy was
proposed from the review results, and the main works were classified, analyzed, and discussed.
keywords: Data compression | IoT (Internet of Things) | Lossy compression | Sensors |
English article |
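As a concrete, simplified example of lossy compression for slowly varying sensor streams (a dead-band scheme, chosen here for illustration; the review surveys many algorithms), a sample is kept only when it deviates from the last kept value by more than a threshold, so the reconstruction error stays bounded by that threshold:

```python
def deadband_compress(samples, eps=0.5):
    """Keep a sample only when it deviates from the last kept value
    by more than eps; a simple lossy scheme for slowly varying sensors."""
    if not samples:
        return []
    kept = [(0, samples[0])]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - kept[-1][1]) > eps:
            kept.append((i, v))
    return kept

def decompress(kept, n):
    """Zero-order-hold reconstruction: repeat the last kept value."""
    out, j = [], 0
    for i in range(n):
        if j + 1 < len(kept) and kept[j + 1][0] <= i:
            j += 1
        out.append(kept[j][1])
    return out

readings = [20.0, 20.1, 20.2, 21.0, 21.1, 23.5, 23.4]
kept = deadband_compress(readings, eps=0.5)
print(kept)                           # → [(0, 20.0), (3, 21.0), (5, 23.5)]
print(decompress(kept, len(readings)))
```

Here 7 readings shrink to 3 stored points, and no reconstructed value is off by more than eps from the original.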
4 |
A Scalable Emulator for Quantum Fourier Transform Using Multiple-FPGAs With High-Bandwidth-Memory
(2022) Quantum computing is regarded as the future of computing, hopefully providing exponentially larger processing power than conventional digital computing. However, current quantum
computers cannot correct errors caused by environmental noise, so it is difficult
to run useful algorithms that require deep quantum circuits. Therefore, emulation of quantum circuits in
digital computers is essential. However, emulation of large quantum circuits requires enormous amount of
computations, and leads to a very large processing time. To reduce the processing time, we propose an FPGA
emulator with high-bandwidth-memory to emulate quantum Fourier transform (QFT), which is a major
part of many quantum algorithms. The proposed FPGA emulator is scalable in terms of both processing
speed and the number of qubits, and extendable to multiple FPGAs. We performed QFT emulations up
to 30 qubits using two FPGAs. According to the measured results, we have achieved 23.6 to 24.5 times
speed-up compared to a fully optimized 24-core CPU emulator.
INDEX TERMS: Quantum computing | quantum circuits | high-bandwidth memory | FPGA | quantum Fourier transform. |
English article |
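Mathematically, the QFT the emulator targets is a discrete Fourier transform of the statevector. A minimal software emulation for a handful of qubits (direct O(N²) summation, nothing like the paper's pipelined FPGA design) can be sketched as:

```python
import cmath
import math

def qft(state):
    """Apply the quantum Fourier transform to a statevector of
    length N = 2**n by direct summation: out[k] = (1/sqrt(N)) *
    sum_j state[j] * exp(2*pi*i*j*k/N)."""
    n = len(state)
    root = 2j * cmath.pi / n
    return [sum(state[j] * cmath.exp(root * j * k) for j in range(n))
            / math.sqrt(n) for k in range(n)]

# |00> on 2 qubits maps to the uniform superposition under the QFT
out = qft([1, 0, 0, 0])
print([round(abs(a) ** 2, 3) for a in out])  # → [0.25, 0.25, 0.25, 0.25]
```

The O(N²) cost for N = 2**n amplitudes is exactly why emulating 30 qubits needs the memory bandwidth and parallelism of FPGAs with HBM.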
5 |
ConTrib: Maintaining fairness in decentralized big tech alternatives by accounting work
(2021) "Big Tech" companies provide digital services used by billions of people. Recent developments, however,
have shown that these companies often abuse their unprecedented market dominance for selfish interests.
Meanwhile, decentralized applications without a central authority are gaining traction. Decentralized applications critically depend on their users working together. Ensuring that users do not consume too many resources
without reciprocating is a crucial requirement for the sustainability of such applications.
We present ConTrib, a universal mechanism to maintain fairness in decentralized applications by accounting for the work performed by peers. In ConTrib, participants maintain a personal ledger with tamper-evident
records. A record describes some work performed by a peer and links to other records. Fraud in ConTrib
occurs when a peer illegitimately modifies one of the records in its personal ledger. This is detected through
the continuous exchange of random records between peers and by verifying the consistency of incoming records
against known ones. Our simple fraud detection algorithm is highly scalable, tolerates significant packet loss,
and exhibits relatively low fraud detection times. We experimentally show that fraud is detected within seconds
and with low bandwidth requirements. To demonstrate the applicability of our work, we deploy ConTrib in the
Tribler file-sharing application and successfully address free-riding behaviour. This two-year trial has resulted
in over 160 million records, created by more than 94,000 users.
keywords: Fairness | Decentralized applications | Accounting mechanism | Free-rider prevention |
English article |
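The personal-ledger idea can be illustrated with a hash-chained record list: each record links to the hash of the previous one, so an illegitimate modification breaks the chain when a peer verifies incoming records against known ones. A simplified sketch (the record fields here are hypothetical, not ConTrib's actual schema):

```python
import hashlib
import json

def record_hash(body):
    """Deterministic hash of a record's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(ledger, work):
    """Append a tamper-evident record linking to the previous one."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    rec = {"seq": len(ledger), "work": work, "prev": prev}
    rec["hash"] = record_hash({k: rec[k] for k in ("seq", "work", "prev")})
    ledger.append(rec)
    return rec

def consistent(ledger):
    """Verify the hash chain; a modified record breaks the links after it."""
    prev = "genesis"
    for rec in ledger:
        body = {"seq": rec["seq"], "work": rec["work"], "prev": rec["prev"]}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

ledger = []
for mb in (10, 25, 5):
    append_record(ledger, {"uploaded_mb": mb})
print(consistent(ledger))                # → True
ledger[1]["work"]["uploaded_mb"] = 999   # illegitimate modification
print(consistent(ledger))                # → False
```

In ConTrib this check runs continuously over random records exchanged between peers, which is what keeps detection scalable and fast.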
6 |
063-S0893608020304470
(2021) Deep Neural Networks (DNNs) have become popular for various applications in the domain of image and computer vision due to their well-established performance attributes. DNN algorithms involve powerful multilevel feature extraction, resulting in an extensive range of parameters and memory footprints. However, memory bandwidth requirements, memory footprint, and the associated power consumption of models are issues to be addressed before DNN models can be deployed on embedded platforms for real-time vision-based applications. We present a DNN model optimized for memory and accuracy for vision-based applications on embedded platforms. In this paper we propose the Quantization Friendly MobileNet (QF-MobileNet) architecture, optimized for inference accuracy and reduced resource utilization. The optimization is obtained by addressing the redundancy and quantization loss of the existing baseline MobileNet architectures. We verify and validate the performance of the QF-MobileNet architecture on the ImageNet image classification task. The proposed model is tested for inference accuracy and resource utilization and compared to the baseline MobileNet architecture. The QF-MobileNetV2 float model attained an inference accuracy of 73.36% and its quantized model 69.51%; the MobileNetV3 float model attained 68.75% and its quantized model 67.5%. The proposed QF-MobileNetV2 and QF-MobileNetV3 models save 33% of time complexity against the baseline models. QF-MobileNet also showed optimized resource utilization with 32% fewer tunable parameters, 30% fewer MAC operations per image, and approximately 5% lower inference quantization loss compared to the baseline models. The model is ported to an Android application using the TensorFlow API, which performs inference on native devices, viz. smartphones, tablets, and handheld devices.
Future work is focused on introducing channel-wise and layer-wise quantization schemes to the proposed model. We intend to explore quantization-aware training of DNN algorithms to achieve optimized resource utilization and inference accuracy. Keywords: Deep Neural Network | Classification | MobileNet | Computer vision | Embedded platform | Quantization |
English article |
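The quantization loss the abstract refers to comes from mapping float weights onto a small integer grid. A minimal sketch of uniform post-training quantization (illustrative only; QF-MobileNet's actual scheme is more involved):

```python
def quantize(weights, bits=8):
    """Uniform symmetric quantization of float weights to signed ints."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to (approximate) float weights."""
    return [v * scale for v in q]

w = [0.53, -1.27, 0.08, 0.91]
q, s = quantize(w)
restored = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, restored))
print(q)                                    # → [53, -127, 8, 91]
print(f"max quantization error: {err:.4f}")  # bounded by scale / 2
```

Reducing this per-weight error across layers is precisely what "quantization friendly" architectures optimize for, so that the int8 model's accuracy stays close to the float model's.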
7 |
System-level Power Integrity Optimization Based on High-Density Capacitors for enabling HPC/AI applications
(2020) In this work, we introduce platform-level power
integrity (PI) solutions to enable high-power core IPs and a high-bandwidth
memory (HBM) interface for HPC/AI applications.
A high-complexity design methodology becomes more significant
for enabling high-power operation of CPUs/GPUs/NPUs that
iteratively perform tremendous computing processes. In order
to achieve high-power performance in the larger-than-200 W class,
system-level PI analysis and design guidance at an early design stage
are required to prevent drastic voltage variations at the bump
under comprehensive environments including SoC, interposer,
package, and board characteristics. PI solutions based on high-density
on-die capacitors are suitable for mitigating voltage
fluctuations by supplying quickly stored charges to silicon
devices. By adopting 2-/3-plate metal-insulator-metal (MIM)
capacitors with approximately 20 nF/mm2 and 40 nF/mm2, and
an integrated stacked capacitor (ISC) with approximately
300 nF/mm2, it is demonstrated that voltage properties (drop
and ripple) can be improved by system-level design
optimization such as power delivery network (PDN) design and
RTL-architecture manipulation. Consequently, system-level PI
solutions based on high-density capacitors are anticipated to
contribute to improving the target performance of high-power
products, in response to customers' expectations for HPC/AI
applications. Keywords: HPC/AI | high-power applications | power integrity | power delivery network | decoupling capacitor | system-level design optimization |
English article |
8 |
Explainable AI and Mass Surveillance System-Based Healthcare Framework to Combat COVID-19 Like Pandemics
(2020) Tactile edge technology that focuses on 5G
or beyond 5G reveals an exciting approach to
control infectious diseases such as COVID-19
internationally. Epidemics such as
COVID-19 can be managed effectively by exploiting
edge computation through the 5G wireless
connectivity network. The implementation of a
hierarchical edge computing system provides
many advantages, such as low latency, scalability,
and the protection of application and training
model data, enabling COVID-19 to be evaluated
by a dependable local edge server. In addition,
many deep learning (DL) algorithms suffer from
two crucial disadvantages: first, training requires
a large COVID-19 dataset consisting of various
aspects, which will pose challenges for local councils;
second, to acknowledge the outcome, the
findings of deep learning require ethical acceptance
and clarification by the health care sector,
as well as other contributors. In this article, we
propose a B5G framework that utilizes the 5G
network’s low-latency, high-bandwidth functionality
to detect COVID-19 using chest X-ray or CT
scan images, and to develop a mass surveillance
system to monitor social distancing, mask wearing,
and body temperature. Three DL models,
ResNet50, Deep tree, and Inception v3, are investigated
in the proposed framework. Furthermore,
blockchain technology is also used to ensure the
security of healthcare data. |
English article |
9 |
Towards privacy preserving AI based composition framework in edge networks using fully homomorphic encryption
(2020) We present a privacy-preserving framework for Artificial Intelligence (AI)-enabled composition for edge
networks. Edge computing is a very promising technology for provisioning real-time AI services due to its low
response time and network bandwidth requirements. Due to the lack of computational capabilities, an edge
device alone cannot provide the complex AI services. Complex AI tasks should be divided into multiple subtasks
and distributed among multiple edge devices for efficient service provisioning in the edge network.
AI-enabled or automatic service composition is one of the essential AI tasks in the service provisioning.
In edge computing-based service provisioning, service composition related tasks need to be offloaded to
several edge nodes for efficient service. Edge nodes can be used for monitoring services, storing Quality-of-
Service (QoS) data, and composing services to find the best composite service. Existing service composition
methods use plaintext QoS data. Hence, attackers may compromise edge devices to reveal QoS data of
services and modify them for giving an advantage to particular edge service providers, and the AI-based
service composition becomes biased. From that point of view, a privacy-preserving framework for AI-based
service composition is required for the edge networks. In our proposed framework, we introduce an AI-based
composition model for edge services in the edge networks. Additionally, we present a privacy-preserving
AI service composition framework to perform composition on encrypted QoS data using a fully homomorphic
encryption (FHE) algorithm. We conduct several experiments to evaluate the performance of our proposed
privacy-preserving service composition framework using a synthetic QoS dataset. Keywords: Edge-AI | Artificial Intelligence | Privacy in edge networks | Privacy-preserving AI | Privacy-preserving AI-based service composition | Privacy-preserving service composition |
English article |
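Full FHE libraries are heavyweight, but the core idea of computing on encrypted QoS scores can be illustrated with the additively homomorphic Paillier scheme (note: Paillier is additive-only, not fully homomorphic like the FHE the paper uses, and the tiny primes here are for illustration, not security):

```python
import math

# Toy Paillier keypair with tiny primes -- illustration only, NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m, r):
    """E(m) = g^m * r^n mod n^2 (r must be coprime with n)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts,
# so an edge node can aggregate QoS scores it cannot read.
qos_a, qos_b = 42, 17
c = (encrypt(qos_a, 7) * encrypt(qos_b, 11)) % n2
print(decrypt(c))  # → 59
```

FHE extends this picture by also supporting multiplication on ciphertexts, which is what lets the composition logic in the paper rank services entirely over encrypted QoS data.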
10 |
Towards self-adaptive bandwidth allocation for low-latency communications with reinforcement learning
(2020) Emerging applications such as remotely controlled human-to-machine and tactile-haptic applications in the evolving
Internet demand stringent low-latency transmission. To realise these applications, current communication
networks need to reduce their latency to the millisecond order. In our previous study, we exploited supervised
learning-based machine learning techniques in analysing and optimising bandwidth allocation decisions
in access networks to achieve low latency. In this paper, we propose a reinforcement learning-based solution
to facilitate adaptive bandwidth allocation in access networks, without needing supervised training and prior
knowledge of the underlying networks. In our proposed scheme, the central office estimates the rewards of different
bandwidth decisions based on the network latency resulting from executing these decisions. The reward
estimates are then used to select decisions that, in turn, reduce the latency. In particular, we discuss the algorithms
that can be used to estimate the rewards and perform decision selection in the proposed scheme. With
extensive simulations, we analyse the performance of these algorithms in diverse network scenarios and validate
the effectiveness of the proposed scheme in reducing network latency over existing schemes. Keywords: Tactile Internet | Low-latency communication | Reinforcement learning | Resource allocation | Optical access networks |
English article |
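The reward-estimation loop described here (estimate the reward of each bandwidth decision from the observed latency, then select accordingly) is essentially a multi-armed bandit. A minimal epsilon-greedy sketch with a hypothetical latency model (the bandwidth options and latency figures below are invented for illustration):

```python
import random

def epsilon_greedy(latency_of, actions, rounds=2000, eps=0.1, seed=0):
    """Estimate the reward (negative latency) of each bandwidth decision
    and mostly exploit the best one, exploring with probability eps."""
    rng = random.Random(seed)
    est = {a: 0.0 for a in actions}
    count = {a: 0 for a in actions}
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.choice(actions)          # explore
        else:
            a = max(actions, key=lambda x: est[x])  # exploit
        r = -latency_of(a, rng)              # reward = negative latency
        count[a] += 1
        est[a] += (r - est[a]) / count[a]    # incremental mean update
    return max(actions, key=lambda a: est[a])

# Hypothetical latency model: 40 Mb/s yields the lowest mean latency (ms).
def latency_of(bw_mbps, rng):
    base = {10: 9.0, 20: 5.0, 40: 2.0}[bw_mbps]
    return base + rng.gauss(0, 0.5)

best = epsilon_greedy(latency_of, [10, 20, 40])
print(f"selected bandwidth: {best} Mb/s")
```

This mirrors the proposed scheme's key property: no supervised training and no prior model of the network, only reward estimates learned from executed decisions.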