No. | Title | Type |
---|---|---|
1 |
Data Mining Strategies for Real-Time Control in New York City
(2014) The Data Mining System (DMS) at New York City Department of Transportation (NYCDOT) mainly consists of four database
systems for traffic and pedestrian/bicycle volumes, crash data, and signal timing plans as well as the Midtown in Motion (MIM)
systems which are used as part of the NYCDOT Intelligent Transportation System (ITS) infrastructure. These database and
control systems are operated by different units at NYCDOT as an independent database or operation system. New York City
experiences heavy traffic volumes, pedestrians and cyclists in each Central Business District (CBD) area and along key arterial
systems. There are consistent and urgent needs in New York City for real-time control to improve mobility and safety for all
users of the street networks, and to provide a timely response and management of random incidents. Therefore, it is necessary to
develop an integrated DMS for effective real-time control and active transportation management (ATM) in New York City.
This paper will present new strategies for New York City suggesting the development of efficient and cost-effective DMS,
involving: 1) use of new technology applications such as tablets and smartphones with Global Positioning System (GPS) and
wireless communication features for data collection and reduction; 2) interface development among existing database and control
systems; and 3) integrated DMS deployment with macroscopic and mesoscopic simulation models in Manhattan. This study
also suggests a complete data mining process for real-time control with traditional static data, current real-time data from
loop detectors, microwave sensors, and video cameras, and new real-time data from GPS sources, including taxi and bus GPS
feeds and smartphone applications, which can be obtained in all weather conditions and at any time of day.
The use of GPS data and smartphone applications in the NYCDOT DMS is discussed herein as a new concept.
© 2014 The Authors. Published by Elsevier B.V.
Selection and peer-review under responsibility of Elhadi M. Shakshu
Keywords: Data Mining System (DMS), New York City, real-time control, active transportation management (ATM), GPS data |
English article |
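The abstract above proposes taxi and bus GPS feeds as a data-collection channel. As a minimal illustrative sketch (the ping format and the speed-from-fixes pipeline are assumptions for illustration, not part of the actual NYCDOT DMS), a segment's average speed can be estimated from timestamped GPS fixes:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def segment_speed_kmh(pings):
    # pings: chronological (t_seconds, lat, lon) fixes from one vehicle.
    dist = sum(haversine_km(a[1], a[2], b[1], b[2]) for a, b in zip(pings, pings[1:]))
    dt_h = (pings[-1][0] - pings[0][0]) / 3600.0
    return dist / dt_h

# Hypothetical taxi pings heading north along a Manhattan avenue.
pings = [(0, 40.7500, -73.9900), (30, 40.7545, -73.9900), (60, 40.7590, -73.9900)]
print(round(segment_speed_kmh(pings), 1))
```

Aggregating such estimates per link and time window is one plausible way the "new real-time data" could feed a control system.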
2 |
Deep Reinforcement Learning With Quantum-Inspired Experience Replay
(2022) In this article, a novel training paradigm inspired
by quantum computation is proposed for deep reinforcement
learning (DRL) with experience replay. In contrast to the traditional experience replay mechanism in DRL, the proposed DRL
with quantum-inspired experience replay (DRL-QER) adaptively
chooses experiences from the replay buffer according to the
complexity and the number of times each experience (also
called a transition) has been replayed, to achieve a balance between exploration and
exploitation. In DRL-QER, transitions are first formulated in
quantum representations and then the preparation operation
and depreciation operation are performed on the transitions.
In this process, the preparation operation reflects the relationship between the temporal-difference errors (TD-errors) and the
importance of the experiences, while the depreciation operation is
taken into account to ensure the diversity of the transitions. The
experimental results on Atari 2600 games show that DRL-QER
outperforms state-of-the-art algorithms, such as DRL-PER and
DCRL on most of these games with improved training efficiency
and is also applicable to such memory-based DRL approaches
as double network and dueling network.
Index Terms: Deep reinforcement learning (DRL) | quantum computation | quantum-inspired experience replay (QER) | quantum reinforcement learning. |
English article |
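The preparation/depreciation idea above can be mimicked classically. The toy buffer below is not the authors' quantum-amplitude formulation: it simply weights sampling by |TD-error| (standing in for "preparation") and exponentially damps often-replayed transitions (standing in for "depreciation"):

```python
import random

random.seed(0)

class QERBuffer:
    """Toy replay buffer: priority grows with |TD-error| and shrinks
    with each replay, echoing DRL-QER's two operations."""

    def __init__(self, decay=0.9):
        self.items = []        # [transition, abs_td_error, replay_count]
        self.decay = decay

    def add(self, transition, td_error):
        self.items.append([transition, abs(td_error), 0])

    def priority(self, item):
        _, td, n = item
        # Larger TD-error -> more likely to be replayed; damped
        # exponentially by how often it was already replayed.
        return (td + 1e-6) * (self.decay ** n)

    def sample(self, k):
        weights = [self.priority(it) for it in self.items]
        chosen = random.choices(self.items, weights=weights, k=k)
        for it in chosen:
            it[2] += 1         # depreciation: replayed once more
        return [it[0] for it in chosen]

buf = QERBuffer()
for i, td in enumerate([0.1, 2.0, 0.05, 1.5]):
    buf.add(f"t{i}", td)
batch = buf.sample(100)
```

With these weights the high-TD-error transition t1 dominates the batch while t2 is rarely drawn, illustrating the exploitation side of the balance.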
3 |
Efficient Construction of a Control Modular Adder on a Carry-Lookahead Adder Using Relative-Phase Toffoli Gates
(2022) Control modular addition is a core arithmetic function, and we must consider the computational cost for actual quantum computers to realize efficient implementation. To achieve a low computational
cost in a control modular adder, we focus on minimizing KQ (where K is the number of logical qubits
required by the algorithm, and Q is the elementary gate step), defined by the product of the number of
qubits and the depth of the circuit. In this article, we construct an efficient control modular adder with small
KQ by using relative-phase Toffoli gates in two major types of quantum computers: fault-tolerant quantum
computers (FTQ) on the logical layer and noisy intermediate-scale quantum computers (NISQ). We give
a more efficient construction compared with Van Meter and Itoh’s, based on a carry-lookahead adder. In
FTQ, T gates incur heavy cost due to distillation, which fabricates ancilla for running T gates with high
accuracy but consumes a lot of especially prepared ancilla qubits and a lot of time. Thus, we must reduce the
number of T gates. We propose a new control modular adder that uses only 20% of the number of T gates
of the original. Moreover, when we take distillation into consideration, we find that we minimize KQT (the
product of the number of qubits and T-depth) by running O(n/√log n) T gates simultaneously. In NISQ,
CNOT gates are the major error source. We propose a new control modular adder that uses only 35% of the
number of CNOT gates of the original. Moreover, we show that the KQCX (the product of the number of qubits
and CNOT-depth) of our circuit is 38% of the original. Thus, we realize an efficient control modular adder,
improving prospects for the efficient execution of arithmetic in quantum computers.
INDEX TERMS: Carry-lookahead adder | control modular adder | fault-tolerant quantum computers (FTQ) | noisy intermediate-scale quantum computers (NISQ) | Shor’s algorithm. |
English article |
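The circuit's target behavior can be pinned down with a classical reference model (the helper names below are illustrative, not from the paper): the controlled adder maps |c⟩|x⟩ to |c⟩|(x + a) mod n⟩ when c = 1, and repeated controlled additions yield the controlled modular multiplication used inside Shor's algorithm:

```python
def control_modular_add(c, x, a, n):
    """Classical reference semantics of the controlled modular adder:
    |c>|x> -> |c>|(x + a) mod n> when c == 1, identity when c == 0."""
    return (x + a) % n if c else x

def control_modular_mult(c, x, a, n):
    # Controlled modular multiplication built from repeated controlled
    # additions, the role these adders play inside Shor's algorithm.
    acc = 0
    for _ in range(x):
        acc = control_modular_add(c, acc, a, n)
    return acc if c else x

print(control_modular_add(1, 9, 5, 13), control_modular_add(0, 9, 5, 13))
```

The quantum constructions in the paper optimize exactly this map for T-count (FTQ) or CNOT-count (NISQ) while preserving these input-output semantics.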
4 |
High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
(2022) Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast doubt on the potential equifinality presented by crocodile tooth marks and
stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently
separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach
(discarding artificial bootstrapping of the original imbalanced samples) to show that microscopic features
shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately
discriminate both types of bone modifications. We also implement new deep-learning methods that
objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99%
of testing sets). The present study shows that there are precise ways of differentiating both taphonomic
agents, and this invites taphonomists to apply them to controversial paleontological and archaeological
specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery |
English article |
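A minimal sketch of classifying marks from categorical structural variables, using a Laplace-smoothed naive Bayes as a simple stand-in for the paper's machine-learning models (the feature names and the four training samples are hypothetical, not the study's dataset):

```python
from collections import Counter, defaultdict

def train_nb(samples):
    # samples: (label, {feature: category}) pairs; returns counts for a
    # Laplace-smoothed categorical naive Bayes classifier.
    labels = Counter(lab for lab, _ in samples)
    counts = defaultdict(Counter)  # (label, feature) -> category counts
    for lab, feats in samples:
        for f, v in feats.items():
            counts[(lab, f)][v] += 1
    return labels, counts

def predict(labels, counts, feats):
    best, best_p = None, -1.0
    total = sum(labels.values())
    for lab, n in labels.items():
        p = n / total
        for f, v in feats.items():
            c = counts[(lab, f)]
            p *= (c[v] + 1) / (sum(c.values()) + 2)  # Laplace smoothing
        if p > best_p:
            best, best_p = lab, p
    return best

# Hypothetical categorical descriptors of mark structure.
data = [
    ("cut", {"groove": "V", "microstriations": "present"}),
    ("cut", {"groove": "V", "microstriations": "present"}),
    ("tooth", {"groove": "U", "microstriations": "absent"}),
    ("tooth", {"groove": "U", "microstriations": "absent"}),
]
labels, counts = train_nb(data)
print(predict(labels, counts, {"groove": "V", "microstriations": "present"}))
```

The study's actual models (multivariate methods and deep networks on images) are far richer; this sketch only shows how categorical mark descriptors can drive a probabilistic classifier.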
5 |
Efficient Quantum State Preparation for the Cauchy Distribution Based on Piecewise Arithmetic
(2022) The benefits of the quantum Monte Carlo algorithm heavily rely on the efficiency of the
superposition state preparation. So far, most reported Monte Carlo algorithms use the Grover–Rudolph state
preparation method, which is suitable for efficiently integrable distribution functions. Consequently, most reported works are based on log-concave distributions, such as normal distributions. However, non-log-concave
distributions still have many uses, such as in financial modeling. Recently, a new method was proposed
that does not need integration to calculate the rotation angle for state preparation. However, performing
efficient state preparation is still difficult due to the high cost associated with high precision and low error
in the calculation for the rotation angle. Many methods of quantum state preparation use polynomial Taylor
approximations to reduce the computation cost. However, Taylor approximations do not work well with
heavy-tailed distribution functions that are not bounded exponentially. In this article, we present a method
of efficient state preparation for heavy-tailed distribution functions. Specifically, we present a quantum
gate-level algorithm to prepare quantum superposition states based on the Cauchy distribution, which is a
non-log-concave heavy-tailed distribution. Our procedure relies on a piecewise polynomial function instead
of a single Taylor approximation to reduce computational cost and increase accuracy. The Cauchy distribution is an even function, so the proposed piecewise polynomial contains only a quadratic term and a constant
term to maintain the simplest approximation of an even function. Numerical analysis shows that the required
number of subdomains increases linearly as the approximation error decreases exponentially. Furthermore,
the computation complexity of the proposed algorithm is independent of the number of subdomains in the
quantum implementation of the piecewise function due to quantum parallelism. An example of the proposed
algorithm based on a simulation conducted in Qiskit is presented to demonstrate its capability to perform
state preparation based on the Cauchy distribution.
INDEX TERMS: Algorithms | gate operations | quantum computing. |
English article |
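The piecewise idea can be sketched classically: approximate the (even) Cauchy density on each subdomain by a + b·x², the quadratic-plus-constant form the abstract describes, and watch the error fall as subdomains are added. (In the quantum algorithm these values would then feed rotation angles, e.g. θ = 2 arcsin(√p) for amplitude encoding; that step, and the choice of fitting endpoints, are illustrative assumptions here.)

```python
import math

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

def piecewise_even_quadratic(x, edges):
    # On each subdomain [l, r], fit p(x) = a + b*x**2 through the endpoint
    # values of the Cauchy pdf (only even terms, since the pdf is even).
    for l, r in zip(edges, edges[1:]):
        if l <= x <= r:
            fl, fr = cauchy_pdf(l), cauchy_pdf(r)
            b = (fr - fl) / (r * r - l * l)
            a = fl - b * l * l
            return a + b * x * x
    raise ValueError("x outside the approximated domain")

def max_error(m, lo=0.0, hi=4.0, probes=400):
    # Maximum approximation error over a probe grid, for m subdomains.
    edges = [lo + (hi - lo) * i / m for i in range(m + 1)]
    return max(abs(cauchy_pdf(x) - piecewise_even_quadratic(x, edges))
               for x in (lo + (hi - lo) * k / probes for k in range(probes + 1)))

print(max_error(4) > max_error(8) > max_error(16))
```

Doubling the number of subdomains strictly reduces the worst-case error on this grid, consistent with the abstract's trade-off between subdomain count and approximation error.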
6 |
Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks
(2022) There is high demand for heavy equipment in civil infrastructure projects and their performance is a determinant
of the successful delivery of site operations. Although manufacturers provide equipment performance handbooks,
additional monitoring mechanisms are required to depart from measuring performance on the sole basis
of unit cost for moved materials. Vision-based tracking and pose estimation can facilitate site performance
monitoring. This research develops several regression-based deep neural networks (DNNs) to monitor equipment
with the aim of ensuring safety, productivity, sustainability and quality of equipment operations. Annotated
image libraries are used to train and test several backbone architectures. Experimental results reveal the precision
of DNNs with depthwise separable convolutions and the computational efficiency of DNNs with channel
shuffle. This research provides scientific utility by developing a method for equipment pose estimation with the
ability to detect anatomical angles and critical keypoints. The practical utility of this study lies in its
potential to influence the current practice of articulated machinery monitoring in projects.
keywords: Artificial intelligence (AI) | Cyber physical systems | Error evaluation metrics | Experimental design and testing | Full body pose estimation | Industry and construction 4.0 | Machine learning algorithms | Network backbone architectures |
English article |
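Once keypoints are detected, anatomical angles follow from plane geometry. A minimal sketch (the keypoint names and pixel coordinates are hypothetical, not from the study's image libraries):

```python
import math

def joint_angle_deg(a, b, c):
    # Interior angle at keypoint b formed by segments b->a and b->c,
    # e.g. an articulation angle of an excavator from 2-D keypoints.
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

# Hypothetical detected keypoints (pixels): cab pivot, boom tip, bucket pin.
cab, boom_tip, bucket = (100, 300), (260, 140), (420, 230)
print(round(joint_angle_deg(cab, boom_tip, bucket), 1))
```

A regression-based DNN as described above would supply the keypoints; the angle computation itself is deterministic post-processing.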
7 |
Implementing Graph-Theoretic Feature Selection by Quantum Approximate Optimization Algorithm
(2022) Feature selection plays a significant role in computer science; nevertheless, this task is intractable since its search space scales exponentially with the number of dimensions. Motivated by the potential advantages of near-term quantum computing, three graph-theoretic feature selection (GTFS) methods, including minimum-cut (MinCut)-based, densest k-subgraph (DkS)-based, and maximal independent set/minimal vertex cover (MIS/MVC)-based, are investigated in this article, where the original graph-theoretic problems are naturally formulated as quadratic problems in binary variables and then solved using the quantum approximate optimization algorithm (QAOA). Specifically, three separate graphs are created from the raw feature set, where the vertex set consists of individual features and a pairwise measure defines each edge. The corresponding feature subset is generated by deriving a subgraph from the established graph using QAOA. For the above three GTFS approaches, the solving procedure and quantum circuit for the corresponding graph-theoretic problems are formulated within the framework of QAOA. In addition, these proposals can be employed as a local solver and integrated with the Tabu search algorithm to solve large-scale GTFS problems using limited quantum-bit resources. Finally, extensive numerical experiments are conducted on 20 publicly available datasets, and the results demonstrate that each model is superior to its classical scheme. In addition, the complexity of each model is only O(pn²) even in the worst case, where p is the number of layers in QAOA and n is the number of features.
Index Terms: Feature selection | graph theory | parameterized quantum circuit | quantum approximation optimization algorithm | quantum computing. |
English article |
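One plausible reading of the MinCut-based variant: features are vertices, pairwise redundancy gives edge weights, and the selected subset is one side of a low-weight cut. The brute-force solver below stands in for QAOA on a toy instance (the weights and the seed-pinning constraint that rules out the trivial empty cut are assumptions for illustration, not the paper's exact formulation):

```python
from itertools import product

def cut_value(x, weights):
    # Total weight of edges crossing the partition induced by bits x.
    return sum(w for (i, j), w in weights.items() if x[i] != x[j])

def min_cut_bruteforce(n, weights, fixed):
    # Exhaustive stand-in for the QAOA solver: scan all assignments of the
    # free bits; `fixed` pins seed features to opposite sides of the cut.
    free = [i for i in range(n) if i not in fixed]
    best_x, best_v = None, float("inf")
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        v = cut_value(x, weights)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Hypothetical pairwise redundancy weights between four features.
weights = {(0, 1): 3, (1, 2): 1, (2, 3): 3, (0, 3): 1}
x, v = min_cut_bruteforce(4, weights, fixed={0: 1, 3: 0})
print(sorted(i for i in x if x[i] == 1), v)
```

The cut objective here is already a quadratic function of binary variables, which is exactly the form QAOA (or, at larger scale, QAOA inside a Tabu search) would optimize.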
8 |
Incompressibility of Classical Distributions
(2022) In blind compression of quantum states, a sender
Alice is given a specimen of a quantum state ρ drawn from
a known ensemble (but without knowing what ρ is), and she
transmits sufficient quantum data to a receiver Bob so that
he can decode a near perfect specimen of ρ. For many such
states drawn iid from the ensemble, the asymptotically achievable
rate is the number of qubits required to be transmitted per
state. The Holevo information is a lower bound for the achievable rate, and is attained for pure state ensembles, or in the
related scenario of entanglement-assisted visible compression of
mixed states wherein Alice knows what state is drawn. In this
paper, we prove a general and robust lower bound on the
achievable rate for ensembles of classical states, which holds
even in the least demanding setting when Alice and Bob share
free entanglement and a constant per-copy error is allowed.
We apply the bound to a specific ensemble of only two states
and prove a near-maximal separation (saturating the dimension
bound in leading order) between the best achievable rate and
the Holevo information for constant error. This also implies
that the ensemble is incompressible – compression does not
reduce the communication cost by much. Since the states are
classical, the observed incompressibility is not fundamentally
quantum mechanical. We lower bound the difference between
the achievable rate and the Holevo information in terms of
quantitative limitations to clone the specimen or to distinguish
the two classical states.
Index Terms: Blind compression | classical distributions | quantum states | free entanglement | Holevo information | constant error | incompressibility |
English article |
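For classical states the Holevo information of the ensemble reduces to the mutual information between the state label and the observed symbol, which makes the lower bound easy to evaluate numerically (the two-state example below is illustrative, not the specific ensemble from the paper):

```python
import math

def shannon_h(p):
    # Shannon entropy in bits of a probability vector.
    return -sum(x * math.log2(x) for x in p if x > 0)

def holevo(priors, dists):
    # chi = H(average distribution) - sum_i q_i * H(p_i), the classical
    # specialization of the Holevo information.
    avg = [sum(q * p[k] for q, p in zip(priors, dists)) for k in range(len(dists[0]))]
    return shannon_h(avg) - sum(q * shannon_h(p) for q, p in zip(priors, dists))

# Two equiprobable classical states over one symbol: hard to distinguish,
# so chi sits well below the 1-bit maximum.
chi = holevo([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
print(round(chi, 3))
```

The paper's point is that for some classical ensembles the best achievable blind-compression rate sits far above this quantity, even with free entanglement and constant error.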
9 |
Deep learning based computer vision approaches for smart agricultural applications
(2022) The agriculture industry is undergoing a rapid digital transformation, growing more powerful through cutting-edge approaches such as artificial intelligence and allied technologies. At the core of artificial intelligence, deep learning-based computer vision enables various agricultural activities to be performed automatically with utmost precision, making smart agriculture a reality. Computer vision techniques, in conjunction with high-quality image acquisition using remote cameras, enable non-contact and efficient technology-driven solutions in agriculture. This review contributes state-of-the-art computer vision technologies based on deep learning that can assist farmers in operations from land preparation to harvesting. Recent works in the area of computer vision were analyzed in this paper and categorized into (a) seed quality analysis, (b) soil analysis, (c) irrigation water management, (d) plant health analysis, (e) weed management, (f) livestock management, and (g) yield estimation. The paper also discusses recent trends in computer vision such as generative adversarial networks (GAN), vision transformers (ViT), and other popular deep learning architectures. Additionally, this study pinpoints the challenges in implementing the solutions in the farmer's field in real time. The overall finding indicates that convolutional neural networks are the cornerstone of modern computer vision approaches, and their various architectures provide high-quality solutions across various agricultural activities in terms of precision and accuracy. However, the success of the computer vision approach lies in building the model on a quality dataset and providing real-time solutions.
keywords: Agriculture automation | Computer vision | Deep learning | Machine learning | Smart agriculture | Vision transformers |
English article |
10 |
Parameterized Hamiltonian Learning With Quantum Circuit
(2022) Hamiltonian learning, as an important quantum machine learning technique, provides a significant approach for
determining an accurate quantum system. This paper establishes parameterized Hamiltonian learning (PHL) and explores its
application and implementation on quantum computers. A parameterized quantum circuit for Hamiltonian learning is first created by
decomposing unitary operators to excite the system evolution. Then, a PHL algorithm is developed to prepare a specific Hamiltonian
system by iteratively updating the gradient of the loss function with respect to the circuit parameters. Finally, experiments are conducted on
Origin Pilot, demonstrating that the PHL algorithm can handle the image segmentation problem and provide an accurate
segmentation solution. Compared with the classical GrabCut algorithm, the PHL algorithm eliminates the requirement of early manual
intervention. It provides a new possibility for solving practical application problems with quantum devices, which also assists in solving
increasingly complicated problems and supports a much wider range of application possibilities in the future.
Index Terms: Quantum machine learning | Parameterized Hamiltonian learning (PHL) | parameterized quantum circuit | Hamiltonian learning algorithm | Image segmentation |
English article |
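The PHL loop (iteratively updating circuit parameters from gradients of a loss) can be illustrated on a single rotation gate, using the parameter-shift rule, a common way to obtain exact gradients of quantum-circuit losses of this form. This one-qubit toy is an illustration only, not the paper's Hamiltonian-learning circuit:

```python
import math

def fidelity_loss(theta):
    # One-qubit stand-in for the PHL loss: R_y(theta)|0> should reach |1>,
    # so the loss is 1 - |<1|R_y(theta)|0>|^2 = 1 - sin(theta/2)**2.
    return 1.0 - math.sin(theta / 2.0) ** 2

def parameter_shift_grad(loss, theta):
    # Parameter-shift rule: the gradient comes from two extra evaluations
    # of the same loss at theta +/- pi/2.
    return (loss(theta + math.pi / 2) - loss(theta - math.pi / 2)) / 2.0

theta, lr = 0.3, 1.0
for _ in range(200):  # iterative gradient update, as in the PHL algorithm
    theta -= lr * parameter_shift_grad(fidelity_loss, theta)
print(round(theta, 3), round(fidelity_loss(theta), 6))
```

The loop converges to theta = pi, the rotation that prepares the target state, mirroring how PHL iterates parameter updates until the prepared Hamiltonian system matches the target.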