No. | Title | Type |
---|---|---|
1 |
Discriminating Quantum States in the Presence of a Deutschian CTC: A Simulation Analysis
(2022) In an article published in 2009, Brun et al. proved that in the presence of a “Deutschian”
closed timelike curve, one can map K distinct nonorthogonal states (hereafter, input set) to the standard
orthonormal basis of a K-dimensional state space. To implement this result, the authors proposed a quantum
circuit that includes, among SWAP gates, a fixed set of controlled operators (boxes) and an algorithm for
determining the unitary transformations carried out by such boxes. To our knowledge, what is still missing
to complete the picture is an analysis evaluating the performance of the aforementioned circuit from an
engineering perspective. The objective of this article is, therefore, to address this gap through an in-depth
simulation analysis, which exploits the approach proposed by Brun et al. in 2017. This approach relies on
multiple copies of an input state and multiple iterations of the circuit until a fixed point is (almost) reached. The
performance analysis led us to a number of findings. First, the number of iterations is significantly high even
if the number of states to be discriminated is small, such as 2 or 3. Second, we envision that such
a number may be shortened as there is plenty of room to improve the unitary transformation acting in the
aforementioned controlled boxes. Third, we also revealed a relationship between the number of iterations
required to get close to the fixed point and the Chernoff limit of the input set used: the higher the Chernoff
bound, the smaller the number of iterations. A comparison, although partial, with another quantum circuit
discriminating the nonorthogonal states, proposed by Nareddula et al. in 2018, is carried out and differences
are highlighted.
INDEX TERMS: Benchmarking and performance characterization | classical simulation of quantum systems |
English article |
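The fixed-point iteration at the heart of the simulated approach can be illustrated with a minimal, standard-library sketch. The channel below is an assumed amplitude-damping map with a made-up rate g = 0.1, not the circuit from the paper; it is applied repeatedly to a qubit density matrix until successive states agree to within a tolerance, counting the iterations needed:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply_channel(kraus, rho):
    """rho' = sum_K K rho K^dagger for a list of 2x2 Kraus operators."""
    out = [[0j, 0j], [0j, 0j]]
    for K in kraus:
        KrK = mat_mul(mat_mul(K, rho), dagger(K))
        for i in range(2):
            for j in range(2):
                out[i][j] += KrK[i][j]
    return out

def iterate_to_fixed_point(kraus, rho, tol=1e-9, max_iter=10_000):
    """Apply the channel until successive states differ by less than tol."""
    for n in range(1, max_iter + 1):
        nxt = apply_channel(kraus, rho)
        diff = max(abs(nxt[i][j] - rho[i][j]) for i in range(2) for j in range(2))
        rho = nxt
        if diff < tol:
            return rho, n
    return rho, max_iter

g = 0.1  # assumed damping rate, for illustration only
kraus = [[[1, 0], [0, math.sqrt(1 - g)]], [[0, math.sqrt(g)], [0, 0]]]
plus = [[0.5, 0.5], [0.5, 0.5]]  # |+><+| as the input state
fixed, iters = iterate_to_fixed_point(kraus, plus)
```

Even for this very simple channel the iteration count runs into the hundreds, which echoes the abstract's observation that the number of iterations can be significant.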
2 |
DOPIV: Post-Quantum Secure Identity-Based Data Outsourcing with Public Integrity Verification in Cloud Storage
(2022) Public verification enables cloud users to employ a third-party auditor (TPA) to check data integrity. However, recent
breakthrough results on quantum computers indicate that the deployment of quantum computers in clouds may soon be realized. A majority of existing
public verification schemes are based on conventional hardness assumptions, which are vulnerable to adversaries equipped with
quantum computers in the near future. Moreover, new security issues need to be solved when an original data owner is restricted or
cannot access the remote cloud server flexibly. In this paper, we propose an efficient identity-based data outsourcing with public integrity
verification scheme (DOPIV) in cloud storage. DOPIV is built on lattice-based cryptography, which achieves post-quantum security.
DOPIV enables an original data owner to delegate a proxy to generate the signatures of data and outsource them to the cloud server.
Any TPA can perform data integrity verification efficiently on behalf of the original data owner, without retrieving the entire data set.
Additionally, DOPIV possesses the advantages of identity-based systems, avoiding complex certificate management procedures.
We provide security proofs of DOPIV in the random oracle model, and conduct a comprehensive performance evaluation to show that
DOPIV is more practical in post-quantum secure cloud storage systems.
Index Terms: Cloud storage | public verification | lattice-based cryptography | identity-based data outsourcing | post-quantum security |
English article |
3 |
High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
(2022) Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast doubt on the potential equifinality presented by crocodile tooth marks and
stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently
separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach
(discarding artificially bootstrapping the original imbalanced samples) to show that microscopic features
shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately
discriminate both types of bone modifications. We also implement new deep-learning methods that
objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99%
of testing sets). The present study shows that there are precise ways of differentiating both taphonomic
agents, and this invites taphonomists to apply them to controversial paleontological and archaeological
specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery |
English article |
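The idea of discriminating marks from categorical microscopic features can be sketched with a toy nearest-neighbour classifier. The feature names, values, and labels below are hypothetical illustrations, not the study's dataset or its actual models:

```python
def hamming(a, b):
    """Number of categorical features on which two marks disagree."""
    return sum(x != y for x, y in zip(a, b))

def predict(train, query):
    """1-nearest-neighbour over tuples of categorical features."""
    best = min(train, key=lambda item: hamming(item[0], query))
    return best[1]

# Hypothetical categorical descriptors of mark structure:
# (groove shape, symmetry, microstriations, shoulder effect)
train = [
    (("V-shaped", "symmetric", "present", "present"), "cut mark"),
    (("V-shaped", "asymmetric", "present", "absent"), "cut mark"),
    (("U-shaped", "symmetric", "absent", "absent"), "tooth mark"),
    (("U-shaped", "asymmetric", "absent", "present"), "tooth mark"),
]
label = predict(train, ("V-shaped", "symmetric", "present", "absent"))
```

The study's point is that such categorical variables carry enough signal that far stronger learners (and deep networks on images) separate the two agents with high accuracy.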
4 |
Tracking the northern seasonal cap retreat of mars using computer vision
(2022) Using polar stereographic images from the Mars Color Imager (MARCI), we use Python
to autonomously track the Northern Polar Seasonal Cap (NPSC) recession from Mars Years (MY)
29 to MY 35 between Ls = 10° and Ls = 70°. We outline the cap and find an ellipse of best fit. We
then compare our results to previously published recession rates, which were manually tracked,
and find them to be consistent. Our process benefits from being automated, which increases
the speed of tracking and allows us to monitor the recession with higher Ls fidelity than past
studies. We find that most MYs have a local minimum recession rate at Ls = ~32° and a local
maximum at Ls = ~51°. We also find that MY 30 experiences a rapid latitude-increase event
that involves ~1° Ls of a rapid increase and ~5° Ls of slower recession, which then increases
above the interannual average rate. We interpret this to be the result of a major sublimation event
driven by off-polar winds. We also discover divergent effects in the recession and size of the
NPSC following the MY 28 and MY 35 global dust storms. MY 29’s cap is significantly smaller
and retreats more slowly than the multi-year average, whereas MY 35’s cap is slightly larger and
retreats very close to the average. We hypothesize that the diverging behavior of the caps in
post-storm years can be a result of the differences in the date of onset and the duration of the
storms.
English article |
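The outline-then-fit step can be illustrated with a least-squares fit. As a simplification, the sketch below fits a circle (the Kåsa method) rather than a full ellipse, and uses synthetic boundary points instead of a MARCI cap outline:

```python
import math

def fit_circle(points):
    """Kasa least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c,
    giving centre (a/2, b/2) and radius sqrt(c + cx^2 + cy^2)."""
    # Build the 3x3 normal equations M * [a, b, c]^T = v
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            v[i] += row[i] * z
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (v[r] - sum(M[r][c] * sol[c] for c in range(r + 1, 3))) / M[r][r]
    a, b, c = sol
    cx, cy = a / 2, b / 2
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)

# Synthetic "cap boundary": 12 points on a circle of radius 3 centred at (1, 2)
pts = [(1 + 3 * math.cos(t), 2 + 3 * math.sin(t))
       for t in [k * 2 * math.pi / 12 for k in range(12)]]
cx, cy, r = fit_circle(pts)
```

Tracking the fitted centre and radius across images is what gives an automated recession curve; a real pipeline would fit an ellipse and work in polar stereographic map coordinates.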
5 |
Efficient Quantum Network Communication Using Optimized Entanglement Swapping Trees
(2022) Quantum network communication is challenging, as the no-cloning theorem in the quantum
regime makes many classical techniques inapplicable; in particular, the direct transmission of qubit states
over long distances is infeasible due to unrecoverable errors. For the long-distance communication of
unknown quantum states, the only viable communication approach (assuming local operations and classical
communications) is the teleportation of quantum states, which requires a prior distribution of the entangled
pairs (EPs) of qubits. The establishment of EPs across remote nodes can incur significant latency due to the
low probability of success of the underlying physical processes. The focus of our work is to develop efficient
techniques that minimize EP generation latency. Prior works have focused on selecting entanglement paths;
in contrast, we select entanglement swapping trees—a more accurate representation of the entanglement
generation structure. We develop a dynamic programming algorithm to select an optimal swapping tree for a
single pair of nodes, under the given capacity and fidelity constraints. For the general setting, we develop an
efficient iterative algorithm to compute a set of swapping trees. We present simulation results, which show
that our solutions outperform the prior approaches by an order of magnitude and are viable for long-distance
entanglement generation.
INDEX TERMS: Quantum communications | quantum networks (QNs) |
English article |
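The interval-style dynamic program over swapping trees can be sketched under a deliberately simplified latency model: child segments are assumed to be generated in parallel, and each swap adds a fixed overhead. The per-link latencies and the overhead are made-up numbers, and this is only the shape of the DP, not the paper's stochastic model or its fidelity constraints:

```python
from functools import lru_cache

# Hypothetical per-link EP generation latencies along a repeater path
link_latency = [1.0, 4.0, 2.0, 8.0, 1.0]
T_SWAP = 0.5  # assumed fixed overhead per entanglement swap

@lru_cache(maxsize=None)
def best_latency(i, j):
    """Minimal latency to entangle path nodes i and j under the toy model:
    pick the swap node k that minimises the slower of the two child
    segments (generated in parallel), plus the swap overhead."""
    if j - i == 1:
        return link_latency[i]
    return min(max(best_latency(i, k), best_latency(k, j)) + T_SWAP
               for k in range(i + 1, j))

n = len(link_latency)
total = best_latency(0, n)
```

Choosing the tree (which node swaps when) matters: a left-to-right chain of swaps over these links is strictly worse than the balanced structure the DP finds.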
6 |
Spatiotemporal flow features in gravity currents using computer vision methods
(2022) Relationships between features visually identified at the front of the current and parameters
regarding its velocity and turbulence were observed in early experimental works on the characterization of
gravity currents. Researchers have associated front features, like lobes and clefts, with the flow’s turbulence, and
have used these associations ever since. In more recent works using numerical simulations, these connections
were still being validated for various flow parameters at higher front velocities. The majority of works regarding
measurements at the front of a gravity current rely on images of the front to perform the analysis and establish
relationships. In addition, there is an interdisciplinary field related to computer science, called computer vision,
devoted to studying how digital images can be analyzed and how such analyses can be automated. This paper
describes the use of computer vision algorithms, particularly corner detection and optical flow, to automatically
track features at the front of gravity currents, either from physical or numerical experiments. To determine the
proposed approach’s accuracy, we establish a ground-truth method and apply it to numerical simulation
data sets. The technique used to trace the front features along the flow showed promising results, especially
for flows with higher Reynolds numbers.
keywords: Gravity currents | Lobes and clefts structures | Computer vision methods | Feature point tracking |
English article |
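Feature tracking of the kind described can be illustrated with a brute-force displacement search using the sum of squared differences (SSD), a simplified stand-in for corner detection plus optical flow. The two frames below are synthetic grids, not gravity-current images:

```python
def ssd(frame_a, frame_b, top, left, size, dy, dx):
    """Sum of squared differences between a patch in frame_a and the
    patch displaced by (dy, dx) in frame_b."""
    total = 0.0
    for r in range(size):
        for c in range(size):
            diff = frame_a[top + r][left + c] - frame_b[top + r + dy][left + c + dx]
            total += diff * diff
    return total

def track_patch(frame_a, frame_b, top, left, size, radius):
    """Exhaustive search for the displacement minimising the SSD."""
    return min(
        ((dy, dx) for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1)),
        key=lambda d: ssd(frame_a, frame_b, top, left, size, d[0], d[1]),
    )

# Synthetic frames: a 2x2 bright blob shifted by (1 row, 2 columns)
H, W = 12, 12
frame1 = [[0.0] * W for _ in range(H)]
frame2 = [[0.0] * W for _ in range(H)]
for r, c in [(4, 4), (4, 5), (5, 4), (5, 5)]:
    frame1[r][c] = 1.0
    frame2[r + 1][c + 2] = 1.0
motion = track_patch(frame1, frame2, top=4, left=4, size=2, radius=3)
```

Real optical-flow implementations (e.g., pyramidal Lucas–Kanade) refine this idea with gradients and sub-pixel estimates, but the tracked quantity, a per-feature displacement between frames, is the same.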
7 |
Tuning of grayscale computer vision systems
(2022) Computer vision systems perform according to their design and parameter settings. In computer vision systems
that use grayscale conversion, the conversion of RGB images to a grayscale format influences performance of
the systems in terms of both result quality and computational cost. Appropriate setting of the weights for
the weighted means grayscale conversion, co-estimated with other parameters used in the computer vision
system, helps to approach the desired performance of a system or its subsystem at the cost of a negligible or
no increase in its time-complexity. However, the parameter space of the system and subsystem as extended by the
grayscale conversion weights can contain substandard settings. These settings show strong sensitivity of the
system and subsystem to small changes in the distribution of data in a color space of the processed images.
We developed a methodology for Tuning of the Grayscale computer Vision systems (TGV) that exploits the
advantages while compensating for the disadvantages of the weighted means grayscale conversion. We show
that the TGV tuning improves computer vision system performance by up to 16% in the tested case studies.
The methodology provides a universally applicable solution that merges the utility of a fine-tuned computer
vision system with the robustness of its performance against variable input data.
keywords: Computer vision | Parameter optimization | Performance evaluation | WECIA graph | Weighted means grayscale conversion |
English article |
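Weighted-means grayscale conversion and weight tuning can be sketched with a toy objective. The pixels, labels, and tuning criterion (squared distance between foreground and background class means) are assumptions for illustration and are not the TGV methodology itself:

```python
from itertools import product

def to_gray(pixels, wr, wg, wb):
    """Weighted-means grayscale conversion of (R, G, B) triples."""
    return [wr * r + wg * g + wb * b for r, g, b in pixels]

def separation(gray, labels):
    """Toy tuning criterion: squared distance between class means."""
    fg = [v for v, l in zip(gray, labels) if l]
    bg = [v for v, l in zip(gray, labels) if not l]
    return (sum(fg) / len(fg) - sum(bg) / len(bg)) ** 2

# Hypothetical image where foreground/background differ mainly in red
pixels = [(200, 80, 80), (210, 82, 78), (90, 80, 80), (95, 78, 82)]
labels = [True, True, False, False]

best_w, best_score = None, -1.0
step = 0.1
for wr, wg in product([i * step for i in range(11)], repeat=2):
    wb = 1.0 - wr - wg
    if wb < -1e-9:
        continue  # keep the weights a convex combination
    score = separation(to_gray(pixels, wr, wg, wb), labels)
    if score > best_score:
        best_w, best_score = (wr, wg, round(wb, 10)), score
```

On this contrived data the search concentrates all weight on the red channel, which is exactly the point of the abstract: fixed luminance weights can be far from optimal for a given task, while a tuned conversion costs essentially nothing at run time.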
8 |
EP-PQM: Efficient Parametric Probabilistic Quantum Memory With Fewer Qubits and Gates
(2022) Machine learning (ML) classification tasks can be carried out on a quantum computer (QC)
using probabilistic quantum memory (PQM) and its extension, parametric PQM (P-PQM), by calculating
the Hamming distance between an input pattern and a database of r patterns containing z features with
a distinct attributes. For PQM and P-PQM to correctly compute the Hamming distance, the features must
be encoded using one-hot encoding, which is memory-intensive for multiattribute datasets with a > 2. We
can represent multiattribute data more compactly by replacing one-hot encoding with label encoding; both
encodings yield the same Hamming distance. Implementing this replacement on a classical computer is
trivial. However, replacing these encoding schemes on a QC is not straightforward because PQM and P-PQM
operate at the bit level, rather than at the feature level (a feature is represented by a binary string of 0’s and
1’s). We present an enhanced P-PQM, called efficient P-PQM (EP-PQM), that allows label encoding of data
stored in a PQM data structure and reduces the circuit depth of the data storage and retrieval procedures.
We show implementations for an ideal QC and a noisy intermediate-scale quantum (NISQ) device. Our
complexity analysis shows that the EP-PQM approach requires O(z log2(a)) qubits as opposed to O(za)
qubits for P-PQM. EP-PQM also requires fewer gates, reducing gate count from O(rza) to O(rz log2(a)).
For five datasets, we demonstrate that training an ML classification model using EP-PQM requires 48% to
77% fewer qubits than P-PQM for datasets with a > 2. EP-PQM reduces circuit depth in the range of 60% to
96%, depending on the dataset. The depth decreases further with a decomposed circuit, ranging between 94%
and 99%. EP-PQM requires less space; thus, it can train on and classify larger datasets than previous PQM
implementations on NISQ devices. Furthermore, reducing the number of gates speeds up the classification
and reduces the noise associated with deep quantum circuits. Thus, EP-PQM brings us closer to scalable ML
on an NISQ device.
INDEX TERMS: Efficient encoding | label encoding | quantum memory |
English article |
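The encoding trade-off can be checked classically with a short sketch: both encodings flag the same set of differing features, while the bit (and hence qubit) counts scale as z·a versus z·⌈log2 a⌉. The example patterns are arbitrary, and the feature-level comparison below is a classical illustration, not the EP-PQM circuit:

```python
import math

def one_hot(feature, a):
    """One-hot encode a feature value in {0, ..., a-1} as a bits."""
    return [1 if i == feature else 0 for i in range(a)]

def label_bits(feature, a):
    """Label-encode a feature value using ceil(log2 a) bits."""
    width = max(1, math.ceil(math.log2(a)))
    return [(feature >> k) & 1 for k in range(width)]

def feature_hamming(p, q, a, encode):
    """Feature-level Hamming distance: encode each feature, compare
    bitwise, and count features that differ in at least one bit."""
    mism = 0
    for fp, fq in zip(p, q):
        bits_p, bits_q = encode(fp, a), encode(fq, a)
        if any(x != y for x, y in zip(bits_p, bits_q)):
            mism += 1
    return mism

a = 4                              # distinct attributes per feature
p, q = [0, 3, 2, 1], [0, 1, 2, 2]  # two patterns with z = 4 features
d_onehot = feature_hamming(p, q, a, one_hot)
d_label = feature_hamming(p, q, a, label_bits)
qubits_onehot = len(p) * a                       # O(z a)
qubits_label = len(p) * math.ceil(math.log2(a))  # O(z log2 a)
```

Both distances agree, while the label-encoded register is half the size here (and the gap widens as a grows), which is the memory saving EP-PQM carries over to the quantum setting.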
9 |
Epsilon-Nets, Unitary Designs, and Random Quantum Circuits
(2022) Epsilon-nets and approximate unitary t-designs are
natural notions that capture properties of unitary operations
relevant for numerous applications in quantum information
and quantum computing. In this work we study quantitative
connections between these two notions. Specifically, we prove
that, for a d-dimensional Hilbert space, unitaries constituting
δ-approximate t-expanders form ε-nets for t ≈ d^(5/2)/ε and
δ = (ε^(3/2)/d)^(d^2). We also show that, for arbitrary t, ε-nets can be used
to construct δ-approximate unitary t-designs for δ ≈ εt, where
the notion of approximation is based on the diamond norm.
Finally, we prove that the degree of an exact unitary t-design
necessary to obtain an ε-net must grow at least as fast as 1/ε (for
fixed dimension) and not slower than d^2 (for fixed ε). This shows
near-optimality of our result connecting t-designs and ε-nets.
We apply our findings in the context of quantum computing.
First, we show that approximate t-designs can be generated
by shallow random circuits formed from a set of universal two-qudit gates in the parallel and sequential local architectures
considered in (Brandão et al., 2016). Importantly, our gate sets
need not be symmetric (i.e., contain gates together with
their inverses) or consist of gates with algebraic entries. Second,
we consider compilation of quantum gates and prove a nonconstructive Solovay-Kitaev theorem for general universal gate
sets. Our main technical contribution is a new construction of
efficient polynomial approximations to the Dirac delta in the
space of quantum channels, which can be of independent interest.
Index Terms: Unitary designs | epsilon nets | random quantum circuits | compilation of quantum gates | unitary channels |
English article |
10 |
A novel method of fish tail fin removal for mass estimation using computer vision
(2022) Fish mass estimation is extremely important for farmers to obtain fish biomass information, which can be used to
optimize daily feeding and control stocking densities and ultimately determine optimal harvest time. However,
fish tail fin mass does not contribute much to total body mass. Additionally, the tail fin of free-swimming fish is
deformed or bent for most of the time, resulting in feature measurement errors and further affecting mass
prediction accuracy by computer vision. To solve this problem, a novel unsupervised method for fish tail fin
removal was proposed to further develop mass prediction models based on ventral geometrical features without
tail fin. Firstly, fish tail fin was fully automatically removed using the Cartesian coordinate system and image
processing. Secondly, different features were extracted from the fish images with and without the tail fin.
Finally, the relationship between fish mass and these features was estimated by Partial Least Squares
(PLS). In this paper, tail fins were removed fully automatically, and the mass estimation model based on area
and squared area performed best on the test dataset, with a high coefficient of determination (R2) of 0.991,
a root mean square error (RMSE) of 7.10 g, a mean absolute error (MAE) of 5.36 g, and a maximum relative
error (MaxRE) of 8.46%. These findings indicate that a mass prediction model without the fish tail fin can more
accurately estimate fish mass than the model with tail fin, which might be extended to estimate biomass of free-
swimming fish underwater in aquaculture.
keywords: Tail fin removal | Automation | Fish | Mass estimation | Computer vision |
English article |
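The area-based mass model can be sketched with ordinary least squares in place of the paper's PLS. The ventral areas and the quadratic mass law below are assumptions for illustration (the masses are generated from the law, so the fit recovers it almost exactly); the error metrics are the same RMSE/MAE the abstract reports:

```python
import math

def fit_area_model(areas, masses):
    """Least-squares fit of mass = b1*area + b2*area^2 (no intercept),
    a simplified stand-in for the paper's PLS regression, solved via
    the 2x2 normal equations with Cramer's rule."""
    s22 = sum(a * a for a in areas)
    s23 = sum(a ** 3 for a in areas)
    s24 = sum(a ** 4 for a in areas)
    t1 = sum(a * m for a, m in zip(areas, masses))
    t2 = sum(a * a * m for a, m in zip(areas, masses))
    det = s22 * s24 - s23 * s23
    b1 = (t1 * s24 - t2 * s23) / det
    b2 = (s22 * t2 - s23 * t1) / det
    return b1, b2

def errors(b1, b2, areas, masses):
    """RMSE and MAE of the fitted model on the given data."""
    resid = [m - (b1 * a + b2 * a * a) for a, m in zip(areas, masses)]
    rmse = math.sqrt(sum(r * r for r in resid) / len(resid))
    mae = sum(abs(r) for r in resid) / len(resid)
    return rmse, mae

# Hypothetical ventral areas (cm^2); masses (g) follow an assumed
# quadratic law, so the fit should recover b1 = 0.02, b2 = 0.001
areas = [50.0, 80.0, 120.0, 160.0, 200.0]
masses = [0.02 * a + 0.001 * a * a for a in areas]
b1, b2 = fit_area_model(areas, masses)
rmse, mae = errors(b1, b2, areas, masses)
```

With real measurements the residuals would of course be nonzero, and comparing RMSE/MAE between models fitted on areas with and without the tail fin is how the paper justifies removing it.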