Download and view articles related to Computational cost :: Page 1
Download immediately after payment


Search results - Computational cost

Number of articles found: 72
Row | Title | Type
1 Efficient Construction of a Control Modular Adder on a Carry-Lookahead Adder Using Relative-Phase Toffoli Gates
Efficient construction of a control modular adder on a carry-lookahead adder using relative-phase Toffoli gates - 2022
Control modular addition is a core arithmetic function, and we must consider the computational cost for actual quantum computers to realize efficient implementation. To achieve a low computational cost in a control modular adder, we focus on minimizing KQ (where K is the number of logical qubits required by the algorithm and Q is the number of elementary gate steps), defined by the product of the number of qubits and the depth of the circuit. In this article, we construct an efficient control modular adder with small KQ by using relative-phase Toffoli gates in two major types of quantum computers: fault-tolerant quantum computers (FTQ) on the logical layer and noisy intermediate-scale quantum computers (NISQ). We give a more efficient construction than Van Meter and Itoh's, based on a carry-lookahead adder. In FTQ, T gates incur heavy cost due to distillation, which fabricates ancilla qubits for running T gates with high accuracy but consumes many specially prepared ancilla qubits and a great deal of time. Thus, we must reduce the number of T gates. We propose a new control modular adder that uses only 20% of the number of T gates of the original. Moreover, when we take distillation into consideration, we find that we minimize KQT (the product of the number of qubits and T-depth) by running (n/√log n) T gates simultaneously. In NISQ, CNOT gates are the major error source. We propose a new control modular adder that uses only 35% of the number of CNOT gates of the original. Moreover, we show that the KQCX (the product of the number of qubits and CNOT-depth) of our circuit is 38% of the original. Thus, we realize an efficient control modular adder, improving prospects for the efficient execution of arithmetic in quantum computers.
INDEX TERMS: Carry-lookahead adder | control modular adder | fault-tolerant quantum computers (FTQ) | noisy intermediate-scale quantum computers (NISQ) | Shor’s algorithm.
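As an aside for readers new to the cost metrics: KQ, KQT, and KQCX are simple products of qubit count and (full, T-, or CNOT-) depth. A minimal Python sketch, with circuit sizes invented purely for illustration and not taken from the paper:

```python
def kq(num_qubits: int, depth: int) -> int:
    """KQ cost metric: number of qubits times circuit depth.
    With T-depth it becomes KQT; with CNOT-depth, KQCX."""
    return num_qubits * depth

# Hypothetical circuit sizes (NOT from the paper): a baseline adder
# versus one whose CNOT-depth is cut to 38% at equal qubit count.
baseline = kq(num_qubits=100, depth=1000)
improved = kq(num_qubits=100, depth=380)
ratio = improved / baseline   # mirrors the 38% KQCX figure quoted above
```

Minimizing such a product, rather than depth or qubit count alone, is what drives the trade-offs described in the abstract.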
English article
2 Efficient Quantum State Preparation for the Cauchy Distribution Based on Piecewise Arithmetic
Efficient quantum state preparation for the Cauchy distribution based on piecewise arithmetic - 2022
The benefits of the quantum Monte Carlo algorithm heavily rely on the efficiency of the superposition state preparation. So far, most reported Monte Carlo algorithms use the Grover-Rudolph state preparation method, which is suitable for efficiently integrable distribution functions. Consequently, most reported works are based on log-concave distributions, such as normal distributions. However, non-log-concave distributions still have many uses, such as in financial modeling. Recently, a new method was proposed that does not need integration to calculate the rotation angle for state preparation. However, performing efficient state preparation remains difficult because computing the rotation angle to high precision and low error is costly. Many methods of quantum state preparation use polynomial Taylor approximations to reduce the computational cost. However, Taylor approximations do not work well with heavy-tailed distribution functions that are not bounded exponentially. In this article, we present a method of efficient state preparation for heavy-tailed distribution functions. Specifically, we present a quantum gate-level algorithm to prepare quantum superposition states based on the Cauchy distribution, which is a non-log-concave heavy-tailed distribution. Our procedure relies on a piecewise polynomial function instead of a single Taylor approximation to reduce computational cost and increase accuracy. The Cauchy distribution is an even function, so the proposed piecewise polynomial contains only a quadratic term and a constant term, the simplest approximation of an even function. Numerical analysis shows that the required number of subdomains increases linearly as the approximation error decreases exponentially. Furthermore, the computational complexity of the proposed algorithm is independent of the number of subdomains in the quantum implementation of the piecewise function, due to quantum parallelism.
An example of the proposed algorithm based on a simulation conducted in Qiskit is presented to demonstrate its capability to perform state preparation based on the Cauchy distribution.
INDEX TERMS: Algorithms | gate operations | quantum computing.
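To illustrate the piecewise idea in classical terms: on each subdomain, the Cauchy density is approximated with only a constant and a quadratic term. The endpoint-interpolation fit and subdomain edges below are illustrative assumptions of this sketch, not the paper's gate-level procedure:

```python
import math

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    """Density of the Cauchy distribution with location x0 and scale gamma."""
    return 1.0 / (math.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

def piecewise_quadratic(edges):
    """On each subdomain [a, b] (non-negative half-axis, by symmetry),
    fit f(x) ~ c0 + c2*x^2 through the endpoint values -- only a
    constant and a quadratic term, matching the even symmetry."""
    pieces = []
    for a, b in zip(edges[:-1], edges[1:]):
        fa, fb = cauchy_pdf(a), cauchy_pdf(b)
        c2 = (fb - fa) / (b * b - a * a)
        c0 = fa - c2 * a * a
        pieces.append((a, b, c0, c2))
    return pieces

def evaluate(pieces, x):
    """Evaluate the piecewise approximation at |x| (even symmetry)."""
    x = abs(x)
    for a, b, c0, c2 in pieces:
        if a <= x <= b:
            return c0 + c2 * x * x
    raise ValueError("x outside the approximated domain")
```

For example, `pieces = piecewise_quadratic([0.0, 0.5, 1.0, 2.0, 4.0])` followed by `evaluate(pieces, x)`; narrowing the subdomains reduces the interpolation error.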
English article
3 Tuning of grayscale computer vision systems
Tuning of grayscale computer vision systems - 2022
Computer vision systems perform according to their design and parameter settings. In systems that use grayscale conversion, the conversion of RGB images to a grayscale format influences performance in terms of both result quality and computational cost. Appropriate setting of the weights for the weighted-means grayscale conversion, co-estimated with the other parameters used in the computer vision system, helps approach the desired performance of a system or its subsystem at the cost of a negligible or no increase in time complexity. However, the parameter space of the system and subsystem, as extended by the grayscale conversion weights, can contain substandard settings. Such settings make the system and subsystem strongly sensitive to small changes in the distribution of data in the color space of the processed images. We developed a methodology for Tuning of Grayscale computer Vision systems (TGV) that exploits the advantages of weighted-means grayscale conversion while compensating for its disadvantages. We show that TGV tuning improves computer vision system performance by up to 16% in the tested case studies. The methodology provides a universally applicable solution that merges the utility of a fine-tuned computer vision system with robustness of performance against variable input data.
keywords: Computer vision | Parameter optimization | Performance evaluation | WECIA graph | Weighted means grayscale conversion
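For concreteness, weighted-means grayscale conversion is a per-pixel dot product of the RGB channels with a weight triple. A minimal sketch; the default weights below are the common ITU-R BT.601 luma coefficients, used here only as a placeholder for the co-estimated weights a TGV-style procedure would tune:

```python
def to_grayscale(rgb_image, weights=(0.299, 0.587, 0.114)):
    """Weighted-means grayscale conversion of an image given as a nested
    list of (R, G, B) tuples. TGV-style tuning would co-optimize these
    weights with the remaining parameters of the vision pipeline."""
    wr, wg, wb = weights
    return [[wr * r + wg * g + wb * b for (r, g, b) in row]
            for row in rgb_image]
```

Because the conversion stays a single weighted sum per pixel, changing the weights adds essentially no time complexity, which is the property the abstract relies on.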
English article
4 Quantum Dilated Convolutional Neural Networks
Quantum dilated convolutional neural networks - 2022
In recent years, with rapid progress in the development of quantum technologies, quantum machine learning has attracted a lot of interest. In particular, a family of hybrid quantum-classical neural networks, consisting of classical and quantum elements, has been extensively explored with the aim of improving the performance of classical neural networks. In this paper, we propose a novel hybrid quantum-classical algorithm called quantum dilated convolutional neural networks (QDCNNs). Our method extends the concept of dilated convolution, which has been widely applied in modern deep learning algorithms, to the context of hybrid neural networks. The proposed QDCNNs are able to capture larger context during the quantum convolution process while reducing the computational cost. We perform empirical experiments on the MNIST and Fashion-MNIST datasets for the task of image recognition and demonstrate that QDCNN models generally achieve better performance in terms of both accuracy and computational efficiency compared to existing quantum convolutional neural networks (QCNNs).
INDEX TERMS: Quantum-classical neural networks | quantum dilated convolution | parameterized quantum circuits.
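The QDCNN runs its convolution on a parameterized quantum circuit, but the dilation mechanism it borrows is classical and easy to state: insert gaps between kernel taps so the receptive field grows without extra parameters. A 1-D sketch (my illustration, not the paper's circuit):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D dilated convolution (cross-correlation form).
    With dilation d, kernel taps are applied d samples apart, so a
    k-tap kernel covers (k - 1) * d + 1 input samples at no extra cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span + 1)]
```

For example, `dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2)` pairs samples two apart and returns `[4, 6, 8]`: the same two-tap kernel sees a wider context than with dilation 1.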
English article
5 Efficient biometric-based identity management on the Blockchain for smart industrial applications
Efficient biometric-based identity management on the Blockchain for smart industrial applications - 2021
In this work, we propose a new Blockchain-based identity management system for smart industry. First, we describe an efficient biometric-based anonymous credential scheme, which supports selective disclosure, suspension/thaw, and revocation of credentials/entities. Our system provides non-transferability through a freshly computed hidden biometric attribute, which is generated using a secure fuzzy extractor during each authentication. This mechanism, combined with off-chain storage, guarantees GDPR compliance, which is required for protecting users' data. We define a blinded (Brands) DLRep scheme to provide multi-show unlinkability, a feature lacking in Brands' credential-based systems. For larger organizations, we re-design the system by replacing the Merkle tree with an accumulator to improve scalability. The new system enables auditing by adapting the standard Industrial IoT (IIoT) identity management lifecycle to Blockchain. Finally, we show that the new proposal outperforms BASS, the most recent blockchain-based anonymous credential scheme designed for smart industry. The user-side computational cost of our scheme (where the user may be a weak IoT device) is one-eighth that of BASS. Thus, our system is more suitable for IIoT. © 2020 Elsevier B.V. All rights reserved.
Keywords: Identity management | Smart industry | Blockchain | Non-transferability | Biometrics | DLRep | Multi-show unlinkability | Selective disclosure | Accumulators
English article
6 EBAPy: A Python framework for analyzing the factors that have an influence in the performance of EEG-based applications
EBAPy: A Python framework for analyzing the factors that influence the performance of EEG-based applications - 2021
EBAPy is an easy-to-use Python framework intended to help in the development of EEG-based applications. It allows an in-depth analysis of the factors that influence a system's performance and computational cost, including recording time, the decomposition level of the Discrete Wavelet Transform, and the classification algorithm. The ease of use and flexibility of the presented framework have reduced development time and enabled the evaluation of new ideas in developing biometric systems using EEGs. Furthermore, other applications that classify EEG signals can use EBAPy because of the generality of its functions. These new applications will impact human-computer interaction in the near future.
Code metadata: Current code version: v1.1 | Permanent link to code/repository: https://github.com/SoftwareImpacts/SIMPAC-2021-2 | Permanent link to reproducible capsule: https://codeocean.com/capsule/4497139/tree/v1 | Legal code license: MIT | Code versioning system: git | Software code languages, tools, and services used: Python | Compilation requirements, operating environments & dependencies: (not stated) | Link to developer documentation/manual: (not stated) | Support email for questions: dustin.carrion@gmail.com
Keywords: EEG-based applications | Recording time | Discrete wavelet transform
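One of the factors above, the DWT decomposition level, directly controls how much data reaches the classifier. A single-level Haar DWT sketch in plain Python (the abstract does not show EBAPy's own implementation, so this is only an assumed illustration of the transform it analyzes):

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    (approximation, detail) coefficient lists at half the input length."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def decompose(signal, level):
    """Repeated decomposition: every extra level halves the remaining
    approximation, one of the cost/accuracy trade-offs EBAPy analyzes."""
    details = []
    for _ in range(level):
        signal, d = haar_dwt(signal)
        details.append(d)
    return signal, details
```

Deeper decomposition levels shrink the feature vector, trading temporal detail for lower computational cost.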
English article
7 Graph Variational Auto-Encoder for Deriving EEG-based Graph Embedding
Graph variational auto-encoder for deriving EEG-based graph embedding - 2021
Graph embedding is an effective method for deriving low-dimensional representations of graph data. The power of graph deep learning methods to characterize electroencephalogram (EEG) graph embedding is still in question. We designed a novel graph variational auto-encoder (GVAE) method to extract nodal features of brain functional connections. A new decoder model for the GVAE network is proposed, which considers the node neighborhood of the reconstructed adjacency matrix. The GVAE is applied and tested on 3 biometric databases which contain 64 to 9 channels' EEG recordings. For all datasets, promising results with more than 95% accuracy and considerably low computational cost are achieved compared to state-of-the-art user identification methods. The proposed GVAE is robust to a limited number of nodes and stable to users' task performance. Moreover, we developed a traditional variational auto-encoder to demonstrate that more accurate features can be obtained when observing EEG-based brain connectivity from a graph perspective.
Email addresses: tina.behrouzi@mail.utoronto.ca (Tina Behrouzi), dimitris@comm.utoronto.ca (Dimitrios Hatzinakos). Preprint submitted to Pattern Recognition, July 20, 2021.
Keywords: Biometrics | Functional connectivity | Electroencephalogram (EEG) | Graph Variational Auto Encoder (GVAE) | Graph deep learning
English article
8 Optimal carbon storage reservoir management through deep reinforcement learning
Optimal carbon storage reservoir management through deep reinforcement learning - 2020
Model-based optimization plays a central role in energy system design and management. The complexity and high dimensionality of many process-level models, especially those used for geosystem energy exploration and utilization, often lead to formidable computational costs when the dimension of the decision space is also large. This work adopts elements of recently advanced deep learning techniques to solve a sequential decision-making problem in applied geosystem management. Specifically, a deep reinforcement learning framework was formed for optimal multiperiod planning, in which a deep Q-learning network (DQN) agent was trained to maximize rewards by learning from high-dimensional inputs and from exploitation of its past experiences. To expedite computation, deep multitask learning was used to approximate high-dimensional, multistate transition functions. Both DQN and deep multitask learning are pattern based. As a demonstration, the framework was applied to optimal carbon sequestration reservoir planning using two different types of management strategies: monitoring only and brine extraction. Both strategies are designed to mitigate potential risks due to pressure buildup. Results show that the DQN agent can identify the optimal policies to maximize the reward under given risk and cost constraints. Experiments also show that the knowledge the agent gained from interacting with one environment is largely preserved when deploying the same agent in other similar environments.
Keywords: Reinforcement learning | Multistage decision-making | Deep autoregressive model | Deep Q network | Surrogate modeling | Markov decision process | Geological carbon sequestration
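The update target a DQN agent learns toward is the standard Q-learning target. A tabular sketch (the paper replaces the table with a deep network and a multitask surrogate model, which this toy omits):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    q is a dict mapping state -> {action: value}."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]
```

Repeating this update over experiences drawn from the reservoir simulator is what lets the agent trade off reward against risk and cost constraints.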
English article
9 On the scaling law of JKR contact model for coarse-grained cohesive particles
On the scaling law of the JKR contact model for coarse-grained cohesive particles - 2020
The computational cost of discrete element method (DEM) simulations of particulate processes with fine and cohesive particles is enormous. To overcome this limitation, various coarse-grain DEM models have been developed that use a smaller number of larger-sized particles. Although the computational cost is significantly reduced, the accuracy of the simulations depends on the underlying scaling law. We propose a scaling of the Johnson-Kendall-Roberts (JKR) contact model for adhesive viscoelastic particles. A scaling law using a single Bond number or Cohesion number criterion is insufficient to keep the motion of the coarse-grained particles the same as that of the original particles. The scaling law in this work is developed based on mass, momentum, and energy conservation, and achieves good consistency between the kinematic characteristics of the coarse-grained and original particles. The simulated effective coefficients of restitution were compared over a range of particle-wall impact velocities and validated against experimental data.
Keywords: Discrete element method | Coarse-grain model | JKR contact model | Cohesive materials | Scaling-up
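To make the Bond-number criterion concrete: under the common JKR pull-off definition (an assumption of this sketch, not the paper's full conservation-based scaling law), the granular Bond number compares the adhesive pull-off force to the particle's weight:

```python
import math

def jkr_pulloff_force(surface_energy, radius):
    """JKR pull-off (adhesion) force of a sphere: F = 1.5 * pi * Gamma * R."""
    return 1.5 * math.pi * surface_energy * radius

def bond_number(surface_energy, radius, density, g=9.81):
    """Granular Bond number: adhesive pull-off force over particle weight."""
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3
    return jkr_pulloff_force(surface_energy, radius) / (mass * g)

# Coarse-graining by 2x at fixed surface energy and density cuts the Bond
# number by 4x (Bo ~ 1/R^2) -- one illustration of why a single Bo
# criterion cannot by itself preserve the original particle motion.
ratio = bond_number(0.05, 2e-6, 2500.0) / bond_number(0.05, 1e-6, 2500.0)
```

The material values above are arbitrary placeholders; only the scaling with radius matters for the point being made.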
English article
10 Combining gaze and AI planning for online human intention recognition
Combining gaze and AI planning for online human intention recognition - 2020
Intention recognition is the process of using behavioural cues, such as deliberative actions, eye gaze, and gestures, to infer an agent's goals or future behaviour. In artificial intelligence, one approach to intention recognition is to use a model of possible behaviour to rate intentions as more likely if they are a better 'fit' to the actions observed so far. In this paper, we draw on literature linking gaze and visual attention, and we propose a novel model of online human intention recognition that combines gaze and model-based AI planning to build probability distributions over a set of possible intentions. In human-behavioural experiments (n = 40) involving a multi-player board game, we demonstrate that adding gaze-based priors to model-based intention recognition improved the accuracy of intention recognition by 22% (p < 0.05) and determined those intentions about 90 seconds earlier (p < 0.05), at no additional computational cost. We also demonstrate that, when evaluated in the presence of semi-rational or deceptive gaze behaviours, the proposed model is significantly more accurate (9% improvement, p < 0.05) than model-based or gaze-only approaches. Our results indicate that the proposed model could be used to design novel human-agent interactions in cases where we are unsure whether a person is honest, deceitful, or semi-rational.
Keywords: Intention recognition | Gaze | Planning
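A simplified reading of the fusion step in the model above: treat gaze as a prior over intentions and the planner's fit-to-observed-actions as a likelihood, then normalize their product. The goal names and numbers below are hypothetical:

```python
def combine(prior, likelihood):
    """Posterior over intentions: normalized product of a gaze-based
    prior and a plan-based likelihood (Bayes' rule up to a constant)."""
    unnorm = {g: prior[g] * likelihood[g] for g in prior}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

gaze_prior = {"goal_A": 0.6, "goal_B": 0.3, "goal_C": 0.1}        # hypothetical
plan_likelihood = {"goal_A": 0.2, "goal_B": 0.7, "goal_C": 0.1}   # hypothetical
posterior = combine(gaze_prior, plan_likelihood)
```

In this example the plan-based evidence overrides the gaze prior, leaving goal_B most probable; the multiplication itself adds no meaningful computational cost, consistent with the abstract's claim.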
English article