Download and view articles related to Neural Network :: Page 1
Download the best ISI articles with Persian translation


Search results - Neural Network

Number of articles found: 882
No. | Title | Type
1 DQRA: Deep Quantum Routing Agent for Entanglement Routing in Quantum Networks
(2022)
Quantum routing plays a key role in the development of the next-generation network system. In particular, an entangled routing path can be constructed with the help of quantum entanglement and swapping among particles (e.g., photons) associated with nodes in the network. From another side of computing, machine learning has achieved numerous breakthrough successes in various application domains, including networking. Despite its advantages and capabilities, machine learning is not as widely utilized in quantum networking as in other areas. To bridge this gap, in this article, we propose a novel quantum routing model for quantum networks that employs machine learning architectures to construct the routing path for the maximum number of demands (source–destination pairs) within a time window. Specifically, we present a deep reinforcement routing scheme called Deep Quantum Routing Agent (DQRA). In short, DQRA utilizes an empirically designed deep neural network that observes the current network states to accommodate the network’s demands, which are then connected by a qubit-preserved shortest path algorithm. The training process of DQRA is guided by a reward function that aims toward maximizing the number of accommodated requests in each routing window. Our experimental study shows that, on average, DQRA is able to maintain a rate of successfully routed requests above 80% in a qubit-limited grid network and approximately 60% in extreme conditions, i.e., when each node can serve as a repeater exactly once in a window. Furthermore, we show that the model complexity and the computational time of DQRA are polynomial in terms of the sizes of the quantum networks.
INDEX TERMS: Deep learning | deep reinforcement learning (DRL) | machine learning | next-generation network | quantum network routing | quantum networks.
English article
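The "qubit-preserved shortest path" step that connects the demands selected by the DNN is not spelled out in the abstract; the following is a minimal sketch of what such a routine could look like. The function name, the adjacency-dict graph representation, and the one-qubit-per-node cost model are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import deque

def qubit_preserving_shortest_path(adj, qubits, src, dst):
    """BFS shortest path that only traverses nodes with remaining qubits.

    adj:    dict node -> list of neighbour nodes
    qubits: dict node -> remaining qubit budget (decremented on success)
    Returns the path as a list of nodes, or None if no route exists.
    """
    if qubits[src] <= 0 or qubits[dst] <= 0:
        return None
    prev = {src: None}          # also serves as the visited set
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Reconstruct the path and consume one qubit per node on it.
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            path.reverse()
            for n in path:
                qubits[n] -= 1
            return path
        for nxt in adj[node]:
            if nxt not in prev and qubits[nxt] > 0:
                prev[nxt] = node
                queue.append(nxt)
    return None
```

Because each routed demand consumes qubits, a later demand through the same nodes can fail once their budgets are exhausted, which mirrors the qubit-limited setting the abstract evaluates.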
2 Moving towards intelligent telemedicine: Computer vision measurement of human movement
(2022)
Background: Telemedicine video consultations are rapidly increasing globally, accelerated by the COVID-19 pandemic. This presents opportunities to use computer vision technologies to augment clinician visual judgement because video cameras are so ubiquitous in personal devices and new techniques, such as DeepLabCut (DLC) can precisely measure human movement from smartphone videos. However, the accuracy of DLC to track human movements in videos obtained from laptop cameras, which have a much lower FPS, has never been investigated; this is a critical gap because patients use laptops for most telemedicine consultations. Objectives: To determine the validity and reliability of DLC applied to laptop videos to measure finger tapping, a validated test of human movement. Method: Sixteen adults completed finger-tapping tests at 0.5 Hz, 1 Hz, 2 Hz, 3 Hz and at maximal speed. Hand movements were recorded simultaneously by a laptop camera at 30 frames per second (FPS) and by Optotrak, a 3D motion analysis system at 250 FPS. Eight DLC neural network architectures (ResNet50, ResNet101, ResNet152, MobileNetV1, MobileNetV2, EfficientNetB0, EfficientNetB3, EfficientNetB6) were applied to the laptop video and extracted movement features were compared to the ground truth Optotrak motion tracking. Results: Over 96% (529/552) of DLC measures were within ±0.5 Hz of the Optotrak measures. At tapping frequencies >4 Hz, there was progressive decline in accuracy, attributed to motion blur associated with the laptop camera’s low FPS. Computer vision methods hold potential for moving us towards intelligent telemedicine by providing human movement analysis during consultations. However, further developments are required to accurately measure the fastest movements.
keywords: Telemedicine | DeepLabCut | Finger tapping | Motor control | Computer vision
English article
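The tapping frequencies compared above can be recovered from a tracked keypoint trajectory with a simple spectral estimate. A hedged sketch, assuming the input is a 1-D thumb-index fingertip distance trace sampled at the camera frame rate (this is not DLC's own output format):

```python
import numpy as np

def tapping_frequency(distance, fps):
    """Estimate the dominant tapping frequency (Hz) from a 1-D signal of
    fingertip distance sampled at `fps` frames per second."""
    x = np.asarray(distance, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
```

At 30 FPS the Nyquist limit is 15 Hz, so the frequencies studied in the paper are in range; the accuracy loss above 4 Hz reported in the abstract comes from motion blur in the frames, not from the spectral estimate itself.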
3 High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
(2022)
Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast some doubts about the potential equifinality presented by crocodile tooth marks and stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach (discarding artificially bootstrapping the original imbalanced samples) to show that microscopic features shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately discriminate both types of bone modifications. We also implement new deep-learning methods that objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99% of testing sets). The present study shows that there are precise ways of differentiating both taphonomic agents, and this invites taphonomists to apply them to controversial paleontological and archaeological specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery
English article
4 Deep convolutional neural networks-based Hardware–Software on-chip system for computer vision application
(2022)
Embedded vision systems are the best solutions for high-performance and lightning-fast inspection tasks. As everyday life evolves, it becomes almost imperative to harness artificial intelligence (AI) in vision applications that make these systems intelligent and able to make decisions close to or similar to humans. In this context, the AI’s integration on embedded systems poses many challenges, given that its performance depends on data volume and quality they assimilate to learn and improve. This returns to the energy consumption and cost constraints of the FPGA-SoC that have limited processing, memory, and communication capacity. Despite this, the AI algorithm implementation on embedded systems can drastically reduce energy consumption and processing times, while reducing the costs and risks associated with data transmission. Therefore, its efficiency and reliability always depend on the designed prototypes. Within this range, this work proposes two different designs for the Traffic Sign Recognition (TSR) application based on the convolutional neural network (CNN) model, followed by three implementations on the PYNQ-Z1. Firstly, we propose to implement the CNN-based TSR application on the PYNQ-Z1 processor. Considering its runtime result of around 3.55 s, there is room for improvement using programmable logic (PL) and processing system (PS) in a hybrid architecture. Therefore, we propose a streaming architecture, in which the CNN layers will be accelerated to provide a hardware accelerator for each layer where a direct memory access (DMA) interface is used. Thus, we noticed efficient power consumption, decreased hardware cost, and execution time optimization of 2.13 s, but there was still room for design optimization. Finally, we propose a second co-design, in which the CNN will be accelerated to be a single computation engine where a BRAM interface is used.
The implementation results prove that our proposed embedded TSR design achieves the best performance compared to the first two proposed architectures, in terms of execution time of about 0.03 s, computation roof of about 36.6 GFLOPS, and bandwidth roof of about 3.2 GByte/s.
keywords: CNN | FPGA | Acceleration | Co-design | PYNQ-Z1
English article
5 A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions
(2022)
Deep learning has been widely applied in various fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success in solving complex problems, it has been shown that deep neural networks are vulnerable to adversarial attacks, resulting in models that fail to perform their tasks properly, which limits the application of deep learning in security-critical areas. In this paper, we first review some of the classical and latest representative adversarial attacks based on a reasonable taxonomy of adversarial attacks. Then, we construct a knowledge graph based on the citation relationship relying on the software VOSviewer, and visualize and analyze the subject development in this field based on the information of 5923 articles from Scopus. In the end, possible research directions for the development of adversarial attacks are proposed based on the trends deduced by keyword detection analysis. All the data used for visualization are available at: https://github.com/NanyunLengmu/Adversarial-Attack-Visualization.
keywords: Deep learning | Adversarial attack | Black-box attack | White-box attack | Robustness | Visualization analysis
English article
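Among the classical white-box attacks a survey like this covers, the fast gradient sign method (FGSM) is the simplest: perturb the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a logistic-regression "network"; the model and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    For p = sigmoid(w.x + b) and binary cross-entropy loss, the gradient of
    the loss w.r.t. the input is (p - y) * w, so the attack shifts x by
    eps times the sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)
```

The perturbed input stays within an L-infinity ball of radius `eps` around the original, which is the standard FGSM threat model; the same one-step recipe generalizes to deep networks by backpropagating the loss to the input.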
6 ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
(2022)
Regular plumage condition assessment in laying hens is essential to monitor the hens’ welfare status and to detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive, manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of input image characteristics, the method was evaluated using images with and without depth information in resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of subjective human annotations, plumage condition predictions were compared to manual assessments of one observer and to matching annotations of two observers. Among all tested settings, performance metrics based on matching manual annotations of two observers were equal or better than the ones based on annotations of a single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of 98.02% for hen detection while 91.83% of the plumage condition scores were predicted correctly. Moreover, it was revealed that performance of hen detection and plumage condition assessment of ChickenNet was not generally enhanced by depth information. Increasing image resolutions improved plumage assessment up to a resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved using lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of plumage conditions in commercial laying hen farms.
keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation
English article
7 Monitoring crop phenology with street-level imagery using computer vision
(2022)
Street-level imagery holds a significant potential to scale-up in-situ data collection. This is enabled by combining the use of cheap high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework to collect and extract crop type and phenological information from street level imagery using computer vision. Monitoring crop phenology is critical to assess gross primary productivity and crop yield. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed collecting one picture per second resulting in a total of 400,000 geo-tagged pictures. At 220 specific parcel locations, detailed on-the-spot crop phenology observations were recorded for 17 crop types (including bare soil, green manure, and tulips): bare soil, carrots, green manure, grassland, grass seeds, maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley, winter cereals and winter wheat. Furthermore, the time span included specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops as well as post-harvest cultivation practices, e.g. green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition model, based on transfer learning with convolutional neural network (MobileNet). A hypertuning methodology was developed to obtain the best performing model among 160 models. This best model was applied on an independent inference set discriminating crop type with a Macro F1 score of 88.1% and main phenological stage at 86.9% at the parcel level. Potential and caveats of the approach along with practical considerations for implementation and improvement are discussed.
The proposed framework speeds up high quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
keywords: Phenology | Plant recognition | Agriculture | Computer vision | Deep learning | Remote sensing | CNN | BBCH | Crop type | Street view imagery | Survey | In-situ | Earth observation | Parcel
English article
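The Macro F1 score behind the 88.1% crop-type result weights every class equally, which matters when some of the 17 crop types (e.g. tulips) are rarer than others. A self-contained sketch of the metric, written from its standard definition rather than the paper's code:

```python
import numpy as np

def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: compute F1 per class, then average with equal
    weight, so rare classes count as much as common ones."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    scores = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(scores))
```

By contrast, a micro-averaged or plain-accuracy figure would be dominated by the most common crop types along the route.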
8 Disintegration testing augmented by computer vision technology
(2022)
Oral solid dosage forms, specifically immediate release tablets, are prevalent in the pharmaceutical industry. Disintegration testing is often the first step of commercialization and large-scale production of these dosage forms. Current disintegration testing in the pharmaceutical industry, according to United States Pharmacopeia (USP) chapter 〈701〉, only gives information about the duration of the tablet disintegration process. This information is subjective, variable, and prone to human error due to manual or physical data collection methods via the human eye or contact disks. To lessen the data integrity risk associated with this process, efforts have been made to automate the analysis of the disintegration process using digital lenses and other imaging technologies. This would provide a non-invasive method to quantitatively determine disintegration time through computer algorithms. The main challenges associated with developing such a system involve visualization of tablet pieces through cloudy and turbid liquid. The Computer Vision for Disintegration (CVD) system has been developed to be used along with traditional pharmaceutical disintegration testing devices to monitor tablet pieces and distinguish them from the surrounding liquid. The software written for CVD utilizes data captured by cameras or other lenses, then uses mobile SSD and CNN, with an OpenCV and FRCNN machine learning model, to analyze and interpret the data. This technology is capable of consistently identifying tablets with ≥ 99.6% accuracy. Not only is the data produced by CVD more reliable, but it opens the possibility of a deeper understanding of disintegration rates and mechanisms in addition to duration.
keywords: Disintegration | Oral Solid Dosage Forms | Disintegration Test | Machine Learning | Neural Networks
English article
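Once a detector reports a per-frame count of pixels (or regions) classified as tablet material, determining the disintegration time reduces to finding when that count permanently drops below a cutoff. A hedged sketch of that end-point logic; the 5% cutoff and the counting rule are illustrative assumptions, not the CVD system's actual criterion:

```python
import numpy as np

def disintegration_time(tablet_pixel_counts, fps, frac=0.05):
    """Return the time in seconds at which the tablet-pixel count falls
    below `frac` of its initial value and stays there for the rest of the
    recording, or None if it never does."""
    counts = np.asarray(tablet_pixel_counts, dtype=float)
    cutoff = frac * counts[0]
    below = counts < cutoff
    for i in range(len(below)):
        if below[i:].all():      # permanently below the cutoff from frame i
            return i / fps
    return None
```

Requiring the count to *stay* below the cutoff guards against transient occlusions by turbid liquid, which the abstract identifies as the main visualization challenge.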
9 High-Performance Reservoir Computing With Fluctuations in Linear Networks
(2022)
Reservoir computing has emerged as a powerful machine learning paradigm for harvesting nontrivial information processing out of disordered physical systems driven by sequential inputs. To this end, the system observables must become nonlinear functions of the input history. We show that encoding the input to quantum or classical fluctuations of a network of interacting harmonic oscillators can lead to a high performance comparable to that of a standard echo state network in several nonlinear benchmark tasks. This equivalence in performance holds even with a linear Hamiltonian and a readout linear in the system observables. Furthermore, we find that the performance of the network of harmonic oscillators in nonlinear tasks is robust to errors both in input and reservoir observables caused by external noise. For any reservoir computing system with a linear readout, the magnitude of trained weights can either amplify or suppress noise added to reservoir observables. We use this general result to explain why the oscillators are robust to noise and why having precise control over reservoir memory is important for noise robustness in general. Our results pave the way toward reservoir computing harnessing fluctuations in disordered linear systems.
Index Terms: Dynamical systems | machine learning | quantum mechanics | recurrent neural networks | reservoir computing | supervised learning.
English article
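The "standard echo state network" baseline mentioned above is straightforward to sketch: a fixed random recurrent reservoir with a tanh nonlinearity, driven by the input, with only a linear readout trained by ridge regression. All sizes, the spectral radius, and the benchmark target in the test are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def esn_states(u, n_res=50, rho=0.9, seed=0):
    """Drive a fixed random reservoir with input sequence u; return the
    matrix of reservoir states (one row per time step)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
    w_in = rng.standard_normal(n_res)
    bias = 0.2 * rng.standard_normal(n_res)          # bias breaks odd symmetry
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t + bias)       # nonlinear reservoir map
        states[t] = x
    return states

def train_readout(states, target, ridge=1e-6):
    """Fit the linear readout weights by ridge regression."""
    X = states
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)
```

Only the readout is trained; the reservoir weights stay fixed, which is the defining trait of reservoir computing and what makes the linear-network result in the abstract notable.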
10 Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks
(2022)
There is high demand for heavy equipment in civil infrastructure projects and their performance is a determinant of the successful delivery of site operations. Although manufacturers provide equipment performance handbooks, additional monitoring mechanisms are required to depart from measuring performance on the sole basis of unit cost for moved materials. Vision-based tracking and pose estimation can facilitate site performance monitoring. This research develops several regression-based deep neural networks (DNNs) to monitor equipment with the aim of ensuring safety, productivity, sustainability and quality of equipment operations. Annotated image libraries are used to train and test several backbone architectures. Experimental results reveal the precision of DNNs with depthwise separable convolutions and computational efficiency of DNNs with channel shuffle. This research provides scientific utility by developing a method for equipment pose estimation with the ability to detect anatomical angles and critical keypoints. The practical utility of this study is the provision of potentials to influence current practice of articulated machinery monitoring in projects.
keywords: Artificial intelligence (AI) | Cyber physical systems | Error evaluation metrics | Experimental design and testing | Full body pose estimation | Industry and construction 4.0 | Machine learning algorithms | Network backbone architectures
English article
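The "anatomical angles" such a pose-estimation model detects are typically derived in post-processing from three predicted keypoints (e.g. the boom-stick angle of an excavator). A minimal sketch of that geometry step; the keypoint choice is hypothetical and not tied to the paper's architectures:

```python
import numpy as np

def joint_angle(a, joint, b):
    """Angle in degrees at `joint` formed by keypoints a-joint-b,
    computed from 2-D keypoint coordinates via the dot product."""
    v1 = np.asarray(a, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(b, dtype=float) - np.asarray(joint, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Tracking such angles over time is what turns per-frame keypoint detections into the operational monitoring signal the abstract describes.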