Download and view articles related to Deep learning :: Page 1

Search results - Deep learning

Number of articles found: 551
No. Title Type
1 DQRA: Deep Quantum Routing Agent for Entanglement Routing in Quantum Networks
Year: 2022
Quantum routing plays a key role in the development of the next-generation network system. In particular, an entangled routing path can be constructed with the help of quantum entanglement and swapping among particles (e.g., photons) associated with nodes in the network. From another side of computing, machine learning has achieved numerous breakthrough successes in various application domains, including networking. Despite its advantages and capabilities, machine learning is not as much utilized in quantum networking as in other areas. To bridge this gap, in this article, we propose a novel quantum routing model for quantum networks that employs machine learning architectures to construct the routing path for the maximum number of demands (source–destination pairs) within a time window. Specifically, we present a deep reinforcement routing scheme that is called Deep Quantum Routing Agent (DQRA). In short, DQRA utilizes an empirically designed deep neural network that observes the current network states to accommodate the network’s demands, which are then connected by a qubit-preserved shortest path algorithm. The training process of DQRA is guided by a reward function that aims toward maximizing the number of accommodated requests in each routing window. Our experimental study shows that, on average, DQRA is able to maintain a rate of successfully routed requests at above 80% in a qubit-limited grid network and approximately 60% in extreme conditions, i.e., each node can be a repeater exactly once in a window. Furthermore, we show that the model complexity and the computational time of DQRA are polynomial in terms of the sizes of the quantum networks.
INDEX TERMS: Deep learning | deep reinforcement learning (DRL) | machine learning | next-generation network | quantum network routing | quantum networks.
English article
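The qubit-preserved shortest-path step that DQRA uses to connect accommodated demands can be illustrated as a budget-aware breadth-first search. This is a minimal sketch, not the paper's implementation: the adjacency-list representation and the per-node qubit budget are assumptions for illustration.

```python
from collections import deque

def qubit_preserved_shortest_path(adj, qubits, src, dst):
    """BFS shortest path that only visits nodes with remaining qubit budget.

    adj: dict node -> list of neighbour nodes
    qubits: dict node -> remaining qubits (hypothetical per-node budget)
    Returns the path as a list of nodes, or None if no feasible path exists.
    """
    if qubits.get(src, 0) <= 0 or qubits.get(dst, 0) <= 0:
        return None
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            path.reverse()
            for n in path:  # consume one qubit per node on the chosen route
                qubits[n] -= 1
            return path
        for v in adj[u]:
            if v not in prev and qubits.get(v, 0) > 0:
                prev[v] = u
                queue.append(v)
    return None

# tiny 2x2 grid: a-b, a-c, b-d, c-d, one qubit per node
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
qubits = {"a": 1, "b": 1, "c": 1, "d": 1}
p1 = qubit_preserved_shortest_path(adj, qubits, "a", "d")  # a route exists
p2 = qubit_preserved_shortest_path(adj, qubits, "a", "d")  # budget exhausted
```

Once the budgets on a route are consumed, later demands in the same window must route around it or be rejected, which is the effect the reward function in the abstract is optimizing against.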
2 High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
Year: 2022
Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast some doubts about the potential equifinality presented by crocodile tooth marks and stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach (discarding artificially bootstrapping the original imbalanced samples) to show that microscopic features shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately discriminate both types of bone modifications. We also implement new deep-learning methods that objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99% of testing sets). The present study shows that there are precise ways of differentiating both taphonomic agents, and this invites taphonomists to apply them to controversial paleontological and archaeological specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery
English article
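The abstract describes discriminating marks from microscopic features shaped as categorical variables. A very reduced sketch of that idea is a nearest-neighbour classifier over categorical feature tuples using Hamming distance; the feature names and values below are hypothetical, and the paper itself uses far more elaborate machine-learning and deep-learning models.

```python
def hamming(a, b):
    """Number of differing categorical attributes between two mark descriptions."""
    return sum(x != y for x, y in zip(a, b))

def predict_1nn(train, labels, sample):
    """Label of the training mark whose categorical features are closest to sample."""
    best = min(range(len(train)), key=lambda i: hamming(train[i], sample))
    return labels[best]

# hypothetical features: (groove shape, microstriations, shoulder effect)
train = [("V", "yes", "present"), ("V", "yes", "absent"),
         ("U", "no", "present"), ("U", "no", "absent")]
labels = ["cut mark", "cut mark", "tooth mark", "tooth mark"]
pred = predict_1nn(train, labels, ("V", "no", "absent"))
```

Because the features are categorical, no artificial resampling of the imbalanced classes is needed for this distance computation, which echoes the abstract's point about discarding bootstrapping.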
3 AI-based computer vision using deep learning in 6G wireless networks
Year: 2022
Modern businesses benefit significantly from advances in computer vision technology, one of the important sectors of artificial intelligence and computer science research. Advanced computer vision issues like image processing, object recognition, and biometric authentication can benefit from using deep learning methods. As smart devices and facilities advance rapidly, current networks such as 4G and the forthcoming 5G networks may not adapt to the rapidly increasing demand. Classification of images, object classification, and facial recognition software are some of the most difficult computer vision problems that can be solved using deep learning methods. As a new paradigm for 6G network design and analysis, artificial intelligence (AI) has recently been used. Therefore, in this paper, the 6G wireless network is used along with deep learning to solve the above challenges by introducing a new methodology named Optimizing Computer Vision with AI-enabled technology (OCV-AI). This research uses deep learning efficiency algorithms (DL-EA) for computer vision to address the issues mentioned and improve the system’s outcome. Therefore, deep learning 6G frameworks (DL-6G) are proposed in this paper to support pattern recognition and intelligent management systems and provide a methodology planned to be provisioned automatically. Regarding advanced analytics, 6G networks can summarize the significant areas for future research and potential solutions, including image enhancement, machine vision, and access control.
keywords: 6G | Wireless communication | AI | Machine learning | Deep learning | Mobile communication
English article
4 A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions
Year: 2022
Deep learning has been widely applied in various fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success in solving complex problems, it has been shown that deep neural networks are vulnerable to adversarial attacks, resulting in models that fail to perform their tasks properly, which limits the application of deep learning in security-critical areas. In this paper, we first review some of the classical and latest representative adversarial attacks based on a reasonable taxonomy of adversarial attacks. Then, we construct a knowledge graph based on the citation relationship relying on the software VOSviewer, and visualize and analyze the subject development in this field based on the information of 5923 articles from Scopus. In the end, possible research directions for the development of adversarial attacks are proposed based on the trends deduced by keyword detection analysis. All the data used for visualization are available at: https://github.com/NanyunLengmu/Adversarial-Attack-Visualization
keywords: Deep learning | Adversarial attack | Black-box attack | White-box attack | Robustness | Visualization analysis
English article
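Many of the white-box attacks such surveys cover perturb the input along the sign of the loss gradient (the FGSM family). A minimal sketch on a toy logistic model makes the mechanism concrete; the weights and input below are illustrative, not taken from any surveyed work.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM-style step on a toy logistic model.

    For binary cross-entropy loss, the gradient w.r.t. the input is
    (p - y) * w, so each feature is nudged by eps in the sign of that
    gradient to increase the loss.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0          # illustrative model parameters
x, y = [1.0, 0.5], 1             # clean input, true label 1
x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

Even this one-step perturbation measurably drops the model's confidence in the true class, which is the failure mode the survey's taxonomy organizes.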
5 ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
Year: 2022
Regular plumage condition assessment in laying hens is essential to monitor the hens’ welfare status and to detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive, manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of input image characteristics, the method was evaluated using images with and without depth information in resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of subjective human annotations, plumage condition predictions were compared to manual assessments of one observer and to matching annotations of two observers. Among all tested settings, performance metrics based on matching manual annotations of two observers were equal or better than the ones based on annotations of a single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of 98.02% for hen detection while 91.83% of the plumage condition scores were predicted correctly. Moreover, it was revealed that performance of hen detection and plumage condition assessment of ChickenNet was not generally enhanced by depth information. Increasing image resolutions improved plumage assessment up to a resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved using lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of plumage conditions in commercial laying hen farms.
keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation
English article
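The mAP figure reported for hen detection rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU sketch (the boxes are hypothetical) shows the core computation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))   # partial overlap
perfect = iou((0, 0, 2, 2), (0, 0, 2, 2))   # identical boxes
```

In mAP evaluation, a detection counts as a true positive only when its IoU with an unmatched ground-truth hen exceeds a threshold (commonly 0.5); precision-recall is then accumulated over confidence-ranked detections.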
6 Monitoring crop phenology with street-level imagery using computer vision
Year: 2022
Street-level imagery holds a significant potential to scale up in-situ data collection. This is enabled by combining the use of cheap high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework to collect and extract crop type and phenological information from street-level imagery using computer vision. Monitoring crop phenology is critical to assess gross primary productivity and crop yield. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed collecting one picture per second resulting in a total of 400,000 geo-tagged pictures. At 220 specific parcel locations, detailed on-the-spot crop phenology observations were recorded for 17 crop types: bare soil, carrots, green manure, grassland, grass seeds, maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley, winter cereals and winter wheat. Furthermore, the time span included specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops as well as post-harvest cultivation practices, e.g. green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition model, based on transfer learning with a convolutional neural network (MobileNet). A hypertuning methodology was developed to obtain the best performing model among 160 models. This best model was applied on an independent inference set, discriminating crop type with a Macro F1 score of 88.1% and main phenological stage at 86.9% at the parcel level. Potential and caveats of the approach along with practical considerations for implementation and improvement are discussed.
The proposed framework speeds up high quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
keywords: Phenology | Plant recognition | Agriculture | Computer vision | Deep learning | Remote sensing | CNN | BBCH | Crop type | Street view imagery | Survey | In-situ | Earth observation | Parcel
English article
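The Macro F1 score reported above is the unweighted mean of per-class F1 scores, so rare crop types count as much as common ones. A minimal sketch (the labels below are illustrative):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["maize", "potato", "maize", "tulips"]
y_pred = ["maize", "maize", "maize", "tulips"]
score = macro_f1(y_true, y_pred, ["maize", "potato", "tulips"])
```

Here maize gets F1 = 0.8, potato 0.0, tulips 1.0, so the macro average is 0.6; a frequency-weighted (micro) average would have hidden the missed potato class.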
7 Barriers to computer vision applications in pig production facilities
Year: 2022
Surveillance and analysis of behavior can be used to detect and characterize health disruption and welfare status in animals. The accurate identification of changes in behavior is a time-consuming task for caretakers in large, commercial pig production systems and requires strong observational skills and a working knowledge of animal husbandry and livestock systems operations. In recent years, many studies have explored the use of various technologies and sensors to assist animal caretakers in monitoring animal activity and behavior. Of these technologies, computer vision offers the most consistent promise as an effective aid in animal care, and yet, a systematic review of the state of application of this technology indicates that there are many significant barriers to its widespread adoption and successful utilization in commercial production system settings. One of the most important of these barriers is the recognition of the sources of errors from objective behavior labeling that are not measurable by current algorithm performance evaluations. Additionally, there is a significant disconnect between the remarkable advances in computer vision research interests and the integration of advances and practical needs being instituted by scientific experts working in commercial animal production partnerships. This lack of synergy between experts in the computer vision and animal health and production sectors means that existing and emerging datasets tend to have a very particular focus that cannot be easily pivoted or extended for use in other contexts, resulting in a generality versus particularity conundrum. 
The goal of this paper is to help catalogue and consider the major obstacles and impediments to the effective use of computer vision-associated technologies in the swine industry by offering a systematic analysis of computer vision applications specific to commercial pig management, reviewing and summarizing the following: (i) the purpose and associated challenges of computer vision applications in pig behavior analysis; (ii) the use of computer vision algorithms and datasets for pig husbandry and management tasks; (iii) the process of dataset construction for computer vision algorithm development. In this appraisal, we outline common difficulties and challenges associated with each of these themes and suggest possible solutions. Finally, we highlight the opportunities for future research in computer vision applications that can build upon existing knowledge of pig management by extending our capability to interpret pig behaviors and thereby overcome the current barriers to applying computer vision technologies to pig production systems. In conclusion, we believe productive collaboration between animal-based scientists and computer-based scientists may accelerate animal behavior studies and lead computer vision technologies to commercial applications in pig production facilities.
keywords: Computer vision | Precision livestock farming | Behavior | Deep learning | Dataset | Swine
English article
8 Computer vision-based illumination-robust and multi-point simultaneous structural displacement measuring method
Year: 2022
Computer vision-based techniques for structural displacement measurement are rapidly becoming popular in civil structural engineering. However, most existing computer vision-based displacement measurement methods require man-made targets for object matching or tracking, and their measurement accuracy is usually seriously sensitive to ambient illumination variations. A computer vision-based, illumination-robust, multi-point simultaneous measuring method is proposed for structural displacement measurement. The method consists of two parts: the first segments the beam body from its background, which is carried out by a fully convolutional network (FCN) and a conditional random field (CRF); the second is digital image correlation (DIC)-based displacement measurement. A simply supported beam was built in the laboratory, and the accuracy and illumination robustness were verified through three groups of elaborately designed experiments. Owing to the exploitation of FCN and CRF for pixel-wise segmentation, numerous locations along the segmented beam body can be chosen and measured simultaneously. The method is verified to be illumination-robust, since its displacement measurements show the smallest fluctuations under illumination variations. The proposed method does not require any man-made targets attached to the structure, but because of the exploitation of DIC in displacement measurement, the regions centered on the measuring points need to have texture features.
keywords: Structural health monitoring | Displacement measurement | Computer vision | Deep learning | Object segmentation | Digital image correlation
English article
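The DIC step matches a textured reference patch against candidate locations by maximizing zero-normalized cross-correlation (ZNCC). The integer-pixel sketch below illustrates the principle on hypothetical images; real DIC additionally performs sub-pixel interpolation.

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches (flat lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def patch(img, r, c, size):
    """Flattened size x size patch with top-left corner at (r, c)."""
    return [img[r + i][c + j] for i in range(size) for j in range(size)]

def find_displacement(ref, cur, r, c, size, search):
    """(dr, dc) within +/- search pixels maximizing ZNCC with the reference patch."""
    template = patch(ref, r, c, size)
    return max(((dr, dc) for dr in range(-search, search + 1)
                for dc in range(-search, search + 1)),
               key=lambda d: zncc(template, patch(cur, r + d[0], c + d[1], size)))

# textured reference image and a copy shifted one pixel downwards
ref = [[3, 1, 4, 1, 5, 9],
       [2, 6, 5, 3, 5, 8],
       [9, 7, 9, 3, 2, 3],
       [8, 4, 6, 2, 6, 4],
       [3, 3, 8, 3, 2, 7],
       [9, 5, 0, 2, 8, 8]]
cur = [ref[5]] + ref[:5]
shift = find_displacement(ref, cur, 2, 2, 2, 1)
```

This also shows why the abstract notes that measuring regions need texture: on a featureless patch the ZNCC surface is flat and the maximum is meaningless.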
9 Computer vision-based classification of concrete spall severity using metaheuristic-optimized Extreme Gradient Boosting Machine and Deep Convolutional Neural Network
Year: 2022
This paper presents alternative solutions for classifying concrete spall severity based on computer vision approaches. Extreme Gradient Boosting Machine (XGBoost) and Deep Convolutional Neural Network (DCNN) are employed for categorizing image samples into two classes: shallow spall and deep spall. To delineate the properties of a concrete surface subject to spall, texture descriptors including local binary pattern, center symmetric local binary pattern, local ternary pattern, and attractive repulsive center symmetric local binary pattern (ARCS-LBP) are employed as feature extraction methods. In addition, the prediction performance of XGBoost is enhanced by the Aquila optimizer metaheuristic. Meanwhile, DCNN is capable of performing image classification directly without the need for texture descriptors. Experimental results with a dataset containing real-world concrete surface images and 20 independent model evaluations point out that the XGBoost optimized by the Aquila metaheuristic and used with ARCS-LBP has achieved an outstanding classification performance with a classification accuracy rate of roughly 99%.
keywords: Concrete spall severity | Gradient boosting machine | Local binary pattern | Metaheuristic | Deep learning
English article
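The simplest of the texture descriptors listed, the basic local binary pattern, encodes each pixel by thresholding its 8 neighbours against the centre; the histogram of codes is the feature vector fed to the classifier. This is a minimal sketch of the plain LBP only; the paper's ARCS-LBP variant is more involved.

```python
def lbp_pixel(img, r, c):
    """8-neighbour local binary pattern code of pixel (r, c)."""
    center = img[r][c]
    # neighbours clockwise starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:   # threshold against the centre pixel
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels: the texture feature."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_pixel(img, r, c)] += 1
    return hist

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]  # featureless patch
code = lbp_pixel(flat, 1, 1)              # every neighbour >= centre -> all bits set
hist = lbp_histogram(flat)
```

Such histograms, one per image region, would then be the tabular input that a gradient-boosting classifier like XGBoost consumes.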
10 Detection of loosening angle for mark bolted joints with computer vision and geometric imaging
Year: 2022
Mark bars drawn on the surfaces of bolted joints are widely used to indicate the severity of loosening. The automatic and accurate determination of the loosening angle of mark bolted joints is a challenging issue that has not been investigated previously; automating this determination would relieve workers of heavy workloads. This study proposes an automated method for detecting the loosening angle of mark bolted joints by integrating computer vision and geometric imaging theory. This novel method contains three integrated modules. The first module uses a Keypoint Regional Convolutional Neural Network (Keypoint-RCNN)-based deep learning algorithm to detect five keypoints and locate the region of interest (RoI). The second module recognises the mark ellipse and mark points using the transformation of the five detected keypoints and several image processing technologies such as dilation and expansion algorithms, a skeleton algorithm, and the least squares method. In the last module, according to geometric imaging theory, we derive a precise expression to calculate the loosening angle using the information for the mark points and mark ellipse. In lab-scale and real-scale environments, the average relative detection error was only 3.5%. This indicates that our method can accurately calculate the loosening angles of mark bolted joints even when the images are captured from an arbitrary view. In the future, segmentation algorithms based on deep learning, distortion correction, accurate angle and length measuring instruments, and advanced transformation methods can be applied to further improve detection accuracy.
keywords: Mark bolted joint | Loosening detection | Keypoint-RCNN | Image processing | Geometric imaging
English article
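The geometric core of the last module, recovering a rotation angle from mark points around the bolt centre, can be sketched for the fronto-parallel case. This is a simplification for illustration: the paper's expression additionally corrects for arbitrary camera views using the recognised mark ellipse.

```python
import math

def loosening_angle(center, p0, p1):
    """Angle in degrees swept from mark point p0 to mark point p1
    around the bolt centre, assuming a fronto-parallel view
    (a simplifying assumption, not the paper's full derivation)."""
    a0 = math.atan2(p0[1] - center[1], p0[0] - center[0])
    a1 = math.atan2(p1[1] - center[1], p1[0] - center[0])
    return math.degrees(a1 - a0) % 360.0

# mark point rotated a quarter turn counter-clockwise around centre (0, 0)
angle = loosening_angle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Under a tilted view the circular mark head projects to an ellipse, which is why the second module's ellipse fit is needed before an expression like this can be applied.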