Download and view articles related to neural networks :: Page 1

Search results - neural network

Number of articles found: 331
No. | Title | Type
1 ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision (2022)
Regular plumage condition assessment in laying hens is essential to monitor the hens’ welfare status and to detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive, manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of input image characteristics, the method was evaluated using images with and without depth information in resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of subjective human annotations, plumage condition predictions were compared to manual assessments of one observer and to matching annotations of two observers. Among all tested settings, performance metrics based on matching manual annotations of two observers were equal or better than the ones based on annotations of a single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of 98.02% for hen detection while 91.83% of the plumage condition scores were predicted correctly. Moreover, it was revealed that performance of hen detection and plumage condition assessment of ChickenNet was not generally enhanced by depth information. Increasing image resolutions improved plumage assessment up to a resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved using lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of plumage conditions in commercial laying hen farms.
keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation
English article
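The entry above describes a single network that both detects hens and assigns each detection a plumage condition score. A minimal sketch of one way to approximate that setup, assuming a torchvision Mask R-CNN backbone and treating each plumage score as its own detection class; the score scale and the 896 × 896 input are assumptions, not details from the paper:

```python
# Hedged sketch: approximating joint hen detection + plumage scoring by
# treating each plumage condition score as a separate detection class.
# The class count and score range are assumptions, not taken from the paper.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_PLUMAGE_SCORES = 4                 # assumed scoring scale (e.g. scores 1-4)
num_classes = 1 + NUM_PLUMAGE_SCORES   # background + one class per score

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head so each detected hen is assigned a plumage-score class.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask head to match the new class count (instance segmentation).
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 896, 896)]  # 896 x 896 was the best-performing resolution
    predictions = model(dummy)
print(predictions[0]["labels"], predictions[0]["scores"])
```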
2 Monitoring crop phenology with street-level imagery using computer vision (2022)
Street-level imagery holds a significant potential to scale-up in-situ data collection. This is enabled by combining the use of cheap high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework to collect and extract crop type and phenological information from street level imagery using computer vision. Monitoring crop phenology is critical to assess gross primary productivity and crop yield. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed collecting one picture per second resulting in a total of 400,000 geo-tagged pictures. At 220 specific parcel locations, detailed on the spot crop phenology observations were recorded for 17 crop types (including bare soil, green manure, and tulips): bare soil, carrots, green manure, grassland, grass seeds, maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley, winter cereals and winter wheat. Furthermore, the time span included specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops as well as post-harvest cultivation practices, e.g. green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition model, based on transfer learning with convolutional neural network (MobileNet). A hypertuning methodology was developed to obtain the best performing model among 160 models. This best model was applied on an independent inference set discriminating crop type with a Macro F1 score of 88.1% and main phenological stage at 86.9% at the parcel level. Potential and caveats of the approach along with practical considerations for implementation and improvement are discussed. The proposed framework speeds up high quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
keywords: Phenology | Plant recognition | Agriculture | Computer vision | Deep learning | Remote sensing | CNN | BBCH | Crop type | Street view imagery | Survey | In-situ | Earth observation | Parcel | In situ
English article
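The abstract above names the classification setup fairly precisely: TensorFlow, transfer learning, and a MobileNet backbone for 17 crop types. A minimal sketch of such a model, assuming a frozen ImageNet-pretrained MobileNet, an illustrative 224 × 224 input size, and placeholder hyperparameters rather than the paper's tuned values:

```python
# Hedged sketch of MobileNet transfer learning for crop-type classification.
# The 17-class count comes from the abstract; everything else is an assumption.
import tensorflow as tf

NUM_CROP_CLASSES = 17  # bare soil, carrots, green manure, ..., winter wheat

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CROP_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```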
3 Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks (2022)
There is high demand for heavy equipment in civil infrastructure projects and their performance is a determinant of the successful delivery of site operations. Although manufacturers provide equipment performance handbooks, additional monitoring mechanisms are required to depart from measuring performance on the sole basis of unit cost for moved materials. Vision-based tracking and pose estimation can facilitate site performance monitoring. This research develops several regression-based deep neural networks (DNNs) to monitor equipment with the aim of ensuring safety, productivity, sustainability and quality of equipment operations. Annotated image libraries are used to train and test several backbone architectures. Experimental results reveal the precision of DNNs with depthwise separable convolutions and the computational efficiency of DNNs with channel shuffle. This research provides scientific utility by developing a method for equipment pose estimation with the ability to detect anatomical angles and critical keypoints. The practical utility of this study lies in its potential to influence the current practice of articulated machinery monitoring in projects.
keywords: Artificial intelligence (AI) | Cyber physical systems | Error evaluation metrics | Experimental design and testing | Full body pose estimation | Industry and construction 4.0 | Machine learning algorithms | Network backbone architectures
English article
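The study above builds regression-based DNNs that output anatomical angles and keypoints, and highlights backbones with depthwise separable convolutions. A minimal sketch along those lines, assuming a MobileNetV2 feature extractor and illustrative keypoint and angle counts; none of these numbers come from the paper:

```python
# Hedged sketch of a regression-based DNN for equipment keypoints and angles:
# a backbone built from depthwise separable convolutions (MobileNetV2 here)
# feeding a plain regression head. Output sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

NUM_KEYPOINTS = 6   # assumed number of anatomical keypoints (x, y each)
NUM_ANGLES = 3      # assumed number of articulation angles

class EquipmentPoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.mobilenet_v2(weights="DEFAULT")
        self.features = backbone.features          # depthwise separable convs
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, NUM_KEYPOINTS * 2 + NUM_ANGLES)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        out = self.head(x)
        keypoints = out[:, : NUM_KEYPOINTS * 2].view(-1, NUM_KEYPOINTS, 2)
        angles = out[:, NUM_KEYPOINTS * 2:]
        return keypoints, angles

model = EquipmentPoseRegressor().eval()
with torch.no_grad():
    kps, angs = model(torch.rand(1, 3, 224, 224))
print(kps.shape, angs.shape)   # (1, 6, 2), (1, 3)
```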
4 Computer vision-based classification of concrete spall severity using metaheuristic-optimized Extreme Gradient Boosting Machine and Deep Convolutional Neural Network (2022)
This paper presents alternative solutions for classifying concrete spall severity based on computer vision approaches. Extreme Gradient Boosting Machine (XGBoost) and Deep Convolutional Neural Network (DCNN) are employed for categorizing image samples into two classes: shallow spall and deep spall. To delineate the properties of a concrete surface subject to spall, texture descriptors including local binary pattern, center symmetric local binary pattern, local ternary pattern, and attractive repulsive center symmetric local binary pattern (ARCS-LBP) are employed as feature extraction methods. In addition, the prediction performance of XGBoost is enhanced by the Aquila optimizer metaheuristic. Meanwhile, DCNN is capable of performing image classification directly without the need for texture descriptors. Experimental results with a dataset containing real-world concrete surface images and 20 independent model evaluations point out that the XGBoost optimized by the Aquila metaheuristic and used with ARCS-LBP has achieved an outstanding classification performance with a classification accuracy rate of roughly 99%.
keywords: Concrete spall severity | Gradient boosting machine | Local binary pattern | Metaheuristic | Deep learning
English article
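The pipeline above pairs local binary pattern texture descriptors with an XGBoost classifier. A minimal sketch of that idea, assuming a standard uniform LBP in place of the paper's ARCS-LBP, default XGBoost hyperparameters rather than Aquila-optimised ones, and synthetic stand-in data:

```python
# Hedged sketch: LBP texture features + XGBoost for shallow vs. deep spall.
# Synthetic data stands in for the paper's concrete surface images.
import numpy as np
from skimage.feature import local_binary_pattern
from xgboost import XGBClassifier

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram as a fixed-length texture descriptor."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Synthetic stand-in patches (label 0 = shallow spall, 1 = deep spall).
rng = np.random.default_rng(0)
images = (rng.random((200, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 2, size=200)

X = np.array([lbp_histogram(img) for img in images])
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```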
5 Hybrid Quantum-Classical Neural Network for Cloud-Supported In-Vehicle Cyberattack Detection (2022)
A classical computer works with ones and zeros, whereas a quantum computer uses ones, zeros, and superpositions of ones and zeros, which enables quantum computers to perform a vast number of calculations simultaneously compared to classical computers. In a cloud-supported cyber-physical system environment, running a machine learning application in quantum computers is often difficult, due to the existing limitations of the current quantum devices. However, with the combination of quantum-classical neural networks (NN), complex and high-dimensional features can be extracted by the classical NN to a reduced but more informative feature space to be processed by the existing quantum computers. In this study, we developed a hybrid quantum-classical NN to detect an amplitude shift cyberattack on an in-vehicle controller area network dataset. We showed that by using the hybrid quantum-classical NN, it is possible to achieve an attack detection accuracy of 94%, which is higher than a long short-term memory NN (88%) or quantum NN alone (62%).
Index Terms: Sensor applications | Clouds | Cyberattack | Quantum computing | Quantum neural network (NN)
English article
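The abstract above describes a classical network that compresses features into a small space processed by a quantum circuit. A minimal sketch of a hybrid quantum-classical classifier, assuming PennyLane's simulated qubits with PyTorch; the qubit count, layer sizes, and the 16-feature CAN input are illustrative assumptions, not the authors' configuration:

```python
# Hedged sketch of a hybrid quantum-classical classifier: a small classical
# network compresses features, and a simulated variational quantum circuit
# produces the attack/no-attack output.
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits, 3)}   # two entangling layers
quantum_layer = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)

model = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(),        # classical feature compression
    nn.Linear(8, n_qubits), nn.Tanh(),  # map to qubit rotation angles
    quantum_layer,                      # variational quantum circuit
    nn.Linear(n_qubits, 1), nn.Sigmoid(),  # attack probability
)

x = torch.rand(5, 16)                   # five dummy CAN-frame feature vectors
print(model(x).shape)                   # torch.Size([5, 1])
```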
6 Detection of loosening angle for mark bolted joints with computer vision and geometric imaging (2022)
Mark bars drawn on the surfaces of bolted joints are widely used to indicate the severity of loosening. The automatic and accurate determination of the loosening angle of mark bolted joints is a challenging issue that has not been investigated previously. This determination will release workers from heavy workloads. This study proposes an automated method for detecting the loosening angle of mark bolted joints by integrating computer vision and geometric imaging theory. This novel method contained three integrated modules. The first module used a Keypoint Regional Convolutional Neural Network (Keypoint-RCNN)-based deep learning algorithm to detect five keypoints and locate the region of interest (RoI). The second module recognised the mark ellipse and mark points using the transformation of the five detected keypoints and several image processing technologies such as dilation and expansion algorithms, a skeleton algorithm, and the least square method. In the last module, according to the geometric imaging theory, we derived a precise expression to calculate the loosening angle using the information for the mark points and mark ellipse. In lab-scale and real-scale environments, the average relative detection error was only 3.5%. This indicated that our method could accurately calculate the loosening angles of marked bolted joints even when the images were captured from an arbitrary view. In the future, some segmentation algorithms based on deep learning, distortion correction, accurate angle and length measuring instruments, and advanced transformation methods can be applied to further improve detection accuracy.
keywords: Mark bolted joint | Loosening detection | Keypoint-RCNN | Image processing | Geometric imaging
English article
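The first module above is a Keypoint-RCNN that detects the joint and its five keypoints. A minimal sketch of that detection stage, assuming torchvision's Keypoint R-CNN with one object class and five keypoints; the geometric derivation of the loosening angle from the mark ellipse is not reproduced here:

```python
# Hedged sketch of the keypoint-detection module: a Keypoint R-CNN configured
# for one "marked bolt" class with five keypoints. Training data loading and
# the geometric angle calculation are omitted.
import torch
import torchvision

# 2 classes = background + marked bolted joint; 5 keypoints per detection.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(
    weights=None, num_classes=2, num_keypoints=5)

model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 512, 512)])[0]
# Each detection yields 5 keypoints as (x, y, visibility).
print(prediction["keypoints"].shape)
```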
7 PortiK: A computer vision based solution for real-time automatic solid waste characterization – Application to an aluminium stream (2022)
In Material Recovery Facilities (MRFs), recyclable municipal solid waste is turned into a precious commodity. However, effective recycling relies on effective waste sorting, which is still a challenge to sustainable development of our society. To help these operations improve and optimise their processes, this paper describes PortiK, a solution for automatic waste analysis. Based on image analysis and object recognition, it allows for continuous, real-time, non-intrusive measurements of mass composition of waste streams. The end-to-end solution is detailed with all the steps necessary for the system to operate, from hardware specifications and data collection to supervisory information obtained by deep learning and statistical analysis. The overall system was tested and validated in an operational environment in a material recovery facility. PortiK monitored an aluminium can stream to estimate its purity. Aluminium cans were detected with 91.2% precision and 90.3% recall, resulting in an underestimation of the number of cans by less than 1%. Regarding contaminants (i.e. other types of waste), precision and recall were 80.2% and 78.4%, respectively, giving a 2.2% underestimation. Based on five sample analyses where pieces of waste were counted and weighed per batch, the detection results were used to estimate purity and its confidence level. The estimation error was calculated to be within ±7% after 5 minutes of monitoring and ±5% after 8 hours. These results have demonstrated the feasibility and the relevance of the proposed solution for online quality control of an aluminium can stream.
keywords: Material recovery facilities | MRF | Solid waste characterization | Deep learning | Deep neural network | Computer vision
English article
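The entry above turns per-class detection counts into a purity estimate. A minimal sketch of that kind of calculation, reusing the precision and recall figures quoted in the abstract but assuming made-up average piece weights and detection counts; the paper's own statistical model is not reproduced:

```python
# Hedged sketch of a count-based purity estimate: correct raw detection counts
# with the detector's measured precision/recall, then convert counts to mass
# shares using assumed average piece weights.
def corrected_count(detections: int, precision: float, recall: float) -> float:
    """Estimate the true object count from detector output:
    true positives ~= detections * precision; true objects ~= TP / recall."""
    return detections * precision / recall

def purity_by_mass(can_detections, other_detections,
                   can_weight_g=15.0, other_weight_g=40.0):
    # Precision/recall values are the ones reported in the abstract.
    cans = corrected_count(can_detections, precision=0.912, recall=0.903)
    others = corrected_count(other_detections, precision=0.802, recall=0.784)
    can_mass = cans * can_weight_g
    return can_mass / (can_mass + others * other_weight_g)

# Example: hypothetical detection counts after a few minutes of monitoring.
print(f"estimated purity: {purity_by_mass(1800, 60):.1%}")
```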
8 A combined real-time intelligent fire detection and forecasting approach through cameras based on computer vision method (2022)
Fire is one of the most common hazards in the process industry. Until today, most fire alarms have had very limited functionality. Normally, only a simple alarm is triggered without any specific information about the fire circumstances provided, not to mention fire forecasting. In this paper, a combined real-time intelligent fire detection and forecasting approach through cameras is discussed with extracting and predicting fire development characteristics. Three parameters (fire spread position, fire spread speed and flame width) are used to characterize the fire development. Two neural networks are established, i.e., the Region-Convolutional Neural Network (RCNN) for fire characteristic extraction through fire detection and the Residual Network (ResNet) for fire forecasting. By designing 12 sets of cable fire experiments with different fire developing conditions, the accuracies of fire parameter extraction and forecasting are evaluated. Results show that the mean relative errors (MRE) of extraction by RCNN for the three parameters are around 4–13%, 6–20% and 11–37%, respectively. Meanwhile, the MREs of forecasting by ResNet for the three parameters are around 4–13%, 11–33% and 12–48%, respectively. It confirms that the proposed approach can provide a feasible solution for quantifying fire development and improving industrial fire safety, e.g., forecasting the fire development trends, assessing the severity of accidents, estimating the accident losses in real time and guiding the fire fighting and rescue tactics.
keywords: Industrial fire safety | Fire detection | Fire forecasting | Fire analysis | Artificial intelligence
English article
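The evaluation above reports mean relative errors (MRE) for the extracted and forecast fire parameters. A minimal sketch of that metric on made-up fire-spread positions; the sample values are illustrative, not experimental data:

```python
# Hedged sketch of the mean relative error (MRE) metric quoted in the abstract.
import numpy as np

def mean_relative_error(predicted, actual):
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean(np.abs(predicted - actual) / np.abs(actual))

# Hypothetical fire-spread positions (m) extracted by the detection network
# versus manually measured ground truth.
extracted = [0.52, 0.98, 1.51, 2.10]
measured  = [0.50, 1.00, 1.45, 2.00]
print(f"MRE: {mean_relative_error(extracted, measured):.1%}")
```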
9 A computer vision framework using Convolutional Neural Networks for airport-airside surveillance (2022)
Modern airports often have large and complex airside environments featuring multiple runways, with changing configurations, numerous taxiways for effective circulation of flights and tens, if not hundreds, of gates. With inherent uncertainties in gate push-back and taxiway routing, efficient surveillance and management of airport-airside operations is a highly challenging task for air traffic controllers. An increase in air traffic may lead to gate delays, taxiway congestion, taxiway incursions as well as significant increase in the workload of air traffic controllers. With the advent of Digital Towers, airports are increasingly being equipped with surveillance camera systems. This paper proposes a novel computer vision framework for airport-airside surveillance, using cameras to monitor ground movement objects for safety enhancement and operational efficiency improvement. The framework adopts Convolutional Neural Networks and camera calibration techniques for aircraft detection and tracking, push-back prediction, and maneuvering monitoring. The proposed framework is applied on video camera feeds from Houston Airport, USA (for maneuvering monitoring) and Obihiro Airport, Japan (for push-back prediction). The object detection models of the proposed framework achieve up to 73.36% average precision on Houston airport and 87.3% on Obihiro airport. The framework estimates aircraft speed and distance with low error (up to 6 meters), and aircraft push-back is predicted with an average error of 3 min from the time an aircraft arrives with the error-rate reducing until the aircraft’s actual push-back event.
keywords: Air traffic control | Convolutional Neural Network | Computer vision
English article
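The framework above combines CNN detection with camera calibration to estimate aircraft speed and distance. A minimal sketch of the calibration side, assuming a planar homography built from four invented image/ground reference points; none of the coordinates relate to the actual airport cameras:

```python
# Hedged sketch: map image pixels to apron/taxiway coordinates with a
# homography, then estimate ground speed from positions in consecutive frames.
import numpy as np
import cv2

# Four image points (pixels) and their known ground positions (metres).
image_pts = np.float32([[120, 700], [1800, 690], [1650, 260], [300, 255]])
ground_pts = np.float32([[0, 0], [180, 0], [180, 120], [0, 120]])
H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def to_ground(pixel_xy):
    pt = np.float32([[pixel_xy]])                 # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]  # (x, y) in metres

def ground_speed(pixel_a, pixel_b, dt_seconds):
    a, b = to_ground(pixel_a), to_ground(pixel_b)
    return float(np.linalg.norm(b - a)) / dt_seconds

# Aircraft centroid tracked across two frames one second apart.
print(f"speed: {ground_speed((900, 500), (930, 498), 1.0):.1f} m/s")
```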
10 In-situ optimization of thermoset composite additive manufacturing via deep learning and computer vision (2022)
With the advent of extrusion additive manufacturing (AM), fabrication of high-performance thermoset composites without the need of tooling has become a reality. However, finding an optimal set of printing parameters for these thermoset composites during extrusion requires tedious experimentation as composite ink properties can vary significantly with respect to environmental parameters such as temperature and relative humidity. Addressing this challenge, this study presents a novel optimization framework that utilizes computer vision and deep learning (DL) to optimize the calibration and printing processes of thermoset composite AM. Unlike traditional DL models where printing parameters are determined prior to printing, our proposed framework dynamically and autonomously adjusts the printing parameters during extrusion. A novel DL integrated extrusion AM system is developed to determine the optimal printing parameters including print speed, road width, and layer height for a given composite ink. This closed-loop system consists of a computer communicating with an extrusion AM system, a camera to perform in-situ imaging, and several high-accuracy convolutional neural networks (CNNs) selecting the ideal process parameters for composite AM. The results show that our proposed process optimization framework was able to autonomously determine these parameters for a carbon fiber composite ink. Consequently, specimens with complex geometries could be fabricated without visible defects and with maximum fiber alignment, thus enhancing the mechanical performance of the specimen’s composite material. Moreover, our proposed framework minimizes the labor-intensive procedure required to additively manufacture thermoset composites by optimizing the extrusion process without any user intervention.
keywords: Deep learning | Computer vision | Extrusion | Composite 3D printing
English article
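The system above closes the loop between in-situ imaging and the printing parameters. A minimal sketch of that control idea, assuming a placeholder CNN quality classifier and a simple speed-adjustment rule; the authors' trained CNNs and actual parameter set are not reproduced:

```python
# Hedged sketch of a closed-loop parameter adjustment: a small stand-in CNN
# judges the extruded road from a camera frame and the controller nudges print
# speed accordingly. The labels and adjustment rule are illustrative only.
import torch
import torch.nn as nn

quality_net = nn.Sequential(              # stand-in for the trained CNNs
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 3),                     # classes: under-, good, over-extruded
)

def adjust_print_speed(frame: torch.Tensor, speed_mm_s: float) -> float:
    """Return an updated print speed based on the classified road quality."""
    with torch.no_grad():
        label = quality_net(frame.unsqueeze(0)).argmax(dim=1).item()
    if label == 0:        # under-extruded (road too narrow): slow down
        return speed_mm_s * 0.95
    if label == 2:        # over-extruded (road too wide): speed up
        return speed_mm_s * 1.05
    return speed_mm_s     # road width acceptable: keep the current speed

speed = 20.0
for _ in range(5):                        # five camera frames from the printer
    frame = torch.rand(3, 128, 128)       # placeholder in-situ image
    speed = adjust_print_speed(frame, speed)
print(f"adjusted print speed: {speed:.2f} mm/s")
```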