No. | Title | Type |
---|---|---|
1 |
ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision - 2022. Regular plumage condition assessment in laying hens is essential to monitor the hens' welfare status and to
detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive,
manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects
hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of
input image characteristics, the method was evaluated using images with and without depth information in
resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of
subjective human annotations, plumage condition predictions were compared to manual assessments of one
observer and to matching annotations of two observers. Among all tested settings, performance metrics based on
matching manual annotations of two observers were equal to or better than those based on annotations of a
single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of
98.02% for hen detection while 91.83% of the plumage condition scores were predicted correctly. Moreover, it
was revealed that performance of hen detection and plumage condition assessment of ChickenNet was not
generally enhanced by depth information. Increasing image resolutions improved plumage assessment up to a
resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved using
lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of
plumage conditions in commercial laying hen farms. keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation |
English article |
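Hen detection in the entry above is scored with mean average precision (mAP), which counts a detection as correct when its predicted box overlaps a ground-truth box strongly enough. A minimal sketch of the underlying intersection-over-union measure (not the authors' code; the boxes are hypothetical (x1, y1, x2, y2) tuples):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted hen box half-overlapping a ground-truth box
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

In mAP, each prediction is matched to ground truth at an IoU threshold (e.g., 0.5) and precision is averaged over recall levels and classes.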
2 |
Monitoring crop phenology with street-level imagery using computer vision
Monitoring crop phenology with street-level imagery using computer vision - 2022. Street-level imagery holds a significant potential to scale-up in-situ data collection. This is enabled by combining
the use of cheap high-quality cameras with recent advances in deep learning compute solutions to derive relevant
thematic information. We present a framework to collect and extract crop type and phenological information
from street level imagery using computer vision. Monitoring crop phenology is critical to assess gross primary
productivity and crop yield. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed
200-km route was surveyed collecting one picture per second resulting in a total of 400,000 geo-tagged pictures.
At 220 specific parcel locations, detailed on-the-spot crop phenology observations were recorded for 17 crop
types (including bare soil, green manure, and tulips): bare soil, carrots, green manure, grassland, grass seeds,
maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley,
winter cereals and winter wheat. Furthermore, the time span included specific pre-emergence parcel stages, such
as differently cultivated bare soil for spring and summer crops as well as post-harvest cultivation practices, e.g.
green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition
model, based on transfer learning with convolutional neural network (MobileNet). A hypertuning methodology
was developed to obtain the best performing model among 160 models. This best model was applied on an
independent inference set discriminating crop type with a Macro F1 score of 88.1% and main phenological stage
at 86.9% at the parcel level. Potential and caveats of the approach along with practical considerations for
implementation and improvement are discussed. The proposed framework speeds up high quality in-situ data
collection and suggests avenues for massive data collection via automated classification using computer vision. keywords: Phenology | Plant recognition | Agriculture | Computer vision | Deep learning | Remote sensing | CNN | BBCH | Crop type | Street view imagery | Survey | In-situ | Earth observation | Parcel |
English article |
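The crop-type result above is reported as a Macro F1 score, i.e., per-class F1 averaged with equal weight, so rare crop types count as much as common ones. A minimal sketch of the metric, using made-up crop labels rather than the study's data:

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro F1: per-class F1 scores averaged with equal class weight."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    scores = []
    for c in set(y_true) | set(y_pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

print(macro_f1(["potato", "maize", "maize", "tulips"],
               ["potato", "maize", "tulips", "tulips"]))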
3 |
Detection of loosening angle for mark bolted joints with computer vision and geometric imaging
Detection of the loosening angle of mark bolted joints with computer vision and geometric imaging - 2022. Mark bars drawn on the surfaces of bolted joints are widely used to indicate the severity of loosening. The
automatic and accurate determination of the loosening angle of mark bolted joints is a challenging issue that has
not been investigated previously. This determination will release workers from heavy workloads. This study
proposes an automated method for detecting the loosening angle of mark bolted joints by integrating computer
vision and geometric imaging theory. This novel method contained three integrated modules. The first module
used a Keypoint Regional Convolutional Neural Network (Keypoint-RCNN)-based deep learning algorithm to
detect five keypoints and locate the region of interest (RoI). The second module recognised the mark ellipse and
mark points using the transformation of the five detected keypoints and several image processing technologies
such as dilation and expansion algorithms, a skeleton algorithm, and the least square method. In the last module,
according to the geometric imaging theory, we derived a precise expression to calculate the loosening angle using
the information for the mark points and mark ellipse. In lab-scale and real-scale environments, the average
relative detection error was only 3.5%. This indicated that our method could accurately calculate the loosening
angles of marked bolted joints even when the images were captured from an arbitrary view. In the future, some
segmentation algorithms based on deep learning, distortion correction, accurate angle and length measuring
instruments, and advanced transformation methods can be applied to further improve detection accuracy. keywords: Mark bolted joint | Loosening detection | Keypoint-RCNN | Image processing | Geometric imaging |
English article |
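The paper above derives an exact expression for the loosening angle from the mark ellipse and mark points under an arbitrary camera view; that formula is not given in the abstract. As a simplified sketch, in the fronto-parallel case the problem reduces to the angle swept about the bolt centre (all names and coordinates here are illustrative):

```python
from math import atan2, degrees

def loosening_angle(center, ref_point, cur_point):
    """Planar sketch: angle swept from the reference mark to the current
    mark about the bolt centre, in degrees (counter-clockwise positive)."""
    a0 = atan2(ref_point[1] - center[1], ref_point[0] - center[0])
    a1 = atan2(cur_point[1] - center[1], cur_point[0] - center[0])
    return (degrees(a1 - a0) + 360.0) % 360.0

# Mark rotated from the 3 o'clock position to the 12 o'clock position
print(loosening_angle((0, 0), (10, 0), (0, 10)))  # → 90.0
```

The paper's contribution is precisely the harder case: recovering this angle when perspective turns the bolt circle into an ellipse.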
4 |
A combined real-time intelligent fire detection and forecasting approach through cameras based on computer vision method
A combined real-time intelligent fire detection and forecasting approach through cameras based on a computer vision method - 2022. Fire is one of the most common hazards in the process industry. Until today, most fire alarms have had very
limited functionality. Normally, only a simple alarm is triggered without any specific information about the fire
circumstances provided, not to mention fire forecasting. In this paper, a combined real-time intelligent fire
detection and forecasting approach through cameras is discussed with extracting and predicting fire development
characteristics. Three parameters (fire spread position, fire spread speed and flame width) are used to characterize the fire development. Two neural networks are established, i.e., the Region-Convolutional Neural Network
(RCNN) for fire characteristic extraction through fire detection and the Residual Network (ResNet) for fire
forecasting. By designing 12 sets of cable fire experiments with different fire development conditions, the accuracies of fire parameter extraction and forecasting are evaluated. Results show that the mean relative error (MRE) of extraction by RCNN for the three parameters is around 4–13%, 6–20% and 11–37%, respectively. Meanwhile, the MRE of forecasting by ResNet for the three parameters is around 4–13%, 11–33% and 12–48%, respectively. This confirms that the proposed approach can provide a feasible solution for quantifying fire development and improving industrial fire safety, e.g., forecasting fire development trends, assessing the severity of accidents, estimating accident losses in real time and guiding fire fighting and rescue tactics. keywords: Industrial fire safety | Fire detection | Fire forecasting | Fire analysis | Artificial intelligence |
English article |
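The percentage figures above are mean relative errors (MRE). A minimal sketch of that metric with invented flame-width values (not the experiment's data):

```python
def mean_relative_error(true_vals, predicted_vals):
    """MRE: average of |prediction - truth| / |truth| over all samples."""
    errors = [abs(p - t) / abs(t) for t, p in zip(true_vals, predicted_vals)]
    return sum(errors) / len(errors)

# Flame width in cm: ground truth vs. values extracted from video frames
truth = [10.0, 20.0, 40.0]
extracted = [11.0, 19.0, 44.0]
print(mean_relative_error(truth, extracted))  # ≈ 0.083, i.e. 8.3%
```

Relative error is scale-free, which is why it suits quantities like spread speed and flame width that vary widely across experiments.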
5 |
A computer vision framework using Convolutional Neural Networks for airport-airside surveillance
A computer vision framework using Convolutional Neural Networks for airport-airside surveillance - 2022. Modern airports often have large and complex airside environments featuring multiple runways, with changing configurations, numerous taxiways for effective circulation of flights and tens, if not hundreds, of gates. With inherent uncertainties in gate push-back and taxiway routing, efficient surveillance and management of airport-airside operations is a highly challenging task for air traffic controllers. An increase in air traffic may lead to gate delays, taxiway congestion, taxiway incursions as well as significant increase in the workload of air traffic controllers. With the advent of Digital Towers, airports are increasingly being equipped with surveillance camera systems. This paper proposes a novel computer vision framework for airport-airside surveillance, using cameras to monitor ground movement objects for safety enhancement and operational efficiency improvement. The framework adopts Convolutional Neural Networks and camera calibration techniques for aircraft detection and tracking, push-back prediction, and maneuvering monitoring. The proposed framework is applied to video camera feeds from Houston Airport, USA (for maneuvering monitoring) and Obihiro Airport, Japan (for push-back prediction). The object detection models of the proposed framework achieve up to 73.36% average precision on Houston airport and 87.3% on Obihiro airport. The framework estimates aircraft speed and distance with low error (up to 6 meters), and aircraft push-back is predicted with an average error of 3 min from the time an aircraft arrives, with the error rate reducing until the aircraft's actual push-back event.
keywords: Air traffic control | Convolutional Neural Network | Computer vision |
English article |
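The abstract above mentions distance and speed estimation via camera calibration but gives no formulas. A textbook pinhole-camera sketch of the idea, assuming a known aircraft wingspan and a calibrated focal length (all numbers and function names are illustrative, not the paper's method):

```python
def pinhole_distance(focal_px, real_size_m, pixel_size_px):
    """Pinhole-camera range estimate: distance = f * real_size / image_size."""
    return focal_px * real_size_m / pixel_size_px

def ground_speed(dist_m_t0, dist_m_t1, dt_s):
    """Speed along the line of sight from two successive range estimates."""
    return abs(dist_m_t1 - dist_m_t0) / dt_s

# A 36 m-wingspan aircraft spanning 1200 px, focal length 20000 px
d0 = pinhole_distance(20000, 36.0, 1200)  # ≈ 600 m away
d1 = pinhole_distance(20000, 36.0, 1250)  # appears larger: it is closer
print(d0, d1, ground_speed(d0, d1, 2.0))
```

Real calibration also handles lens distortion and the camera's pose relative to the apron, which this sketch ignores.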
6 |
Plant leaf disease detection using computer vision and machine learning algorithms
Plant leaf disease detection using computer vision and machine learning algorithms - 2022. Agriculture provides food for all human beings even as the population increases rapidly. Predicting plant diseases at an early stage is essential for feeding the overall population, but such early prediction is unfortunately difficult. The idea behind the paper is to bring awareness among farmers about cutting-edge technologies that reduce diseases in plant leaves. Since the tomato is a widely available vegetable, machine learning and image processing approaches with an accurate algorithm are applied to detect leaf diseases in the tomato plant. In this investigation, samples of tomato leaves with disorders are considered; with these samples, farmers can easily identify diseases based on early symptoms. Firstly, the tomato leaf samples are resized to 256 × 256 pixels, and then Histogram Equalization is used to improve their quality. K-means clustering is introduced to partition the data space into Voronoi cells. The boundary of the leaf samples is extracted using contour tracing. Multiple descriptors, viz. Discrete Wavelet Transform, Principal Component Analysis and Grey Level Co-occurrence Matrix, are used to extract informative features of the leaf samples. Finally, the extracted features are classified using machine learning approaches such as Support Vector Machine (SVM), Convolutional Neural Network (CNN) and K-Nearest Neighbor (K-NN). On the tomato disorder samples, the proposed model achieves accuracies of 88% with SVM, 97% with K-NN and 99.6% with CNN. keywords: Convolutional Neural Networks | Discrete Wavelet Transform | Principal Component Analysis | Nearest Neighbor | Leaf disease |
English article |
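The Histogram Equalization preprocessing step mentioned above redistributes pixel intensities through the normalized cumulative histogram, stretching low-contrast images over the full intensity range. A dependency-free sketch of the classic algorithm (not the paper's implementation):

```python
def equalize(image, levels=256):
    """Histogram equalization for a 2-D grayscale image (list of rows).

    Maps each intensity through the normalized cumulative histogram so
    output intensities spread over the full [0, levels - 1] range.
    """
    flat = [v for row in image for v in row]
    n = len(flat)
    # Histogram and cumulative distribution function
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic mapping: round((cdf(v) - cdf_min) / (n - cdf_min) * (levels - 1))
    lut = [max(0, round((c - cdf_min) / (n - cdf_min) * (levels - 1)))
           if n > cdf_min else 0 for c in cdf]
    return [[lut[v] for v in row] for row in image]

# A low-contrast 2×2 patch stretches to the full intensity range
print(equalize([[100, 101], [101, 102]]))  # → [[0, 170], [170, 255]]
```

In practice OpenCV's `cv2.equalizeHist` does the same thing on NumPy arrays; this version just makes the mapping explicit.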
7 |
Face mask recogniser using image processing and computer vision approach
Face mask recogniser using an image processing and computer vision approach - 2022. The world saw a health crisis with the onset of the COVID-19 virus outbreak. The mask has been identified as the most efficient way to prevent the spread of the virus [1]. This has driven the necessity for a face mask recogniser that not only detects the presence of a mask but also gives the accuracy with which a person is wearing the face mask. The face mask should also be recognised from all angles. The goal of this study is to create a new and improved real-time face mask recogniser using an image processing and computer vision approach. A Kaggle dataset consisting of images with and without masks was used. For the purpose of this study, a pre-trained convolutional neural network, MobileNetV2, was used, and its performance was assessed. The model presented in this paper can detect the face mask with 98% precision. This face mask recogniser can efficiently detect the face mask from a sideways direction, which makes it more useful. A comparison of the performance metrics of the existing algorithms is also presented. Now, with the spread of the infectious variant Omicron, it is necessary to implement such a robust face mask recogniser, which can help control the spread. keywords: Computer Vision | Convolutional Neural Network | Face mask detection | Image processing | Kaggle dataset | Keras | MobileNetV2 | OpenCV | TensorFlow |
English article |
8 |
Human identification driven by deep CNN and transfer learning based on multiview feature representations of ECG
Human identification driven by deep CNN and transfer learning based on multiview feature representations of ECG - 2021. Increasingly smart techniques for counterfeiting face and fingerprint traits have increased the potential threats to information security systems, creating a substantial demand for improved security and better privacy and identity protection. The Internet of Things (IoT)-driven fingertip electrocardiogram (ECG) acquisition provides broad application prospects for ECG-based identity systems. This study focused on three major impediments to fingertip ECG: the impact of variations in acquisition status, the high computational complexity of traditional convolutional neural network (CNN) models and the feasibility of model migration, and a lack of sufficient fingertip samples. Our main contribution is a novel fingertip ECG identification system that integrates transfer learning and a deep CNN. The proposed system does not require manual feature extraction or suffer from complex model calculations, which improves its speed, and it is effective even when only a small set of training data exists. Using 1200 ECG recordings from 600 individuals, we consider 5 simulated yet potentially practical scenarios. When analyzing the overall training accuracy of the model, its mean accuracy for the 540 chest-collected ECG recordings from PhysioNet exceeded 97.60%, and for 60 subjects from the CYBHi fingertip-collected ECG, its mean accuracy reached 98.77%. When simulating a real-world human recognition system on 5 public datasets, the validation accuracy of the proposed model can nearly reach 100% recognition, outperforming the original GoogLeNet network by a maximum of 3.33%. To some degree, the developed architecture provides a reference for practical applications of fingertip-collected ECG-based biometric systems and for information network security.
Keywords: Off-the-person | Fingertip ECG biometric | Human identification | Convolutional neural network (CNN) | Transfer learning |
English article |
9 |
Artificial intelligence quality inspection of steel bars installation by integrating mask R-CNN and stereo vision
Artificial intelligence quality inspection of steel bars installation by integrating Mask R-CNN and stereo vision - 2021. Contractors should conduct strict quality inspection of the steel bars used in concrete structures and need to automate the process of quality inspection. The objective of this study is to develop an Artificial Intelligence Quality Inspection Model (AI-QIM) that can execute quality inspection on steel bars at the construction site. The proposed AI-QIM is built on the Mask Region-based Convolutional Neural Network (Mask R-CNN) technique, which can perform instance segmentation of steel bars. This object detection technique is integrated with a stereo vision camera to generate information on steel bar installation. A contractor can use the proposed AI-QIM to estimate the quantity, spacing, diameter, and length of steel bars during quality inspection. A sample case study indicated that the AI-QIM yielded a maximum relative error of 3% when measuring steel bar spacing and a maximum relative error of 8% when measuring steel bar lengths within a range of 1–2 m from a stereo camera. Keywords: Steel bar | Quality inspection | Artificial intelligence | Convolutional Neural Network (CNN) | Mask R-CNN | Stereo vision | Object detection | Object mask | Instance segmentation |
English article |
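The entry above pairs instance segmentation with a stereo camera to obtain metric measurements, but the abstract does not give the geometry. A standard rectified-stereo sketch of how disparity yields depth, and how depth converts pixel gaps between segmented bars into metric spacing (all parameter values are illustrative):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def bar_spacing(depth_m, pixel_gap_px, focal_px):
    """Metric spacing between two detected bars from their pixel gap,
    assuming both bars lie at roughly the same depth."""
    return depth_m * pixel_gap_px / focal_px

# 700 px focal length, 12 cm baseline, 56 px disparity → ~1.5 m depth
z = stereo_depth(700, 0.12, 56)
print(z, bar_spacing(z, 70, 700))
```

The 1–2 m working range reported in the abstract matters here: depth error grows quadratically with distance for a fixed disparity precision.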
10 |
Animal species image classification
Animal species image classification - 2021. Animal species image classification is used in forests to classify animals in real time. In the past, many computer vision techniques were introduced, but they could not fulfil the requirements, as their accuracy fell behind as the technology advanced. As per the requirement, many techniques were later introduced that drastically improved accuracy in image classification, image recognition and segmentation. This project aims to introduce an efficient technique for animal species image classification with the goal of achieving a good level of accuracy. A convolutional neural network is engineered for the image classification process. Bottleneck features are trained and attached to the pretrained architecture to achieve high accuracy. Numerous deep learning architectures are compared on the dataset. Keywords: Computer vision | Convolutional neural network | Bottleneck features | Deep learning architectures |
English article |
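Bottleneck features in the entry above are the frozen base network's final-layer activations, on which a small classifier is then trained. A dependency-free sketch that stands in a nearest-centroid classifier for the usual dense head (the feature vectors are invented, not real network activations):

```python
def train_centroids(features, labels):
    """Mean bottleneck-feature vector per class."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, feature):
    """Predict the class whose centroid is nearest in squared distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feature))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy 2-D "bottleneck features" for two species
cents = train_centroids([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]],
                        ["deer", "deer", "fox"])
print(classify(cents, [0.85, 0.1]))  # → deer
```

The payoff of the bottleneck approach is that the expensive convolutional base runs once per image; only this cheap classifier is retrained per dataset.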