No. | Title | Type |
---|---|---|
1 |
Moving towards intelligent telemedicine: Computer vision measurement of human movement
2022. Background: Telemedicine video consultations are rapidly increasing globally, accelerated by the COVID-
19 pandemic. This presents opportunities to use computer vision technologies to augment clinician visual
judgement because video cameras are so ubiquitous in personal devices and new techniques, such as
DeepLabCut (DLC) can precisely measure human movement from smartphone videos. However, the accuracy
of DLC to track human movements in videos obtained from laptop cameras, which have a much lower FPS, has
never been investigated; this is a critical gap because patients use laptops for most telemedicine consultations.
Objectives: To determine the validity and reliability of DLC applied to laptop videos to measure finger tapping,
a validated test of human movement.
Method: Sixteen adults completed finger-tapping tests at 0.5 Hz, 1 Hz, 2 Hz, 3 Hz and at maximal speed. Hand
movements were recorded simultaneously by a laptop camera at 30 frames per second (FPS) and by Optotrak,
a 3D motion analysis system at 250 FPS. Eight DLC neural network architectures (ResNet50, ResNet101,
ResNet152, MobileNetV1, MobileNetV2, EfficientNetB0, EfficientNetB3, EfficientNetB6) were applied to the
laptop video and extracted movement features were compared to the ground truth Optotrak motion tracking.
Results: Over 96% (529/552) of DLC measures were within ±0.5 Hz of the Optotrak measures. At tapping
frequencies >4 Hz, there was a progressive decline in accuracy, attributed to motion blur associated with
the laptop camera’s low FPS. Computer vision methods hold potential for moving us towards intelligent
telemedicine by providing human movement analysis during consultations. However, further developments
are required to accurately measure the fastest movements.
Keywords: Telemedicine | DeepLabCut | Finger tapping | Motor control | Computer vision |
English article |
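The abstract above compares DLC-extracted tapping frequencies against Optotrak ground truth. As a rough illustration of that post-processing step, the sketch below estimates tapping frequency from a fingertip coordinate trace sampled at 30 FPS by counting upward zero crossings; the `tapping_frequency` helper and the synthetic 2 Hz trace are hypothetical, not the paper's pipeline.

```python
import numpy as np

def tapping_frequency(y, fps=30.0):
    """Estimate tapping frequency (Hz) from a fingertip coordinate
    trajectory by counting upward zero crossings of the mean-centered
    signal, a simple stand-in for DLC keypoint post-processing."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    # An upward zero crossing marks the start of one tap cycle.
    crossings = np.sum((y[:-1] < 0) & (y[1:] >= 0))
    duration = len(y) / fps
    return crossings / duration

# Hypothetical 2 Hz tapping recorded for 8 s at 30 FPS.
t = np.arange(0, 8, 1 / 30)
y = np.sin(2 * np.pi * 2.0 * t)
f = tapping_frequency(y, fps=30.0)
```

Counting cycle starts slightly underestimates the true rate for a finite recording, but stays well within the ±0.5 Hz band the abstract reports.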
2 |
Image2Triplets: A computer vision-based explicit relationship extraction framework for updating construction activity knowledge graphs
2022. Knowledge graph (KG) is an effective tool for knowledge management, particularly in the architecture,
engineering and construction (AEC) industry, where knowledge is fragmented and complicated. However,
research on KG updates in the industry is scarce, with most current research focusing on text-based KG
updates. Considering the superiority of visual data over textual data in terms of accuracy and timeliness, the
potential of computer vision technology for explicit relationship extraction in KG updates is yet to be ex-
plored. This paper combines zero-shot human-object interaction detection techniques with general KGs to
propose a novel framework called Image2Triplets that can extract explicit visual relationships from images
to update the construction activity KG. Comprehensive experiments on the images of architectural dec-
oration processes have been performed to validate the proposed framework. The results and insights will
contribute new knowledge and evidence to human-object interaction detection, KG update and construc-
tion informatics from the theoretical perspective.
© 2022 Elsevier B.V. All rights reserved. Keywords: Zero-shot learning | Human-object interaction detection | Computer vision | Explicit relationship extraction | Knowledge graph |
English article |
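As a toy illustration of how detected human-object interactions can become explicit knowledge-graph triples in the spirit of Image2Triplets: the detections, labels, and `detections_to_triples` helper below are invented for illustration and are not the paper's zero-shot pipeline.

```python
def detections_to_triples(detections):
    """Convert human-object interaction detections into unique
    (subject, relation, object) triples for a knowledge graph."""
    triples = []
    for det in detections:
        triple = (det["human"], det["verb"], det["object"])
        if triple not in triples:   # avoid duplicate graph edges
            triples.append(triple)
    return triples

# Hypothetical detections from images of a decoration process.
detections = [
    {"human": "worker_1", "verb": "hold", "object": "paint_roller"},
    {"human": "worker_1", "verb": "hold", "object": "paint_roller"},
    {"human": "worker_2", "verb": "operate", "object": "drill"},
]
triples = detections_to_triples(detections)
```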
3 |
ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
2022. Regular plumage condition assessment in laying hens is essential to monitor the hens’ welfare status and to
detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive,
manual task. This study proposes a novel approach for automated plumage condition assessment using com-
puter vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects
hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of
input image characteristics, the method was evaluated using images with and without depth information in
resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of
subjective human annotations, plumage condition predictions were compared to manual assessments of one
observer and to matching annotations of two observers. Among all tested settings, performance metrics based on
matching manual annotations of two observers were equal to or better than those based on annotations of a
single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of
98.02% for hen detection while 91.83% of the plumage condition scores were predicted correctly. Moreover, it
was revealed that performance of hen detection and plumage condition assessment of ChickenNet was not
generally enhanced by depth information. Increasing image resolutions improved plumage assessment up to a
resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved using
lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of
plumage conditions in commercial laying hen farms. Keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation |
English article |
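The detection results above are reported as mean average precision (mAP), which counts a detection as correct when its box overlap with a ground-truth box exceeds a threshold. Below is a minimal sketch of the underlying Intersection-over-Union measure (the standard definition, not code from the paper; the example boxes are hypothetical).

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two unit squares overlapping in a 0.5 x 1.0 strip.
score = iou((0, 0, 1, 1), (0.5, 0, 1.5, 1))
```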
4 |
Human perception of color differences using computer vision system measurements of raw pork loin
2022. In the food industry, product color plays an important role in influencing consumer choices. Yet, there remains
little research on the human ability to perceive differences in product color; therefore, preference testing is
subjective rather than based on quantitative colors. Using a de-centralized computer-aided systematic discrim-
ination testing method, we ascertain consumers’ ability to discern between systematically varied colors. As a case
study, the colors represent the color variability of fresh pork as measured by a computer vision system. Our
results indicate that a total color difference (ΔE) of approximately 1 is discriminable by consumers. Furthermore,
we ascertain that a change in color along the b*-axis (yellowness) in CIELAB color space is most discernable,
followed by the a*-axis (redness) and then the L*-axis (lightness). As developed, our web-based discrimination
testing approach allows for large scale evaluation of human color perception, while these quantitative findings
on meat color discrimination are of value for future research on consumer preferences of meat color and beyond. Keywords: Discrimination testing | Triangle test | Color preference | Food appearance | Meat color |
English article |
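The study finds that a total color difference (ΔE) of roughly 1 is discriminable. In the CIE76 formulation, ΔE*ab is the Euclidean distance in CIELAB space; the sketch below computes it for two hypothetical pork-loin colors differing only along the b* (yellowness) axis, the axis the study reports as most discernible.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 total color difference ΔE*ab between two CIELAB colors."""
    return math.dist(lab1, lab2)

# Hypothetical colors: equal L* and a*, b* shifted by one unit.
base = (55.0, 8.0, 14.0)       # (L*, a*, b*)
shifted = (55.0, 8.0, 15.0)
de = delta_e_ab(base, shifted)
```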
5 |
Disintegration testing augmented by computer vision technology
2022. Oral solid dosage forms, specifically immediate release tablets, are prevalent in the pharmaceutical industry.
Disintegration testing is often the first step of commercialization and large-scale production of these dosage
forms. Current disintegration testing in the pharmaceutical industry, according to United States Pharmacopeia
(USP) chapter 〈701〉, only gives information about the duration of the tablet disintegration process. This infor-
mation is subjective, variable, and prone to human error due to manual or physical data collection methods via
the human eye or contact disks. To lessen the data integrity risk associated with this process, efforts have been
made to automate the analysis of the disintegration process using digital lens and other imaging technologies.
This would provide a non-invasive method to quantitatively determine disintegration time through computer
algorithms. The main challenges associated with developing such a system involve visualization of tablet pieces
through cloudy and turbid liquid. The Computer Vision for Disintegration (CVD) system has been developed to
be used along with traditional pharmaceutical disintegration testing devices to monitor tablet pieces and
distinguish them from the surrounding liquid. The software written for CVD utilizes data captured by cameras or
other lenses and then uses a mobile SSD and CNN, with an OpenCV and FRCNN machine learning model, to analyze
and interpret the data. This technology is capable of consistently identifying tablets with ≥ 99.6% accuracy. Not
only is the data produced by CVD more reliable, but it opens the possibility of a deeper understanding of
disintegration rates and mechanisms in addition to duration. Keywords: Disintegration | Oral Solid Dosage Forms | Disintegration Test | Machine Learning | Neural Networks |
English article |
6 |
Performance analysis of machine learning algorithm of detection and classification of brain tumor using computer vision
2022. A brain tumor is an undesirable, uncontrolled growth of cells that can occur in all age groups. Classification of tumors
depends on the origin and degree of aggressiveness of a tumor; it also helps the physician with a proper diagnosis and
treatment plan. This research demonstrates the analysis of various state-of-the-art techniques in Machine Learning
such as Logistic, Multilayer Perceptron, Decision Tree, Naive Bayes classifier and Support Vector Machine for
classification of tumors as Benign and Malignant, and the Discrete wavelet transform for feature extraction on the
synthetic data that is available from the internet sources OASIS and ADNI. The research also reveals that the
Logistic Regression and the Multilayer Perceptron give the highest accuracy of 90%. The Multilayer Perceptron mimics human
reasoning in that it learns, memorizes and is capable of reasoning and performing parallel computations. In the future,
many more AI techniques can be trained to classify multimodal MRI brain scans into more than two classes of
tumors. Keywords: Artificial Intelligence | MRI | Logistic regression | OASIS | Multilayer Perceptron |
English article |
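The feature-extraction step above relies on the Discrete Wavelet Transform. Below is a generic one-level 2D Haar DWT sketch to illustrate the idea of splitting an image into approximation and detail subbands; it is not the paper's exact feature extractor, and the flat test image is hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar discrete wavelet transform: returns the
    approximation (LL) and detail (LH, HL, HH) subbands."""
    img = np.asarray(img, dtype=float)
    # Rows: pairwise average (low-pass) and difference (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: repeat the same split on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

# A flat 4x4 image keeps all its energy in the approximation subband.
flat = np.full((4, 4), 10.0)
ll, lh, hl, hh = haar_dwt2(flat)
```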
7 |
An overview of Human Action Recognition in sports based on Computer Vision
2022. Human Action Recognition (HAR) is a challenging task used in sports such as volleyball, basketball, soccer, and
tennis to detect players and recognize their actions and team activities during training, matches, warm-ups, or
competitions. HAR aims to detect the person performing the action in an unknown video sequence, determine the
action's duration, and identify the action type. The main idea of HAR in sports is to monitor a player's performance, that is, to detect the player, track their movements, recognize the performed action, compare various
actions, compare different kinds and skills of performances, or make automatic statistical analyses.
Since an action occurring in the sports field refers to a set of physical movements performed by a player in
order to complete a task using their body or by interacting with objects or other persons, actions can be of different
complexity. Because of that, a novel systematization of actions based on complexity and level of performance and
interactions is proposed.
The overview of HAR research focuses on various methods performed on publicly available datasets, including actions of everyday activities. That is a good starting point; however, HAR is increasingly represented in sports and is becoming more directed towards recognizing similar actions of a particular sports domain. Therefore, this paper presents an overview of HAR applications in sports primarily based on Computer Vision as the main contribution, along with popular publicly available datasets for this purpose. Keywords: Machine learning | Human Action Recognition | Action systematization | Sports dataset | Human action recognition in sports | Sport |
English article |
8 |
VisuaLizations As Intermediate Representations (VLAIR): An approach for applying deep learning-based computer vision to non-image-based data
2022. Deep learning algorithms increasingly support automated systems in areas such as human activity
recognition and purchase recommendation. We identify a current trend in which data is transformed
first into abstract visualizations and then processed by a computer vision deep learning pipeline. We
call this VisuaLization As Intermediate Representation (VLAIR) and believe that it can be instrumental
to support accurate recognition in a number of fields while also enhancing humans’ ability to
interpret deep learning models for debugging purposes or for personal use. In this paper we describe
the potential advantages of this approach and explore various visualization mappings and deep
learning architectures. We evaluate several VLAIR alternatives for a specific problem (human activity
recognition in an apartment) and show that VLAIR attains classification accuracy above classical
machine learning algorithms and several other non-image-based deep learning algorithms with several
data representations.
Keywords: Information visualization | Convolutional neural networks | Human activity recognition | Smart homes | Data representation | Intermediate representations | Interpretability | Machine learning | Deep learning |
English article |
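As a toy illustration of the VLAIR idea of turning non-image data into an image-like intermediate representation for a vision pipeline: the sketch below rasterizes a 1D sensor trace into a small binary bar-chart array. The `rasterize_series` helper and the trace are invented for illustration, not taken from the paper.

```python
import numpy as np

def rasterize_series(values, height=8):
    """Render a 1D trace as a small binary 'bar chart' image,
    one column per sample, so an image-based CNN can consume it."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    norm = (values - values.min()) / (span if span else 1.0)
    bars = np.round(norm * (height - 1)).astype(int)
    img = np.zeros((height, len(values)))
    for col, bar in enumerate(bars):
        img[height - 1 - bar:, col] = 1.0   # fill from the bottom up
    return img

trace = [0.0, 0.5, 1.0, 0.5]   # hypothetical activity signal
img = rasterize_series(trace, height=8)
```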
9 |
Plant leaf disease detection using computer vision and machine learning algorithms
2022. Agriculture provides food to all human beings even in the case of a rapid increase in population. Predicting
plant diseases at their early stage is essential in agriculture in order to supply food to
the overall population, but it is unfortunately difficult to predict diseases at the early stage of crops. The idea behind
the paper is to bring awareness amongst the farmers about cutting-edge technologies that reduce diseases in
plant leaves. Since tomato is a widely available vegetable, approaches of machine learning and image processing
with an accurate algorithm are identified to detect the leaf diseases in the tomato plant. In this investigation,
samples of tomato leaves having disorders are considered. With these disorder samples of tomato leaves, the
farmers will easily find the diseases based on the early symptoms. Firstly, the samples of tomato leaves are resized to
256 × 256 pixels and then Histogram Equalization is used to improve the quality of tomato samples. The K-means
clustering is introduced for partitioning of dataspace into Voronoi cells. The boundary of leaf samples is extracted
using contour tracing. The multiple descriptors viz., Discrete Wavelet Transform, Principal Component Analysis
and Grey Level Co-occurrence Matrix are used to extract the informative features of the leaf samples. Finally,
the extracted features are classified using machine learning approaches such as Support Vector Machine (SVM),
Convolutional Neural Network (CNN) and K-Nearest Neighbor (K-NN). The accuracy of the proposed model is
tested using SVM (88%), K-NN (97%) and CNN (99.6%) on the disordered tomato samples. Keywords: Convolutional Neural Networks | Discrete Wavelet Transform | Principal Component Analysis | Nearest Neighbor | Leaf disease |
English article |
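Among the descriptors listed above is the Grey Level Co-occurrence Matrix. Below is a minimal, library-free sketch of that texture descriptor and its contrast feature; it is a generic illustration, not the paper's exact implementation, and the 4-level quantized patches are hypothetical.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one pixel offset (dx, dy),
    normalized to co-occurrence probabilities."""
    img = np.asarray(img)
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Contrast feature: sum of p(i, j) * (i - j)^2 over the matrix."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# A uniform patch has zero contrast.
patch = np.zeros((4, 4), dtype=int)
contrast = glcm_contrast(glcm(patch))
```

A striped patch such as `np.tile([0, 1, 0, 1], (4, 1))` would instead give a contrast of 1, since every horizontal neighbor pair differs by one grey level.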
10 |
Well selection for the optimal design of a groundwater monitoring network based on a monitoring priority map: a case study of Kish Island
Publication year: 2022 - English PDF pages: 14 - Persian doc pages: 25. This paper presents a new approach, a combination of the gamma test and a monitoring priority map, for the optimal design of a groundwater monitoring network (GMN) that considers the cumulative effects of industries, human activities and natural factors on groundwater quality. The proposed method was successfully applied to design an optimal network for monitoring groundwater salinity on Kish Island in the Persian Gulf. The groundwater salinity monitoring priority map was obtained based on the GALDIT index and two new factors: the average fluctuation of groundwater electrical conductivity (F) and the distance from industries discharging saline effluent (P). The optimal number of monitoring wells was determined by data analysis based on the gamma test method. A practical algorithm was then presented to determine the optimal locations of the monitoring wells. Based on the results, the optimal number of monitoring wells is 110, and their positions are equitably distributed across the whole island.
Keywords: Groundwater monitoring network | Water management | GALDIT | Monitoring priority map | Gamma test | Environmental monitoring |
Translated article |