Row | Title | Type |
---|---|---|
1 |
Internet of Things-enabled Passive Contact Tracing in Smart Cities
(2022) Contact tracing has been proven an essential practice during pandemic outbreaks and is a critical non-pharmaceutical intervention to reduce mortality rates. While traditional contact tracing approaches are gradually being replaced by peer-to-peer smartphone-based systems, the new applications tend to ignore the Internet-of-Things (IoT) ecosystem that is steadily growing in smart city environments. This work presents a contact tracing framework that logs smart-space users’ co-existence using IoT devices as reference anchors. The design is non-intrusive, as it relies on passive wireless interactions between each user’s carried equipment (e.g., smartphone, wearable, proximity card) and an IoT device, utilizing received signal strength indicators (RSSI). The proposed framework can log the identities of the interacting pair, their estimated distance, and the overlapping time duration. We also propose a machine learning-based infection risk classification method, relying on RSSI-based attributes and contact details, to characterize each interaction. Finally, the proposed contact tracing framework’s performance is evaluated through a real-world case study of actual wireless interactions between users and IoT devices over Bluetooth Low Energy advertising. The results demonstrate the system’s capability to accurately capture contact between mobile users and assess their infection risk, provided adequate model training over time.
© 2021 Elsevier B.V. All rights reserved.
keywords: Bluetooth Low Energy | Contact Tracing | Internet of Things | Infection Risk Classification |
English article |
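The RSSI-based distance estimation this abstract alludes to is commonly done with a log-distance path-loss model. A minimal sketch under that assumption (the 1 m reference power `tx_power_dbm` and path-loss exponent `n` below are illustrative defaults, not values from the paper):

```python
def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance (meters) from an RSSI reading using the
    log-distance path-loss model: RSSI = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

# A reading equal to the calibrated 1 m reference power maps to ~1 m;
# 20 dB weaker maps to ~10 m when n = 2 (free space).
print(round(estimate_distance(-59.0), 2))  # 1.0
print(round(estimate_distance(-79.0), 2))  # 10.0
```

In practice RSSI is noisy, so frameworks like the one described average many advertising packets before estimating distance.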
2 |
Is the Internet of Things a helpful employee? An exploratory study of discourses of Canadian farmers
(2022) The increasing global population and the growing demand for high-quality products have called for the modernization of agriculture. “Internet of Things” is one of the technologies predicted to offer many solutions. We conducted a discourse analysis of 19 interviews with farmers in Ontario, Canada, asking them to describe their experience of working with IoT and related technologies. One main discourse with two opposing tendencies was identified: farmers recognize their relationship with IoT and related technology and view technology as a kind of “employee”, but some tend to emphasize (1) an optimistic view, a discourse in which technology is a “Helpful Employee”, while others tend to emphasize (2) a pessimistic view, a discourse in which technology is an “Untrustworthy Employee”. We examine these tendencies in the light of the literature on organizational behavior and identify potential outcomes of these beliefs. The results suggest that a farmer’s style of approaching technology can be assessed on a scale similar to managers’ views of their employees, and they provide a framework for further research.
keywords: Internet of things technology | Agriculture | Discourse analysis | Style of use of technology |
English article |
3 |
AI-based computer vision using deep learning in 6G wireless networks
(2022) Modern businesses benefit significantly from advances in computer vision technology, one of the important sectors of artificial intelligence and computer science research. Advanced computer vision tasks such as image processing, object recognition, and biometric authentication can benefit from deep learning methods. As smart devices and facilities advance rapidly, current networks such as 4G and the forthcoming 5G networks may not adapt to the rapidly increasing demand. Image classification, object classification, and facial recognition are some of the most difficult computer vision problems that can be solved using deep learning methods. Artificial intelligence (AI) has recently been adopted as a new paradigm for 6G network design and analysis. Therefore, in this paper, the 6G wireless network is used along with deep learning to address the above challenges by introducing a new methodology named Optimizing Computer Vision with AI-enabled technology (OCV-AI). This research uses deep learning efficiency algorithms (DL-EA) for computer vision to address the issues mentioned and improve the system’s outcome. Accordingly, deep learning 6G frameworks (DL-6G) are proposed in this paper to support pattern recognition and intelligent management systems and to provide a methodology that can be provisioned automatically. From an advanced analytics perspective, 6G networks can summarize the significant areas for future research and potential solutions, including image enhancement, machine vision, and access control.
keywords: 6G | Wireless communication | AI | Machine learning | Deep learning | Mobile communication |
English article |
4 |
A computer vision-based method for bridge model updating using displacement influence lines
(2022) This paper presents a new computer vision-based method that simultaneously provides the moving vehicle’s tire loads, the location of the loads on a bridge, and the bridge’s response displacements, from which the bridge’s influence lines can be constructed. The method employs computer vision techniques to measure the displacement influence lines of the bridge at different target positions, which are later used to perform model updating of the finite element models of the monitored structural system.

The method is enabled by a novel computer vision-based vehicle weigh-in-motion method which the coauthors recently introduced. A discriminative correlation filter tracker is used to estimate the displacements at target points and the location of single or multiple moving loads, while a low-cost, non-contact weigh-in-motion technique evaluates the magnitude of the moving vehicle loads. The method described in this paper is tested and validated using a laboratory bridge model. The system was loaded with a vehicle with pressurized tires and equipped with a monitoring system consisting of laser displacement sensors, accelerometers, and cameras. Both artificial and natural targets were considered in the experimental tests to track the displacements with the cameras, and they yielded robust results consistent with the laser displacement measurements. The extracted normalized displacement influence lines were then successfully used to perform model updating of the structure. The laser displacement sensors were used to validate the accuracy of the proposed computer vision-based approach in deriving the displacement measurements, while the accelerometers were used to derive the system’s modal properties employed to validate the updated finite element model. As a result, the updated finite element model correctly predicted the bridge’s displacements measured during the tests. Furthermore, the modal parameters estimated by the updated finite element model agreed well with those extracted from the experimental modal analysis carried out on the bridge model. The method described in this paper offers a low-cost, non-contact monitoring tool that can be used efficiently, without disrupting traffic, for model updating analysis or long-term structural health monitoring of bridges.
keywords: Computer vision | Displacement influence line | Vehicle weigh-in-motion | Structural identification | Finite element model | Model updating | Modal analysis | Bridge systems |
English article |
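A displacement influence line like the ones this paper measures has a simple closed form for an idealized structure. The sketch below evaluates the midspan deflection of a simply supported Euler-Bernoulli beam as a unit load moves across it; the unit span and unit flexural stiffness `EI` are illustrative assumptions, not parameters from the paper:

```python
def midspan_deflection(a: float, L: float = 1.0, EI: float = 1.0, P: float = 1.0) -> float:
    """Midspan deflection of a simply supported Euler-Bernoulli beam
    under a point load P located a distance `a` from the left support.
    Uses delta(x) = P*b*x*(L^2 - b^2 - x^2) / (6*L*EI) for x <= a."""
    x = L / 2.0
    if a < x:
        a = L - a  # mirror the load so the midspan lies left of it
    b = L - a
    return P * b * x * (L**2 - b**2 - x**2) / (6.0 * L * EI)

# Sampling the moving load position traces the influence line; it peaks
# at P*L^3/(48*EI) when the load sits directly at midspan.
influence_line = [midspan_deflection(a / 10) for a in range(11)]
```

Model updating then amounts to tuning finite element parameters until the model's computed influence line matches the measured one.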
5 |
Predicting social media engagement with computer vision: An examination of food marketing on Instagram
(2022) In a crowded social media marketplace, restaurants often try to stand out by showcasing elaborate “Instagrammable” foods. Using an image classification machine learning algorithm (Google Vision AI) on restaurants’ Instagram posts, this study analyzes how the visual characteristics of product offerings (i.e., their food) relate to social media engagement. Results demonstrate that food images that are more confidently evaluated by Google Vision AI (a proxy for food typicality) are positively associated with engagement (likes and comments). A follow-up experiment shows that exposure to typical-appearing foods elevates positive affect, suggesting they are easier to process mentally, which drives engagement. Therefore, contrary to conventional social media practices and food industry trends, the more typical a food appears, the more social media engagement it receives. Using Google Vision AI to identify which product offerings receive engagement presents an accessible method for marketers to understand their industry and inform their social media marketing strategies.
keywords: Social media marketing | Consumer engagement | Machine learning | Food | Processing fluency | Google Vision AI |
English article |
6 |
ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
(2022) Regular plumage condition assessment in laying hens is essential to monitor the hens’ welfare status and to detect the occurrence of feather pecking activities. However, in commercial farms this is a labor-intensive, manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of input image characteristics, the method was evaluated using images with and without depth information at resolutions of 384 × 384, 512 × 512, 896 × 896, and 1216 × 1216 pixels. Further, to determine the impact of subjective human annotations, plumage condition predictions were compared to manual assessments of one observer and to matching annotations of two observers. Among all tested settings, performance metrics based on matching manual annotations of two observers were equal to or better than the ones based on annotations of a single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of 98.02% for hen detection, while 91.83% of the plumage condition scores were predicted correctly. Moreover, the performance of hen detection and plumage condition assessment with ChickenNet was not generally enhanced by depth information. Increasing image resolution improved plumage assessment up to a resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved at lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of plumage conditions in commercial laying hen farms.
keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation |
English article |
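Detection metrics such as the reported mAP rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that building block (boxes as `(x1, y1, x2, y2)` tuples is a common convention assumed here, not one stated in the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 10x10 boxes overlapping by half share 50 of 150 total units: IoU = 1/3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

A detection typically counts as a true positive when its IoU with a ground-truth hen exceeds a threshold (0.5 is the usual default); mAP averages precision over such thresholds and score cutoffs.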
7 |
Assessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data
(2022) Surface drainage at the neighborhood and street scales plays an important role in conveying stormwater and mitigating urban flooding. Surface drainage at the local scale is often ignored due to the lack of up-to-date fine-scale topographical information. This paper addresses this issue by providing a novel method for evaluating surface drainage at the neighborhood and street scales based on mobile lidar (light detection and ranging) measurements. The developed method derives topographical properties and runoff accumulation by applying a semantic segmentation (SS) model (a computer vision technique) and a flow direction model (a hydrology technique) to lidar data. Fifty lidar images representing 50 street blocks were used to train, validate, and test the SS model. On the test dataset, the SS model achieved 80.3% IoU and 88.5% accuracy. The results suggest that the proposed method can effectively evaluate surface drainage conditions at both the neighborhood and street scales and identify problematic low points that could be susceptible to water ponding. Municipalities and property owners can use this information to take targeted corrective maintenance actions.
keywords: Semantic segmentation | Flow direction | Mobile lidar | Surface drainage | Drainage infrastructure |
English article |
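The flow direction model routes each cell's runoff to its steepest downhill neighbor and accumulates contributions, which is how low points susceptible to ponding emerge. A simplified one-dimensional sketch of that idea (real D8-style implementations operate on 2-D elevation rasters; this toy version is an assumption for illustration):

```python
def flow_accumulation(elevation):
    """Route each cell's runoff to its lowest downhill neighbor (a 1-D
    analogue of D8 flow direction) and count the units of runoff that
    drain through each cell. Cells with no downhill neighbor are sinks."""
    n = len(elevation)
    acc = [1] * n  # each cell contributes one unit of its own runoff
    # Visit cells from highest to lowest so upstream flow arrives first.
    for i in sorted(range(n), key=lambda i: -elevation[i]):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]
        downhill = [j for j in neighbors if elevation[j] < elevation[i]]
        if downhill:
            target = min(downhill, key=lambda j: elevation[j])
            acc[target] += acc[i]
    return acc

# All runoff converges on the low point (elevation 1) of the profile.
print(flow_accumulation([5, 3, 1, 4, 6]))  # [1, 2, 5, 2, 1]
```

In the paper's pipeline, the semantic segmentation step first classifies lidar pixels (e.g., pavement vs. curb) so that accumulation is computed only over the relevant drainage surfaces.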
8 |
Power to the people: Applying citizen science and computer vision to home mapping for rural energy access
(2022) To implement effective rural electricity access systems, it is fundamental to identify where potential consumers live. Here, we test the suitability of citizen science paired with satellite imagery and computer vision to map remote off-grid homes for electrical system design. A citizen science project called “Power to the People” was completed on the Zooniverse platform to collect home annotations in Uganda, Kenya, and Sierra Leone. Thousands of citizen scientists created a novel dataset of 578,010 home annotations with an average mapping speed of 7 km²/day. These data were post-processed with clustering to determine high-consensus home annotations. The raw annotations achieved a recall of 93% and a precision of 49%; clustering the annotations increased precision to 69%. These were used to train a Faster R-CNN object detection model, producing detections useful as a first pass for home-level mapping with a feasible mapping rate of 42,938 km²/day. Detections achieved a precision of 67% and a recall of 36%. This research shows citizen science and computer vision to be a promising pipeline for accelerated rural home-level mapping to enable energy system design.
keywords: Citizen science | Computer vision | Electricity access | Rural mapping | Satellite imagery | Remote sensing |
English article |
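The clustering step that turns raw volunteer clicks into high-consensus home locations can be sketched as distance-threshold grouping with a minimum-votes filter. This is a simplified stand-in for the paper's post-processing; the 25-unit radius and 3-vote minimum are illustrative assumptions, not the study's parameters:

```python
from math import dist

def consensus_points(annotations, radius=25.0, min_votes=3):
    """Greedily group annotations within `radius` of a cluster's seed
    point, then keep the centroid of every group with at least
    `min_votes` members (a high-consensus home location)."""
    clusters = []  # each cluster is a list of (x, y) points
    for p in annotations:
        for cluster in clusters:
            if dist(p, cluster[0]) <= radius:
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return [
        (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
        for c in clusters
        if len(c) >= min_votes
    ]

# Three volunteers agree on one home; a lone stray click is discarded.
pts = [(0, 0), (3, 4), (1, 1), (500, 500)]
print(consensus_points(pts))
```

Filtering by vote count is what raises precision (49% to 69% in the paper) at the cost of discarding genuinely isolated single-click finds.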
9 |
Pork primal cuts recognition method via computer vision
(2022) Pork accounts for more than 33% of global meat consumption and dominates meat consumption in China. As people’s quality of life improves, they pay increasing attention to the quality of pork. Many factors affect pork quality, and the cutting position is one of them; the quality of different pork primal cuts varies greatly. Addressing the difficulty of distinguishing pork primal cuts, this study proposes a computer vision-based method to identify different primal cuts, using images of four different cuts (ham, loin, belly, and neck) as the experimental data. The results show that the proposed method can identify pork primal cuts well, demonstrating that computer vision technology has the potential to help people identify pork cuts.
keywords: Pork primal cuts | Identifying pork cuts | Computer vision | Primal cuts recognition |
English article |
10 |
Disintegration testing augmented by computer Vision technology
(2022) Oral solid dosage forms, specifically immediate-release tablets, are prevalent in the pharmaceutical industry. Disintegration testing is often the first step of commercialization and large-scale production of these dosage forms. Current disintegration testing in the pharmaceutical industry, according to United States Pharmacopeia (USP) chapter 〈701〉, only gives information about the duration of the tablet disintegration process. This information is subjective, variable, and prone to human error due to manual or physical data collection methods via the human eye or contact disks. To lessen the data integrity risk associated with this process, efforts have been made to automate the analysis of the disintegration process using digital lenses and other imaging technologies. This would provide a non-invasive method to quantitatively determine disintegration time through computer algorithms. The main challenge in developing such a system is visualizing tablet pieces through cloudy and turbid liquid. The Computer Vision for Disintegration (CVD) system has been developed to be used along with traditional pharmaceutical disintegration testing devices to monitor tablet pieces and distinguish them from the surrounding liquid. The software written for CVD processes data captured by cameras or other lenses and then uses MobileNet SSD and CNN models, with OpenCV and a Faster R-CNN machine learning model, to analyze and interpret the data. This technology is capable of consistently identifying tablets with ≥ 99.6% accuracy. Not only is the data produced by CVD more reliable, but it opens the possibility of a deeper understanding of disintegration rates and mechanisms in addition to duration.
keywords: Disintegration | Oral Solid Dosage Forms | Disintegration Test | Machine Learning | Neural Networks |
English article |
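At its simplest, distinguishing tablet fragments from the surrounding liquid is a segmentation problem. The sketch below thresholds grayscale intensities to track visible tablet area per frame; it is a toy stand-in for the CNN-based detectors the abstract describes, and the brightness threshold is an assumption for illustration:

```python
def tablet_pixel_fraction(frame, threshold=200):
    """Fraction of pixels brighter than `threshold` in a grayscale frame
    (nested lists of 0-255 values); a crude proxy for visible tablet
    area, assuming the tablet is brighter than the turbid medium."""
    total = sum(len(row) for row in frame)
    tablet = sum(1 for row in frame for px in row if px > threshold)
    return tablet / total

# As the tablet disintegrates, the bright-pixel fraction decays; the
# disintegration endpoint is where it falls below a chosen cutoff.
frames = [
    [[230, 240], [235, 90]],  # early frame: mostly tablet
    [[230, 80], [70, 90]],    # later frame: largely dissolved
]
print([tablet_pixel_fraction(f) for f in frames])  # [0.75, 0.25]
```

Turbidity is exactly why a fixed threshold fails in practice and why CVD resorts to learned models, but the per-frame area-over-time signal is the same quantity either way.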