No. | Title | Type |
---|---|---|
1 |
Intelligent authentication of 5G healthcare devices: A survey
(2022) The dynamic nature of wireless links and the mobility of devices connected to the Internet of
Things (IoT) over fifth-generation (5G) networks (IoT-5G), on the one hand, empowers pervasive
healthcare applications. On the other hand, it allows eavesdroppers and other illegitimate
actors to access secret information. Due to the poor time efficiency and high computational
complexity of conventional cryptographic methods and the heterogeneous technologies used,
it is easy to compromise the authentication of lightweight wearable and healthcare devices.
Therefore, intelligent authentication, which relies on artificial intelligence (AI), and sufficient
network resources are extremely important for securing healthcare devices connected to IoT-
5G. This survey considers intelligent authentication and includes a comprehensive overview of
intelligent authentication mechanisms for securing IoT-5G devices deployed in the healthcare
domain. First, it presents a detailed, thoughtful, and state-of-the-art review of IoT-5G, healthcare
technologies, tools, applications, research trends, challenges, opportunities, and solutions. We
selected 20 technical articles from those surveyed based on their strong overlaps with IoT,
5G, healthcare, device authentication, and AI. Second, IoT-5G device authentication, radiofrequency fingerprinting, and mutual authentication are reviewed, characterized, clustered,
and classified. Third, the review envisions that AI can be used to integrate the attributes
of the physical layer and 5G networks to empower intelligent healthcare devices. Moreover,
methods for developing intelligent authentication models using AI are presented. Finally, the
future outlook and recommendations are introduced for IoT-5G healthcare applications, and
recommendations for further research are presented as well. The remarkable contributions and
relevance of this survey may assist the research community in understanding the research gaps
and the research opportunities relating to the intelligent authentication of IoT-5G healthcare
devices.
keywords: Internet of Things (IoT) | Internet of Things security | Device authentication | Artificial intelligence | Healthcare security | 5G networks |
English article |
2 |
AI-based computer vision using deep learning in 6G wireless networks
(2022) Modern businesses benefit significantly from advances in computer vision technology, one of the
important sectors of artificial intelligence and computer science research. Advanced computer
vision issues like image processing, object recognition, and biometric authentication can benefit
from using deep learning methods. As smart devices and facilities advance rapidly, current networks such as 4G and the forthcoming 5G networks may not adapt to the rapidly increasing
demand. Classification of images, object classification, and facial recognition software are some
of the most difficult computer vision problems that can be solved using deep learning methods. As
a new paradigm for 6G network design and analysis, artificial intelligence (AI) has recently
been used. Therefore, in this paper, the 6G wireless network is used along with deep learning to
solve the above challenges by introducing a new methodology named Optimizing Computer
Vision with AI-enabled technology (OCV-AI). This research uses deep learning efficiency algorithms (DL-EA) for computer vision to address the issues mentioned and improve the system’s
outcome. Therefore, deep-learning 6G frameworks (DL-6G) are proposed in this paper to support pattern recognition and intelligent management systems and to provide a data-driven methodology that can be provisioned automatically. For advanced analytics, the paper summarizes the significant areas for future research and potential solutions in 6G networks, including image enhancement, machine vision, and access control.
keywords: 6G | Wireless communication | AI | Machine learning | Deep learning | Mobile communication |
English article |
3 |
Predicting social media engagement with computer vision: An examination of food marketing on Instagram
(2022) In a crowded social media marketplace, restaurants often try to stand out by showcasing elaborate “Instagrammable” foods. Using an image classification machine learning algorithm (Google Vision AI) on restaurants’
Instagram posts, this study analyzes how the visual characteristics of product offerings (i.e., their food) relate to
social media engagement. Results demonstrate that food images that are more confidently evaluated by Google
Vision AI (a proxy for food typicality) are positively associated with engagement (likes and comments). A follow-
up experiment shows that exposure to typical-appearing foods elevates positive affect, suggesting they are easier
to mentally process, which drives engagement. Therefore, contrary to conventional social media practices and
food industry trends, the more typical a food appears, the more social media engagement it receives. Using
Google Vision AI to identify what product offerings receive engagement presents an accessible method for
marketers to understand their industry and inform their social media marketing strategies.
keywords: Social media marketing | Consumer engagement | Machine learning | Food | Processing fluency | Google Vision AI |
English article |
4 |
Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks
(2022) There is high demand for heavy equipment in civil infrastructure projects, and their performance is a determinant
of the successful delivery of site operations. Although manufacturers provide equipment performance handbooks, additional monitoring mechanisms are required to depart from measuring performance on the sole basis
of unit cost for moved materials. Vision-based tracking and pose estimation can facilitate site performance
monitoring. This research develops several regression-based deep neural networks (DNNs) to monitor equipment
with the aim of ensuring safety, productivity, sustainability and quality of equipment operations. Annotated
image libraries are used to train and test several backbone architectures. Experimental results reveal the precision of DNNs with depthwise separable convolutions and the computational efficiency of DNNs with channel
shuffle. This research provides scientific utility by developing a method for equipment pose estimation with the
ability to detect anatomical angles and critical keypoints. The practical utility of this study lies in its potential to influence the current practice of articulated machinery monitoring in projects.
keywords: Artificial intelligence (AI) | Cyber-physical systems | Error evaluation metrics | Experimental design and testing | Full-body pose estimation | Industry and construction 4.0 | Machine learning algorithms | Network backbone architectures |
English article |
5 |
Computer vision for solid waste sorting: A critical review of academic research
(2022) Waste sorting is highly recommended for municipal solid waste (MSW) management. Increasingly, computer
vision (CV), robotics, and other smart technologies are used for MSW sorting. In particular, the field of CV-enabled waste sorting is experiencing an unprecedented explosion of academic research. However, little attention has been paid to understanding its evolution path, status quo, and the prospects and challenges ahead. To
address the knowledge gap, this paper provides a critical review of academic research that focuses on CV-enabled
MSW sorting. Prevalent CV algorithms, in particular their technical rationales and prediction performance, are
introduced and compared. The distribution of academic research outputs is also examined from the aspects of
waste sources, task objectives, application domains, and dataset accessibility. The review discovers a trend of
shifting from traditional machine learning to deep learning algorithms. The robustness of CV for waste sorting is
increasingly enhanced owing to improved computation power and algorithms. Academic studies were unevenly distributed across sectors such as household, commerce and institution, and construction. Too often,
researchers reported some preliminary studies using simplified environments and artificially collected data.
Future research efforts are encouraged to consider the complexities of real-world scenarios and implement CV in
industrial waste sorting practice. This paper also calls for open sharing of waste image datasets for interested
researchers to train and evaluate their CV algorithms.
keywords: Municipal solid waste | Waste sorting | Computer vision | Image recognition | Machine learning | Deep learning |
English article |
6 |
A combined real-time intelligent fire detection and forecasting approach through cameras based on computer vision method
(2022) Fire is one of the most common hazards in the process industry. Until today, most fire alarms have had very
limited functionality. Normally, only a simple alarm is triggered without any specific information about the fire
circumstances provided, not to mention fire forecasting. In this paper, a combined real-time intelligent fire detection and forecasting approach through cameras is discussed, which extracts and predicts fire development characteristics. Three parameters (fire spread position, fire spread speed, and flame width) are used to characterize the fire development. Two neural networks are established, i.e., the Region-Convolutional Neural Network
(RCNN) for fire characteristic extraction through fire detection and the Residual Network (ResNet) for fire
forecasting. By designing 12 sets of cable fire experiments with different fire development conditions, the accuracies of fire parameter extraction and forecasting are evaluated. Results show that the mean relative errors (MREs) of extraction by RCNN for the three parameters are around 4–13%, 6–20%, and 11–37%, respectively. Meanwhile, the MREs of forecasting by ResNet for the three parameters are around 4–13%, 11–33%, and 12–48%,
respectively. This confirms that the proposed approach can provide a feasible solution for quantifying fire development and improving industrial fire safety, e.g., forecasting fire development trends, assessing the severity of accidents, estimating accident losses in real time, and guiding fire fighting and rescue tactics.
keywords: Industrial fire safety | Fire detection | Fire forecasting | Fire analysis | Artificial intelligence |
English article |
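The mean relative error reported in the entry above is a simple metric; a minimal sketch of how it can be computed (function and variable names are illustrative, not from the paper):

```python
def mean_relative_error(predicted, actual):
    """Mean relative error (MRE): average of |p - a| / |a| over all samples."""
    if len(predicted) != len(actual) or not actual:
        raise ValueError("sequences must be non-empty and of equal length")
    return sum(abs(p - a) / abs(a) for p, a in zip(predicted, actual)) / len(actual)

# Example: predicted vs. measured fire spread positions (arbitrary units)
mre = mean_relative_error([1.0, 2.1, 2.9], [1.0, 2.0, 3.0])
print(f"MRE = {mre:.1%}")  # about 2.8%
```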
7 |
Performance analysis of machine learning algorithm of detection and classification of brain tumor using computer vision
(2022) A brain tumor is an undesirable, uncontrolled growth of cells that occurs in all age groups. Classifying a tumor by its origin and degree of aggressiveness helps the physician establish a proper diagnosis and treatment plan. This research analyzes various state-of-the-art machine learning techniques, such as logistic regression, the multilayer perceptron, decision trees, the naive Bayes classifier, and the support vector machine, for classifying tumors as benign or malignant, with the discrete wavelet transform used for feature extraction on data available from the internet sources OASIS and ADNI. The research also reveals that logistic regression and the multilayer perceptron give the highest accuracy of 90%. The multilayer perceptron mimics human reasoning: it learns, memorizes, and is capable of reasoning and performing parallel computations. In the future, many more AI techniques can be trained to classify multimodal MRI brain scans into more than two classes of tumors.
keywords: Artificial Intelligence | MRI | Logistic regression | OASIS | Multilayer Perceptron |
English article |
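The entry above uses the discrete wavelet transform for feature extraction; as a hedged illustration, a single-level 1-D Haar DWT (a simplified stand-in for whatever wavelet and dimensionality the paper actually used) can be sketched as:

```python
import math

def haar_dwt_1d(signal):
    """Single-level 1-D Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the approximation
    coefficients can serve as coarse features for a downstream classifier.
    The signal length must be even.
    """
    if len(signal) % 2 != 0:
        raise ValueError("signal length must be even")
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# Pairwise averages/differences (scaled) of a toy intensity profile
approx, detail = haar_dwt_1d([4.0, 2.0, 5.0, 5.0])
```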
8 |
A radiological image analysis framework for early screening of the COVID-19 infection: A computer vision-based approach
(2022) Due to the absence of any specialized drugs, the novel coronavirus disease 2019, or COVID-19, is one of the biggest threats to mankind. Although the RT-PCR test is the gold standard to confirm
the presence of this virus, some radiological investigations find some important features from the
CT scans of the chest region, which are helpful to identify the suspected COVID-19 patients. This
article proposes a novel fuzzy superpixel-based unsupervised clustering approach that can automatically process CT scan images without any manual annotation and aid easy interpretation. The proposed approach is based on artificial cell swarm optimization, is named SUFACSO (SUperpixel-based Fuzzy Artificial Cell Swarm Optimization), and is implemented in the MATLAB environment. The proposed approach uses a novel superpixel computation method
that effectively represents the pixel intensity information, which is beneficial for the optimization process. Superpixels are further clustered using the proposed fuzzy artificial cell swarm optimization approach. A twofold contribution can thus be observed in this work: it helps to quickly diagnose patients in an unsupervised manner so that suspected persons can be isolated at an early phase to combat the spread of COVID-19, which is the major clinical impact of this work. Both qualitative and quantitative experimental results show the effectiveness of
the proposed approach and also establish it as an effective computer-aided tool to fight against the
COVID-19 virus. Four well-known cluster validity measures (Davies–Bouldin, Dunn, Xie–Beni, and the β index) are used to quantify the segmented results, and it is observed that the proposed approach not only performs well but also outperforms some of the standard approaches. On average, the proposed approach achieves Xie–Beni index values of 1.709792, 1.473037, 1.752433, and 1.709912 for 3, 5, 7, and 9 clusters, respectively, and these values are significantly lower than those of other state-of-the-art approaches. The general direction of this research is worth pursuing and should eventually lead to a contribution to the community.
keywords: COVID-19 | Radiological image interpretation | Superpixel | Type-2 fuzzy system | Artificial cell swarm optimization |
English article |
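The Xie–Beni values quoted above come from a standard fuzzy cluster validity index, the ratio of fuzzy compactness to minimum center separation (lower is better). A minimal pure-Python sketch, with illustrative names not taken from the paper:

```python
def xie_beni_index(points, centers, memberships, m=2.0):
    """Xie-Beni validity index for a fuzzy partition.

    points[k]: feature vector of sample k; centers[i]: center of cluster i;
    memberships[i][k]: membership degree of sample k in cluster i; m: fuzzifier.
    Returns fuzzy compactness divided by (n * minimum squared center distance).
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(points)
    compactness = sum(
        (memberships[i][k] ** m) * sqdist(points[k], centers[i])
        for i in range(len(centers)) for k in range(n)
    )
    separation = min(
        sqdist(centers[i], centers[j])
        for i in range(len(centers)) for j in range(len(centers)) if i != j
    )
    return compactness / (n * separation)

# Two well-separated 1-D clusters with crisp memberships give a small index
xb = xie_beni_index(
    points=[[0.0], [0.5], [10.0], [10.5]],
    centers=[[0.25], [10.25]],
    memberships=[[1, 1, 0, 0], [0, 0, 1, 1]],
)
```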
9 |
Natural Embedding of the Stokes Parameters of Polarimetric Synthetic Aperture Radar Images in a Gate-Based Quantum Computer
(2022) Quantum algorithms are designed to process quantum data (quantum bits) in a gate-based quantum computer. It has been rigorously proven that they reveal quantum advantages over conventional algorithms when their inputs are certain quantum data or some classical data mapped to quantum data. However,
in a practical domain, data are classical in nature, and they are
very big in dimension, size, and so on. Hence, there is a challenge
to map (embed) classical data to quantum data, and no quantum advantage of quantum algorithms has even been demonstrated over conventional ones when one processes the mapped classical data in a gate-based quantum computer. For the practical domain
of earth observation (EO), due to the different sensors on remote-sensing platforms, we can directly map some types of EO data
to quantum data. In particular, we have polarimetric synthetic
aperture radar (PolSAR) images characterized by polarized
beams. A polarized state of the polarized beam and a quantum
bit are the Doppelganger of a physical state. We map them to
each other, and we name this direct mapping a natural embedding (otherwise, an artificial embedding). Furthermore, we process our
naturally embedded data in a gate-based quantum computer by
using a quantum algorithm regardless of its quantum advantages
over conventional techniques; namely, we use a quantum machine learning (QML) network as a quantum algorithm to prove that we naturally embedded our data in the input qubits of a gate-based quantum computer.
Therefore, we employed and directly processed PolSAR images
in a QML network. Furthermore, we designed and provided a
QML network with an additional layer of a neural network,
namely, a hybrid quantum-classical network, and demonstrate
how to program (via optimization and backpropagation) this
hybrid quantum-classical network when employing and processing PolSAR images. In this work, we used a gate-based quantum computer offered by IBM Quantum and a classical simulator
for a gate-based quantum computer. Our contribution is that
we provided very specific EO data with a natural embedding
feature, the Doppelganger of quantum bits, and processed them
in a hybrid quantum-classical network. More importantly, in the
future, these PolSAR data can be processed by future quantum
algorithms and future quantum computing platforms to obtain
(or demonstrate) some quantum advantages over conventional
techniques for EO problems.
Index Terms: Natural embedding | parameterized quantum circuit | polarimetric synthetic aperture radar (PolSAR) | quantum machine learning (QML). |
English article |
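The "natural embedding" described above rests on the fact that a fully polarized beam's normalized Stokes vector lies on the Poincaré sphere, which has the same geometry as a qubit's Bloch sphere. A conceptual sketch of that mapping (an illustration of the idea, not the paper's exact encoding):

```python
import math

def stokes_to_bloch_angles(s0, s1, s2, s3):
    """Map a fully polarized Stokes vector (S0, S1, S2, S3) to Bloch-sphere
    angles (theta, phi) of a single qubit, reading the normalized point
    (S1, S2, S3)/S0 on the Poincare sphere as a point on the Bloch sphere."""
    x, y, z = s1 / s0, s2 / s0, s3 / s0
    norm = math.sqrt(x * x + y * y + z * z)
    if abs(norm - 1.0) > 1e-9:
        raise ValueError("beam is not fully polarized")
    theta = math.acos(z)      # polar angle on the Bloch sphere
    phi = math.atan2(y, x)    # azimuthal angle
    return theta, phi

# Horizontal linear polarization (1, 1, 0, 0) lands on the sphere's equator
theta, phi = stokes_to_bloch_angles(1.0, 1.0, 0.0, 0.0)
```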
10 |
Artificial intelligence versus natural selection: Using computer vision techniques to classify bees and bee mimics
(2022) Many groups of stingless insects have independently evolved mimicry of bees to fool would-be predators. To investigate this mimicry, we trained artificial intelligence (AI) algorithms, specifically computer vision, to classify citizen scientist images of bees, bumble bees, and diverse bee mimics. For detecting bees and bumble bees, our models achieved accuracies of and , respectively. As a proxy for a natural predator, our models were poorest at detecting bee mimics that exhibit both aggressive and defensive mimicry. Using the explainable AI method of class activation maps, we validated that our models learn from appropriate components within the image, which in turn provided anatomical insights. Our t-SNE plot yielded perfect within-group clustering, as well as between-group clustering that grossly replicated the phylogeny. Ultimately, the transdisciplinary approaches herein can enhance global citizen science efforts as well as investigations of mimicry and morphology of bees and other insects.
keywords: Artificial intelligence | Bioinformatics | Computing methodology | Entomology | Zoology |
English article |