Row | Title | Type |
---|---|---|
1 |
High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
(2022) Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast doubt on the potential equifinality presented by crocodile tooth marks and
stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently
separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach
(which discards artificial bootstrapping of the original imbalanced samples) to show that microscopic features
shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately
discriminate both types of bone modifications. We also implement new deep-learning methods that
objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99%
accuracy on testing sets). The present study shows that there are precise ways of differentiating both taphonomic
agents, and this invites taphonomists to apply them to controversial paleontological and archaeological
specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery |
English article |
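The paper's core idea, classifying marks from categorical microscopic features without artificially bootstrapping the imbalanced classes, can be illustrated with a minimal sketch. The feature names, values, and labels below are hypothetical stand-ins, and a Laplace-smoothed categorical naive Bayes model stands in for the paper's actual classifiers:

```python
from collections import defaultdict
import math

def train_categorical_nb(X, y, alpha=1.0):
    """Fit a Laplace-smoothed categorical naive Bayes model.

    X: list of tuples of categorical feature values; y: list of class labels.
    No resampling of the (possibly imbalanced) classes is performed.
    """
    classes = sorted(set(y))
    n = len(y)
    log_prior = {c: math.log(y.count(c) / n) for c in classes}
    n_feats = len(X[0])
    # distinct values observed per feature position (used for smoothing)
    values = [sorted({row[j] for row in X}) for j in range(n_feats)]
    cond = {c: [dict() for _ in range(n_feats)] for c in classes}
    for c in classes:
        rows = [row for row, label in zip(X, y) if label == c]
        for j in range(n_feats):
            den = len(rows) + alpha * len(values[j])
            for v in values[j]:
                num = sum(1 for row in rows if row[j] == v) + alpha
                cond[c][j][v] = math.log(num / den)
    return log_prior, cond

def predict_nb(model, x):
    """Return the class with the highest posterior log-score for sample x."""
    log_prior, cond = model
    scores = {
        c: log_prior[c]
        + sum(cond[c][j].get(x[j], math.log(1e-9)) for j in range(len(x)))
        for c in log_prior
    }
    return max(scores, key=scores.get)
```

With hypothetical (groove shape, microstriations) features, cut marks and tooth marks separate cleanly once each class's conditional value frequencies differ.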
2 |
Deep convolutional neural networks-based Hardware–Software on-chip system for computer vision application
(2022) Embedded vision systems are the best solutions for high-performance and lightning-fast inspection tasks. As everyday life evolves, it becomes almost imperative to harness artificial
intelligence (AI) in vision applications that make these systems intelligent and able to make
decisions close to or similar to humans. In this context, the AI’s integration on embedded
systems poses many challenges, given that AI performance depends on the volume and
quality of the data it assimilates to learn and improve. This runs up against the energy consumption and
cost constraints of FPGA-SoCs, which have limited processing, memory, and communication
capacity. Despite this, the AI algorithm implementation on embedded systems can drastically
reduce energy consumption and processing times, while reducing the costs and risks associated
with data transmission. Therefore, its efficiency and reliability always depend on the designed
prototypes. Within this range, this work proposes two different designs for the Traffic Sign
Recognition (TSR) application based on the convolutional neural network (CNN) model,
followed by three implementations on the PYNQ-Z1. Firstly, we propose to implement the CNN-based
TSR application on the PYNQ-Z1 processor. Considering its runtime result of around 3.55 s,
there is room for improvement using programmable logic (PL) and processing system (PS) in a
hybrid architecture. Therefore, we propose a streaming architecture, in which the CNN layers
will be accelerated to provide a hardware accelerator for each layer where direct memory
access (DMA) interface is used. Thus, we observed efficient power consumption, decreased
hardware cost, and an optimized execution time of 2.13 s, but there was still room for design
optimizations. Finally, we propose a second co-design, in which the CNN will be accelerated
to be a single computation engine where BRAM interface is used. The implementation results
prove that our proposed embedded TSR design achieves the best performance among the
proposed architectures, with an execution time of about 0.03 s, a computation roof of
about 36.6 GFLOPS, and a bandwidth roof of about 3.2 GByte/s.
keywords: CNN | FPGA | Acceleration | Co-design | PYNQ-Z1 |
English article |
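The reported computation roof (36.6 GFLOPS) and bandwidth roof (3.2 GByte/s) are the two ceilings of the classic roofline model, under which a layer's attainable throughput is capped either by peak compute or by memory bandwidth times its arithmetic intensity. A minimal sketch, using the paper's roof figures but hypothetical intensity values:

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: attainable performance is the lower of the compute
    roof and the bandwidth roof scaled by arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Roofs reported for the second co-design (BRAM-based computation engine):
PEAK, BW = 36.6, 3.2  # GFLOPS, GByte/s

low = attainable_gflops(PEAK, BW, 4.0)    # memory-bound layer
high = attainable_gflops(PEAK, BW, 100.0) # compute-bound layer
```

A layer below the ridge point (here 36.6 / 3.2 ≈ 11.4 FLOPs/byte) is bandwidth-bound; above it, the compute roof dominates.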
3 |
A computer vision framework using Convolutional Neural Networks for airport-airside surveillance
(2022) Modern airports often have large and complex airside environments featuring multiple runways with changing configurations, numerous taxiways for effective circulation of flights, and tens, if not hundreds, of gates. With inherent uncertainties in gate push-back and taxiway routing, efficient surveillance and management of airport-airside operations is a highly challenging task for air traffic controllers. An increase in air traffic may lead to gate delays, taxiway congestion, taxiway incursions, as well as a significant increase in the workload of air traffic controllers. With the advent of Digital Towers, airports are increasingly being equipped with surveillance camera systems. This paper proposes a novel computer vision framework for airport-airside surveillance, using cameras to monitor ground movement objects for safety enhancement and operational efficiency improvement. The framework adopts Convolutional Neural Networks and camera calibration techniques for aircraft detection and tracking, push-back prediction, and maneuvering monitoring. The proposed framework is applied to video camera feeds from Houston Airport, USA (for maneuvering monitoring) and Obihiro Airport, Japan (for push-back prediction). The object detection models of the proposed framework achieve up to 73.36% average precision on Houston airport and 87.3% on Obihiro airport. The framework estimates aircraft speed and distance with low error (up to 6 meters), and aircraft push-back is predicted with an average error of 3 min from the time an aircraft arrives, with the error rate decreasing until the aircraft's actual push-back event.
keywords: Air traffic control | Convolutional Neural Network | Computer vision |
English article |
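Once camera calibration maps detections from pixels to ground-plane coordinates, speed estimation of the kind described reduces to differencing positions over time. A minimal sketch with hypothetical positions (the paper's calibration pipeline itself is not reproduced):

```python
def ground_speed(p1, p2, dt):
    """Speed in m/s from two ground-plane positions (metres) dt seconds apart.

    p1, p2: (x, y) coordinates already projected from image pixels onto the
    airport surface by camera calibration (homography assumed done upstream).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return ((dx ** 2 + dy ** 2) ** 0.5) / dt
```

Averaging over several frame pairs would damp detection jitter, which matters given the up-to-6-metre position error the paper reports.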
4 |
Using social media photos and computer vision to assess cultural ecosystem services and landscape features in urban parks
(2022) Urban parks are important public places that provide an opportunity for city dwellers to interact with nature. In
recent years, social media data have become a promising data source for the assessment of cultural ecosystem
services (CES) and landscape features in urban parks. However, it is a challenging task to identify and classify the
CES and landscape features from social media photos by manual content analysis. In addition, relatively few
studies focused on the differences in landscape preferences between tourists and locals in urban parks. In this
study, we used geotagged social media photos from Flickr and computer vision methods (scene recognition,
image clustering and image labeling) based on the convolutional neural networks (CNN) and the Google Cloud
Vision platform to assess the spatial preferences and landscape preferences (cultural ecosystem services and
landscape features) of tourists and locals in the urban parks of Brussels. The spatial analysis results showed that
the tourists’ photos were spatially concentrated on well-known parks located in the city center while the locals’
photos were rather spatially dispersed across all parks of the city. We identified 10 main landscape themes
(corresponding to 4 CES categories and 10 landscape feature categories) from 20 image clusters by automated
image analysis on social media photos. We also noticed that tourists paid more attention to the place identity
featured by symbolic sculptures and buildings, while locals showed more interest in local species of plants,
flowers, insects, birds, and animals. This research contributes to social media-based user preferences analysis and
CES assessment, which could provide insights for urban park planning and tourism management.
keywords: Social media data | Cultural ecosystem services | Landscape features | Urban parks | Computer vision |
English article |
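Image clustering of the kind used here typically runs k-means over per-photo feature vectors (e.g. CNN embeddings). A minimal Lloyd's-algorithm sketch on hypothetical 2-D features standing in for real embeddings:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Cluster photo feature vectors with Lloyd's k-means.

    features: (n, d) float array, e.g. CNN embeddings of social media photos.
    Returns (labels, centers).
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

The resulting clusters would then be inspected (or auto-labeled) to name landscape themes, as the paper does with its 20 image clusters.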
5 |
VisuaLizations As Intermediate Representations (VLAIR): An approach for applying deep learning-based computer vision to non-image-based data
(2022) Deep learning algorithms increasingly support automated systems in areas such as human activity
recognition and purchase recommendation. We identify a current trend in which data is transformed
first into abstract visualizations and then processed by a computer vision deep learning pipeline. We
call this VisuaLization As Intermediate Representation (VLAIR) and believe that it can be instrumental
to support accurate recognition in a number of fields while also enhancing humans’ ability to
interpret deep learning models for debugging purposes or for personal use. In this paper we describe
the potential advantages of this approach and explore various visualization mappings and deep
learning architectures. We evaluate several VLAIR alternatives for a specific problem (human activity
recognition in an apartment) and show that VLAIR attains higher classification accuracy than classical
machine learning algorithms and several other non-image-based deep learning algorithms across several
data representations.
keywords: Information visualization | Convolutional neural networks | Human activity recognition | Smart homes | Data representation | Intermediate representations | Interpretability | Machine learning | Deep learning |
English article |
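The VLAIR idea, rendering non-image data as an abstract visualization before feeding it to a vision pipeline, can be sketched as a fixed rasterization of sensor readings. The sensor names and the 8×8 layout below are hypothetical, chosen only to illustrate the mapping:

```python
import numpy as np

# Hypothetical layout: sensor id -> (row, col) pixel in an 8x8 "floor plan".
SENSOR_POS = {
    "kitchen_motion": (1, 1),
    "door_contact": (1, 6),
    "bed_pressure": (6, 2),
}

def readings_to_image(readings, size=8):
    """Render one time window of non-image sensor data as a grayscale image.

    The resulting array is the intermediate representation a CNN would consume.
    """
    img = np.zeros((size, size), dtype=np.float32)
    for sensor, value in readings.items():
        r, c = SENSOR_POS[sensor]
        img[r, c] = value  # pixel intensity encodes the sensor activation
    return img
```

A CNN trained on such rasters can exploit the spatial layout (e.g. sensors in the same room being adjacent pixels), which is the advantage VLAIR claims over feeding the raw vector to a non-image model.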
6 |
Deep learning based computer vision approaches for smart agricultural applications
(2022) The agriculture industry is undergoing a rapid digital transformation, growing more powerful on the pillars of cutting-edge approaches like artificial intelligence and allied technologies. At the core of artificial intelligence, deep learning-based computer vision enables various agriculture activities to be performed automatically with utmost precision, making smart agriculture a reality. Computer vision techniques, in conjunction with high-quality image acquisition using remote cameras, enable non-contact and efficient technology-driven solutions in agriculture. This review contributes state-of-the-art computer vision technologies based on deep learning that can assist farmers in operations from land preparation to harvesting. Recent works in the area of computer vision were analyzed in this paper and categorized into (a) seed quality analysis, (b) soil analysis, (c) irrigation water management, (d) plant health analysis, (e) weed management, (f) livestock management and (g) yield estimation. The paper also discusses recent trends in computer vision such as generative adversarial networks (GAN), vision transformers (ViT) and other popular deep learning architectures. Additionally, this study pinpoints the challenges in implementing the solutions in the farmer's field in real time. The overall finding indicates that convolutional neural networks are the cornerstone of modern computer vision approaches and that their various architectures provide high-quality solutions across various agriculture activities in terms of precision and accuracy. However, the success of the computer vision approach lies in building the model on a quality dataset and providing real-time solutions.
keywords: Agriculture automation | Computer vision | Deep learning | Machine learning | Smart agriculture | Vision transformers |
English article |
7 |
Plant leaf disease detection using computer vision and machine learning algorithms
(2022) Agriculture provides food for the entire human population even as that population grows rapidly. Predicting
plant diseases at an early stage is therefore essential to secure the food supply, yet early-stage detection
remains difficult in practice. The idea behind this paper is to raise awareness among farmers of
cutting-edge technologies that reduce diseases in plant leaves. Since the tomato is a widely available
vegetable, machine learning and image processing approaches with an accurate algorithm are identified
to detect leaf diseases in the tomato plant. In this investigation, the
samples of tomato leaves having disorders are considered. With these disordered samples of tomato leaves,
farmers can easily find the diseases based on the early symptoms. Firstly, the samples of tomato leaves are resized to
256 × 256 pixels and then Histogram Equalization is used to improve the quality of tomato samples. The K-means
clustering is introduced for partitioning of dataspace into Voronoi cells. The boundary of leaf samples is extracted
using contour tracing. The multiple descriptors viz., Discrete Wavelet Transform, Principal Component Analysis
and Grey Level Co-occurrence Matrix are used to extract the informative features of the leaf samples. Finally,
the extracted features are classified using machine learning approaches such as Support Vector Machine (SVM),
Convolutional Neural Network (CNN) and K-Nearest Neighbor (K-NN). The accuracy of the proposed model is
tested using SVM (88%), K-NN (97%) and CNN (99.6%) on the disordered tomato samples.
keywords: Convolutional Neural Networks | Discrete Wavelet Transform | Principal Component Analysis | Nearest Neighbor | Leaf disease |
English article |
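Of the descriptors listed, the Grey Level Co-occurrence Matrix is easy to sketch directly: count how often pairs of grey levels co-occur at a fixed pixel offset, normalize, and derive texture statistics such as Haralick contrast. A minimal version for a single horizontal offset (not the paper's exact feature set):

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Grey Level Co-occurrence Matrix for one offset (default: right neighbour).

    img: 2-D array of integer grey levels in [0, levels).
    Returns the offset's normalized co-occurrence probabilities.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A uniform region yields zero contrast, while alternating grey levels yield high contrast, which is why such statistics separate healthy leaf texture from lesioned texture.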
8 |
Quantum Dilated Convolutional Neural Networks
(2022) In recent years, with rapid progress in the development of quantum technologies, quantum
machine learning has attracted a lot of interest. In particular, a family of hybrid quantum-classical neural
networks, consisting of classical and quantum elements, has been massively explored for the purpose of
improving the performance of classical neural networks. In this paper, we propose a novel hybrid quantum-classical algorithm called quantum dilated convolutional neural networks (QDCNNs). Our method extends
the concept of dilated convolution, which has been widely applied in modern deep learning algorithms,
to the context of hybrid neural networks. The proposed QDCNNs are able to capture larger context during
the quantum convolution process while reducing the computational cost. We perform empirical experiments
on MNIST and Fashion-MNIST datasets for the task of image recognition and demonstrate that QDCNN
models generally achieve better performance in terms of both accuracy and computational efficiency compared
to existing quantum convolutional neural networks (QCNNs).
INDEX TERMS: Quantum-classical neural networks | quantum dilated convolution | parameterized quantum circuits. |
English article |
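Classical dilated convolution, the concept QDCNNs lift into the quantum setting, inserts gaps of dilation − 1 samples between kernel taps so the receptive field grows without adding parameters. A minimal 1-D sketch (the quantum circuit part of the paper is not reproduced):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with (dilation - 1) gaps between kernel taps.

    With dilation d and kernel size k, each output sees a receptive field of
    (k - 1) * d + 1 input samples, yet the parameter count stays at k.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output element
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

Doubling the dilation doubles the context each output element sees while the number of multiply-accumulates per output is unchanged, which is the cost saving QDCNNs exploit in the quantum convolution step.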
9 |
Resource efficient AI: Exploring neural network pruning for task specialization
(2022) This paper explores the use of neural network pruning in transfer learning applications for more
resource-efficient inference. The goal is to focus and optimize a neural network on a smaller
specialized target task. With the advent of IoT, we have seen an immense increase in AI-based
applications on mobile and embedded devices, such as wearables and other smart appliances.
However, with the ever-increasing complexity and capabilities of machine learning algorithms,
this push to the edge has led to new challenges due to the constraints imposed by the limited
availability of resources on these devices. Some form of compression is needed to allow state-of-the-art convolutional neural networks to run on edge devices. In this work, we adapt existing
neural network pruning methods to allow them to specialize networks to only focus on a subset
of what they were originally trained for. This is a transfer learning use-case where we optimize
large pre-trained networks. This differs from standard optimization techniques by allowing the
network to forget certain concepts, making the network's footprint even smaller. We
compare different pruning criteria, including one from the field of Explainable AI (XAI), to
determine which technique allows for the smallest possible network while maintaining high
performance on the target task. Our results show the benefits of using network specialization
when executing neural networks on embedded devices both with and without GPU acceleration.
keywords: Neural network compression | Machine learning | Explainable AI | Neural network pruning | Edge inference |
English article |
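A common baseline among pruning criteria compared in work like this is magnitude pruning: zero out the fraction of weights with the smallest absolute values. A minimal sketch (the paper's specific criteria, including the XAI-based one, are not reproduced here):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest |value|.

    weights: array of any shape; returns a pruned copy of the same shape.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

For task specialization, the same mechanism would be driven by a task-aware relevance score instead of raw magnitude, so weights serving the discarded classes are the ones removed.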
10 |
Deep learning-based transceiver design for multi-user MIMO systems
(2022) Multi-user multiple-input multiple-output (MIMO) is a key technique to increase both the
channel capacity and the number of users that can be served simultaneously. One of the main
challenges related to the deployment of such systems is the complexity of the transceiver
processing. Although the conventional optimization algorithms are able to provide excellent
performance, they generally require considerable computational complexity, which gets in the
way of their practical application in real-time systems. In contrast to existing work, we study
a DL-based transceiver design scheme for a downlink MIMO broadcasting channel (MIMO BC)
system, which consists of a base station (BS) serving multi-users. The objective of this work
is to maximize the sum-rate of all users by jointly optimizing the transmitter and receivers
under the total power constraint, while suppressing interference as much as possible. Due to
the inter-user interference in such a system, the considered problem is nonconvex and NP-hard.
Different from traditional optimization algorithms, we rely on the convolutional neural networks
(CNNs) to optimize the transceivers in an adaptive way. In the proposed scheme, we develop an
unsupervised learning strategy, where a loss function is constructed innovatively for reducing
the inter-user interference. Simulation results show that the inter-user interference is reduced
effectively by our proposed CNN-based transceiver optimization method.
keywords: Transceiver design | MIMO BC | Deep learning | Convolutional neural networks |
English article |
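The objective being maximized, the sum-rate under interference, is the sum over users of log2(1 + SINR). A minimal sketch with hypothetical per-user signal and interference powers (the CNN that produces the transceiver weights is not shown):

```python
import numpy as np

def sum_rate(signal_powers, interference_powers, noise=1.0):
    """Downlink sum-rate in bits/s/Hz: sum_k log2(1 + SINR_k).

    signal_powers[k]: received power of user k's own stream;
    interference_powers[k]: total inter-user interference seen by user k.
    """
    s = np.asarray(signal_powers, dtype=float)
    i = np.asarray(interference_powers, dtype=float)
    sinr = s / (i + noise)
    return float(np.log2(1.0 + sinr).sum())
```

An unsupervised loss of the kind the paper describes could simply be the negative of this quantity, so gradient descent on the network weights pushes interference down and the sum-rate up.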