No. | Title | Type |
---|---|---|
1 |
Adaptive Management of Multimodal Biometrics—A Deep Learning and Metaheuristic Approach
(2021) This paper introduces a framework for adaptive rank-level biometric fusion: a new approach towards personal authentication. In this work, a novel attempt has been made to identify the optimal design parameters and framework of a multibiometric system, where the chosen biometric traits are subjected to rank-level fusion. Optimal fusion parameters depend upon the security level demanded by a particular biometric application. The proposed framework makes use of a metaheuristic approach towards adaptive fusion in pursuit of optimal fusion results at varying levels of security. Rank-level fusion rules have been employed to provide optimum performance using the Ant Colony Optimization technique. The novelty of the reported work also lies in the fact that the proposed design engages three biometric traits simultaneously for the first time in the domain of adaptive fusion, so as to test the efficacy of the system in selecting the optimal set of biometric traits from a given set. The literature reveals the unique biometric characteristics of the fingernail plate, which have been exploited in the rigorous experimentation conducted here. Index, middle and ring fingernail plates have been taken into consideration, and deep learning feature sets of the three nail plates have been extracted using three customized pre-trained models: AlexNet, ResNet-18 and DenseNet-201. The adaptive multimodal performance of the three nail plates has also been checked using existing methods of adaptive fusion designed for addressing fusion at the score level and decision level. Exhaustive experiments have been conducted on the MATLAB R2019a platform using the Deep Learning Toolbox. When the cost of false acceptance is 1.9, experimental results obtained from the proposed framework give values of the average minimum weighted error rate as low as 0.0115, 0.0097 and 0.0101 for the AlexNet, ResNet-18 and DenseNet-201 based experiments respectively. The results demonstrate that the proposed system is capable of computing the optimal parameters for rank-level fusion at varying security levels, thus contributing towards optimal performance accuracy. © 2021 Elsevier B.V. All rights reserved. Keywords: Adaptive Biometric Fusion | Ant Colony Optimization | Deep Learning | Fingernail Plate | Multimodal Biometrics | Rank-level Adaptive Fusion |
English article |
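The rank-level fusion described in this abstract can be illustrated with a small sketch. This is not the paper's method: the per-matcher weights below are fixed illustrative values, whereas the paper tunes the fusion parameters with Ant Colony Optimization for a chosen security level, and all identities and scores here are made up.

```python
# Minimal sketch of weighted rank-level (Borda-style) fusion.
# Weights are fixed illustrative values; in the paper they would be
# tuned by Ant Colony Optimization for a target security level.

def ranks_from_scores(scores):
    """Map matcher scores (higher = better) to ranks (1 = best)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {identity: r + 1 for r, identity in enumerate(order)}

def weighted_rank_fusion(matcher_scores, weights):
    """Fuse per-matcher score dicts into one ranking by weighted rank sum."""
    fused = {}
    for scores, w in zip(matcher_scores, weights):
        for identity, rank in ranks_from_scores(scores).items():
            fused[identity] = fused.get(identity, 0.0) + w * rank
    # Lower weighted rank sum = stronger combined evidence.
    return sorted(fused, key=fused.get)

# Three matchers (e.g. index/middle/ring nail plates) scoring 3 identities.
matchers = [
    {"A": 0.91, "B": 0.55, "C": 0.30},
    {"A": 0.40, "B": 0.85, "C": 0.35},
    {"A": 0.80, "B": 0.70, "C": 0.20},
]
print(weighted_rank_fusion(matchers, weights=[0.5, 0.2, 0.3]))
```

Changing the weight vector changes which matcher dominates the fused ranking, which is what makes the fusion adaptive once the weights are chosen per security level.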
2 |
ANN trained and WOA optimized feature-level fusion of iris and fingerprint
(2021) Unimodal biometric systems have been widely implemented for maintaining security and privacy in various applications such as mobile phones, banking apps, airport access control and laptop login. Due to advances in technology, impostors have devised various ways to breach security, and the security of most deployed biometric applications can be compromised. The quality of the input sample also plays an important role in attaining the best performance in terms of improved accuracy and reduced FAR and FRR. Researchers have combined various biometric modalities to overcome the problems of unimodal biometrics. In this paper, a multi-biometric feature-level fusion system of iris and fingerprint is presented. The consistency of the fingerprint and the stability of the iris modality are taken into consideration for high-security applications. At the pre-processing level, an atmospheric light adjustment algorithm is applied to improve the quality of the input samples (iris and fingerprint). For feature extraction, the nearest neighbour algorithm and speeded-up robust features (SURF) are applied to the fingerprint and iris data respectively. Further, to select the best features, the extracted features are optimized by a GA. To achieve an excellent recognition rate, the iris and fingerprint data are trained by an ANN. The experimental results show that the proposed system exhibits improved performance and better security. Finally, the template is secured by applying the AES algorithm, and the results are compared with the DES, 3DES, RSA and RC4 algorithms. © 2021 Elsevier Ltd. All rights reserved. Selection and peer-review under responsibility of the scientific committee of the 1st International Conference on Computations in Materials and Applied Engineering – 2021. Keywords: Multimodal biometrics fusion | ANN | SURF | GA | RSA |
English article |
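The GA-based feature selection step mentioned in this abstract can be sketched as follows. This is a generic toy GA, not the paper's implementation: the fitness function (mean class separation of the kept features) and all data below are invented stand-ins for the ANN-driven criterion used in the paper.

```python
# Toy GA feature selection: a bitmask chromosome marks which fused
# features to keep; fitness is a stand-in separation measure.
import random

random.seed(1)

def fitness(mask, class_a, class_b):
    """Mean |class-mean difference| over the kept feature dimensions."""
    kept = [i for i, bit in enumerate(mask) if bit]
    if not kept:
        return 0.0
    sep = 0.0
    for i in kept:
        ma = sum(s[i] for s in class_a) / len(class_a)
        mb = sum(s[i] for s in class_b) / len(class_b)
        sep += abs(ma - mb)
    return sep / len(kept)   # rewards informative, compact subsets

def select_features(class_a, class_b, n_feat, pop=30, gens=40):
    population = [[random.randint(0, 1) for _ in range(n_feat)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, class_a, class_b),
                        reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_feat)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[random.randrange(n_feat)] ^= 1  # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, class_a, class_b))

# Features 0 and 2 separate the classes; 1 and 3 are uninformative.
class_a = [[0.0, 5.0, 0.0, 5.0], [0.2, 4.0, 0.2, 6.0]]
class_b = [[1.0, 4.0, 1.0, 6.0], [1.2, 5.0, 1.2, 5.0]]
best = select_features(class_a, class_b, n_feat=4)
print(best)
```

The GA should converge on a mask that keeps only the discriminative dimensions, which is the role feature selection plays before the ANN matcher in the pipeline above.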
3 |
GaitCode: Gait-based continuous authentication using multimodal learning and wearable sensors
(2021) The ever-growing threats of security and privacy loss from unauthorized access to mobile devices have led to the development of various biometric authentication methods for easier and safer data access. Gait-based authentication is a popular biometric authentication method, as it utilizes the unique patterns of human locomotion and requires little cooperation from the user. Existing gait-based biometric authentication methods, however, suffer from degraded performance when using mobile devices such as smartphones as the sensing device, due to multiple factors such as increased accelerometer noise, sensor orientation and positioning, and noise from body movements not related to gait. To address these drawbacks, some researchers have adopted methods that fuse information from multiple accelerometer sensors mounted on the human body at different locations. In this work we present a novel gait-based continuous authentication method that applies multimodal learning to jointly recorded accelerometer and ground contact force data from smart wearable devices. Gait cycles are extracted as the basic authentication element that can continuously authenticate a user. We use a network of auto-encoders with early or late sensor fusion for feature extraction, and SVM and softmax for classification. The effectiveness of the proposed approach has been demonstrated through extensive experiments on datasets collected from two case studies, one with commercial off-the-shelf smart socks and the other with a medical-grade research prototype of smart shoes. The evaluation shows that the proposed approach can achieve a very low Equal Error Rate of 0.01% and 0.16% for identification with smart socks and smart shoes respectively, and a False Acceptance Rate of 0.54%–1.96% for leave-one-out authentication.
Keywords: Biometric authentication | Gait authentication | Autoencoders | Sensor fusion | Multimodal learning | Wearable sensors |
English article |
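The early-versus-late sensor fusion distinction mentioned in this abstract can be sketched in a few lines. The paper uses auto-encoder networks; here a toy summary function stands in for a learned encoder, and the accelerometer and ground-force windows are invented.

```python
# Early vs. late fusion of per-gait-cycle sensor windows.
# A (mean, range) summary is a stand-in for a learned auto-encoder.

def encode(window):
    """Stand-in feature encoder: (mean, range) of one sensor window."""
    return [sum(window) / len(window), max(window) - min(window)]

def early_fusion(acc, grf):
    """Fuse the raw windows first, then encode the joint window."""
    return encode(acc + grf)

def late_fusion(acc, grf):
    """Encode each sensor separately, then concatenate the features."""
    return encode(acc) + encode(grf)

acc = [0.1, 0.4, 0.2, 0.3]   # accelerometer, one gait cycle (made up)
grf = [10.0, 30.0, 20.0]     # ground contact force, same cycle (made up)

print(early_fusion(acc, grf))  # one 2-D feature from the joint window
print(late_fusion(acc, grf))   # 4-D feature: 2 per sensor
```

The design choice the abstract contrasts is exactly this: whether the encoder sees one joint signal (early fusion) or each sensor stream is encoded independently before concatenation (late fusion).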
4 |
Weighted boxes fusion: Ensembling boxes from different object detection models
(2021) Object detection is a crucial task in computer vision systems, with a wide range of applications in autonomous driving, medical imaging, retail, security, face recognition, robotics, and others. Nowadays, neural-network-based models are used to localize and classify instances of objects of particular classes. When real-time inference is not required, ensembles of models help to achieve better results. In this work, we present a novel method for fusing predictions from different object detection models: weighted boxes fusion. Our algorithm utilizes the confidence scores of all proposed bounding boxes to construct averaged boxes, and significantly improves the quality of the fused predicted rectangles for an ensemble. We tested the method on several datasets and evaluated it in the context of the Open Images and COCO Object Detection challenges, achieving top results in these challenges. The 3D version of boxes fusion was successfully applied by the winning teams of the Waymo Open Dataset and Lyft 3D Object Detection for Autonomous Vehicles challenges. The source code is publicly available at GitHub (Solovyev, 2019 [31]). © 2021 Published by Elsevier B.V. Keywords: Object detection | Computer vision | Deep learning |
English article |
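The core idea of weighted boxes fusion can be sketched as follows. This is a simplified reading of the abstract, not the authors' released code: overlapping boxes (IoU above a threshold) are merged into one box whose coordinates are confidence-weighted averages, and it omits details such as per-model weights and rescaling the fused score by the number of contributing models.

```python
# Simplified weighted-boxes-fusion sketch: cluster overlapping boxes,
# then average each cluster's coordinates weighted by confidence.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_boxes(boxes, iou_thr=0.5):
    """boxes: list of (x1, y1, x2, y2, score) from any number of models."""
    clusters = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        for cluster in clusters:           # match against cluster seed
            if iou(cluster[0], box) > iou_thr:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    fused = []
    for cluster in clusters:
        total = sum(b[4] for b in cluster)
        coords = [sum(b[i] * b[4] for b in cluster) / total
                  for i in range(4)]       # confidence-weighted average
        fused.append((*coords, total / len(cluster)))  # mean confidence
    return fused

boxes = [(0, 0, 10, 10, 0.9),     # model 1
         (1, 1, 11, 11, 0.6),     # model 2, same object
         (50, 50, 60, 60, 0.8)]   # model 2, different object
print(fuse_boxes(boxes))
```

Unlike NMS, which discards all but one box per cluster, this averaging uses every model's localization evidence, which is why ensembling benefits from it.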
5 |
BIOFUSE: A framework for multi-biometric fusion on biocryptosystem level
(2021) Biometric cryptosystems, or biocryptosystems, are gaining prominence for cryptographic key generation, encryption and biometric template protection. However, the most popular state-of-the-art biocryptosystems, fuzzy commitment and fuzzy vault, are prone to multiple security attacks. Recently proposed multi-biometric cryptosystems improve security and enhance recognition performance. They perform the fusion of multi-biometric characteristics with either a single biocryptosystem or multiple, independently accessed biocryptosystems. An attack on any of the involved biocryptosystems can weaken the security of the whole system. In our paper, we propose a multi-biometric fusion framework, BIOFUSE, that combines fuzzy commitment and fuzzy vault using a format-preserving encryption scheme. BIOFUSE makes it improbable for an attacker to get unauthorized access to the system without impersonating all the biometric inputs of the genuine user at the same instant. We present the 4 most basic ways of constructing BIOFUSE and find only 1, named S-BIOFUSE (S3), to be a secure design. We compare the recognition performance of the proposed scheme with existing multi-biometric cryptosystems on various databases. The results show a 0.98 true match rate at a 0.01 false match rate on a virtual IITD-DB1 database, which indicates that our proposed work achieves good recognition performance while providing high security. © 2020 Elsevier Inc. All rights reserved. Keywords: Biometric cryptosystem | Biometric template protection | Multi-biometric fusion | Fuzzy commitment | Fuzzy vault | Format-preserving encryption |
English article |
6 |
Joint discriminative feature learning for multimodal finger recognition
(2021) Recently, finger-based multimodal biometrics, due to its high security and stability, has received considerable attention compared with unimodal biometrics. However, existing multimodal finger feature extraction approaches extract the features of different modalities separately, ignoring correlations among these modalities. Furthermore, most conventional finger feature representation approaches are hand-crafted by design, which requires strong prior knowledge. It is therefore very important to explore and develop a suitable feature representation and fusion strategy for multimodal biometric recognition. In this paper, we propose a joint discriminative feature learning (JDFL) framework for multimodal finger recognition by combining finger vein (FV) and finger knuckle print (FKP) patterns. For the FV and FKP images, we first establish the informative dominant direction vector by convolving a bank of Gabor filters with the original finger image. Then, we develop a simple yet effective feature learning algorithm, which simultaneously maximizes the distance between between-class samples, minimizes the distance between within-class samples, and maximizes the correlation among within-class inter-modality samples. Finally, we integrate the block-wise histograms of the learned feature maps for multimodal finger fusion recognition. Experimental results demonstrate that the proposed approach has better recognition performance than state-of-the-art finger recognition methods. © 2020 Elsevier Ltd. All rights reserved. Keywords: Multimodal biometrics | Feature fusion | Inter-modality | Joint feature learning |
English article |
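The dominant-direction step described in this abstract (convolving a bank of Gabor filters with the image and keeping the strongest orientation response per pixel) can be sketched as follows. The filter parameters and the test image are arbitrary illustrative choices, not the paper's.

```python
# Per-pixel dominant orientation from a small Gabor filter bank.
# Parameters (ksize, sigma, wavelength) are illustrative only.
import numpy as np

def gabor_kernel(theta, ksize=9, sigma=2.5, lam=4.0):
    """Real (cosine) Gabor kernel at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / lam)

def dominant_direction(image, n_orient=4):
    """Index of the Gabor orientation with the largest |response| per pixel."""
    responses = []
    for k_idx in range(n_orient):
        k = gabor_kernel(k_idx * np.pi / n_orient)
        pad = k.shape[0] // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.zeros_like(image, dtype=float)
        for i in range(image.shape[0]):        # plain sliding-window
            for j in range(image.shape[1]):    # correlation
                out[i, j] = np.sum(
                    padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
        responses.append(np.abs(out))
    return np.argmax(np.stack(responses), axis=0)

# Vertical stripes (period 4 along x) should be dominated by
# orientation index 0 (the filter oscillating along x).
img = np.tile([1.0, 1.0, 0.0, 0.0], (16, 4))
dd = dominant_direction(img, n_orient=4)
print(np.bincount(dd.ravel()))
```

In the paper this orientation map is only the first stage; the learned discriminative features are then built on top of such direction responses.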
7 |
Remote measurement of building usable floor area – Algorithms fusion
(2021) Rapid changes taking place in the urban environment have a significant impact on urban growth. Most cities and urban regions all over the world compete to increase resident and visitor satisfaction. The growing requirements, and the rapidity with which new technologies are introduced into all aspects of residents' lives, force cities and urban regions to implement "smart city" concepts in their activities. Real estate is one of the principal anthropogenic components of the urban environment and thus becomes a subject of thorough multidisciplinary analysis in fields requiring spatial information systems. Recent advances in information technology, combined with the increased availability of high-resolution imagery from Earth observation, create an opportunity to use new sources of data that enable many urban environmental problems to be identified, monitored, and solved. The aim of the paper is to elaborate precise, complete and detailed property information with the use of remote sensing observations in a suitable numerical algorithm. The authors concentrate on providing one of the most important, and probably the most lacking, features describing properties: building usable floor area (BUFA). The solution is elaborated in the form of an automatic algorithm based on machine learning and computer vision technology applied to LiDAR (big data) and close-range images, with respect to spatial information system requirements. The obtained results for BUFA estimation are satisfactory in comparison to the state of the art and may increase the reliability of decision-making in investment, fiscal, registration and planning contexts. Keywords: Building usable floor area | Machine learning | Computer vision | Fuzzy theory | Measurement data processing | Algorithms fusion |
English article |
8 |
Deep learning for object detection and scene perception in self-driving cars: Survey, challenges and open issues
Publication year: 2021 - English PDF pages: 20 - Persian doc pages: 82. This article presents a comprehensive survey of deep learning applications for object detection and scene perception in autonomous vehicles. Unlike existing review articles, we examine the theory underlying autonomous vehicles from a deep learning perspective, along with current implementations, followed by their critical evaluation. Deep learning is one potential solution for the problems of object detection and scene perception, one that can enable algorithm-driven and data-driven cars. In this article, we aim to bridge the gap between deep learning and autonomous vehicles through a comprehensive survey. We begin with an introduction to self-driving cars, deep learning and computer vision, followed by an overview of artificial general intelligence. We then categorize existing powerful deep learning libraries and their role and importance in the growth of deep learning. Finally, we discuss several techniques that address image perception issues in real-time driving, and critically evaluate recent implementations and experiments conducted on self-driving cars. Findings and practices at various stages are summarized to connect prevalent and forward-looking techniques, and to assess the applicability, scalability and feasibility of deep learning in self-driving cars for achieving safe driving without human intervention. Based on the current survey, several recommendations for further research are discussed at the end of this article.
Keywords: Self-driving cars | Automation levels | Machine learning | Deep learning | Convolutional neural networks | Scene perception | Object detection | Multimodal sensor fusion | LiDAR | Machine vision | Autonomous driving initiatives |
Translated article |
9 |
Quality evaluation of Keemun black tea by fusing data obtained from near-infrared reflectance spectroscopy and computer vision sensors
(2021) Keemun black tea is classified into 7 grades according to differences in its quality. The appearance and flavour are crucial indicators of its quality. This research demonstrates a rapid grading method that jointly uses near-infrared reflectance spectroscopy (NIRS) and computer vision systems (CVS) to evaluate the flavour and appearance quality of tea. A Bruker MPA Fourier-transform near-infrared spectrometer was used to record the spectrum of the samples. A computer vision system was used to capture images of the tea leaves in an unobstructed manner. 80 tea samples per grade were analyzed. The performance of four NIRS feature extraction methods (principal component analysis, locally linear embedding, isometric feature mapping, and convolutional neural network (CNN)) was compared in this study. Histograms of six geometric features (leaf width, leaf length, leaf area, leaf perimeter, aspect ratio, and rectangularity) of different tea samples were used to describe their appearance. A feature-level fusion strategy was used, combined with softmax and artificial neural network (ANN) classifiers, to classify the NIRS and CVS features. The results indicated that for the NIRS signal alone, CNN features achieved the highest classification accuracy with the softmax classification model. For the histograms of the combined shape features, the classification accuracy with the softmax model was likewise higher than with the ANN. The fusion of NIRS and CVS features proved to be the optimal combination; the accuracy of the calibration, validation and testing sets increased from 99.29%, 96.67% and 98.57% (when the optimal features from a single sensor were used) to 100.00%, 99.29% and 100.00% (when features from multiple sensors were used). This study revealed that the combination of NIRS and CVS features can be a useful strategy for classifying black tea samples of different grades. Keywords: Keemun black tea | Near-infrared reflectance spectroscopy | Computer vision system | Feature fusion | Convolutional neural network | Quality identification |
English article |
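The feature-level fusion strategy in this abstract amounts to concatenating per-sensor feature vectors before classification. The sketch below trains a generic softmax (multinomial logistic) classifier on synthetic fused features; the dimensions and data are invented and do not reproduce the paper's models or results.

```python
# Feature-level fusion = concatenating per-sensor feature vectors,
# followed by a plain softmax classifier trained by gradient descent.
# All data is synthetic; dimensions are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def fuse(nirs, cvs):
    """Feature-level fusion: concatenate the two sensors' vectors."""
    return np.concatenate([nirs, cvs])

# Synthetic 2-grade problem: 40 samples per grade, 5 "NIRS" dims
# and 3 "CVS" dims, with grade-dependent means at -1 and +1.
X = np.stack([fuse(rng.normal(g, 0.3, 5), rng.normal(g, 0.3, 3))
              for g in (-1, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)

W = np.zeros((X.shape[1], 2))       # no bias: classes are centered
onehot = np.eye(2)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)   # cross-entropy gradient

acc = ((X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy on fused features: {acc:.2f}")
```

Swapping `fuse` for either single-sensor vector is the comparison the abstract reports: fused features give the classifier more evidence per sample than either sensor alone.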
10 |
Online detection of naturally DON contaminated wheat grains from China using Vis-NIR spectroscopy and computer vision
(2021) Deoxynivalenol (DON) contamination of wheat grains is a serious problem in China, and it is necessary to remove contaminated wheat before it enters the consumer market. In this study, visible-near infrared (Vis-NIR) spectroscopy and computer vision techniques were combined to simulate online discrimination between normal and DON-contaminated wheat grains. Naturally grown wheat samples were collected from several of the main wheat-producing areas in China, the reference DON contents were measured using a liquid chromatography triple quadrupole mass spectrometer (LC-MS), and the wheat samples were then divided into two categories according to the national standard of 1 mg kg⁻¹. The characteristic spectral variables and the colour and texture features were extracted and integrated for chemometric analysis. Principal component analysis based on the fused features indicated better clustering than with the spectral features alone. Subsequently, linear discriminant analysis modelling based on spectral and texture features achieved the best discrimination, with accuracies of 95.06% and 91.36% for the calibration and validation sets respectively, which was 5% higher than with the spectral features alone, and the false positive rates (FPR) were the lowest: 3.41% and 10.42% for the calibration and validation sets respectively. The internal scanning results for whole wheat flour indicated that the higher the DON content, the looser the binding of the starch granules, which could cause the textural change of wheat grains. The research showed that Vis-NIR spectroscopy combined with computer vision has the potential to be used in the non-destructive online detection of DON-contaminated wheat grains; further study of interference from complex environments is still needed for actual online detection.
Keywords: Vis-NIR spectroscopy | Computer vision | Wheat grains | DON | Features fusion |
English article |