Download and display articles related to Feature extraction :: Page 1
Download the best ISI articles with Persian translation
Search results - Feature extraction

Number of articles found: 118
Row | Title | Type
1 Human identification driven by deep CNN and transfer learning based on multiview feature representations of ECG
Human identification driven by deep CNN and transfer learning based on multiview feature representations of ECG-2021
Increasingly smart techniques for counterfeiting face and fingerprint traits have increased the potential threats to information security systems, creating a substantial demand for improved security and better privacy and identity protection. The Internet of Things (IoT)-driven fingertip electrocardiogram (ECG) acquisition provides broad application prospects for ECG-based identity systems. This study focused on three major impediments to fingertip ECG: the impact of variations in acquisition status, the high computational complexity of traditional convolutional neural network (CNN) models and the feasibility of model migration, and a lack of sufficient fingertip samples. Our main contribution is a novel fingertip ECG identification system that integrates transfer learning and a deep CNN. The proposed system does not require manual feature extraction or suffer from complex model calculations, which improves its speed, and it is effective even when only a small set of training data exists. Using 1200 ECG recordings from 600 individuals, we consider 5 simulated yet potentially practical scenarios. When analyzing the overall training accuracy of the model, its mean accuracy for the 540 chest-collected ECG recordings from PhysioNet exceeded 97.60%, and for 60 subjects from the CYBHi fingertip-collected ECG its mean accuracy reached 98.77%. When simulating a real-world human recognition system on 5 public datasets, the validation accuracy of the proposed model can nearly reach 100% recognition, outperforming the original GoogLeNet network by a maximum of 3.33%. To some degree, the developed architecture provides a reference for practical applications of fingertip-collected ECG-based biometric systems and for information network security.
Keywords: Off-the-person | Fingertip ECG biometric | Human identification | Convolutional neural network (CNN) | Transfer learning
English article
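The abstract above describes its transfer-learning pipeline only in prose. Below is a minimal, hypothetical PyTorch sketch of that general idea, assuming ECG segments have already been rendered as 224x224 images; the torchvision GoogLeNet backbone, the subject count, and the training settings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of CNN transfer learning for ECG-based identification.
# Assumes ECG segments have been converted to (3, 224, 224) image tensors; the
# subject count and training settings are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchvision import models

num_subjects = 60  # e.g., the CYBHi fingertip subset mentioned in the abstract

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in model.parameters():           # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_subjects)  # new identity head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of ECG-derived images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call shape; real inputs would be ECG-derived images.
loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, num_subjects, (4,)))
```

Only the new classification head is trained here; unfreezing later layers of the backbone is a common variation when more training data is available.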
2 Person-identification using familiar-name auditory evoked potentials from frontal EEG electrodes
Person-identification using familiar-name auditory evoked potentials from frontal EEG electrodes-2021
Electroencephalograph (EEG) based biometric identification has recently gained increased attention of researchers. However, state-of-the-art EEG-based biometric identification techniques use a large number of EEG electrodes, which poses user inconvenience and requires longer preparation time in practical applications. This work proposes a novel EEG-based biometric identification technique using auditory evoked potentials (AEPs) acquired from two EEG electrodes. The proposed method employs single-trial familiar-name AEPs extracted from the frontal electrodes Fp1 and F7, which facilitates faster and more user-convenient data acquisition. The EEG signals recorded from twenty healthy individuals during four experiment trials are used in this study. Different combinations of well-known neural network architectures are used for feature extraction and classification. The cascaded combinations of 1D-convolutional neural networks (1D-CNN) with long short-term memory (LSTM) and with gated recurrent unit (GRU) networks gave person identification accuracies above 99%. The 1D-CNN-LSTM network achieves the highest person identification accuracy of 99.53% and a half total error rate (HTER) of 0.24% using AEP signals from the two frontal electrodes. With the AEP signals from the single electrode Fp1, the same network achieves a person identification accuracy of 96.93%. The use of familiar-name AEPs from frontal EEG electrodes, which facilitates user-convenient data acquisition with shorter preparation time, is the novelty of this work.
Keywords: Auditory evoked potential | Biometrics | Deep learning | Electroencephalogram | Familiar-name | Person identification
English article
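A minimal sketch of the cascaded 1D-CNN + LSTM architecture the abstract describes, written in PyTorch. The channel counts, kernel sizes, trial length, and subject count are illustrative assumptions, not the values used in the paper.

```python
# Hypothetical cascaded 1D-CNN + LSTM identifier for two-channel AEP trials.
import torch
import torch.nn as nn

class CNNLSTMIdentifier(nn.Module):
    def __init__(self, n_subjects=20, n_channels=2):
        super().__init__()
        # 1D convolutions learn local AEP waveform features
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # the LSTM models the temporal ordering of the CNN feature sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_subjects)

    def forward(self, x):                # x: (batch, channels, samples)
        feats = self.cnn(x)              # (batch, 64, samples / 4)
        feats = feats.transpose(1, 2)    # (batch, time, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])        # per-subject logits

# Single-trial AEP from electrodes Fp1 and F7: 2 channels, e.g. 256 samples
logits = CNNLSTMIdentifier()(torch.randn(8, 2, 256))
```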
3 Deep belief network-based hybrid model for multimodal biometric system for futuristic security applications
Deep belief network-based hybrid model for multimodal biometric system for futuristic security applications-2021
Biometrics is the technology to identify humans uniquely based on face, iris, fingerprints, etc. Biometric authentication allows a person to be recognized automatically on the basis of behavioral or physiological characteristics. Biometrics is broadly employed in several commercial as well as official identification systems for automatic access control. This paper introduces a model for multimodal biometric recognition based on a score-level fusion method. The overall procedure of the proposed method involves five steps: pre-processing, feature extraction, recognition scoring using a multi-support vector neural network (Multi-SVNN) for all traits, score-level fusion, and recognition using a deep belief neural network (DBN). The first step is to feed the training images into pre-processing; thus, the pre-processing of three traits, namely iris, ear, and finger vein, is done. Then, feature extraction is performed for each modality. The texture features are extracted from the pre-processed images of the ear, iris, and finger vein, and the BiComp features are acquired from the individual images using a BiComp mask. Then, a recognition score is computed with the Multi-SVNN classifier individually for all three traits, and the three scores are provided to the DBN. The DBN is trained using the chicken earthworm optimization algorithm (CEWA). The CEWA is the integration of chicken swarm optimization (CSO) and the earthworm optimization algorithm (EWA) for optimal authentication of the person. The analysis shows that the developed method achieved a maximal accuracy of 95.36%, a maximal sensitivity of 95.85%, and a specificity of 98.79%.
Keywords: Multi-modal Bio-metric system | Chicken Swarm Optimization | Earthworm Optimization algorithm | Deep Belief Network | Multi-SVNN
English article
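The paper's specific pipeline (Multi-SVNN scores, a DBN trained with CEWA) is not reproduced here; the snippet below is only a generic, hypothetical illustration of score-level fusion, in which per-modality match scores are combined by a small second-stage classifier. The scores, labels, and MLP fusion stage are all stand-ins.

```python
# Generic score-level fusion sketch (NOT the paper's Multi-SVNN / DBN / CEWA pipeline):
# each modality yields a match score, and a second-stage classifier fuses them.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Illustrative per-attempt match scores for iris, ear, and finger vein (n x 3),
# with label 1 = genuine attempt, 0 = impostor attempt (synthetic).
scores = rng.random((200, 3))
labels = (scores.mean(axis=1) + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

fusion_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
fusion_clf.fit(scores, labels)                     # learn how to combine the 3 scores
decision = fusion_clf.predict(rng.random((1, 3)))  # fused accept / reject decision
```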
4 ANN trained and WOA optimized feature-level fusion of iris and fingerprint
ANN trained and WOA optimized feature-level fusion of iris and fingerprint-2021
Uni-modal biometric systems have been widely implemented for maintaining security and privacy in various applications such as mobile phones, banking apps, airport access control, and laptop login. Due to advancements in technology, impostors have devised various ways to breach security, and the security of most deployed biometric applications can be compromised. The quality of the input sample also plays an important role in attaining the best performance in terms of improved accuracy and reduced FAR and FRR. Researchers have combined various biometric modalities to overcome the problems of uni-modal biometrics. In this paper, a multi-biometric feature-level fusion system of iris and fingerprint is presented. The consistency of the fingerprint and the stability of the iris modality make them suitable for high-security applications. At the pre-processing level, an atmospheric light adjustment algorithm is applied to improve the quality of the input samples (iris and fingerprint). For feature extraction, the nearest neighbor algorithm and speeded-up robust features (SURF) are applied to the fingerprint and iris data, respectively. Further, to select the best features, the extracted features are optimized by a genetic algorithm (GA). To achieve an excellent recognition rate, the iris and fingerprint data are trained by an ANN algorithm. The experimental results show that the proposed system exhibits improved performance and better security. Finally, the template is secured by applying the AES algorithm, and the results are compared with the DES, 3DES, RSA, and RC4 algorithms.
Keywords: Multimodal biometrics fusion | ANN | SURF | GA | RSA
English article
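A rough sketch of the feature-level fusion idea from the abstract: local descriptors are extracted from each modality and concatenated into a single template. The paper applies SURF to the iris, but SURF requires a non-free opencv-contrib build, so ORB is used here as a stand-in descriptor; the file names are hypothetical and the GA selection and ANN training stages are omitted.

```python
# Feature-level fusion sketch: pooled local descriptors from iris and fingerprint
# images are concatenated into one template vector. ORB stands in for SURF here.
import cv2
import numpy as np

def descriptor_vector(image_path, n_keypoints=64):
    """Detect keypoints and pool their descriptors into a fixed-length vector."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(img, None)
    if desc is None:                                   # no keypoints found
        desc = np.zeros((1, 32), dtype=np.float32)
    return desc.astype(np.float32).mean(axis=0)        # simple average pooling

# Hypothetical file names; feature-level fusion = concatenation of both vectors.
fused_template = np.concatenate(
    [descriptor_vector("iris.png"), descriptor_vector("fingerprint.png")]
)
```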
5 GaitCode: Gait-based continuous authentication using multimodal learning and wearable sensors
GaitCode: Gait-based continuous authentication using multimodal learning and wearable sensors-2021
The ever-growing threats of security and privacy loss from unauthorized access to mobile devices have led to the development of various biometric authentication methods for easier and safer data access. Gait-based authentication is a popular biometric authentication approach, as it utilizes the unique patterns of human locomotion and requires little cooperation from the user. Existing gait-based biometric authentication methods, however, suffer from degraded performance when using mobile devices such as smartphones as the sensing device, due to multiple reasons such as increased accelerometer noise, sensor orientation and positioning, and noise from body movements not related to gait. To address these drawbacks, some researchers have adopted methods that fuse information from multiple accelerometer sensors mounted on the human body at different locations. In this work we present a novel gait-based continuous authentication method that applies multimodal learning to jointly recorded accelerometer and ground contact force data from smart wearable devices. Gait cycles are extracted as a basic authentication element that can continuously authenticate a user. We use a network of auto-encoders with early or late sensor fusion for feature extraction, and SVM and softmax classifiers for classification. The effectiveness of the proposed approach has been demonstrated through extensive experiments on datasets collected from two case studies, one with commercial off-the-shelf smart socks and the other with a medical-grade research prototype of smart shoes. The evaluation shows that the proposed approach can achieve a very low Equal Error Rate of 0.01% and 0.16% for identification with smart socks and smart shoes, respectively, and a False Acceptance Rate of 0.54%-1.96% for leave-one-out authentication.
Keywords: Biometric authentication | Gait authentication | Autoencoders | Sensor fusion | Multimodal learning | Wearable sensors
English article
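A minimal PyTorch sketch of early sensor fusion with an auto-encoder, as outlined in the abstract: accelerometer and ground-contact-force samples from one gait cycle are concatenated before encoding. The per-cycle dimensions and latent size are illustrative assumptions, not the paper's configuration.

```python
# Early-fusion auto-encoder sketch for gait cycles from two wearable sensors.
import torch
import torch.nn as nn

class FusionAutoEncoder(nn.Module):
    def __init__(self, acc_dim=300, grf_dim=100, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(acc_dim + grf_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, acc_dim + grf_dim),
        )

    def forward(self, acc, grf):
        fused = torch.cat([acc, grf], dim=1)   # early fusion of both sensor streams
        z = self.encoder(fused)                # per-gait-cycle feature vector
        return self.decoder(z), z              # reconstruction + features

model = FusionAutoEncoder()
recon, features = model(torch.randn(4, 300), torch.randn(4, 100))
# "features" would then feed an SVM or softmax classifier for identification.
```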
6 Computer-vision classification of corn seed varieties using deep convolutional neural network
Computer-vision classification of corn seed varieties using deep convolutional neural network-2021
Automated classification of seed varieties is of paramount importance for seed producers to maintain the purity of a variety and crop yield. Traditional approaches based on computer vision and simple feature extraction could not guarantee high classification accuracy. This paper presents a new approach using a deep convolutional neural network (CNN) as a generic feature extractor. The extracted features were classified with an artificial neural network (ANN), a cubic support vector machine (SVM), a quadratic SVM, weighted k-nearest neighbors (kNN), a boosted tree, a bagged tree, and linear discriminant analysis (LDA). Models trained with CNN-extracted features demonstrated better classification accuracy of corn seed varieties than models based on only simple features. The CNN-ANN classifier showed the best performance, classifying 2250 test instances in 26.8 s with a classification accuracy of 98.1%, precision of 98.2%, recall of 98.1%, and F1-score of 98.1%. This study demonstrates that the CNN-ANN classifier is an efficient tool for the intelligent classification of different corn seed varieties.
Keywords: Machine vision | Deep learning | Feature extraction | Non-handcrafted features | Texture descriptors
English article
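A minimal sketch of the "CNN as a generic feature extractor" idea from the abstract: a pretrained backbone produces deep features that a classical classifier then separates. The ResNet18 backbone, the random stand-in images, and the cubic-kernel SVM settings are assumptions for illustration, not the paper's exact setup.

```python
# Deep features from a pretrained CNN, classified by a classical model (cubic SVM).
import torch
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()    # drop the ImageNet head, keep 512-d features
backbone.eval()

def deep_features(image_batch):
    """image_batch: (N, 3, 224, 224) preprocessed seed images -> (N, 512) features."""
    with torch.no_grad():
        return backbone(image_batch).numpy()

# Random tensors stand in for real, preprocessed seed images and variety labels.
features = deep_features(torch.randn(8, 3, 224, 224))
clf = SVC(kernel="poly", degree=3).fit(features, [0, 1, 0, 1, 0, 1, 0, 1])
```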
7 Research on the algorithm of painting image style feature extraction based on intelligent vision
Research on the algorithm of painting image style feature extraction based on intelligent vision-2021
Because traditional image feature extraction algorithms do not smooth the image, their feature extraction success rate is low and their average running time and false positive rate are high. In view of these problems, this paper proposes an algorithm for painting image style feature extraction based on intelligent vision. According to the internal structure of the content image and the painting image, similarity analysis and smooth transfer of pixels are carried out, and the painting image is then smoothed with a semi-supervised learning method. On this basis, a similarity rule for painting image style is established and all style features are quantified, so as to obtain a self-similarity descriptor of painting image style. The similarity coefficients between the painting image and other sample images are then calculated, a similarity matrix is constructed, and intelligent vision technology is used to complete the extraction of the painting image style features. Experimental results show that this algorithm can effectively reduce the average running time and false positive rate of painting image style feature extraction, and also improve the success rate of feature extraction.
Keywords: Painting image style | Feature extraction | Smoothing processing | Semi-supervised learning | Similarity rule | Intelligent visual
English article
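The abstract's similarity-matrix step can be illustrated with a small NumPy sketch: given one style-feature vector per image, pairwise cosine similarities form the matrix. The feature vectors here are random placeholders, not the paper's self-similarity descriptors.

```python
# Illustrative similarity matrix over placeholder style-feature vectors.
import numpy as np

rng = np.random.default_rng(1)
style_features = rng.random((5, 64))          # 5 images, 64-d style descriptors

unit = style_features / np.linalg.norm(style_features, axis=1, keepdims=True)
similarity_matrix = unit @ unit.T             # cosine similarity between all pairs

# The row for the target painting ranks the other samples by style similarity.
most_similar = np.argsort(similarity_matrix[0])[::-1][1:]
```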
8 Computer vision approach to characterize size and shape phenotypes of horticultural crops using high-throughput imagery
Computer vision approach to characterize size and shape phenotypes of horticultural crops using high-throughput imagery-2021
For many horticultural crops, variation in quality (e.g., shape and size) contributes significantly to the crop's market value. Metrics characterizing less subjective harvest quantities (e.g., yield and total biomass) are routinely monitored. In contrast, metrics quantifying more subjective crop quality characteristics such as ideal size and shape remain difficult to characterize objectively at the production scale due to the lack of modular technologies for high-throughput sensing and computation. Several horticultural crops are sent to packing facilities after having been harvested, where they are sorted into boxes and containers using high-throughput scanners. These scanners capture images of each fruit or vegetable being sorted and packed, but the images are typically used solely for sorting purposes and promptly discarded. With further analysis, these images could offer unparalleled insight into how crop quality metrics vary at the industrial production scale and into how these characteristics translate to overall market value. At present, methods for extracting and quantifying quality characteristics of crops using images generated by existing industrial infrastructure have not been developed. Furthermore, prior studies that investigated horticultural crop quality metrics, specifically size and shape, used a limited number of samples, did not incorporate deformed or non-marketable samples, and did not use images captured from high-throughput systems. In this work, using sweetpotato (SP) as a use case, we introduce a computer vision algorithm for quantifying shape and size characteristics in a high-throughput manner. This approach generates a 3D model of each SP from two 2D images captured 90 degrees apart by an industrial sorter and extracts 3D shape features in a few hundred milliseconds. We applied the 3D reconstruction and feature extraction method to thousands of image samples to demonstrate how variations in shape features across SP cultivars can be quantified. We created an SP shape dataset containing SP images, extracted shape features, and qualitative shape types (U.S. No. 1 or Cull). We used this dataset to develop a neural network-based shape classifier that was able to predict Cull vs. U.S. No. 1 SPs with 84.59% accuracy. In addition, using univariate Chi-squared tests and random forests, we identified the most important features for determining the qualitative shape type (U.S. No. 1 or Cull) of the SPs. Our study serves as a key step towards enabling big data analytics for industrial SP agriculture. The methodological framework is readily transferable to other horticultural crops, particularly those that are sorted using commercial imaging equipment.
Keywords: Crop phenotyping | Machine learning | Computer vision
English article
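The abstract mentions univariate Chi-squared tests and random forests for ranking shape features. A small scikit-learn sketch of that analysis is given below; the feature matrix and Cull/U.S. No. 1 labels are synthetic stand-ins for the real extracted 3D shape features, not the paper's data.

```python
# Feature-importance sketch: chi-squared scores and random forest importances
# rank shape features for the U.S. No. 1 vs Cull decision (synthetic data).
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 10))            # 500 sweetpotatoes x 10 shape features (non-negative, as chi2 requires)
y = (X[:, 2] + 0.2 * rng.random(500) > 0.6).astype(int)   # 1 = Cull, 0 = U.S. No. 1 (synthetic)

chi2_scores, p_values = chi2(X, y)   # univariate relevance of each feature
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]        # most important features first
```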
9 A critical review for machining positioning based on computer vision
A critical review for machining positioning based on computer vision-2021
With the rapid development of science and technology, the manufacturing industry has to cope with increasingly strict requirements on the quality of processed products. To improve production flexibility and automation, computer vision is widely used in machining due to its safety, reliability, continuity, high accuracy, and real-time performance. In this study, a comprehensive review of positioning methods for workpieces in machining is presented from the perspective of computer vision technology. First, the key technologies in image acquisition are described in detail, and an analysis of different lighting modes is conducted. Second, image preprocessing is described by summarizing enhancement and image segmentation methods. Third, from the perspectives of accuracy and speed, feature extraction methods are compared and evaluated. Next, the existing applications of visual positioning technology in machining are discussed. Finally, the existing problems are summarized, and future research directions are suggested.
Keywords: Visual positioning | Positioning processing | Optical system | Image preprocessing | Feature extraction
English article
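As an illustration of the stages the review covers (enhancement, segmentation, feature extraction for positioning), here is a small, generic OpenCV pipeline. The input file name, thresholding choice, and centroid-based positioning are assumptions for the sketch, not methods endorsed by the review.

```python
# Generic workpiece-positioning sketch: enhance, segment, extract a contour feature.
import cv2

img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
enhanced = cv2.equalizeHist(img)                           # contrast enhancement
_, mask = cv2.threshold(enhanced, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # segmentation
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)        # feature extraction
largest = max(contours, key=cv2.contourArea)               # assume the workpiece is the largest blob
m = cv2.moments(largest)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]          # centroid used for positioning
```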
10 Design of cancelable MCC-based fingerprint templates using Dyno-key model
Design of cancelable MCC-based fingerprint templates using Dyno-key model-2021
Minutia Cylinder Code (MCC) is an effective, high-quality representation of local minutia structures. MCC templates demonstrate fast and excellent fingerprint matching performance, but if compromised, they can be reverse-engineered to retrieve minutia information. In this paper, we propose alignment-free cancelable MCC-based templates by exploiting MCC feature extraction and representation. The core component of our design is a dynamic random key model, called the Dyno-key model. The Dyno-key model dynamically extracts elements from MCC's binary feature vectors based on randomly generated keys. The extracted elements are discarded after block-based logic operations so as to increase security. While matching the performance of the unprotected, reproduced MCC templates, the proposed method is competitive with state-of-the-art cancelable fingerprint templates, as evaluated over seven public databases: FVC2002 DB1-DB3, FVC2004 DB1 and DB2, and FVC2006 DB2 and DB3. The proposed cancelable MCC-based templates satisfy all the requirements of biometric template protection.
Keywords: Cancelable biometrics | Minutia cylinder code | Cancelable fingerprint templates | Biometric template protection | Alignment-free
English article
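The Dyno-key model is described in the abstract only as key-driven element extraction followed by block-based logic operations, so the snippet below is a deliberately simplified, hypothetical illustration of that general idea (random-key bit selection followed by block-wise XOR folding), not the paper's construction. The vector length, key format, and block size are all assumptions.

```python
# Simplified cancelable-template idea: a user-specific random key selects bits
# from a binary feature vector, and block-wise XOR folds them into a protected,
# revocable template. NOT the actual Dyno-key model, only a sketch of the concept.
import numpy as np

rng = np.random.default_rng(42)
mcc_bits = rng.integers(0, 2, size=448)          # stand-in for one MCC binary vector

def protect(bits, key_seed, n_select=256, block=8):
    key_rng = np.random.default_rng(key_seed)    # the user-specific random key
    idx = key_rng.choice(bits.size, size=n_select, replace=False)
    selected = bits[idx].reshape(-1, block)
    return np.bitwise_xor.reduce(selected, axis=1)   # block-based XOR folding

template = protect(mcc_bits, key_seed=1234)
# Re-issuing with a new key_seed yields a different, revocable template
# from the same fingerprint features.
```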