Download and view articles related to Multimodal :: Page 1
Download the best ISI articles with Persian translation


Search results - Multimodal

Number of articles found: 71
No. | Title | Type
1 Performance analysis of machine learning algorithm of detection and classification of brain tumor using computer vision
Performance analysis of a machine learning algorithm for detection and classification of brain tumors using computer vision-2022
A brain tumor is an undesirable, uncontrolled growth of cells that can occur in all age groups. Classification of tumors depends on their origin and degree of aggressiveness, and it also helps the physician devise a proper diagnosis and treatment plan. This research analyses various state-of-the-art machine learning techniques, such as Logistic Regression, Multilayer Perceptron, Decision Tree, Naive Bayes classifier and Support Vector Machine, for classifying tumors as benign or malignant, with the discrete wavelet transform used for feature extraction on synthetic data available from the internet sources OASIS and ADNI. The research also reveals that Logistic Regression and the Multilayer Perceptron give the highest accuracy of 90%. The approach mimics human reasoning: it learns, memorizes, and is capable of reasoning and performing parallel computations. In the future, further AI techniques can be trained to classify multimodal MRI brain scans into more than two classes of tumors.
Keywords: Artificial Intelligence | MRI | Logistic regression | Multilayer Perceptron | OASIS
English article
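As a rough illustration of the pipeline this abstract describes (wavelet features followed by a panel of classical classifiers), the following minimal Python sketch uses PyWavelets and scikit-learn on synthetic arrays standing in for the OASIS/ADNI scans; it is not the paper's exact implementation, and all feature choices here are assumptions.

```python
# Illustrative sketch only: 2-D discrete wavelet features feeding several
# scikit-learn classifiers, with synthetic stand-ins for real MRI scans.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def dwt_features(image, wavelet="haar"):
    """Single-level 2-D DWT; use mean and std of each sub-band as features."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    bands = (cA, cH, cV, cD)
    return np.array([b.mean() for b in bands] + [b.std() for b in bands])

# Synthetic 64x64 "scans": two classes with slightly different intensity statistics.
X = np.array([dwt_features(rng.normal(loc=label, size=(64, 64)))
              for label in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)   # 0 = benign, 1 = malignant (placeholder labels)

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```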
2 Pursuits in Collision: Affiliation, Disaffiliation, and Multimodality in Persian Interaction
Pursuits in collision: affiliation, disaffiliation, and multimodality in Persian interaction-2022
This study is on pursuing an interactional outcome in the face of a co-interactant’s resistance. Despite at least a forty-year history of research on pursuits in social interaction (Jefferson, 1981; Pomerantz, 1984b), there is still much to explore about this ubiquitous social phenomenon. This research employs a multimodal conversation analytic methodology to address some less-explored questions on pursuits: what practices does an interactant use to further their course of action against their co-interactant’s resistance? Do the details of these practices have implications for the trajectory of the interaction towards escalation or de-escalation? What do these practices tell us about the agentive stance adopted by the pursuing party? And how can interactants heading towards an escalated pursuit manage disaffiliation? Two different types of pursuit sequences are introduced: persisting in furthering one’s course of action and gradually desisting from a course of action. The findings show a novel phenomenon called multimodal gradation: a temporally coordinated up- or downgrading of a multitude of resources that are simultaneously used in formatting a social action. Borrowing Mondada’s terms (2014), a whole “multimodal Gestalt” by which a turn at talk is delivered is up- or downgraded. Multimodal upgrading of a pursuit turn projects further expansions to the pursuit sequence and it can escalate an initial clash. On the other hand, multimodal downgrading of a pursuit turn projects a contingent sequence closure and de-escalation. Also, upgrading the multimodal Gestalt of a pursuit turn displays the pursuing party’s stronger agentive stance compared to downgrading the turn. The project introduces another multimodal phenomenon termed mock aggression. Used between intimate interactants, mock aggression offers opportunities for affiliation despite its aggressive appearance. The findings have implications for our understanding of sequence and preference organization in CA, multimodality, agency, and conflict management. Data are in Persian and collected in Iran.
English article
3 Adaptive Management of Multimodal Biometrics—A Deep Learning and Metaheuristic Approach
Adaptive management of multimodal biometrics: a deep learning and metaheuristic approach-2021
This paper introduces the framework for adaptive rank-level biometric fusion: a new approach towards personal authentication. In this work, a novel attempt has been made to identify the optimal design parameters and framework of a multibiometric system, where the chosen biometric traits are subjected to rank-level fusion. Optimal fusion parameters depend upon the security level demanded by a particular biometric application. The proposed framework makes use of a metaheuristic approach towards adaptive fusion in the pursuit of achieving optimal fusion results at varying levels of security. Rank-level fusion rules have been employed to provide optimum performance by making use of the Ant Colony Optimization technique. The novelty of the reported work also lies in the fact that the proposed design engages three biometric traits simultaneously for the first time in the domain of adaptive fusion, so as to test the efficacy of the system in selecting the optimal set of biometric traits from a given set. The literature reveals the unique biometric characteristics of the fingernail plate, which have been exploited in this work for the rigorous experimentation conducted. Index, middle and ring fingernail plates have been taken into consideration, and deep learning feature sets of the three nail plates have been extracted using three customized pre-trained models: AlexNet, ResNet-18 and DenseNet-201. The adaptive multimodal performance of the three nail plates has also been checked using the already existing methods of adaptive fusion designed for addressing fusion at the score level and decision level. Exhaustive experiments have been conducted on the MATLAB R2019a platform using the Deep Learning Toolbox. When the cost of false acceptance is 1.9, experimental results obtained from the proposed framework give values of the average minimum weighted error rate as low as 0.0115, 0.0097 and 0.0101 for the AlexNet-, ResNet-18- and DenseNet-201-based experiments respectively. Results demonstrate that the proposed system is capable of computing the optimal parameters for rank-level fusion for varying security levels, thus contributing towards optimal performance accuracy.
Keywords: Adaptive Biometric Fusion | Ant Colony Optimization | Deep Learning | Fingernail Plate | Multimodal Biometrics | Rank-level Adaptive Fusion
English article
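Rank-level fusion, the core operation in the abstract above, can be illustrated with a plain Borda-count baseline; the sketch below assumes three hypothetical nail-plate matchers and does not include the paper's ACO-optimized weighting.

```python
# Minimal sketch of rank-level fusion via Borda count -- a simple baseline, not
# the ACO-weighted scheme of the paper. Each matcher returns similarity scores
# against every enrolled identity; per-matcher ranks are summed (lower = better).
import numpy as np

def borda_fusion(score_vectors):
    """score_vectors: list of (n_identities,) similarity score arrays, one per matcher."""
    total_rank = np.zeros_like(score_vectors[0], dtype=float)
    for scores in score_vectors:
        # argsort of -scores gives rank 0 to the best-matching identity
        ranks = np.empty_like(scores, dtype=float)
        ranks[np.argsort(-scores)] = np.arange(len(scores))
        total_rank += ranks
    return int(np.argmin(total_rank))   # identity with the best combined rank

# Hypothetical scores from three nail-plate matchers (index, middle, ring)
index_scores  = np.array([0.61, 0.92, 0.40, 0.55])
middle_scores = np.array([0.58, 0.85, 0.47, 0.60])
ring_scores   = np.array([0.63, 0.79, 0.52, 0.48])
print(borda_fusion([index_scores, middle_scores, ring_scores]))  # -> 1
```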
4 Deep belief network-based hybrid model for multimodal biometric system for futuristic security applications
Deep belief network-based hybrid model for a multimodal biometric system for futuristic security applications-2021
Biometrics is the technology for uniquely identifying humans based on the face, iris, fingerprints, etc. Biometric authentication allows a person to be recognized automatically on the basis of behavioral or physiological characteristics, and biometrics are broadly employed in several commercial as well as official identification systems for automatic access control. This paper introduces a model for multimodal biometric recognition based on a score-level fusion method. The overall procedure of the proposed method involves five steps: pre-processing, feature extraction, recognition scoring using a multi-support vector neural network (Multi-SVNN) for all traits, score-level fusion, and recognition using a deep belief neural network (DBN). The first step is to feed the training images into pre-processing; thus, the pre-processing of three traits (iris, ear, and finger vein) is performed. Feature extraction is then carried out for each modality: texture features are extracted from the pre-processed images of the ear, iris, and finger vein, and BiComp features are acquired from the individual images using a BiComp mask. Next, the recognition score is computed by the Multi-SVNN classifier individually for all three traits, and the three scores are provided to the DBN. The DBN is trained using the chicken earthworm optimization algorithm (CEWA), which integrates chicken swarm optimization (CSO) and the earthworm optimization algorithm (EWA) for optimal authentication of the person. The analysis shows that the developed method achieved a maximal accuracy of 95.36%, a maximal sensitivity of 95.85%, and a specificity of 98.79%.
Keywords: Multi-modal Bio-metric system | Chicken Swarm Optimization | Earthworm Optimization algorithm | Deep Belief Network | Multi-SVNN
English article
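The score-level fusion step mentioned above can be illustrated with a generic weighted-sum sketch; the paper instead feeds the per-modality scores to a DBN trained with CEWA, so the weights, ranges and threshold below are assumed stand-ins only.

```python
# Minimal sketch of score-level fusion: three per-modality match scores (iris,
# ear, finger vein) are min-max normalised and combined with a weighted sum.
# This is a generic stand-in showing where the fusion step sits, not the paper's
# Multi-SVNN + DBN pipeline.
import numpy as np

def normalise(score, lo, hi):
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights, threshold=0.5):
    fused = float(np.dot(weights, scores))
    return fused, fused >= threshold   # (fused score, accept/reject decision)

# Hypothetical raw scores and per-modality score ranges
raw = {"iris": 42.0, "ear": 0.71, "vein": 310.0}
ranges = {"iris": (0, 60), "ear": (0, 1), "vein": (0, 400)}
norm = np.array([normalise(raw[m], *ranges[m]) for m in ("iris", "ear", "vein")])
weights = np.array([0.4, 0.3, 0.3])   # assumed weights, not from the paper
print(fuse_scores(norm, weights))
```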
5 ANN-trained and WOA-optimized feature-level fusion of iris and fingerprint
ANN-trained and WOA-optimized feature-level fusion of iris and fingerprint-2021
Uni-modal biometric systems have been widely implemented for maintaining security and privacy in applications such as mobile phones, banking apps, airport access control, and laptop login. With advances in technology, impostors have designed various ways to breach security, and the security of most deployed biometric applications can be compromised. The quality of the input sample also plays an important role in attaining the best performance in terms of improved accuracy and reduced FAR and FRR. Researchers have combined various biometric modalities to overcome the problems of uni-modal biometrics. In this paper, a multibiometric feature-level fusion system of iris and fingerprint is presented; the consistency of the fingerprint and the stability of the iris modality make them suitable for high-security applications. At the pre-processing level, an atmospheric light adjustment algorithm is applied to improve the quality of the input samples (iris and fingerprint). For feature extraction, the nearest-neighbour algorithm and speeded-up robust features (SURF) are applied to the fingerprint and iris data respectively. The extracted features are then optimized by a genetic algorithm (GA) to select the best features. To achieve an excellent recognition rate, the iris and fingerprint data are trained with an ANN. The experimental results show that the proposed system exhibits improved performance and better security. Finally, the template is secured by applying the AES algorithm, and the results are compared with the DES, 3DES, RSA and RC4 algorithms.
Keywords: Multimodal biometrics fusion | ANN | SURF | GA | RSA
English article
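A minimal sketch of the final stages described above, feature-level fusion by concatenation followed by AES protection of the stored template, assuming the pycryptodome package is available; feature extraction (SURF, nearest-neighbour), GA selection and ANN training are omitted, and random vectors stand in for the real iris and fingerprint features.

```python
# Illustrative sketch: concatenate two modality feature vectors into one fused
# template, then encrypt the stored template with AES-GCM (pycryptodome).
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

iris_features = np.random.rand(64).astype(np.float32)         # placeholder features
fingerprint_features = np.random.rand(64).astype(np.float32)  # placeholder features
fused_template = np.concatenate([iris_features, fingerprint_features])

key = get_random_bytes(16)                  # AES-128 key, kept by the system
cipher = AES.new(key, AES.MODE_GCM)
ciphertext, tag = cipher.encrypt_and_digest(fused_template.tobytes())
stored = (cipher.nonce, tag, ciphertext)    # what the template database would hold

# Decrypt and verify on a later authentication attempt
nonce, tag, ciphertext = stored
plain = AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(ciphertext, tag)
recovered = np.frombuffer(plain, dtype=np.float32)
assert np.allclose(recovered, fused_template)
```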
6 GaitCode: Gait-based continuous authentication using multimodal learning and wearable sensors
GaitCode: gait-based continuous authentication using multimodal learning and wearable sensors-2021
The ever-growing threats of security and privacy loss from unauthorized access to mobile devices have led to the development of various biometric authentication methods for easier and safer data access. Gait-based authentication is a popular biometric authentication method, as it utilizes the unique patterns of human locomotion and requires little cooperation from the user. Existing gait-based biometric authentication methods, however, suffer from degraded performance when using mobile devices such as smartphones as the sensing device, due to multiple reasons such as increased accelerometer noise, sensor orientation and positioning, and noise from body movements not related to gait. To address these drawbacks, some researchers have adopted methods that fuse information from multiple accelerometer sensors mounted on the human body at different locations. In this work we present a novel gait-based continuous authentication method by applying multimodal learning to jointly recorded accelerometer and ground contact force data from smart wearable devices. Gait cycles are extracted as the basic authentication element that can continuously authenticate a user. We use a network of auto-encoders with early or late sensor fusion for feature extraction, and an SVM and softmax for classification. The effectiveness of the proposed approach has been demonstrated through extensive experiments on datasets collected from two case studies, one with commercial off-the-shelf smart socks and the other with a medical-grade research prototype of smart shoes. The evaluation shows that the proposed approach can achieve a very low Equal Error Rate of 0.01% and 0.16% for identification with smart socks and smart shoes respectively, and a False Acceptance Rate of 0.54%–1.96% for leave-one-out authentication.
Keywords: Biometric authentication | Gait authentication | Autoencoders | Sensor fusion | Multimodal learning | Wearable sensors
English article
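The overall flow described above (per-gait-cycle features from two sensor streams, early fusion, classification) can be sketched as follows; PCA stands in for the paper's auto-encoders purely to keep the sketch dependency-free, and the gait data are synthetic, so this only illustrates where the fusion sits.

```python
# Illustrative sketch: two synthetic sensor streams (accelerometer and ground
# contact force) per gait cycle, fused early by concatenation, reduced with PCA
# (stand-in for the auto-encoders) and classified with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_cycles, acc_dim, gcf_dim = 200, 120, 60

# Synthetic gait cycles for two users (label 0 / 1) with slightly shifted statistics
labels = rng.integers(0, 2, n_cycles)
acc = rng.normal(loc=labels[:, None] * 0.5, size=(n_cycles, acc_dim))
gcf = rng.normal(loc=labels[:, None] * 0.5, size=(n_cycles, gcf_dim))

# Early fusion: concatenate the two sensor streams per gait cycle
fused = np.hstack([acc, gcf])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
model = make_pipeline(PCA(n_components=16), SVC())
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```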
7 Multimodal biometric authentication for mobile edge computing
Multimodal biometric authentication for mobile edge computing-2021
In this paper, we describe a novel Privacy Preserving Biometric Authentication (PPBA) system designed for Mobile Edge Computing (MEC) and multimodal biometrics. We focus on hill climbing attacks that reveal biometric templates to insider adversaries despite the encrypted storage in the cloud. First, we present an impossibility result on the existence of two-party PPBA systems that are resistant to these attacks. To overcome this negative result, we add a non-colluding edge server for detecting hill climbing attacks in both the semi-honest and malicious models. The edge server, which stores each user's secret parameters, makes it possible to outsource the biometric database to the cloud and perform matching in the encrypted domain. The proposed system combines Set Overlap and Euclidean Distance metrics using score-level fusion. Here, neither the cloud nor the edge server can learn the fused matching score. Moreover, the edge server is prevented from accessing any partial score. The efficiency of the crypto-primitives employed for each biometric modality results in linear computation and communication overhead. Under different MEC scenarios, the new system is found to be most efficient with a 2-tier architecture, which achieves 75% lower latency compared to mobile cloud computing.
Keywords: Privacy Preserving Biometric Authentication (PPBA) | Mobile Edge Computing (MEC) | Multimodal Biometrics | Hill Climbing Attacks (HCA) | Euclidean distance | Malicious security
English article
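The two matching metrics combined in this system can be illustrated in plaintext as below; the actual protocol evaluates them in the encrypted domain, split between the cloud and the edge server, and the toy features, weights and threshold here are assumptions, not values from the paper.

```python
# Plaintext sketch of the metric combination: set overlap for one modality and a
# Euclidean-distance similarity for the other, merged by score-level fusion.
import numpy as np

def set_overlap_score(enrolled: set, probe: set) -> float:
    """Fraction of enrolled features also present in the probe (1.0 = identical)."""
    return len(enrolled & probe) / len(enrolled)

def euclidean_score(enrolled: np.ndarray, probe: np.ndarray, max_dist: float) -> float:
    """Map Euclidean distance to a similarity in [0, 1]."""
    d = float(np.linalg.norm(enrolled - probe))
    return max(0.0, 1.0 - d / max_dist)

enrolled_minutiae = {(10, 22), (31, 8), (55, 47), (62, 13)}   # toy set-type features
probe_minutiae    = {(10, 22), (31, 8), (60, 15), (62, 13)}
enrolled_vec = np.array([0.12, 0.80, 0.33, 0.57])             # toy vector-type features
probe_vec    = np.array([0.15, 0.78, 0.30, 0.55])

s1 = set_overlap_score(enrolled_minutiae, probe_minutiae)
s2 = euclidean_score(enrolled_vec, probe_vec, max_dist=2.0)
fused = 0.5 * s1 + 0.5 * s2          # assumed equal weights
print(fused, fused >= 0.7)           # assumed acceptance threshold
```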
8 Joint discriminative feature learning for multimodal finger recognition
Joint discriminative feature learning for multimodal finger recognition-2021
Recently, finger-based multimodal biometrics, due to its high security and stability, has received considerable attention compared with unimodal biometrics. However, existing multimodal finger feature extraction approaches extract the features of different modalities separately, ignoring the correlations among these modalities. Furthermore, most conventional finger feature representation approaches are hand-crafted by design, which requires strong prior knowledge. It is therefore very important to explore and develop a suitable feature representation and fusion strategy for multimodal biometric recognition. In this paper, we propose a joint discriminative feature learning (JDFL) framework for multimodal finger recognition by combining finger vein (FV) and finger knuckle print (FKP) patterns. For the FV and FKP images, we first establish the informative dominant direction vector by convolving a bank of Gabor filters with the original finger image. Then, we develop a simple yet effective feature learning algorithm that simultaneously maximizes the distance between between-class samples, minimizes the distance between within-class samples, and maximizes the correlation among inter-modality samples of the same class. Finally, we integrate the block-wise histograms of the learned feature maps for multimodal finger fusion recognition. Experimental results demonstrate that the proposed approach achieves better recognition performance than state-of-the-art finger recognition methods.
Keywords: Multimodal biometrics | Feature fusion | Inter-modality | Joint feature learning
English article
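The direction-coding and histogram steps described above (Gabor filter bank, per-pixel dominant orientation, block-wise histograms) can be sketched as follows, assuming scikit-image and a random stand-in image; the learned JDFL projection itself is not included, and the filter parameters are assumptions.

```python
# Illustrative sketch: convolve an image with a bank of Gabor filters at several
# orientations, code each pixel by the orientation of its strongest response,
# then build block-wise histograms of the codes as a feature vector.
import numpy as np
from skimage.filters import gabor

def dominant_direction_code(image, n_orientations=6, frequency=0.25):
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        real, _ = gabor(image, frequency=frequency, theta=theta)
        responses.append(np.abs(real))
    return np.argmax(np.stack(responses), axis=0)   # per-pixel orientation index

def blockwise_histograms(codes, block=16, n_bins=6):
    h, w = codes.shape
    hists = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = codes[i:i + block, j:j + block]
            hists.append(np.bincount(patch.ravel(), minlength=n_bins))
    return np.concatenate(hists).astype(float)

finger_image = np.random.rand(64, 64)         # stand-in for an FV or FKP image
codes = dominant_direction_code(finger_image)
feature_vector = blockwise_histograms(codes)
print(feature_vector.shape)
```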
9 Deep learning for object detection and scene perception in self-driving cars: survey, challenges, and open issues
Publication year: 2021 - Number of pages in the English PDF file: 20 - Number of pages in the Persian DOC file: 82
This paper presents a comprehensive review of deep learning applications for object detection and scene perception in autonomous vehicles. Unlike existing review articles, we examine the theory underlying autonomous vehicles from a deep learning perspective together with current implementations, followed by their critical evaluation. Deep learning is one potential solution to the object detection and scene perception problems, and it can enable algorithm-driven and data-driven cars. In this paper, we aim to bridge the gap between deep learning and autonomous vehicles through a comprehensive survey. We begin with an introduction to self-driving cars, deep learning and computer vision, followed by an overview of artificial general intelligence. We then categorize existing powerful deep learning libraries and their role and importance in the growth of deep learning. Finally, we discuss several techniques that address image perception issues in real-time driving, and critically evaluate recent implementations and tests performed on autonomous vehicles. The findings and practices at various stages are summarized to link prevailing and future techniques, and to assess the applicability, scalability and feasibility of deep learning in autonomous vehicles for achieving safe driving without human intervention. Based on the current survey, several recommendations for further research are discussed at the end of this paper.
Keywords: Self-driving cars | Levels of automation | Machine learning | Deep learning | Convolutional neural networks | Scene perception | Object detection | Multimodal sensor fusion | LiDAR | Machine vision | Autonomous driving initiatives
Translated article
10 Biometric recognition through 3D ultrasound hand geometry
Biometric recognition through 3D ultrasound hand geometry-2021
Biometric recognition systems based on ultrasonic images have several advantages over other technologies, including the capability of capturing 3D images and detecting liveness. In this work, a recognition system based on hand geometry obtained from ultrasound images is proposed and experimentally evaluated. 3D images of the human hand are acquired by performing parallel mechanical scans with a commercial ultrasound probe. Several 2D images are then extracted at increasing under-skin depths and, from each of them, up to 26 distances among key points of the hand are defined and computed to form a 2D template. A 3D template is then obtained by combining, in several ways, the 2D templates of two or more images. A preliminary evaluation of the system is carried out through verification experiments on a home-made database. Results show good recognition accuracy: the Equal Error Rate was 1.15% when a single 2D image was used and improved to 0.98% when the 3D template was used. The possibility of upgrading the proposed system to a multimodal system, by extracting other features such as palmprint and hand veins from the same volume, as well as possible improvements, are finally discussed.
Keywords: Ultrasound imaging | Image processing | Biometry | Hand Geometry
English article
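The template logic described in this abstract can be illustrated with a small sketch: per-slice distance vectors, a combined 3D template (here a simple average, one of several possible combinations), and threshold-based verification; key-point detection from the ultrasound volume is out of scope, and the 26 distances are random stand-ins.

```python
# Illustrative sketch: each 2-D slice yields a vector of distances between hand
# key points, several slices are combined into a 3-D template, and verification
# compares templates against a distance threshold.
import numpy as np

N_DISTANCES = 26   # distances among hand key points, per the abstract

def template_2d(distances: np.ndarray) -> np.ndarray:
    """Normalise one slice's distance vector so scale differences matter less."""
    return distances / np.linalg.norm(distances)

def template_3d(slices: list) -> np.ndarray:
    """Combine the 2-D templates of several under-skin depths (simple average)."""
    return np.mean([template_2d(s) for s in slices], axis=0)

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.05) -> bool:
    return float(np.linalg.norm(enrolled - probe)) <= threshold

rng = np.random.default_rng(7)
true_hand = rng.uniform(10, 120, N_DISTANCES)          # toy "true" hand geometry
enrol_slices = [true_hand + rng.normal(0, 0.5, N_DISTANCES) for _ in range(3)]
probe_slices = [true_hand + rng.normal(0, 0.5, N_DISTANCES) for _ in range(3)]

print(verify(template_3d(enrol_slices), template_3d(probe_slices)))   # expected: True
```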