Download and view articles related to machines :: Page 1
Search results - machines

Number of articles found: 49
No. | Title | Type
1 Use of support vector machines with a parallel local search algorithm for data classification and feature selection
Publication year: 2020
Over the last decade, the number of studies on machine learning has significantly increased. One of the most widely researched areas of machine learning is data classification. Most big data systems require a large amount of information storage for analytic purposes; however, this involves some disadvantages, such as the costs of processing and collecting data. Thus, many researchers and practitioners are working on effectively reducing the number of features used in classification. This paper proposes a method which jointly optimizes both feature selection and classification. A survey of the relevant literature shows that the vast majority of studies focus on either feature selection or classification. In this study, the proposed parallel local search algorithm both selects features and finds a classifier with high rates of accuracy. Moreover, the proposed method is capable of finding solutions for problems that have extremely high numbers of features within a reasonable computation time.
Keywords: Support vector machines | Feature selection | Classification | Heuristic | Machine learning
English article
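As a rough illustration of the joint feature-selection-and-classification idea in entry 1, the sketch below runs a simple (sequential, not parallel) local search over feature subsets scored by cross-validated SVM accuracy. The dataset, the bit-flip neighbourhood and the iteration budget are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch: greedy local search over feature subsets, scored by a CV'd SVM.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)      # placeholder dataset
rng = np.random.default_rng(0)

def score(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

mask = rng.random(X.shape[1]) < 0.5              # random initial feature subset
best = score(mask)
for _ in range(200):                             # local search: flip one feature at a time
    j = rng.integers(X.shape[1])
    cand = mask.copy()
    cand[j] = ~cand[j]
    s = score(cand)
    if s >= best:                                # accept non-worsening moves
        mask, best = cand, s

print(f"selected {mask.sum()} of {X.shape[1]} features, CV accuracy {best:.3f}")
```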
2 Quantum recurrent encoder–decoder neural network for performance trend prediction of rotating machinery
Publication year: 2020
Traditional neural networks generally neglect the primary and secondary relationships of input information and process the information indiscriminately, which leads to their bad nonlinear approximation capacity and low generalization ability. As a result, traditional neural networks always show poor prediction accuracy in the performance degradation trend prediction of rotating machinery (RM). In view of this, a novel neural network called quantum recurrent encoder–decoder neural network (QREDNN) is proposed in this paper. In QREDNN, the attention mechanism is used to simultaneously reconstruct encoder and decoder of QREDNN, so that QREDNN can fully excavate and pay attention to important information but suppress the interference of redundant information to obtain better nonlinear approximation capacity. On the other hand, the quantum neuron is used to construct a new quantum gated recurrent unit (QGRU) in which activation values and weights are represented by quantum rotation matrices. The QGRU can traverse the solution space more finely and has a lot of multiple attractors, so it can replace the traditional recurrent unit of the encoder and decoder and enhance the generalization ability and response speed of QREDNN. Moreover, the Levenberg–Marquardt (LM) algorithm is introduced to improve the update speeds of the rotation angles of quantum rotation matrices and the attention parameters of QREDNN. Based on the superiorities of QREDNN, a new performance trend prediction method for RM is proposed, in which the denoised fuzzy entropy (DFE) of vibration acceleration signal of RM is input into QREDNN as the performance degradation feature for predicting the performance degradation trend of RM. The examples of predicting the performance trend of rolling bearings demonstrate the effectiveness of our proposed method.
Keywords: Quantum recurrent encoder–decoder neural network (QREDNN) | Artificial intelligence | Attention mechanism | Quantum neuron | Performance trend prediction | Rotating machinery
English article
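The abstract of entry 2 combines several non-standard ingredients (quantum neurons, a QGRU, LM training). The sketch below is only a plain classical analogue of the underlying encoder-decoder-with-attention structure, written in Keras, that maps a window of past degradation-feature values to a few future values; the window lengths, layer sizes and the use of standard GRUs are assumptions for illustration, not the paper's model.

```python
# Classical GRU encoder-decoder with attention for trend prediction (sketch only).
from tensorflow.keras import layers, Model

T_IN, T_OUT, UNITS = 32, 8, 64                  # hypothetical window lengths and width

enc_in = layers.Input(shape=(T_IN, 1))          # past degradation-feature values
enc_seq, enc_state = layers.GRU(UNITS, return_sequences=True, return_state=True)(enc_in)

dec_in = layers.RepeatVector(T_OUT)(enc_state)  # seed the decoder with the encoder summary
dec_seq = layers.GRU(UNITS, return_sequences=True)(dec_in, initial_state=enc_state)

context = layers.Attention()([dec_seq, enc_seq])         # decoder attends over encoder states
merged = layers.Concatenate()([dec_seq, context])
pred = layers.TimeDistributed(layers.Dense(1))(merged)   # predicted future trend values

model = Model(enc_in, pred)
model.compile(optimizer="adam", loss="mse")
model.summary()
```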
3 Load balancing in cloud computing: A big picture
Publication year: 2020 - English PDF pages: 10 - Persian DOC pages: 27
Scheduling, or allocating user requests (tasks), in a cloud environment is an NP-hard optimization problem. Depending on the cloud infrastructure and the users' requests, the cloud system is assigned some load, which may be underloaded, overloaded, or balanced. Conditions such as underloading or overloading cause system degradation related to power consumption, execution time, machine failure, and so on. Therefore, load balancing is needed to overcome all of the problems mentioned above. Balancing the load of tasks (which may be dependent or independent) across virtual machines (VMs) is an important aspect of task scheduling in clouds. Different types of load exist in a cloud network, such as memory load, computational (CPU) load, and network load. Load balancing is the mechanism of identifying overloaded and underloaded nodes and then balancing the load between them. Researchers have proposed various load-balancing methods in cloud computing to optimize different performance parameters. We present a taxonomy of load-balancing algorithms in the cloud. A brief description of the performance parameters in the literature and their effects is provided in this paper. To analyze the performance of heuristic-based algorithms, simulations were carried out in the CloudSim simulator and the results are presented in detail.
Keywords: Cloud computing | Energy consumption | Load balancing | Virtualization | Virtual machine | Task allocation
Translated article
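A toy Python sketch of the balancing mechanism described in entry 3 (identify overloaded and underloaded nodes, then redistribute load between them). The VM names, utilisation thresholds and migration step are hypothetical.

```python
# Identify overloaded / underloaded VMs and move load between them (toy example).
OVER, UNDER = 0.80, 0.30          # hypothetical utilisation thresholds

vm_load = {"vm1": 0.95, "vm2": 0.20, "vm3": 0.55, "vm4": 0.10}

def rebalance(loads, step=0.05):
    loads = dict(loads)
    overloaded = sorted((v for v in loads if loads[v] > OVER), key=loads.get, reverse=True)
    underloaded = sorted((v for v in loads if loads[v] < UNDER), key=loads.get)
    for src in overloaded:
        for dst in underloaded:
            # migrate small chunks of load until the source drops below the threshold
            while loads[src] > OVER and loads[dst] + step < OVER:
                loads[src] -= step
                loads[dst] += step
    return loads

print(rebalance(vm_load))
```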
4 TAPSTROKE: A novel intelligent authentication system using tap frequencies
Publication year: 2019
Emerging security requirements lead to new validation protocols to be implemented to recent authentication systems by employing biometric traits instead of regular passwords. If an additional security is required in authentication phase, keystroke recognition and classification systems and related interfaces are very promising for collecting and classifying biometric traits. These systems generally operate in time-domain; however, the conventional time-domain solutions could be inadequate if a touchscreen is too small to enter any kind of alphanumeric passwords or a password consists of one single character like a tap to the screen. Therefore, we propose a novel frequency-based authentication system, TAPSTROKE, as a prospective protocol for small touchscreens and an alternative authentication methodology for existing devices. We firstly analyzed the binary train signals formed by tap passwords consisting of taps instead of alphanumeric digits by the regular (STFT) and modified short time Fourier transformations (mSTFT). The unique biometric feature extracted from a tap signal is the frequency-time localization achieved by the spectrograms which are generated by these transformations. The touch signals, generated from the same tap-password, create significantly different spectrograms for predetermined window sizes. Finally, we conducted several experiments to distinguish future attempts by one-class support vector machines (SVM) with a simple linear kernel for Hamming and Blackman window functions. The experiments are greatly encouraging that we achieved 1.40%–2.12% and 2.01%–3.21% equal error rates (EER) with mSTFT; while with regular STFT the classifiers produced quite higher EER, 7.49%–11.95% and 6.93%–10.12%, with Hamming and Blackman window functions, separately. The whole methodology, as an expert system for protecting the users from fraud attacks sheds light on new era of authentication systems for future smart gears and watches.
Keywords: Tapstroke | Keystroke | Authentication | Biometrics | Frequency | Short time Fourier transformation | Support vector machines
English article
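A minimal sketch of the frequency-domain idea in entry 4: turn a binary tap-train signal into a flattened STFT magnitude spectrogram and enrol it with a linear one-class SVM. The sampling rate, window size and simulated tap signals are assumptions, and SciPy's plain STFT stands in for the paper's modified STFT (mSTFT).

```python
# Tap signal -> STFT magnitude spectrogram -> one-class SVM enrolment (sketch).
import numpy as np
from scipy.signal import stft
from sklearn.svm import OneClassSVM

FS = 100                                     # hypothetical sampling rate (Hz)

def tap_spectrogram(sig, window="hamming", nperseg=32):
    _, _, Z = stft(sig, fs=FS, window=window, nperseg=nperseg)
    return np.abs(Z).ravel()                 # flattened magnitude spectrogram as features

rng = np.random.default_rng(1)
def attempt():
    sig = np.zeros(400)                      # simulated binary tap train for one password
    for t in (40, 95, 150, 230):             # tap instants with small natural jitter
        sig[t + rng.integers(-3, 4)] = 1.0
    return sig

train = np.array([tap_spectrogram(attempt()) for _ in range(20)])   # genuine enrolment attempts
clf = OneClassSVM(kernel="linear", nu=0.1).fit(train)

print(clf.predict([tap_spectrogram(attempt())]))   # +1 = accepted as the enrolled user
```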
5 Image quality recognition technology based on deep learning
Publication year: 2019
Image plays an important role in today's society and is an important information carrier. However, due to the problems in shooting or processing, image quality is often difficult to be guaranteed, and low-quality images are often difficult to identify, which results in the waste of information. How to effectively identify low-quality images has become a hot research topic in today's society. Deep learning has a good application in image recognition. In this paper, it is applied to low-quality image recognition. An image quality recognition technology based on deep learning is studied to effectively realize low-quality image recognition. Firstly, in the stage of image preprocessing, a low-quality image enhancement method is proposed, which uses non-linear transformation to enhance image contrast, restore image details and enhance image quality. Secondly, the convolutional neural network is used to extract image features, and the L2 regularization method is introduced to optimize the over-fitting problem. Finally, SVM is used to recognize the output of the convolutional neural network to realize low-quality image recognition. Through simulation analysis, it is found that the image enhancement method proposed in the preprocessing stage can effectively enhance the image quality, and deep learning can effectively realize the recognition of the enhanced image and improve the recognition accuracy.
Keywords: Low quality image | Deep learning | Image recognition | Support vector machines (SVM)
English article
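Entry 5's pipeline is enhancement, then CNN feature extraction, then SVM classification. The sketch below covers only the preprocessing step, using a simple power-law (gamma) transform as one possible non-linear contrast enhancement; the gamma value and placeholder image are assumptions, not the paper's exact transformation.

```python
# Non-linear (power-law) contrast enhancement for dark, low-quality images (sketch).
import numpy as np

def enhance(img, gamma=0.5):
    """img: uint8 array in [0, 255]; gamma < 1 brightens and stretches dark regions."""
    x = img.astype(np.float64) / 255.0
    return np.clip((x ** gamma) * 255.0, 0, 255).astype(np.uint8)

low_quality = (np.random.rand(64, 64) * 60).astype(np.uint8)    # dark placeholder image
print(low_quality.mean(), enhance(low_quality).mean())          # mean intensity before/after
# In the paper's pipeline, the enhanced image would then feed a CNN feature
# extractor (with L2 regularization) followed by an SVM classifier.
```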
6 A meta-learning recommender system for hyperparameter tuning: Predicting when tuning improves SVM classifiers
Publication year: 2019
For many machine learning algorithms, predictive performance is critically affected by the hyperparameter values used to train them. However, tuning these hyperparameters can come at a high computational cost, especially on larger datasets, while the tuned settings do not always significantly outperform the default values. This paper proposes a recommender system based on meta-learning to identify exactly when it is better to use default values and when to tune hyperparameters for each new dataset. Besides, an in-depth analysis is performed to understand what they take into account for their decisions, providing useful insights. An extensive analysis of different categories of meta-features, meta-learners, and setups across 156 datasets is performed. Results show that it is possible to accurately predict when tuning will significantly improve the performance of the induced models. The proposed system reduces the time spent on optimization processes, without reducing the predictive performance of the induced models (when compared with the ones obtained using tuned hyperparameters). We also explain the decision-making process of the meta-learners in terms of linear separability-based hypotheses. Although this analysis is focused on the tuning of Support Vector Machines, it can also be applied to other algorithms, as shown in experiments performed with decision trees.
Keywords: Meta-learning | Recommender system | Tuning recommendation | Hyperparameter tuning | Support vector machines
English article
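A minimal sketch of the meta-learning recommendation idea in entry 6: describe each dataset by a few meta-features and train a meta-classifier that predicts whether SVM hyperparameter tuning is likely to pay off. The meta-features, the tiny meta-dataset and the random-forest meta-learner are illustrative assumptions, not the paper's setup.

```python
# Meta-learner that predicts "tune SVM hyperparameters or keep defaults" (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(X, y):
    n, p = X.shape
    classes, counts = np.unique(y, return_counts=True)
    return [n, p, len(classes), counts.min() / counts.max()]   # size, dimensionality, #classes, imbalance

# meta_X: meta-features of previously seen datasets; meta_y: 1 if tuning beat defaults there
meta_X = np.array([[150, 4, 3, 1.0], [569, 30, 2, 0.59], [1797, 64, 10, 0.97]])
meta_y = np.array([0, 1, 1])

meta_learner = RandomForestClassifier(n_estimators=100, random_state=0).fit(meta_X, meta_y)

# recommendation for a new dataset (here: random placeholder data)
X_new = np.random.rand(300, 20)
y_new = np.random.randint(0, 2, 300)
print("tune SVM?", bool(meta_learner.predict([meta_features(X_new, y_new)])[0]))
```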
7 Epileptic seizures detection based on some new Laguerre polynomial wavelets, artificial neural networks and support vector machines
Publication year: 2019
Electroencephalography (EEG) is noninvasive and it is an effective tool to understand the complex nature of the brain. Its analysis by visual inspection is tedious and costly, and that is why many researchers in recent years resort to the use of computer-assisted diagnosis methods. In this work, we propose a new basis of wavelets constructed with Laguerre polynomials inspired by the similarities and differences between the Mexican hat wavelets family and the Gaussian wavelets family. The constructed wavelets are applied in epileptic seizure detection of EEG signals with the help of artificial neural networks and support vector machines for classification. We use a benchmark database, which consists of five hundred signals divided into five different classes, to solve eight clinically relevant classification problems with temporal and spectral features extracted from a continuous wavelet transform (CWT) of the EEG signals and the constructed wavelets. The spectral representation shows an intensive intermediary frequency (10–30 Hz) activity for seizure signals, while the other signals are characterized by a small high frequency (30–40 Hz) activity. We verify that, for any EEG classification problem, the model made up of CWT-PCA-SVM with a quadratic kernel is a classic case of overfitting irrespective of the wavelets used. The results obtained with wavelets constructed by Laguerre polynomials with three-, five- and ten-fold cross-validation are better than several state-of-the-art classification methods in terms of classification accuracy.
Keywords: Wavelets | Laguerre polynomials | Epileptic seizure detection | Pattern recognition neural networks | Support vector machines
English article
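A rough sketch of the CWT-feature pipeline in entry 7, with PyWavelets' standard Mexican hat wavelet standing in for the Laguerre polynomial wavelets constructed in the paper; the scales, energy features, random "EEG" segments, labels and sampling rate are placeholders.

```python
# CWT-based features from EEG-like segments, classified with an SVM (sketch).
import numpy as np
import pywt
from sklearn.svm import SVC

FS = 128.0                                   # hypothetical sampling rate (Hz)
scales = np.arange(1, 64)

def cwt_features(segment):
    coef, _ = pywt.cwt(segment, scales, "mexh", sampling_period=1 / FS)
    energy = (coef ** 2).sum(axis=1)         # per-scale energy (spectral features)
    return np.concatenate([energy, [coef.std(), np.abs(coef).max()]])

rng = np.random.default_rng(0)
X = np.array([cwt_features(rng.standard_normal(512)) for _ in range(40)])   # placeholder segments
y = rng.integers(0, 2, 40)                   # placeholder seizure / non-seizure labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```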
8 Will aging voting machines cause more voters to experience long waits?
Publication year: 2018
As the majority of voting machines in use today approach or exceed their expected lifetime, an increased number of voting machine failures are expected in upcoming elections. This study examines and quantifies the impact of less reliable voting machines, due to age, on the number of voters waiting longer than 30 min. G/G/s queue approximation and discrete event simulation are used in the analysis. Results show that if reliability measures — mean time between failures, mean time to repair, and availability — are within certain interval ranges, no additional voting machines are needed to ensure that no more than 5% of voters wait for longer than 30 min. However, significantly more voters would have long waits if the reliability of voting machines is poor. Accordingly, less reliable voting machines do not necessarily cause more voters to experience long waits. The proposed closed-form approximation formula and the simulation model are practical tools for election officials to evaluate the impact of less reliable voting machines on voting lines.
Keywords: Voting operations | Simulation | Machine failure | Voting lines | Wait time distribution
English article
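A toy simulation in the spirit of entry 8: voters arrive at a polling place with s machines that occasionally fail and need repair, and the output is the share of voters waiting more than 30 minutes. All arrival, service, failure and repair parameters are invented for illustration and are not the paper's calibrated values or its G/G/s approximation.

```python
# Toy multi-server voting-queue simulation with machine failures (sketch).
import random

def simulate(s=5, arrival_rate=1.5, service_mean=3.0, mtbf=240.0, mttr=45.0,
             horizon=780.0, seed=0):
    """Times in minutes; s voting machines share one FCFS line."""
    rng = random.Random(seed)
    free_at = [0.0] * s                       # time each machine next becomes free
    t, waits = 0.0, []
    while t < horizon:
        t += rng.expovariate(arrival_rate)    # next voter arrives
        i = min(range(s), key=lambda k: free_at[k])
        start = max(t, free_at[i])
        service = rng.expovariate(1.0 / service_mean)
        # crude failure model: with small probability the machine fails mid-vote
        if rng.random() < service / mtbf:
            service += rng.expovariate(1.0 / mttr)
        free_at[i] = start + service
        waits.append(start - t)
    return sum(w > 30.0 for w in waits) / len(waits)

print(f"share waiting > 30 min: {simulate():.1%}")
```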
9 Particle swarm optimization algorithm with intelligent control of particle count for the optimal design of electrical machines
Publication year: 2018 - English PDF pages: 8 - Persian DOC pages: 23
In this paper, a modified particle swarm optimization (PSO) algorithm is proposed, which is an enhanced version of the conventional PSO algorithm. To improve the performance of the PSO algorithm, a new method for intelligently controlling the number of particles is employed. This method compares the cost value of the global best (gbest) in the current iteration with that of the gbest in the previous iteration. If there is a difference between the two cost values, the proposed algorithm operates in the exploration phase and keeps the number of particles. However, when the difference in cost values is smaller than the tolerance assigned by the user, the proposed algorithm operates in the exploitation phase and reduces the number of particles. In addition, the algorithm removes the particle closest to the best particle, in terms of Euclidean distance, to preserve randomness. The proposed algorithm is validated on five numerical test functions, where the number of function calls is somewhat reduced compared with conventional PSO. After validation, the algorithm is applied to the optimal design of an interior permanent magnet synchronous motor (IPMSM) to reduce the total harmonic distortion (THD) of the back electromotive force (back-EMF). Considering the operating condition, an optimal design is obtained that reduces the back-EMF THD while satisfying the required back-EMF. Finally, a prototype is built and tested; a no-load test is performed to verify the performance of the optimal design and of the optimization algorithm. Based on the experimental results, the effectiveness of the proposed algorithm for the optimal design of an electrical machine is confirmed.
Keywords: Optimal design | Optimization algorithm | Particle swarm optimization | Electrical machine | Permanent magnet synchronous motor
Translated article
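The translated abstract of entry 9 describes the particle-count control quite concretely: keep all particles while gbest still improves; once the improvement falls below a user-assigned tolerance, remove the particle closest (in Euclidean distance) to the best one. The sketch below reproduces that rule on the simple sphere test function; the PSO coefficients, tolerance and removal details are assumptions, and the IPMSM design objective is not modelled.

```python
# Modified PSO with intelligent particle-count reduction (illustrative sketch).
import numpy as np

def modified_pso(f, dim=5, n=40, iters=200, tol=1e-6, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[pcost.argmin()]
    gcost_prev = pcost.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([f(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[pcost.argmin()]
        gcost = pcost.min()
        # exploitation phase: gbest barely changed, so drop the particle
        # closest (in Euclidean distance) to the best one
        if abs(gcost_prev - gcost) < tol and len(x) > 2:
            d = np.linalg.norm(x - g, axis=1)
            d[pcost.argmin()] = np.inf              # never remove the best particle itself
            keep = np.arange(len(x)) != d.argmin()
            x, v, pbest, pcost = x[keep], v[keep], pbest[keep], pcost[keep]
        gcost_prev = gcost
    return g, gcost_prev, len(x)

best, cost, remaining = modified_pso(lambda p: np.sum(p ** 2))   # sphere test function
print(cost, remaining)
```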
10 Train Delay Prediction Systems: A Big Data Analytics Perspective
Publication year: 2018
Current train delay prediction systems do not take advantage of state-of-the-art tools and techniques for handling and extracting useful and actionable information from the large amount of historical train movements data collected by the railway information systems. Instead, they rely on static rules built by experts of the railway infrastructure based on classical univariate statistics. The purpose of this paper is to build a data-driven Train Delay Prediction System (TDPS) for large-scale railway networks which exploits the most recent big data technologies, learning algorithms, and statistical tools. In particular, we propose a fast learning algorithm for Shallow and Deep Extreme Learning Machines that fully exploits the recent in-memory large-scale data processing technologies for predicting train delays. The proposal has been compared with the current state-of-the-art TDPSs. Results on real-world data coming from the Italian railway network show that our proposal is able to improve over the current state-of-the-art TDPSs.
Keywords: Railway network | Train Delay Prediction systems | Big data analytics | Extreme learning machines | Shallow architecture | Deep architecture
English article
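A minimal numpy sketch of the shallow Extreme Learning Machine that entry 10 scales up: a random, untrained hidden layer followed by output weights solved in closed form by least squares. The feature and target data here are random placeholders rather than real train-movement records, and the deep/distributed variants of the paper are not covered.

```python
# Shallow Extreme Learning Machine regressor (sketch with placeholder data).
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                      # random, untrained hidden layer
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 12))                           # e.g. recent delays, calendar features
y = X @ rng.standard_normal(12) + 0.1 * rng.standard_normal(1000)   # placeholder delay target
model = ELM().fit(X[:800], y[:800])
print(np.mean(np.abs(model.predict(X[800:]) - y[800:])))      # mean absolute error on held-out data
```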