Download and view articles related to Dimensionality reduction :: Page 1
Download the best ISI articles with Persian translation


Search results - Dimensionality reduction

Number of articles found: 26
Row | Title | Type
1 A real-time tennis level evaluation and strokes classification system based on the Internet of Things
A real-time tennis level evaluation and stroke classification system based on the Internet of Things - 2022
In this study, a single wearable inertial measurement unit (IMU) and machine learning methodologies were used to conduct player level evaluation and classification of five prototype tennis strokes in real time. The International Tennis Number (ITN) test was used to verify the accuracy of this IoT system in evaluating participant level. We conducted the ITN test on thirty-six participants and conducted one-way ANOVA on the ITN test results using IBM SPSS 26. The IMU in this study contained a tri-axis accelerometer (±16 g) and tri-axis gyroscope (±2000°/s) worn on the participants' wrist, connected to a wireless low-energy Bluetooth smartphone with data sent to the computer terminal by cloud storage. Data processing included preprocessing, segmentation, feature extraction, dimensionality reduction and classification using Support Vector Machines (SVM), K-nearest neighbor (K-NN) and Naive Bayes (NB) algorithms. One-way ANOVA analysis predicting participants' ITN level and ITN field test scores yielded p < 0.001 at the three different skill levels tested. SVM (MinMax), SVM (Standardiser) and SVM (MaxAbsScaler) classified unique tennis strokes; precision and recall factors at the three different skill levels reliably yielded f1-scores above 0.90 for serve, forehand and backhand, with f1-scores for forehand and backhand volley falling below that. The results of this study suggest that using a single six-axial 50 Hz IMU in combination with SVM and SVM + PCA represents a significant step towards a more reliable wearable tennis stroke performance and skill level real-time evaluation and feedback technology.
Keywords: Internet of Things | Data collection | Data processing | Machine learning | Mobile application | Tennis | Wearable sensors | Wireless communication
English article
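A minimal sketch of the kind of scale + PCA + SVM stroke-classification pipeline this abstract describes, using scikit-learn; the synthetic IMU feature matrix, the five stroke labels, and every parameter choice here are illustrative assumptions rather than the authors' data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))      # 500 segmented strokes x 60 IMU features (hypothetical)
y = rng.integers(0, 5, size=500)    # 5 stroke classes (e.g. serve, forehand, backhand, volleys)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Scale (MinMax, one of the scalers named in the abstract), reduce with PCA, classify with SVM.
clf = make_pipeline(MinMaxScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```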
2 Dynamic resource allocation during reinforcement learning accounts for ramping and phasic dopamine activity
Dynamic resource allocation during reinforcement learning accounts for ramping and phasic dopamine activity - 2020
For an animal to learn about its environment with limited motor and cognitive resources, it should focus its resources on potentially important stimuli. However, too narrow a focus is disadvantageous for adaptation to environmental changes. Midbrain dopamine neurons are excited by potentially important stimuli, such as reward-predicting or novel stimuli, and allocate resources to these stimuli by modulating how an animal approaches, exploits, explores, and attends. The current study examined the theoretical possibility that dopamine activity reflects the dynamic allocation of resources for learning. Dopamine activity may transition between two patterns: (1) phasic responses to cues and rewards, and (2) ramping activity arising as the agent approaches the reward. Phasic excitation has been explained by prediction errors generated by experimentally inserted cues. However, when and why dopamine activity transitions between the two patterns remain unknown. By parsimoniously modifying a standard temporal difference (TD) learning model to accommodate a mixed presentation of both experimental and environmental stimuli, we simulated dopamine transitions and compared them with experimental data from four different studies. The results suggested that dopamine transitions from ramping to phasic patterns as the agent focuses its resources on a small number of reward-predicting stimuli, thus leading to task dimensionality reduction. The opposite occurs when the agent re-distributes its resources to adapt to environmental changes, resulting in task dimensionality expansion. This research elucidates the role of dopamine in a broader context, providing a potential explanation for the diverse repertoire of dopamine activity that cannot be explained solely by prediction error.
Keywords: Prediction error | Salience | Temporal-difference learning model | Pearce-Hall model | Habit | Striatum
English article
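The abstract builds on the temporal difference (TD) prediction error. The toy below is a standard tabular TD(0) value update on a linear track with a terminal reward, shown only to illustrate how value estimates ramp toward a predicted reward; it is not the authors' modified model.

```python
import numpy as np

n_states, alpha, gamma = 10, 0.1, 0.95
V = np.zeros(n_states + 1)          # value per state; last index is the terminal state

for episode in range(200):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0      # reward only at the end of the track
        delta = r + gamma * V[s + 1] - V[s]        # TD prediction error (the dopamine-like signal)
        V[s] += alpha * delta

print(np.round(V[:-1], 3))          # learned values ramp up toward the rewarded state
```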
3 Electricity demand forecasting for decentralised energy management
Electricity demand forecasting for decentralised energy management - 2020
The world is experiencing a fourth industrial revolution. Rapid development of technologies is advancing smart infrastructure opportunities. Experts observe decarbonisation, digitalisation and decentralisation as the main drivers for change. In electrical power systems, a downturn of centralised conventional fossil fuel fired power plants and an increased proportion of distributed power generation add to the already troublesome outlook for operators of low-inertia energy systems. In the absence of reliable real-time demand forecasting measures, effective decentralised demand-side energy planning is often problematic. In this work we formulate a simple yet highly effective lumped model for forecasting the rate at which electricity is consumed. The methodology presented focuses on the potential adoption by a regional electricity network operator with inadequate real-time energy data who requires knowledge of the wider aggregated future rate of energy consumption, thus contributing to a reduction in the demand on state-owned generation power plants. The forecasting session is constructed initially through analysis of a chronological sequence of discrete observations. Historical demand data show behaviour that allows the use of dimensionality reduction techniques. Combined with piecewise interpolation, an electricity demand forecasting methodology is formulated. Solutions of short-term forecasting problems provide credible predictions for energy demand. Calculations for medium-term forecasts that extend beyond 6 months are also very promising. The forecasting method provides a way to advance a novel decentralised informatics, optimisation and control framework for small island power systems or distributed grid-edge systems as part of an evolving demand response service.
Keywords: Demand response | Decentralised | Grid edge | Time series forecasting
English article
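As a rough illustration of applying dimensionality reduction to historical demand profiles, the sketch below compresses synthetic daily load curves with PCA and reconstructs them; the sinusoidal demand data and component count are assumptions, and the paper's lumped model and piecewise interpolation step are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
hours = np.arange(24)
# 365 synthetic daily profiles: a base shape with seasonal scaling plus noise
days = np.array([
    200 + 50 * np.sin(2 * np.pi * (hours - 7) / 24) * (1 + 0.2 * np.sin(2 * np.pi * d / 365))
    + rng.normal(0, 5, 24)
    for d in range(365)
])

pca = PCA(n_components=3)                 # a few components capture most daily variation
scores = pca.fit_transform(days)          # low-dimensional description of each day
reconstructed = pca.inverse_transform(scores)
print("explained variance:", pca.explained_variance_ratio_.sum().round(3))
```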
4 Financial portfolio optimization with online deep reinforcement learning and restricted stacked autoencoder-DeepBreath
Financial portfolio optimization with online deep reinforcement learning and restricted stacked autoencoder - DeepBreath - 2020
The process of continuously reallocating funds into financial assets, aiming to increase the expected return on investment and minimize the risk, is known as portfolio management. In this paper, a portfolio management framework is developed based on a deep reinforcement learning framework called DeepBreath. The DeepBreath methodology combines a restricted stacked autoencoder and a convolutional neural network (CNN) into an integrated framework. The restricted stacked autoencoder is employed in order to conduct dimensionality reduction and feature selection, thus ensuring that only the most informative abstract features are retained. The CNN is used to learn and enforce the investment policy, which consists of reallocating the various assets in order to increase the expected return on investment. The framework consists of both offline and online learning strategies: the former is required to train the CNN while the latter handles concept drifts, i.e. a change in the data distribution resulting from unforeseen circumstances. These are based on passive concept drift detection and online stochastic batching. Settlement risk may occur as a result of a delay between the acquisition of an asset and its payment failing to deliver the terms of a contract. In order to tackle this challenging issue, a blockchain is employed. Finally, the performance of the DeepBreath framework is tested with four test sets over three distinct investment periods. The results show that the return on investment achieved by our approach outperforms current expert investment strategies while minimizing the market risk.
Keywords: Portfolio management | Deep reinforcement learning | Restricted stacked autoencoder | Online learning | Settlement risk | Blockchain
English article
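A toy stacked autoencoder for compressing asset feature vectors, in the spirit of the dimensionality-reduction front end described above; the PyTorch architecture, layer sizes, and random market features are illustrative assumptions, not the paper's restricted autoencoder or its training procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 40)                       # 256 samples of 40 hypothetical market features

encoder = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 8))
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 40))
model = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                       # learn to reconstruct the input through an 8-d bottleneck
    opt.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    opt.step()

codes = encoder(X)                             # compressed features to feed a downstream policy network
print(codes.shape, float(loss))
```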
5 High-order tensor estimation via trains of coupled third-order CP and Tucker decompositions
High-order tensor estimation via trains of coupled third-order CP and Tucker decompositions - 2020
In this work, equivalence relations between a Tensor Train (TT) decomposition and the Canonical Polyadic Decomposition (CPD)/Tucker Decomposition (TD) are investigated. It is shown that a Q-order tensor following a CPD/TD with Q > 3 can be written using the graph-based formalism as a train of Q tensors of order at most 3 following the same decomposition as the initial Q-order tensor. This means that for any practical problem of interest involving the CPD/TD, there exists an equivalent TT-based formulation. This equivalence allows us to overcome the curse of dimensionality when dealing with big data tensors. In this paper, it is shown that the native difficult optimization problems for CPD/TD of Q-order tensors can be efficiently solved using the TT decomposition according to flexible strategies that involve Q − 2 optimization problems with 3-order tensors. This methodology hence involves a number of free parameters linear with Q, and thus allows to mitigate the exponential growth of parameters for Q-order tensors. Then, by capitalizing on the TT decomposition, we also formulate several robust and fast algorithms to accomplish Joint dImensionality Reduction And Factors rEtrieval (JIRAFE) for the CPD/TD. In particular, based on the TT-SVD algorithm, we show how to exploit the existing coupling between two successive TT-cores in the graph-based formalism. The advantages of the proposed solutions in terms of storage cost, computational complexity and factor estimation accuracy are also discussed.
Keywords: Canonical polyadic decomposition | Tucker decomposition | HOSVD | Tensor trains | Structured tensors
English article
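The TT-SVD algorithm the abstract mentions can be sketched in plain NumPy as a sequence of truncated SVDs that splits a Q-order tensor into third-order cores; the tensor sizes and rank cap below are arbitrary, and the coupled JIRAFE algorithms themselves are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = (4, 5, 6, 7)
T = rng.normal(size=dims)               # synthetic Q-order tensor (Q = 4)
max_rank = 3                            # TT-rank truncation (illustrative)

cores, r_prev, C = [], 1, T.reshape(dims[0], -1)
for k in range(len(dims) - 1):
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    r = min(max_rank, S.size)
    cores.append(U[:, :r].reshape(r_prev, dims[k], r))          # one 3rd-order TT core
    C = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)     # carry the rest forward
    r_prev = r
cores.append(C.reshape(r_prev, dims[-1], 1))                    # final core

# Contract the train back and check the (truncated) reconstruction error.
full = cores[0]
for core in cores[1:]:
    full = np.tensordot(full, core, axes=1)
full = full.reshape(dims)
print("relative error:", np.linalg.norm(full - T) / np.linalg.norm(T))
```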
6 Optimized hardware accelerators for data mining applications on embedded platforms: Case study principal component analysis
Optimized hardware accelerators for data mining applications on embedded platforms: Case study principal component analysis - 2019
With the proliferation of mobile, handheld, and embedded devices, many applications such as data mining applications have found their way into these devices. However, mobile devices have stringent area and power limitations, high speed-performance, reduced cost, and time-to-market requirements. Furthermore, applications running on mobile devices are becoming more complex, requiring high processing power. These design constraints pose serious challenges to the embedded system designers. In order to process the applications on mobile and embedded systems effectively and efficiently, optimized hardware architectures are needed. We are investigating the utilization of FPGA-based customized hardware to accelerate embedded data mining applications including handwritten analysis and facial recognition. For these biometric applications, Principal Component Analysis (PCA) is applied initially, followed by a similarity measure. In this research work, we introduce novel and efficient embedded hardware architectures to accelerate the PCA computation. PCA is a classic technique to reduce the dimensionality of data by transforming the original data set into a new set of variables called Principal Components (PCs) that represent the key features of the data. We propose two hardware versions for PCA computation, each with its unique optimization techniques to enhance the performance of our designs, and one specifically with additional techniques to reduce the memory access latency of embedded platforms. To the best of our knowledge, we could not find similar work for PCA, specifically catered to embedded devices, in the published literature. We perform experiments to evaluate the feasibility and efficiency of our designs using a benchmark dataset for biometrics. Our embedded hardware designs are generic, parameterized, and scalable, and achieve a 78 times speedup as compared to their software counterparts.
Keywords: Data mining | Dimensionality reduction techniques | Embedded and mobile systems | FPGAs | Hardware acceleration | Principal Component Analysis
English article
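For reference, the PCA computation targeted by the proposed accelerators reduces to centering, a covariance matrix, and an eigendecomposition. The NumPy sketch below is a plain software baseline with made-up data, not the paper's FPGA design.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))              # 100 samples x 32 features (e.g. image descriptors)

Xc = X - X.mean(axis=0)                     # center the data
cov = Xc.T @ Xc / (Xc.shape[0] - 1)         # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition of the symmetric covariance
order = np.argsort(eigvals)[::-1]           # sort components by explained variance
k = 8
PCs = eigvecs[:, order[:k]]                 # top-k principal components
X_reduced = Xc @ PCs                        # project onto the reduced space
print(X_reduced.shape)
```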
7 Machine Learning Techniques for Satellite Fault Diagnosis
Machine learning techniques for satellite fault diagnosis - 2019
Satellites are remotely operated systems with a high degree of complexity due to the large number of interconnected devices onboard. Consequently, a satellite has a correspondingly significant number of telemetry parameters that allow operators and designers full control and monitoring of its mode of operation. The tremendous amount of telemetry data received from the satellite during its lifetime has to be analyzed in order to monitor and control subsystem health for better decision making and fast response. In this research, we address the topic of using machine learning techniques to diagnose faults of satellite subsystems using their telemetry parameters. The case study and source of telemetry is Egyptsat-1, which was launched in April 2007 and lost communication with its ground station in 2010. We applied machine learning techniques in order to identify operating modes and corresponding telemetry parameters. We used Support Vector Machine for Regression to analyze the satellite performance; then a fault diagnosis approach is applied to determine the most probable reason for this satellite failure. Telemetry data is clustered using the k-means clustering algorithm in combination with t-distributed stochastic neighbor embedding (t-SNE) for dimensionality reduction. We classified data using Logical Analysis of Data (LAD) in order to generate positive patterns for each failure class, which are used to determine the probable failure cause for each telemetry parameter. These probabilities enable Fault Tree Analysis (FTA) to find the most probable cause that led to the satellite failure.
Keywords: Machine learning | Telemetry data mining | Satellite fault diagnosis | Logical analysis of data | Fault tree analysis
English article
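A small sketch of the clustering step described above: t-SNE embeds telemetry-like vectors into two dimensions and k-means groups the embedded points. The synthetic "operating mode" clusters stand in for the Egyptsat-1 telemetry, and the LAD/FTA stages are not shown.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# three hypothetical operating modes, each a cluster of 100 telemetry snapshots with 20 parameters
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 20)) for m in (0.0, 2.0, 4.0)])

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedded)
print(np.bincount(labels))                  # cluster sizes should mirror the operating modes
```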
8 Efficient feature selection of power quality events using two dimensional (2D) particle swarms
Efficient feature selection of power quality events using two-dimensional (2D) particle swarms - 2019
A novel two-dimensional (2D) learning framework has been proposed to address the feature selection problem in Power Quality (PQ) events. Unlike the existing feature selection approaches, the proposed 2D learning explicitly incorporates the information about the subset cardinality (i.e., the number of features) as an additional learning dimension to effectively guide the search process. The efficacy of this approach has been demonstrated considering fourteen distinct classes of PQ events which conform to the IEEE Standard 1159. The search performance of the 2D learning approach has been compared to the other six well-known feature selection wrappers by considering two induction algorithms: Naive Bayes (NB) and k-Nearest Neighbors (k-NN). Further, the robustness of the selected/reduced feature subsets has been investigated considering seven different levels of noise. The results of this investigation convincingly demonstrate that the proposed 2D learning can identify significantly better and robust feature subsets for PQ events.
Keywords: Classification | Dimensionality reduction | Feature selection | Particle swarm optimization | Pattern recognition | Power quality
English article
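The wrapper idea can be illustrated with a much simpler search than the paper's 2D particle swarm: score random candidate subsets of a fixed cardinality with a cross-validated k-NN and keep the best. The dataset, subset size, and search budget below are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

best_score, best_subset = -np.inf, None
for _ in range(50):                              # crude random search over feature subsets
    subset = rng.choice(X.shape[1], size=8, replace=False)      # candidate cardinality = 8
    score = cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, subset], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, np.sort(subset)

print(best_score.round(3), best_subset)
```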
9 Progress in context-aware recommender systems — An overview
Progress in context-aware recommender systems - An overview - 2019
Recommender systems are the set of tools and techniques that provide useful recommendations and suggestions to users to help them in the decision-making process of choosing the right products or services. Recommender systems tailored to leverage contextual information (such as location, time, or companion) in the recommendation process are called context-aware recommender systems. This paper presents a review of the continual development of context-aware recommender systems by analyzing different kinds of contexts without limiting the scope to any specific application domain. First, an in-depth analysis is conducted on the different recommendation algorithms used in context-aware recommender systems. This information is then used to find out how these techniques deal with the curse of dimensionality, which is an inherent issue in such systems. Since contexts are primarily based on users' activity patterns, they lead to the development of personalized recommendation services for the users. Thus, this paper also reviews how this contextual information is represented (either explicitly or implicitly) in the recommendation process. We also present a list of datasets and evaluation metrics used in the setting of CARS. We try to highlight how the algorithmic approaches used in CARS differ from those of conventional RS; in particular, we present what modifications or additions are applied on top of conventional recommendation approaches to produce context-aware recommendations. Finally, the outstanding challenges and research opportunities are presented to the research community for analysis.
Keywords: Context | Recommender systems | Context-aware | Dimensionality reduction | Contextual modeling | User modeling
English article
10 Machine learning models based on the dimensionality reduction of standard automated perimetry data for glaucoma diagnosis
Machine learning models based on the dimensionality reduction of standard automated perimetry data for glaucoma diagnosis - 2019
Introduction: Visual field testing via standard automated perimetry (SAP) is a commonly used glaucoma diagnosis method. Applying machine learning techniques to the visual field test results, a valid clinical diagnosis of glaucoma solely based on the SAP data is provided. In order to reflect structural-functional patterns of glaucoma in the automated diagnostic models, we propose composite variables derived from anatomically grouped visual field clusters to improve the prediction performance. A set of machine learning-based diagnostic models are designed that implement different input data manipulation, dimensionality reduction, and classification methods. Methods: Visual field testing data of 375 healthy and 257 glaucomatous eyes were used to build the diagnostic models. Three kinds of composite variables derived from the Garway-Heath map and the glaucoma hemifield test (GHT) sector map were included in the input variables in addition to the 52 SAP visual field locations. Dimensionality reduction was conducted to select important variables so as to alleviate high-dimensionality problems. To validate the proposed methods, we applied four classifiers (linear discriminant analysis, naïve Bayes classifier, support vector machines, and artificial neural networks) and four dimensionality reduction methods (Pearson correlation coefficient-based variable selection, Markov blanket variable selection, the minimum redundancy maximum relevance algorithm, and principal component analysis) and compared their classification performances. Results: For all tested combinations, the classification performance improved when the proposed composite variables and dimensionality reduction techniques were implemented. The combination of total deviation values, the GHT sector map, support vector machines, and Markov blanket variable selection obtains the best performance: an area under the receiver operating characteristic curve (AUC) of 0.912. Conclusion: A glaucoma diagnosis model giving an AUC of 0.912 was constructed by applying machine learning techniques to SAP data. The results show that dimensionality reduction not only reduces dimensions of the input space but also enhances the classification performance. The variable selection results show that the proposed composite variables from visual field clustering play a key role in the diagnosis model.
Keywords: Glaucoma | Machine learning classifier | Dimensionality reduction | Visual field clustering
English article
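One of the model families evaluated in the abstract (dimensionality reduction followed by an SVM, scored by ROC AUC) can be sketched as below; synthetic 52-dimensional vectors stand in for the SAP data and composite variables, so the number it prints is not comparable to the reported AUC of 0.912.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# 632 synthetic "eyes" with 52 visual-field-like features (placeholder data)
X, y = make_classification(n_samples=632, n_features=52, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PCA for dimensionality reduction, then an SVM classifier with probability outputs
model = make_pipeline(PCA(n_components=10), SVC(probability=True, random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUC:", round(auc, 3))
```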