Download and view articles related to Subset Selection :: Page 1
Download the best ISI articles with Persian translation


Search results - Subset Selection

Number of articles found: 4
No. | Title | Type
1 Simultaneous feature weighting and parameter determination of Neural Networks using Ant Lion Optimization for the classification of breast cancer
Publication year: 2020
In this paper, feature weighting is used to develop an effective computer-aided diagnosis system for breast cancer. Feature weighting is employed because it boosts classification performance more than feature subset selection does. Specifically, a wrapper method utilizing the Ant Lion Optimization algorithm is presented that simultaneously searches for the best feature weights and parameter values of a Multilayer Neural Network. The number of hidden neurons and the choice of backpropagation training algorithm are used as the network parameters. The performance of the proposed approach is evaluated on three breast cancer datasets. The data is initially normalized using the tanh method to remove the effects of dominant features and outliers. The results show that the proposed wrapper method attains higher accuracy than the existing techniques. The obtained high classification performance validates the work, which has the potential to become an alternative to other well-known techniques.
Keywords: Antlion optimization | Breast cancer | Feature weighting | Neural Networks
English article
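The wrapper idea described in this abstract (score candidate feature weights by the cross-validated accuracy of the network they produce) can be sketched briefly. The snippet below is a hedged illustration, not the authors' implementation: plain random search stands in for Ant Lion Optimization, scikit-learn's MLPClassifier stands in for the paper's Multilayer Neural Network, and the tanh normalization variant shown is an assumption.

```python
# Minimal wrapper-style sketch of feature weighting; random search stands in
# for Ant Lion Optimization, an sklearn MLP stands in for the paper's network.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# tanh-style normalization to damp dominant features and outliers
# (the paper reports a "tanh method"; this exact variant is an assumption).
X = np.tanh((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))

rng = np.random.default_rng(0)
best_score, best_weights = -np.inf, None
for _ in range(30):                       # candidate solutions; ALO would guide this search
    w = rng.random(X.shape[1])            # feature weights in [0, 1]
    hidden = int(rng.integers(5, 30))     # hidden-neuron count searched jointly
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500, random_state=0)
    score = cross_val_score(clf, X * w, y, cv=3).mean()   # wrapper evaluation
    if score > best_score:
        best_score, best_weights = score, w

print(f"best CV accuracy: {best_score:.3f}")
```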
2 On initial population generation in feature subset selection
Publication year: 2019
Performance of evolutionary algorithms depends on many factors such as population size, number of generations, crossover or mutation probability, etc. Generating the initial population is one of the important steps in evolutionary algorithms. A poor initial population may unnecessarily increase the number of searches, or it may cause the algorithm to converge at local optima. In this study, we aim to find a promising method for generating the initial population in the Feature Subset Selection (FSS) domain. FSS is not considered an expert system by itself, yet it constitutes a significant step in many expert systems. It eliminates redundancy in data, which decreases training time and improves solution quality. To achieve our goal, we compare a total of five different initial population generation methods: Information Gain Ranking (IGR), a greedy approach, and three types of random approaches. We evaluate these methods using a specialized Teaching Learning Based Optimization search algorithm (MTLBO-MD) and three supervised learning classifiers: Logistic Regression, Support Vector Machines, and Extreme Learning Machine. In our experiments, we employ 12 publicly available datasets, mostly obtained from the well-known UCI Machine Learning Repository. According to their feature sizes and instance counts, we manually classify these datasets as small, medium, or large-sized. Experimental results indicate that all tested methods achieve similar solutions on small-sized datasets. For medium-sized and large-sized datasets, however, the IGR method provides a better starting point in terms of execution time and learning performance. Finally, when compared with other studies in the literature, the IGR method proves to be a viable option for initial population generation.
Keywords: Feature subset selection | Initial population | Multiobjective optimization
English article
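As a rough illustration of the Information Gain Ranking idea, the sketch below seeds an initial population of feature-subset bit strings by sampling features in proportion to their mutual information with the class label. The biased-sampling rule and the dataset used here are assumptions for illustration; the paper's exact IGR seeding and the MTLBO-MD search are not reproduced.

```python
# Sketch of IGR-based seeding of an initial population for feature subset
# selection; the specific seeding rule is an assumption for illustration.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif

X, y = load_wine(return_X_y=True)
n_features, pop_size = X.shape[1], 20

# Rank features by information gain (mutual information with the class label).
gain = mutual_info_classif(X, y, random_state=0)
prob = gain / gain.sum()                 # higher-gain features are more likely to be picked

rng = np.random.default_rng(0)
population = np.zeros((pop_size, n_features), dtype=bool)
for i in range(pop_size):
    k = rng.integers(1, n_features + 1)  # random subset size per individual
    chosen = rng.choice(n_features, size=k, replace=False, p=prob)
    population[i, chosen] = True         # bit-string encoding of a feature subset

print(population.astype(int))
```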
3 Geometrical and topological approaches to Big Data
Publication year: 2017
Modern data science uses topological methods to find the structural features of data sets before further supervised or unsupervised analysis. Geometry and topology are very natural tools for analysing massive amounts of data, since geometry can be regarded as the study of distance functions. The mathematical formalism that has been developed for incorporating geometric and topological techniques deals with point cloud data sets, i.e. finite sets of points. It then adapts tools from the various branches of geometry and topology for the study of point cloud data sets. The point clouds are finite samples taken from a geometric object, perhaps with noise. Topology provides a formal language for qualitative mathematics, whereas geometry is mainly quantitative. Thus, in topology, we study the relationships of proximity or nearness without using distances. A map between topological spaces is called continuous if it preserves the nearness structures. Geometrical and topological methods are tools that allow us to analyse highly complex data. These methods create a summary or compressed representation of all of the data features to help rapidly uncover particular patterns and relationships in data. The idea of constructing summaries of entire domains of attributes involves understanding the relationship between topological and geometric objects constructed from data using various features. A common thread in various approaches for noise removal, model reduction, feasibility reconstruction, and blind source separation is to replace the original data with a lower-dimensional approximate representation obtained via a matrix or multi-directional array factorization or decomposition. Beyond these transformations, a significant challenge of feature summarization and subset selection methods for Big Data is considered, with a focus on scalable feature selection. Lower-dimensional approximate representations are also used for Big Data visualization. The cross-field between topology and Big Data will bring huge opportunities, as well as challenges, to Big Data communities. This survey aims at bringing together state-of-the-art research results on geometrical and topological methods for Big Data.
Keywords: Big Data | Industry 4.0 | Topological data analysis | Persistent homology | Dimensionality reduction | Big Data visualization
English article
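One concrete piece of this abstract, replacing the data with a lower-dimensional approximate representation via matrix factorization, can be shown in a few lines. The sketch below uses a rank-k truncated SVD on synthetic data; the rank choice and the decomposition are illustrative assumptions rather than a method taken from the survey.

```python
# Sketch of a lower-dimensional approximate representation via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50)) @ rng.normal(size=(50, 200))   # synthetic point-cloud-style data

k = 10                                   # target dimensionality (an assumption)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] * s[:k] @ Vt[:k, :]       # rank-k approximation of X
embedding = U[:, :k] * s[:k]             # k-dimensional representation, e.g. for visualization

rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
print(f"relative reconstruction error at rank {k}: {rel_err:.3f}")
```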
4 Subset selection via Pareto optimization
Publication year: 2015 - English PDF file: 9 pages - Persian doc translation: 28 pages
Selecting an optimal subset from a large set of variables is a fundamental problem in various learning tasks, such as feature selection, sparse regression, dictionary learning, and so on. In this paper, we propose the POSS (Pareto Optimization Subset Selection) approach, which employs Pareto evolutionary optimization to find a subset that is small in size and performs well. We prove that for sparse regression, POSS is able to efficiently achieve the best approximation guarantee known to date. In particular, for the Exponential Decay subclass, the approach is proven to obtain an optimal solution. Empirical studies verify the theoretical results and demonstrate the superior performance of POSS over convex optimization methods and the greedy algorithm.
Translated article
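The translated abstract describes POSS as a bi-objective evolutionary search that trades off subset size against fit quality. The following toy sketch implements that reading for sparse regression with a small Pareto archive and bit-flip mutation; it is a simplified illustration under assumptions (the size cap, mutation rate, and final selection rule), not the authors' reference implementation.

```python
# Toy sketch of Pareto-style subset selection for sparse regression:
# minimize squared error and subset size simultaneously.
import numpy as np

def mse(X, y, mask):
    """Least-squares error of regressing y on the selected columns of X."""
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    return float(np.mean((X[:, mask] @ coef - y) ** 2))

def poss(X, y, k, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    archive = [np.zeros(n, dtype=bool)]               # Pareto archive of subsets
    for _ in range(iters):
        parent = archive[rng.integers(len(archive))]
        child = parent ^ (rng.random(n) < 1.0 / n)    # bit-flip mutation
        if child.sum() > 2 * k:                       # discard oversized subsets (cap chosen for this sketch)
            continue
        c_err, c_size = mse(X, y, child), child.sum()
        # keep the child only if no archived subset is at least as good on both objectives
        if any(mse(X, y, a) <= c_err and a.sum() <= c_size for a in archive):
            continue
        # drop archived subsets that the child now dominates, then add the child
        archive = [a for a in archive
                   if not (c_err <= mse(X, y, a) and c_size <= a.sum())] + [child]
    feasible = [a for a in archive if 0 < a.sum() <= k]
    return min(feasible, key=lambda a: mse(X, y, a))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
print(np.flatnonzero(poss(X, y, k=3)))               # indices of the selected variables
```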