Extreme minimal learning machine: Ridge regression with distance-based basis (2019)
The extreme learning machine (ELM) and the minimal learning machine (MLM) are nonlinear and scalable machine learning techniques with a randomly generated basis. Both techniques start with a step in which a matrix of weights for the linear combination of the basis is recovered. In the MLM, the feature mapping in this step corresponds to distance calculations between the training data and a set of reference points, whereas in the ELM, a transformation using a radial or sigmoidal activation function is commonly used. Computation of the model output, for prediction or classification purposes, is straightforward with the ELM after the first step. In the original MLM, one needs to solve an additional multilateration problem for the estimation of the distance-regression based output. A natural combination of these two techniques is proposed and experimentally evaluated here: to use the distance-based basis characteristic of the MLM within the learning framework of the regularized ELM. In other words, we conduct ridge regression using a distance-based basis. The experimental results characterize the basic features of the proposed technique and, surprisingly, indicate that overlearning with the distance-based basis is in practice avoided in classification problems. This makes model selection for the proposed method trivial, at the expense of computational cost.
Keywords: Randomized learning machines | Extreme learning machine | Minimal learning machine | Extreme minimal learning machine
The performance of ELM based ridge regression via the regularization parameters (2019)
The extreme learning machine (ELM), a single-layer feedforward neural network, provides extremely fast training and good generalization performance. The ELM, however, has a drawback: it is known to be sensitive to ill-conditioned data. To overcome the ill-conditioning problem in the ELM, ELM based on ridge regression (RR-ELM) was proposed. Since RR-ELM is a biased method, ELM based on almost unbiased ridge regression (AUR-ELM) was subsequently proposed to reduce the bias to a certain extent. RR-ELM and AUR-ELM, introduced for the case of multicollinearity, depend on the regularization parameter, which affects the performance of both methods, and there is no consensus on how to select it. Although various methods exist in linear regression for selecting the regularization parameter, only one method, based on minimizing the mean squared error, had been used in RR-ELM. In this study, the AIC, BIC, and CV criteria are proposed in the context of RR-ELM and AUR-ELM as alternative methods for selecting the regularization parameter. An experimental study was conducted on eight data sets that are widely known and used in machine learning studies. The analyses are relevant to regression problems, which are among the most important fields of expert systems and machine learning. The results demonstrate that the method used to select the regularization parameter significantly affects both the generalization and, in particular, the stability performance of RR-ELM and AUR-ELM when compared to the ELM.
Keywords: Extreme learning machine | Ridge regression | Almost unbiased ridge regression | Regularized extreme learning machine | Model selection
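The abstract's core pipeline, an ELM hidden layer with random weights, a ridge-regularized output solve, and a data-driven choice of the regularization parameter, can be sketched as below using cross-validation, one of the selection criteria the study proposes. The hidden-layer size, the sigmoid activation, the toy data, and the candidate lambda grid are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (illustrative only).
X = rng.uniform(-3, 3, size=(300, 2))
y = X[:, 0] ** 2 - X[:, 1] + 0.1 * rng.standard_normal(300)

# ELM hidden layer: random input weights and biases, sigmoid activation.
n_hidden = 50
Win = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ Win + b)))

def fit_ridge(H, y, lam):
    # RR-ELM output weights: solve (H^T H + lam I) W = H^T y.
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

def cv_mse(lam, k=5):
    # k-fold cross-validation error for a given regularization parameter.
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        tr = np.setdiff1d(idx, f)
        W = fit_ridge(hidden(X[tr]), y[tr], lam)
        errs.append(np.mean((y[f] - hidden(X[f]) @ W) ** 2))
    return np.mean(errs)

# Select the regularization parameter from a log-spaced candidate grid.
lams = [10.0 ** e for e in range(-6, 3)]
best_lam = min(lams, key=cv_mse)
W = fit_ridge(hidden(X), y, best_lam)
print("selected lambda:", best_lam)
```

AIC- or BIC-based selection would follow the same pattern, replacing `cv_mse` with the corresponding information criterion evaluated on the training fit.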