Noise can speed backpropagation learning and deep bidirectional pretraining (2020)
We show that the backpropagation algorithm is a special case of the generalized Expectation-Maximization (EM) algorithm for iterative maximum likelihood estimation. We then apply the recent result that carefully chosen noise can speed the average convergence of the EM algorithm as it climbs a hill of probability. Then injecting such noise can speed the average convergence of the backpropagation algorithm for both the training and pretraining of multilayer neural networks. The beneficial noise adds to the hidden and visible neurons and related parameters. The noise also applies to regularized regression networks. This beneficial noise is precisely the noise that makes the current signal more probable. We show that such noise also tends to improve classification accuracy. The geometry of the noise-benefit region depends on the probability structure of the neurons in a given layer. The noise-benefit region in noise space lies above the noisy-EM (NEM) hyperplane for classification and involves a hypersphere for regression. Simulations demonstrate these noise benefits using MNIST digit classification. The NEM noise benefits substantially exceed those of simply adding blind noise to the neural network. We further prove that the noise speed-up applies to the deep bidirectional pretraining of neural-network bidirectional associative memories (BAMs) or their functionally equivalent restricted Boltzmann machines. We then show that learning with basic contrastive divergence also reduces to generalized EM for an energy-based network probability. The optimal noise adds to the input visible neurons of a BAM in stacked layers of trained BAMs. Global stability of generalized BAMs guarantees rapid convergence in pretraining where neural signals feed back between contiguous layers. Bipolar coding of inputs further improves pretraining performance.
Keywords: Backpropagation | neural networks | noise benefit | stochastic resonance | Expectation-Maximization algorithm | bidirectional associative memory | deep learning | regularization | pretraining | contrastive divergence
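The abstract states that the beneficial noise lies above the noisy-EM (NEM) hyperplane for classification: noise helps on average only when it makes the current output signal more probable. A minimal numpy sketch of that screening rule for a softmax output layer is shown below, assuming Gaussian candidate noise injected into the training target; the function name and noise scale are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nem_noise(output_activation, scale=0.01):
    """Sample candidate Gaussian noise and keep it only if it satisfies
    the NEM hyperplane condition n . log(a) >= 0 for softmax output a;
    otherwise fall back to zero noise (no injection)."""
    n = rng.normal(0.0, scale, size=output_activation.shape)
    if n @ np.log(output_activation + 1e-12) >= 0.0:
        return n
    return np.zeros_like(n)

# Toy softmax output activation and one-hot target for a 3-class problem.
a = np.array([0.7, 0.2, 0.1])
target = np.array([1.0, 0.0, 0.0])

# Inject screened noise into the training signal before the backprop update.
noisy_target = target + nem_noise(a)
```

Because rejected samples are replaced by zero noise, every injected noise vector satisfies the NEM condition, which is what distinguishes this scheme from simply adding blind noise to the network.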
Algorithmic sign prediction and covariate selection across eleven international stock markets (2019)
I investigate whether an expert system can be used for profitable long-term asset management. The trading strategy of the expert system needs to be based on market predictions. To this end, I generate binary predictions of the market returns by using statistical and machine-learning algorithms. The methods used include logistic regressions, regularized logistic regressions and similarity-based classification. I test the methods in a contemporary data set involving data from eleven developed markets. Both statistical and economic significance of the results are considered. As an ensemble, the results seem to indicate that there is some degree of mild predictability in the stock markets. Some of the results obtained are highly significant in the economic sense, featuring annualized excess returns of 3.1% (France), 2.9% (Netherlands) and 0.8% (United States). However, statistically significant results are seldom found. Consequently, the results do not completely invalidate the efficient-market hypothesis.
Keywords: Stock market indices | S&P 500 | Sign prediction | Efficient-market hypothesis | Regularized regression | Similarity-based classification
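One of the methods the abstract names is regularized logistic regression for binary sign prediction of market returns. The sketch below is a minimal numpy illustration of that idea on synthetic data: an L2 (ridge) penalty on the logistic log-loss, fit by gradient descent. The covariates, labels, and hyperparameters here are invented for illustration and are not the paper's actual data or tuning.

```python
import numpy as np

def fit_logistic(X, y, lam=1.0, lr=0.1, iters=500):
    """Ridge-regularized logistic regression fit by gradient descent.
    y holds binary labels (1 = positive next-period return, 0 = negative)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted up-probabilities
        grad = X.T @ (p - y) / len(y) + lam * w     # log-loss gradient + L2 penalty
        w -= lr * grad
    return w

def predict_sign(X, w):
    """Binary market-direction prediction: 1 = up, 0 = down."""
    return (X @ w > 0).astype(int)

# Synthetic covariates standing in for lagged returns and similar predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
# The true sign depends (noisily) on the first covariate only.
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

w = fit_logistic(X, y, lam=0.1)
accuracy = np.mean(predict_sign(X, w) == y)
```

The L2 penalty `lam * w` shrinks the coefficients toward zero, which is the regularization mechanism that guards against overfitting when many candidate covariates are screened, as in the paper's covariate-selection setting.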