Category:
Neural networks - neuron-networks
Year of publication:
2020
English title of the article:
Noise can speed backpropagation learning and deep bidirectional pretraining
Persian translation of the article title:
نویز می‌تواند یادگیری پس‌انتشار و پیش‌آموزش عمیق دوطرفه را سرعت بخشد
Source:
ScienceDirect (Elsevier), Neural Networks, journal pre-proof. doi: 10.1016/j.neunet.2020.04.004
Authors:
Bart Kosko, Kartik Audhkhasi, Osonde Osoba
English abstract:
We show that the backpropagation algorithm is a special case of the generalized Expectation-Maximization (EM) algorithm for iterative maximum likelihood estimation. We then apply the recent result that carefully chosen noise can speed the average convergence of the EM algorithm as it climbs a hill of probability. Then injecting such noise can speed the average convergence of the backpropagation algorithm for both the training and pretraining of multilayer neural networks. The beneficial noise adds to the hidden and visible neurons and related parameters. The noise also applies to regularized regression networks. This beneficial noise is precisely the noise that makes the current signal more probable. We show that such noise also tends to improve classification accuracy. The geometry of the noise-benefit region depends on the probability structure of the neurons in a given layer. The noise-benefit region in noise space lies above the noisy-EM (NEM) hyperplane for classification and involves a hypersphere for regression. Simulations demonstrate these noise benefits using MNIST digit classification. The NEM noise benefits substantially exceed those of simply adding blind noise to the neural network. We further prove that the noise speed-up applies to the deep bidirectional pretraining of neural-network bidirectional associative memories (BAMs) or their functionally equivalent restricted Boltzmann machines. We then show that learning with basic contrastive divergence also reduces to generalized EM for an energy-based network probability. The optimal noise adds to the input visible neurons of a BAM in stacked layers of trained BAMs. Global stability of generalized BAMs guarantees rapid convergence in pretraining where neural signals feed back between contiguous layers. Bipolar coding of inputs further improves pretraining performance.
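The abstract's key operational idea for classification is the NEM hyperplane test: additive noise n injected at the softmax output neurons helps on average only when n · log(a) ≥ 0, where a is the vector of output activations. The following is a minimal NumPy sketch of that screening step inside ordinary backpropagation, using small synthetic data as a stand-in for MNIST; the network size, the annealed noise scale sigma/epoch, and all variable names (W1, W2, T_noisy, keep) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic 3-class problem (hypothetical data; stands in for MNIST).
n, d, k, h = 600, 10, 3, 32
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, k))
y = np.argmax(X @ W_true + 0.5 * rng.normal(size=(n, k)), axis=1)
T = np.eye(k)[y]                      # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One hidden tanh layer; sizes and learning rate are illustrative choices.
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, k)); b2 = np.zeros(k)
lr, sigma = 0.5, 0.1

for epoch in range(1, 101):
    H = np.tanh(X @ W1 + b1)          # hidden-layer activations
    A = softmax(H @ W2 + b2)          # output activations

    # NEM screening for a softmax/cross-entropy output layer: keep a noise
    # sample n only if it lies above the NEM hyperplane n . log(a) >= 0,
    # and anneal its scale like sigma/epoch (an assumed decay schedule).
    noise = rng.normal(scale=sigma / epoch, size=A.shape)
    keep = (noise * np.log(A + 1e-12)).sum(axis=1) >= 0.0
    T_noisy = T + np.where(keep[:, None], noise, 0.0)

    # Standard backprop gradients, computed against the NEM-noisy targets.
    dZ2 = (A - T_noisy) / n
    dW2 = H.T @ dZ2; db2 = dZ2.sum(axis=0)
    dH = dZ2 @ W2.T * (1 - H**2)      # tanh derivative
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = (np.argmax(A, axis=1) == y).mean()
print(f"training accuracy: {acc:.3f}")
```

Noise samples that fail the hyperplane test are simply zeroed out, which is what distinguishes NEM noise from the blind additive noise the abstract compares against. For regression with Gaussian output neurons, the corresponding test would instead keep only noise inside the hypersphere ||t - a + n|| ≤ ||t - a||, i.e., noise that moves the noisy target closer to the network output and so makes the current signal more probable.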
Keywords: Backpropagation | neural networks | noise benefit | stochastic resonance | Expectation-Maximization algorithm | bidirectional associative memory | deep learning | regularization | pretraining | contrastive divergence
Price: Free
Additional notes:
Number of comments: 0