Download and view articles related to Sparse coding :: Page 1
Download the best ISI articles with Persian translation


Search results - Sparse coding

Number of articles found: 4
No.  Title  Type
1  Biologically plausible deep learning—But how far can we go with shallow networks? (2019)
Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (Principal/Independent Component Analysis or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks – fixed, localized, random & random Gabor filters in the hidden layer – with spiking leaky integrate-and-fire neurons and spike timing dependent plasticity to train the readout layer. These spiking models achieve >98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of data sets other than MNIST for testing the performance of future models of biologically plausible deep learning.
Keywords: Deep learning | Local learning rules | Random projections | Unsupervised feature learning | Spiking networks | MNIST
English article
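To make the shallow recipe above concrete, here is a minimal sketch of its simplest baseline: a fixed random hidden layer with a readout trained by a supervised, local delta rule. Synthetic arrays stand in for MNIST, and the layer sizes and learning rate are illustrative assumptions rather than the paper's settings; the Gabor-filter and spiking variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_classes = 784, 500, 10       # illustrative sizes, not the paper's
X = rng.standard_normal((1000, n_in))          # placeholder for MNIST images
y = rng.integers(0, n_classes, size=1000)      # placeholder labels

# Fixed random projection: the hidden weights are never trained.
W_hidden = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)
H = np.maximum(X @ W_hidden, 0.0)              # rate-neuron activations (ReLU)

# Supervised, local delta rule on the readout: each weight update depends
# only on its pre-synaptic activity and the post-synaptic error.
T = np.eye(n_classes)[y]                       # one-hot targets
W_out = np.zeros((n_hidden, n_classes))
lr = 0.01
for _ in range(20):
    err = T - H @ W_out                        # post-synaptic error signal
    W_out += lr * (H.T @ err) / len(X)         # local update: pre-activity * error

pred = np.argmax(H @ W_out, axis=1)
print("training accuracy:", (pred == y).mean())
```

Because each readout weight's update uses only its own pre-synaptic activity and the post-synaptic error, the rule is local in the sense the abstract requires.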
2  Sparse representation over discriminative dictionary for stereo matching (2017)
We propose a novel data-driven matching cost for dense correspondence based on sparse theory. The ability of sparse coding to selectively express the sources of influence on stereo images allows us to learn a discriminative dictionary. The dictionary learning process is incorporated with discriminative learning and weighted sparse coding to enhance the discrimination of sparse coefficients and weaken the influence of radiometric changes. Then, the sparse representations over the learned discriminative dictionary are utilized to measure the dissimilarity between image patches. Semi-global cost aggregation and postprocessing steps are finally enforced to further improve the matching accuracy. Extensive experimental comparisons demonstrate that the proposed matching cost outperforms traditional matching costs, that the discriminative dictionary learning model is more suitable for stereo matching than previous dictionary learning models, and that the proposed stereo method ranked third on the Middlebury benchmark v3 at quarter resolution as of submission while achieving the best accuracy on 30 classic stereo images.
Keywords: Computer vision | Stereo matching | Data-driven | Sparse coding | Dictionary learning
English article
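The core idea above, measuring patch dissimilarity by comparing sparse codes over a learned dictionary, can be sketched with off-the-shelf tools. The snippet below uses scikit-learn's plain dictionary learning and OMP coding as a stand-in; the paper's discriminative and weighted extensions and its semi-global aggregation are not reproduced, and the patch and dictionary sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 49))       # stand-in 7x7 patches, flattened

# Learn an overcomplete dictionary from training patches (plain, not
# discriminative: the paper's weighted/discriminative terms are omitted).
dl = DictionaryLearning(n_components=96, max_iter=20, random_state=0)
dl.fit(patches)

coder = SparseCoder(dictionary=dl.components_, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)

def matching_cost(p, q):
    """Patch dissimilarity = L1 distance between their sparse codes."""
    a, b = coder.transform(np.vstack([p, q]))
    return np.abs(a - b).sum()

left = patches[0]
right_good = left + 0.05 * rng.standard_normal(49)   # nearly matching patch
right_bad = patches[1]                               # unrelated patch
print(matching_cost(left, right_good), "<", matching_cost(left, right_bad))
```

A well-matching patch pair yields similar sparse codes and hence a low cost, which is what makes the codes usable as a stereo matching cost.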
3  Local and global regularized sparse coding for data representation (2016)
Recently, sparse coding has been widely adopted for data representation in real-world applications. In order to consider the geometric structure of data, we propose a novel method, local and global regularized sparse coding (LGSC), for data representation. LGSC not only models the global geometric structure by a global regression regularizer, but also takes into account the manifold structure using a local regression regularizer. Compared with traditional sparse coding methods, the proposed method can preserve both global and local geometric structures of the original high-dimensional data in a new representation space. Experimental results on benchmark datasets show that the proposed method can improve the performance of clustering.
Keywords: Sparse coding | Data representation | Regularizer | Regression | Clustering
English article
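The abstract does not spell out LGSC's regularizers, so as a loose illustration only, the sketch below solves a related geometry-aware problem: sparse coding with a k-NN graph Laplacian playing the role of a local-structure term (the global regression term is omitted). It minimizes 0.5*||X - D*S||^2_F + alpha*||S||_1 + beta*tr(S L S^T) by ISTA, with all sizes and weights chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_atoms, n_samp = 30, 50, 100
X = rng.standard_normal((n_feat, n_samp))      # columns are data points
D = rng.standard_normal((n_feat, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

# Laplacian of a k-NN similarity graph over the samples (local geometry).
k = 5
d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
W = np.zeros((n_samp, n_samp))
for i in range(n_samp):
    for j in np.argsort(d2[i])[1:k + 1]:
        W[i, j] = W[j, i] = np.exp(-d2[i, j])
L = np.diag(W.sum(axis=1)) - W

alpha, beta = 0.1, 0.01                        # arbitrary sparsity/locality weights
step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 2 * beta * np.linalg.norm(L, 2))
S = np.zeros((n_atoms, n_samp))
for _ in range(200):                           # ISTA: gradient step + soft threshold
    G = S - step * (D.T @ (D @ S - X) + 2 * beta * S @ L)
    S = np.sign(G) * np.maximum(np.abs(G) - step * alpha, 0.0)

print("reconstruction error:", np.linalg.norm(X - D @ S))
print("avg nonzeros per sample:", (np.abs(S) > 1e-8).sum(axis=0).mean())
```

The Laplacian term penalizes codes that differ between neighboring samples, so nearby points in the input space receive similar sparse representations.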
4  Cloud K-SVD: A Collaborative Dictionary Learning Algorithm for Big, Distributed Data (2016)
This paper studies the problem of data-adaptive representations for big, distributed data. It is assumed that a number of geographically-distributed, interconnected sites have massive local data and they are interested in collaboratively learning a low-dimensional geometric structure underlying these data. In contrast with previous works on subspace-based data representations, this paper focuses on the geometric structure of a union of subspaces (UoS). In this regard, it proposes a distributed algorithm—termed cloud K-SVD—for collaborative learning of a UoS structure underlying distributed data of interest. The goal of cloud K-SVD is to learn a common overcomplete dictionary at each individual site such that every sample in the distributed data can be represented through a small number of atoms of the learned dictionary. Cloud K-SVD accomplishes this goal without requiring exchange of individual samples between sites. This makes it suitable for applications where sharing of raw data is discouraged due to either privacy concerns or large volumes of data. This paper also provides an analysis of cloud K-SVD that gives insights into its properties as well as deviations of the dictionaries learned at individual sites from a centralized solution in terms of different measures of local/global data and topology of interconnections. Finally, the paper numerically illustrates the efficacy of cloud K-SVD on real and synthetic distributed data.
Index Terms: Consensus averaging | dictionary learning | distributed data | K-SVD | power method | sparse coding
English article
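Per the index terms above, a key primitive in cloud K-SVD is a power method whose matrix-vector products are computed locally at each site and then agreed upon by consensus averaging over the interconnection topology. The sketch below illustrates just that primitive on an assumed 4-site ring with a hand-picked doubly-stochastic mixing matrix; the full distributed atom update of cloud K-SVD is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, dim = 4, 8

# Each site holds a local correlation matrix A_i; the target is the top
# eigenvector of their sum, computed without pooling any raw data.
A = []
for _ in range(n_sites):
    M = rng.standard_normal((dim, 20))         # site-local data (never shared)
    A.append(M @ M.T)

# Assumed doubly-stochastic mixing matrix for a 4-site ring topology.
Wmix = np.array([[0.50, 0.25, 0.00, 0.25],
                 [0.25, 0.50, 0.25, 0.00],
                 [0.00, 0.25, 0.50, 0.25],
                 [0.25, 0.00, 0.25, 0.50]])

v = [rng.standard_normal(dim) for _ in range(n_sites)]   # per-site estimates
for _ in range(50):                                      # distributed power method
    z = np.stack([A[i] @ v[i] for i in range(n_sites)])  # local mat-vec products
    for _ in range(20):                                  # consensus-averaging rounds
        z = Wmix @ z                                     # each site mixes with neighbors
    v = [z[i] / np.linalg.norm(z[i]) for i in range(n_sites)]

exact = np.linalg.eigh(sum(A))[1][:, -1]                 # centralized reference
print("site-0 agreement with centralized eigenvector:", abs(v[0] @ exact))
```

Only the short vectors z travel between neighbors, never the raw samples, which is what makes the scheme attractive when privacy or data volume rules out centralization.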