Category:
Neural networks - neuron-networks
Publication year:
2020
English title:
Sparse low rank factorization for deep neural network compression
Persian translation of the title:
Sparse low-rank factorization for deep neural network compression
Source:
ScienceDirect - Elsevier - Neurocomputing 398 (2020) 185-196. doi:10.1016/j.neucom.2020.02.035
Authors:
Sridhar Swaminathan, Deepak Garg, Rajkumar Kannan, Frederic Andres
English abstract:
Storing and processing millions of parameters in deep neural networks is highly challenging when deploying a model in real-time applications on resource-constrained devices. The popular low-rank approximation approach, singular value decomposition (SVD), is generally applied to the weights of fully connected layers, where compact storage is achieved by keeping only the most prominent components of the decomposed matrices. Years of research on pruning-based neural network model compression revealed that the relative importance or contribution of neurons in a layer varies considerably from one neuron to another. Recently, synapse pruning has also demonstrated that sparse matrices in the network architecture achieve lower space and faster computation during inference. We extend these arguments by proposing that the low-rank decomposition of weight matrices should also consider the significance of both the input and the output neurons of a layer. Combining the ideas of sparsity and the unequal contributions of neurons towards the target, we propose the sparse low rank (SLR) method, which sparsifies SVD matrices to achieve a better compression rate by keeping a lower rank for unimportant neurons. We demonstrate the effectiveness of our method in compressing well-known convolutional neural network based image recognition frameworks trained on popular datasets. Experimental results show that the proposed SLR approach outperforms vanilla truncated SVD and a pruning baseline, achieving better compression rates with minimal or no loss in accuracy. Code for the proposed approach is available at https://github.com/sridarah/slr .
Keywords: Low-rank approximation | Singular value decomposition | Sparse matrix | Deep neural networks | Convolutional neural networks
Price: Free
Additional notes:
Number of comments: 0