Title:
Deep network compression based on partial least squares
ScienceDirect - Elsevier - Neurocomputing, 406 (2020) 234-243. doi:10.1016/j.neucom.2020.03.108
Artur Jordao, Fernando Yamada, William Robson Schwartz
Modern visual pattern recognition methods are based on convolutional networks since they are able to learn complex patterns directly from the data. However, convolutional networks are computationally expensive in terms of floating point operations (FLOPs), energy consumption and memory requirements, which hinders their deployment on low-power and resource-constrained systems. To address this problem, many works have proposed pruning strategies, which remove neurons (i.e., filters) from convolutional networks to reduce their computational cost. Despite achieving remarkable results, existing pruning approaches are ineffective since the accuracy of the network is degraded. This loss in accuracy is an effect of the criterion used to remove filters, as it may result in the removal of filters with high influence on the classification ability of the network. Motivated by this, we propose an approach that eliminates filters based on the relationship of their outputs with the class label, in a low-dimensional space. This relationship is captured using Partial Least Squares (PLS), a discriminative feature projection method. Due to the nature of PLS, our method focuses on keeping discriminative filters. As a consequence, we are able to remove up to 60% of FLOPs while improving network accuracy. We show that our criterion is superior to existing pruning criteria, which include state-of-the-art feature selection techniques and handcrafted approaches. Compared to state-of-the-art pruning strategies, our method achieves the best tradeoff between drop/improvement in accuracy and FLOPs reduction.
Keywords: Pruning convolutional networks | Convolutional networks compression | Partial least squares