Category:
Recommender systems
Publication year:
2019
English title:
An efficient manifold regularized sparse non-negative matrix factorization model for large-scale recommender systems on GPUs
Persian translation of the title:
یک مدل فاکتور گیری ماتریس غیر منفی خلوت منظم شده چند ظرفیتی کارا برای سیستمهای توصیه گر در مقیاس بزرگ بر روی GPU
Source:
ScienceDirect - Elsevier - Information Sciences, 496 (2019) 464-484; doi:10.1016/j.ins.2018.07.060
Authors:
Hao Li, Keqin Li, Jiyao An, Weihua Zheng, Kenli Li
English abstract:
Article history: Received 31 January 2018; Revised 1 July 2018; Accepted 25 July 2018; Available online 27 July 2018.

Non-negative Matrix Factorization (NMF) plays an important role in many data mining applications for low-rank representation and analysis. Due to the sparsity that is caused by missing information in many high-dimension scenes, e.g., social networks or recommender systems, NMF cannot mine a more accurate representation from the explicit information. Manifold learning can incorporate the intrinsic geometry of the data, which is combined with a neighborhood with implicit information. Thus, manifold-regularized NMF (MNMF) can realize a more compact representation for the sparse data. However, MNMF suffers from (a) the forming of large-scale Laplacian matrices, (b) frequent large-scale matrix manipulation, and (c) the involved K-nearest neighbor points, which will result in the overwriting problem in parallelization. To address these issues, a single-thread-based MNMF model is proposed on two types of divergence, i.e., Euclidean distance and Kullback-Leibler (KL) divergence, which depends only on the involved feature-tuples' multiplication and summation and can avoid large-scale matrix manipulation. Furthermore, this model can remove the dependence among the feature vectors with fine-grain parallelization inherence. On that basis, a CUDA parallelization MNMF (CUMNMF) is presented on GPU computing. From the experimental results, CUMNMF achieves a 20X speedup compared with MNMF, as well as a lower time complexity and space requirement.

© 2018 Published by Elsevier Inc.
Keywords: Collaborative filtering recommender systems | Data mining | Euclidean distance and KL-divergence | GPU parallelization | Manifold regularization | Non-negative matrix factorization
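The paper's CUMNMF kernels are GPU-specific, but the manifold-regularized objective the abstract describes, a Frobenius reconstruction term plus a graph-Laplacian penalty tr(H L H^T) with L = D - A, can be sketched with ordinary multiplicative updates on the CPU. The NumPy toy below is a minimal sketch of that objective, not the authors' code: the function name `gnmf`, the parameter choices, and the chain-graph affinity matrix in the usage note are all illustrative assumptions.

```python
import numpy as np

def gnmf(V, k, A, lam=0.1, iters=200, seed=0):
    # Graph-regularized NMF via multiplicative updates (Euclidean loss):
    # minimize ||V - W H||_F^2 + lam * tr(H L H^T), where L = D - A is the
    # Laplacian of an item-item affinity matrix A over the columns of V.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3   # m x k basis factor, kept positive
    H = rng.random((k, n)) + 1e-3   # k x n coefficient factor, kept positive
    D = np.diag(A.sum(axis=1))      # degree matrix of the affinity graph
    eps = 1e-9                      # guard against division by zero
    for _ in range(iters):
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # The Laplacian gradient 2 H (D - A) splits across the update:
        # the -A part lands in the numerator, the +D part in the denominator.
        H *= (W.T @ V + lam * (H @ A)) / (W.T @ W @ H + lam * (H @ D) + eps)
    return W, H
```

Because every factor in the update ratios is non-negative, W and H stay non-negative throughout; on exactly low-rank data the reconstruction error drops rapidly within a few hundred iterations.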
Price: Free
Additional notes:
Number of comments: 0