Modelling perceptions on the evaluation of video summarization
ScienceDirect - Elsevier - Expert Systems With Applications, 131 (2019) 254-265. doi:10.1016/j.eswa.2019.04.065
Kalyf Abdalla a,b, Igor Menezes c, Luciano Oliveira a,∗
Hours of video are uploaded to streaming platforms every minute, with recommender systems suggesting popular and relevant videos that can help users save time in the searching process. Recommender systems regularly require video summarization as an expert system to automatically identify suitable video entities and events. Since there is no well-established methodology to evaluate the relevance of summarized videos, some studies have made use of user annotations to gather evidence about the effectiveness of summarization methods. Aimed at modelling the users' perceptions, which ultimately form the basis for testing video summarization systems, this paper proposes: (i) a guideline to collect unrestricted user annotations; (ii) a novel metric, called compression level of user annotation (CLUSA), to gauge the performance of video summarization methods; and (iii) a study on the quality of annotated video summaries collected from different assessment scales. These contributions allow video summarization methods to be benchmarked without constraints, even when user annotations are collected on a different assessment scale for each method. Our experiments showed that CLUSA is less susceptible to unbalanced compression data sets than other metrics, and hence achieves higher reliability estimates. CLUSA also allows results from different video summarization approaches to be compared.
Keywords: Video summarization | Subjective evaluation | Evaluation metric