Download and view articles related to Egocentric vision :: Page 1


Search results - Egocentric vision

Number of articles found: 3
No. | Title | Type
1 Predicting the future from first person (egocentric) vision: A survey
Predicting the future from a first-person (egocentric) viewpoint: a survey - 2021
Egocentric videos can bring a lot of information about how humans perceive the world and interact with the environment, which can be beneficial for the analysis of human behaviour. Research in egocentric video analysis is developing rapidly thanks to the increasing availability of wearable devices and the opportunities offered by new large-scale egocentric datasets. As computer vision techniques continue to develop at an increasing pace, tasks related to predicting the future are starting to evolve from the need to understand the present. Predicting future human activities, trajectories and interactions with objects is crucial in applications such as human-robot interaction, assistive wearable technologies for both industrial and daily-living scenarios, entertainment, and virtual or augmented reality. This survey summarizes the evolution of studies on future prediction from egocentric vision, providing an overview of applications, devices, existing problems, commonly used datasets, models and input modalities. Our analysis highlights that methods for future prediction from egocentric vision can have a significant impact in a range of applications, and that further research efforts should be devoted to the standardization of tasks and the proposal of datasets considering real-world scenarios, such as those with an industrial vocation.
Keywords: First person vision | Egocentric vision | Future prediction | Anticipation
English article
2 Improved scene identification and object detection on egocentric vision of daily activities
Improved scene identification and object detection in the egocentric view of daily activities - 2017
Article history: Received 16 December 2015; Revised 26 September 2016; Accepted 19 October 2016; Available online 21 October 2016.
This work investigates the relationship between scenes and associated objects in daily activities under egocentric vision constraints. Daily activities are performed in prototypical scenes that share a lot of visual appearance independent of where or by whom the video was recorded. The intrinsic characteristics of egocentric vision suggest that the location where the activity is conducted remains consistent throughout frames. This paper shows that egocentric scene identification is improved by taking the temporal context into consideration. Moreover, since most objects are typically associated with particular types of scenes, we show that a generic object detection method can also be improved by re-scoring the results of the object detection method according to the scene content. We first show the case where the scene identity is explicitly predicted to improve object detection, and then we show a framework using Long Short-Term Memory (LSTM) where no labeling of the scene type is needed. We performed experiments on the Activities of Daily Living (ADL) public dataset (Pirsiavash and Ramanan, 2012), which is a standard benchmark for egocentric vision. © 2016 Elsevier Inc. All rights reserved.
Keywords: Scene classification | Object detection | Scene understanding | First camera person vision
English article
3 Video registration in egocentric vision under day and night illumination changes
Video registration in egocentric vision under day and night illumination changes - 2017
Article history: Received 7 December 2015; Revised 25 July 2016; Accepted 19 September 2016; Available online 21 September 2016.
With the spread of wearable devices and head-mounted cameras, a wide range of applications requiring precise user localization is now possible. In this paper we propose to treat the problem of obtaining the user's position with respect to a known environment as a video registration problem. Video registration, i.e. the task of aligning an input video sequence to a pre-built 3D model, relies on a process of matching local keypoints extracted from the query sequence to a 3D point cloud. The overall registration performance is strictly tied to the actual quality of this 2D-3D matching, and can degrade under environmental conditions such as steep changes in lighting, like those between day and night. To effectively register an egocentric video sequence under these conditions, we propose to tackle the source of the problem: the matching process. To overcome the shortcomings of standard matching techniques, we introduce a novel embedding space that allows us to obtain robust matches by jointly taking into account local descriptors, their spatial arrangement and their temporal robustness. The proposal is evaluated using unconstrained egocentric video sequences, both in terms of matching quality and resulting registration performance, using different 3D models of historical landmarks. The results show that the proposed method can outperform state-of-the-art registration algorithms, in particular when dealing with the challenges of night and day sequences. © 2016 Elsevier Inc. All rights reserved.
Keywords: Video registration | Egocentric vision | Visual matching
English article