SUV: Summarizing Unconstrained Videos via Saliency Detection

Project Details


This project aims to develop novel methods for robustly detecting and analysing human actions and gestures using wearable cameras.

Video-based action detection, i.e., locating the start and end times of actions of interest, plays an important role in video surveillance, monitoring, anomaly detection, human-computer interaction and many other computer-vision applications.

Traditionally, action detection in computer vision is based on videos collected from one or more fixed cameras: motion features are extracted from the video and fed to a trained classifier to determine the underlying action class. However, fixed-camera videos have two major limitations. First, fixed cameras can only cover specific locations in a limited area. Second, when multiple people are present, it is difficult to identify the person of interest and their actions, especially under mutual occlusion in a crowded scene.
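The fixed-camera pipeline described above (extract motion features, then classify and localize the action in time) can be sketched as follows. This is a minimal illustration, not the project's method: the frame-differencing features, the energy threshold, and the function names are all hypothetical stand-ins for real motion descriptors and a trained classifier.

```python
import numpy as np

def motion_features(frames, n_bins=8):
    """Toy motion descriptor: a normalized histogram of absolute
    inter-frame pixel differences (stand-in for real motion features
    that would be fed to a trained classifier)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    hist, _ = np.histogram(diffs, bins=n_bins, range=(0, 255), density=True)
    return hist

def detect_action(frames, threshold=1.0):
    """Toy temporal localization: mark frames whose mean motion energy
    exceeds `threshold` (a hypothetical value) and return the
    (start, end) frame indices of the active span, or None."""
    energy = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    active_idx = np.flatnonzero(energy > threshold)
    if active_idx.size == 0:
        return None
    return int(active_idx[0]), int(active_idx[-1] + 1)

# Synthetic example: a static 32x32 scene with a flickering block
# between frames 20 and 30 standing in for an "action".
frames = np.zeros((50, 32, 32))
for t in range(20, 30):
    if t % 2 == 0:
        frames[t, :8, :8] = 255

span = detect_action(frames)
feat = motion_features(frames)
```

A real system would replace the frame-differencing energy with learned features (e.g., optical-flow or deep descriptors) and the threshold with a classifier's per-window score, but the detect-then-localize structure is the same.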
Effective start/end date: 1/11/18 to 31/10/20

