CMU-CS-09-161
Computer Science Department
School of Computer Science, Carnegie Mellon University



MoSIFT: Recognizing Human Actions
in Surveillance Videos

Ming-yu Chen, Alex Hauptmann

September 2009


The goal of this paper is to build robust human action recognition for real-world surveillance videos. Local spatio-temporal features around interest points provide compact but descriptive representations for video analysis and motion recognition. Current approaches tend to extend spatial descriptors by adding a temporal component to the appearance descriptor, which only implicitly captures motion information. We propose an algorithm called MoSIFT, which detects interest points and encodes not only their local appearance but also explicitly models their local motion. The idea is to detect distinctive local features through both local appearance and motion. We construct MoSIFT feature descriptors in the spirit of the well-known SIFT descriptors, using grid aggregation to be robust to small deformations. We also introduce a bigram model that captures correlations between local features, encoding the more global structure of actions. The method advances the state-of-the-art result on the KTH dataset to an accuracy of 95.8%. We also applied our approach to 100 hours of surveillance data as part of the TRECVID Event Detection task, with very promising results on recognizing human actions in real-world surveillance videos.
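The core idea above is that a MoSIFT interest point must be distinctive in both appearance and motion: candidate points found by an appearance detector are kept only if the optical flow around them shows sufficient movement. A minimal NumPy sketch of that filtering step follows; the function name, threshold, and data layout are illustrative assumptions of mine, not code from the paper.

```python
import numpy as np

def filter_moving_keypoints(keypoints, flow, min_motion=1.0):
    """Keep only appearance keypoints with sufficient local motion.

    keypoints : list of (row, col) integer pixel coordinates
                (e.g. from a SIFT-style appearance detector)
    flow      : H x W x 2 optical-flow field, (dx, dy) per pixel
    min_motion: minimum flow magnitude for a point to count as moving
                (hypothetical threshold, not a value from the paper)
    """
    mags = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude
    return [(r, c) for r, c in keypoints if mags[r, c] >= min_motion]

# Toy example: a 4x4 flow field where only the top-left pixel moves.
flow = np.zeros((4, 4, 2))
flow[0, 0] = [2.0, 0.0]            # strong horizontal motion at (0, 0)
candidates = [(0, 0), (3, 3)]      # two appearance-detected candidates
print(filter_moving_keypoints(candidates, flow))  # → [(0, 0)]
```

In the full pipeline described by the abstract, each surviving point would then receive a SIFT-like grid-aggregated appearance descriptor plus an analogous descriptor computed over the flow field.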

16 pages

