Publication: Recognizing gaits on spatio-temporal feature domain
Issued Date
2014-01-01
ISSN
15566013
Other identifier(s)
2-s2.0-84905733517
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
IEEE Transactions on Information Forensics and Security. Vol.9, No.9 (2014), 1416-1423
Suggested Citation
Worapan Kusakunniran. Recognizing gaits on spatio-temporal feature domain. IEEE Transactions on Information Forensics and Security. Vol.9, No.9 (2014), 1416-1423. doi:10.1109/TIFS.2014.2336379 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/33709
Title
Recognizing gaits on spatio-temporal feature domain
Author(s)
Worapan Kusakunniran
Abstract
Gait has been known as an effective biometric feature for identifying a person at a distance, e.g., in video surveillance applications. Many methods have been proposed for gait recognition from various perspectives. Most of these methods rely on appearance-based analyses (e.g., of the shape contour or silhouette), which require foreground-background (FG/BG) segmentation as a preprocessing step. This process not only adds time complexity but also degrades gait-analysis performance, owing to the imperfections of existing FG/BG methods. Moreover, appearance-based gait recognition is sensitive to several variations and partial occlusions, e.g., those caused by carrying a bag or wearing different clothing. To avoid these limitations, this paper proposes a new framework that constructs a gait feature directly from a raw video. The proposed gait feature extraction is performed in the spatio-temporal domain. Space-time interest points (STIPs) are detected by considering large variations along both the spatial and temporal directions in local spatio-temporal volumes of a raw gait video sequence; STIPs are thus located where there are significant movements of the human body in both space and time. A histogram of oriented gradients (HOG) and a histogram of optical flow (HOF) are computed on a 3D video patch in the neighborhood of each detected STIP, forming its descriptor. The bag-of-words model is then applied to each set of STIP descriptors to construct a gait feature for representing and recognizing an individual gait. Compared with other existing methods in the literature, the proposed method has been shown to perform promisingly for normal walking and outstandingly for partial occlusions caused by walking while carrying a bag or in varied clothing. © 2012 IEEE.
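The descriptor-to-feature step outlined in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes STIP descriptors are already extracted as HOG/HOF vectors (162 dimensions here, matching Laptev's 72 HOG + 90 HOF bins), uses k-means as the codebook (one common choice for bag-of-words), and all function names and parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_words_gait_feature(stip_descriptors, codebook):
    """Map a variable-size set of STIP descriptors to one fixed-length feature."""
    # Assign each HOG/HOF descriptor to its nearest codeword (visual word)
    words = codebook.predict(stip_descriptors)
    # Histogram of codeword occurrences is the gait feature for this sequence
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    # L1-normalize so sequences with different STIP counts are comparable
    return hist / hist.sum()

# Toy data standing in for real descriptors (162-D: 72 HOG + 90 HOF bins)
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(500, 162))   # descriptors from training videos
query_descriptors = rng.normal(size=(200, 162))   # descriptors from one gait video

# Build the codebook from training descriptors, then encode the query sequence
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(train_descriptors)
feature = bag_of_words_gait_feature(query_descriptors, codebook)
```

Recognition would then compare these fixed-length histograms (e.g., with a nearest-neighbor or SVM classifier), which is what makes the representation independent of how many STIPs each video produces.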