Publication: Robust Gait Recognition under Unconstrained Environments Using Hybrid Descriptions
Issued Date
2017-12-19
Other identifier(s)
2-s2.0-85048349972
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications. Vol.2017-December, (2017), 1-7
Suggested Citation
Lingxiang Yao, Worapan Kusakunniran, Qiang Wu, Jian Zhang, Zhenmin Tang. Robust Gait Recognition under Unconstrained Environments Using Hybrid Descriptions. DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications. Vol.2017-December, (2017), 1-7. doi:10.1109/DICTA.2017.8227486. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/42274
Abstract
© 2017 IEEE. Gait is one of the key biometric features that has been widely applied to human identification. Appearance-based features and motion-based features are the two main representations used in gait recognition. However, appearance-based features are sensitive to body shape changes, and silhouette extraction from real-world images and videos also remains a challenge. As for motion features, because the underlying models are difficult to extract from gait sequences, the localization of human joints lacks reliability and robustness. This paper proposes a new approach that uses Two-Point Gait (TPG) as the motion feature to remedy the deficiencies of the appearance feature based on the Gait Energy Image (GEI), in order to increase the robustness of gait recognition in unconstrained environments with view changes and clothing changes. A further contribution of this paper is that it is the first application of TPG to view-change and clothing-change problems since TPG was proposed. Extensive experiments show that the proposed method is more invariant to view and clothing changes, and significantly improves the robustness of gait recognition.
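The Gait Energy Image mentioned in the abstract is commonly computed as the pixel-wise average of size-normalized, centre-aligned binary silhouettes over a gait cycle. The sketch below illustrates that standard computation only; it is not taken from the paper, and the alignment and normalization steps that precede averaging are assumed to have been done already.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a Gait Energy Image (GEI): the pixel-wise mean of
    pre-aligned binary silhouette frames spanning one gait cycle.
    Pixels that are body in every frame tend toward 1.0; pixels that
    are body in only some frames take intermediate values, encoding
    both static shape and motion frequency."""
    # Stack frames along a new time axis, then average over time.
    stack = np.stack([np.asarray(s, dtype=np.float64) for s in silhouettes])
    return stack.mean(axis=0)

# Toy example: three tiny 4x3 binary "silhouette" frames.
frames = [
    np.array([[0, 1, 0], [1, 1, 1], [1, 1, 1], [0, 1, 0]]),
    np.array([[0, 1, 0], [1, 1, 0], [1, 1, 1], [1, 0, 1]]),
    np.array([[0, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]),
]
gei = gait_energy_image(frames)
# The torso column stays at 1.0, while limb pixels fall between 0 and 1.
```

Because the GEI collapses a whole cycle into one image, it is compact and tolerant of frame-level silhouette noise, but, as the abstract notes, it remains sensitive to body shape changes such as clothing or carried objects, which is what the TPG motion feature is introduced to compensate for.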