Recognizing Gaits Across Walking and Running Speeds
Issued Date
2022-08-01
Resource Type
Article
ISSN
1551-6857
eISSN
1551-6865
DOI
10.1145/3488715
Scopus ID
2-s2.0-85127448325
Journal Title
ACM Transactions on Multimedia Computing, Communications and Applications
Volume
18
Issue
3
Rights Holder(s)
SCOPUS
Bibliographic Citation
ACM Transactions on Multimedia Computing, Communications and Applications Vol.18 No.3 (2022)
Suggested Citation
Yao L., Kusakunniran W., Wu Q., Xu J., Zhang J. Recognizing Gaits Across Walking and Running Speeds. ACM Transactions on Multimedia Computing, Communications and Applications Vol.18 No.3 (2022). doi:10.1145/3488715 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/87132
Author(s)
Yao L., Kusakunniran W., Wu Q., Xu J., Zhang J.
Abstract
For decades, very few methods have been proposed for cross-mode (i.e., walking vs. running) gait recognition, so it remains largely unexplored how to recognize persons by the way they both walk and run. Existing cross-mode methods handle the walking-versus-running problem in one of two ways: either by exploring a generic mapping relation between the walking and running modes, or by extracting gait features that are invariant or less vulnerable to the changes across these two modes. However, for the first approach, a mapping relation that fits one person may not be applicable to another; there is no generic mapping relation, because walking and running are two highly self-related motions. The second approach pays little attention to the disparity between the walking and running modes, since mode labels are not involved in its feature learning process.

Distinct from these existing cross-mode methods, our method uses mode labels in the feature learning process and hybridizes a mode-invariant gait descriptor for cross-mode gait recognition, thereby handling the walking-versus-running problem. This article further investigates the disparity between walking and running: running differs from walking not only in speed but also, more significantly, in prominent changes of gesture and motion. Based on these rationales, the proposed method pays particular attention to the differences between the walking and running modes, and a robust gait descriptor is developed that hybridizes mode-invariant spatial and temporal features. Two multi-task learning-based networks are proposed to explore these mode-invariant features: spatial features describe the body parts unaffected or less affected by mode changes, while temporal features depict each person's intrinsic motion relations. Mode labels are also adopted in the training phase to guide the networks toward the disparity across the walking and running modes.
In addition, experiments on the OU-ISIR Treadmill Dataset A confirm the effectiveness and feasibility of the proposed method, which achieves a state-of-the-art result on this dataset.
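The abstract does not spell out the network architectures, but the core idea it describes — a shared feature extractor trained jointly on an identity task and a walk/run mode task, so that the mode label steers feature learning — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual design: the dimensions, the tiny pure-Python "network" (one shared layer plus two linear heads), and the loss weighting `lam` are all hypothetical placeholders for the real deep networks.

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    """y = Wx + b for a weight matrix W (rows = outputs) and bias b."""
    return [sum(w * xi for w, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(probs, label):
    return -math.log(probs[label] + 1e-12)

# Hypothetical sizes: 8-D gait descriptor, 6 hidden units,
# 4 subject identities, 2 modes (walk vs. run).
D, H, N_ID, N_MODE = 8, 6, 4, 2
rand_mat = lambda r, c: [[random.uniform(-0.1, 0.1) for _ in range(c)]
                         for _ in range(r)]

W_shared, b_shared = rand_mat(H, D), [0.0] * H        # shared backbone
W_id, b_id = rand_mat(N_ID, H), [0.0] * N_ID          # identity head
W_mode, b_mode = rand_mat(N_MODE, H), [0.0] * N_MODE  # mode head

def multitask_loss(x, id_label, mode_label, lam=0.5):
    """Joint loss: identity recognition plus a mode-classification
    auxiliary task, which is how the mode label enters training."""
    h = [max(0.0, v) for v in linear(x, W_shared, b_shared)]  # ReLU features
    p_id = softmax(linear(h, W_id, b_id))
    p_mode = softmax(linear(h, W_mode, b_mode))
    return cross_entropy(p_id, id_label) + lam * cross_entropy(p_mode, mode_label)

x = [random.uniform(-1.0, 1.0) for _ in range(D)]
loss = multitask_loss(x, id_label=2, mode_label=1)
# loss is positive for an untrained network
print(f"joint loss = {loss:.4f}")
```

At inference time only the shared features (and possibly the identity head) would be used; the mode head exists solely to push the shared representation toward mode awareness during training, which is one common way multi-task learning injects side labels.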