Recognizing Gaits Across Walking and Running Speeds

dc.contributor.author: Yao L.
dc.contributor.author: Kusakunniran W.
dc.contributor.author: Wu Q.
dc.contributor.author: Xu J.
dc.contributor.author: Zhang J.
dc.contributor.other: Mahidol University
dc.date.accessioned: 2023-06-20T04:51:07Z
dc.date.available: 2023-06-20T04:51:07Z
dc.date.issued: 2022-08-01
dc.description.abstract: For decades, few methods have been proposed for cross-mode (i.e., walking vs. running) gait recognition, so how to recognize a person by the way they both walk and run remains largely unexplored. Existing cross-mode methods handle the walking-versus-running problem in one of two ways: either by learning a generic mapping between the walking and running modes, or by extracting gait features that are invariant or less vulnerable to the changes across these two modes. The first approach is limited because a mapping that fits one person may not apply to another; since walking and running are highly self-related (person-specific) motions, no single generic mapping exists. The second approach pays little attention to the disparity between the walking and running modes, since mode labels are not involved in its feature learning process. Distinct from these existing cross-mode methods, our method uses mode labels in the feature learning process and hybridizes a mode-invariant gait descriptor for cross-mode gait recognition to handle the walking-versus-running problem. This article also investigates the disparity between walking and running: running differs from walking not only in speed but also, more significantly, in prominent changes of gesture and motion. Guided by these observations, the proposed method attends to the differences between the walking and running modes, and a robust gait descriptor is developed that hybridizes mode-invariant spatial and temporal features. Two multi-task learning-based networks are proposed to extract these mode-invariant features: spatial features describe the body parts unaffected or less affected by mode changes, and temporal features capture each person's intrinsic motion relations.
Mode labels are also used in the training phase to guide the networks toward the disparity between the walking and running modes. Experiments on the OU-ISIR Treadmill Dataset A confirm the effectiveness and feasibility of the proposed method, which achieves state-of-the-art results on this dataset.
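The abstract describes a multi-task objective in which shared gait features feed both an identity task and a walk/run mode task, with mode labels supervising the training. A minimal numpy sketch of such a combined loss is below; the toy shapes, the ReLU encoder, the mode-classification head, and the weighting factor `lam` are illustrative assumptions, not the architecture or loss actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Toy batch: 8 gait samples, 16-dim input features (hypothetical sizes).
x = rng.normal(size=(8, 16))
id_labels = rng.integers(0, 4, size=8)    # 4 subjects
mode_labels = rng.integers(0, 2, size=8)  # 0 = walking, 1 = running

# Shared encoder plus two task heads: identity and walk/run mode.
W_shared = rng.normal(size=(16, 32)) * 0.1
W_id = rng.normal(size=(32, 4)) * 0.1
W_mode = rng.normal(size=(32, 2)) * 0.1

feat = np.maximum(x @ W_shared, 0.0)  # shared ReLU features
loss_id = cross_entropy(softmax(feat @ W_id), id_labels)
loss_mode = cross_entropy(softmax(feat @ W_mode), mode_labels)

lam = 0.5  # hypothetical task weighting; the paper does not state this value
total_loss = loss_id + lam * loss_mode
print(total_loss)
```

In a multi-task setup like this, gradients from the mode head push the shared features to encode the walking/running disparity explicitly, which is the role the abstract attributes to the mode labels during training.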
dc.identifier.citation: ACM Transactions on Multimedia Computing, Communications and Applications Vol.18 No.3 (2022)
dc.identifier.doi: 10.1145/3488715
dc.identifier.eissn: 15516865
dc.identifier.issn: 15516857
dc.identifier.scopus: 2-s2.0-85127448325
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/20.500.14594/87132
dc.rights.holder: SCOPUS
dc.subject: Computer Science
dc.title: Recognizing Gaits Across Walking and Running Speeds
dc.type: Article
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85127448325&origin=inward
oaire.citation.issue: 3
oaire.citation.title: ACM Transactions on Multimedia Computing, Communications and Applications
oaire.citation.volume: 18
oairecerif.author.affiliation: University of Technology Sydney
oairecerif.author.affiliation: Mahidol University
