Two-Stream Retentive Long Short-Term Memory Network for Dense Action Anticipation

Fengda Zhao, Jiuhan Zhao, Xianshan Li*, Yinghui Zhang, Dingding Guo, Wenbai Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Analyzing and understanding human actions in long-range videos has promising applications such as video surveillance, autonomous driving, and efficient human-computer interaction. Most research focuses on short-range videos, predicting a single action in an ongoing video or forecasting an action a few seconds before it occurs. In this work, a novel method is proposed to forecast a series of future actions and their durations after observing only part of a video. The method extracts features from both frame sequences and label sequences, and a retentive memory module is introduced to extract rich features at salient time steps and pivotal channels. Extensive experiments are conducted on the Breakfast and 50 Salads data sets. Compared with state-of-the-art methods, the proposed method achieves comparable performance in most cases.
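The abstract does not give the exact formulation of the two-stream retentive memory module, so the following is only a minimal NumPy sketch of the general idea it describes: two feature streams (frame features and label features), each reweighted over time steps via temporal attention and over channels via a gating function, then fused. All weights, shapes, and function names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(features, w_time, w_chan):
    """Emphasize salient time steps and pivotal channels.

    features: (T, C) sequence of per-step feature vectors.
    Temporal attention: softmax over per-step scores.
    Channel attention: sigmoid gate applied per channel.
    """
    t_scores = features @ w_time                      # (T,) step scores
    alpha = softmax(t_scores)                         # temporal weights
    pooled = alpha @ features                         # (C,) attention-pooled summary
    gate = 1.0 / (1.0 + np.exp(-(w_chan * pooled)))   # per-channel gate
    return gate * pooled                              # (C,) gated feature

T, C = 8, 16                              # hypothetical sequence length / channels
frame_feats = rng.normal(size=(T, C))     # visual-stream features (stand-in)
label_feats = rng.normal(size=(T, C))     # label-stream features (stand-in)

w_time = rng.normal(size=C)               # toy temporal-attention weights
w_chan = rng.normal(size=C)               # toy channel-gate weights

# Fuse the two attended streams into one vector for downstream
# action/duration prediction.
fused = np.concatenate([attend(frame_feats, w_time, w_chan),
                        attend(label_feats, w_time, w_chan)])
print(fused.shape)  # (32,)
```

In the paper the attended features would feed an LSTM-based decoder that emits future action labels and durations; here the sketch stops at the fused representation.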
Original language: English
Article number: 4260247
Pages (from-to): 1-9
Number of pages: 9
Journal: Computational Intelligence and Neuroscience
Volume: 2022
Early online date: 16 May 2022
Publication status: Published - 16 May 2022

Bibliographical note

Copyright © 2022 Fengda Zhao et al.

Keywords

  • General Mathematics
  • General Medicine
  • General Neuroscience
  • General Computer Science
  • Neural Networks, Computer
  • Memory, Long-Term
  • Humans
  • Human Activities
  • Rivers
  • Memory, Short-Term
