Real-Walk Modelling: Deep Learning Model for User Mobility in Virtual Reality

Murtada Dohan*, Mu Mu, Suraj Ajit, Gary Hill

*Corresponding author for this work

Research output: Contribution to Journal › Article › peer-review

Abstract

This paper presents a study on modelling users' free-walk mobility in a virtual reality (VR) art exhibition. The main objective is to investigate and model users' mobility sequences during their interactions with artwork in VR. We employ a range of machine learning (ML) techniques to define scenes of interest in VR and to capture user mobility patterns. Our approach utilises a long short-term memory (LSTM) model to model and predict users' future movements in VR environments, particularly in scenarios where participants are given no predefined walking paths or directions. The deep learning (DL) model demonstrates high accuracy in predicting user movements, enabling a better understanding of how audiences interact with the artwork. It opens avenues for new VR applications, such as community-based navigation, virtual art guides, and enhanced virtual audience engagement. The results highlight the potential for improved user engagement and effective navigation within virtual environments.
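The abstract does not include the model's architecture or hyperparameters, so as an illustration only, the following is a minimal from-scratch sketch of a single LSTM cell stepping over a sequence of 3-D position vectors, the kind of per-frame head-position input a VR mobility model might consume. All names, sizes, and the synthetic trajectory are assumptions for the sketch, not the authors' implementation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell in pure Python (illustrative, not the paper's model)."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        n = input_size + hidden_size
        # one weight matrix and bias per gate: input (i), forget (f),
        # output (o), and candidate cell state (c)
        self.W = {g: [[rng.uniform(-0.1, 0.1) for _ in range(n)]
                      for _ in range(hidden_size)] for g in "ifoc"}
        self.b = {g: [0.0] * hidden_size for g in "ifoc"}
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = x + h  # concatenate current input with previous hidden state
        def lin(g, j):
            return sum(w * v for w, v in zip(self.W[g][j], z)) + self.b[g][j]
        h_new, c_new = [], []
        for j in range(self.hidden_size):
            i = sigmoid(lin("i", j))    # input gate
            f = sigmoid(lin("f", j))    # forget gate
            o = sigmoid(lin("o", j))    # output gate
            g = math.tanh(lin("c", j))  # candidate cell state
            cj = f * c[j] + i * g       # updated cell state
            c_new.append(cj)
            h_new.append(o * math.tanh(cj))
        return h_new, c_new

# Run a short synthetic trajectory of 3-D positions through the cell;
# the final hidden state h summarises the walk seen so far and would
# feed a regression head predicting the next position.
cell = LSTMCell(input_size=3, hidden_size=4)
h, c = [0.0] * 4, [0.0] * 4
trajectory = [[0.10, 0.00, 0.20], [0.15, 0.00, 0.25], [0.20, 0.05, 0.30]]
for pos in trajectory:
    h, c = cell.step(pos, h, c)
```

In practice a framework implementation (e.g. a stacked recurrent network trained on recorded mobility sequences) would replace this hand-rolled cell; the sketch only shows the gating mechanism that lets the model retain context across a free-walk sequence.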
Original language: English
Article number: 44
Number of pages: 16
Journal: Multimedia Systems
Volume: 30
DOIs
Publication status: Published - 28 Jan 2024

Keywords

  • Dataset
  • Deep learning
  • Movement
  • Navigation
  • Spatial knowledge
  • Virtual reality
