TY - JOUR
T1 - Graph Attention Based Spatial Temporal Network for EEG Signal Representation
AU - Msonda, James Ronald
AU - He, Zhimin
AU - Lu, Chuan
PY - 2024/2/1
Y1 - 2024/2/1
N2 - Architectures based on graph attention networks (GATs) have proved powerful at implicitly learning relationships between adjacent nodes in a graph. For electroencephalogram (EEG) signals, however, it is also essential to highlight the electrode locations, or underlying brain regions, that are active when a particular event-related potential (ERP) is evoked. Moreover, it is often important to identify the EEG signal time segments within which the ERP is activated. We introduce a GAT Inspired Spatial Temporal (GIST) network that uses a multilayer GAT as the base for three attention blocks: edge attention, followed by node attention and temporal attention layers, which focus on relevant brain regions and time windows for better EEG signal classification performance and interpretability. We assess the capability of the architecture using the publicly available Transcranial Electrical Stimulation (TES), neonatal pain (NP) and DREAMER EEG datasets, on which the model achieves competitive performance. Most importantly, the paper presents attention visualisations and suggests ways of interpreting them for EEG signal understanding.
KW - EEG models
KW - Electroencephalography
KW - Graph neural networks
KW - Attention mechanism
KW - Interpretable machine learning
UR - https://link.springer.com/chapter/10.1007/978-3-031-47508-5_23
U2 - 10.1007/978-3-031-47508-5_23
DO - 10.1007/978-3-031-47508-5_23
M3 - Conference Article/Conference Proceedings
SN - 2194-5357
VL - 1453
SP - 286
EP - 298
JO - Advances in Intelligent Systems and Computing
JF - Advances in Intelligent Systems and Computing
ER -