Publication: Analysis of Sound Imagery in EEG with a Convolutional Neural Network and an Input-perturbation Network Prediction Technique
Issued Date: 2020-09-23
Other identifier(s): 2-s2.0-85096361761
Rights: Mahidol University
Rights Holder(s): SCOPUS
Bibliographic Citation: 2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2020. (2020), 1010-1015
Suggested Citation: Sarawin Khemmachotikun, Yodchanan Wongsawat. Analysis of Sound Imagery in EEG with a Convolutional Neural Network and an Input-perturbation Network Prediction Technique. 2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2020. (2020), 1010-1015. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/60439
Abstract
© 2020 The Society of Instrument and Control Engineers - SICE. Sound imagery has been studied over the past decades with techniques such as fMRI, PET, MEG, and tDCS; however, the sound imagery phenomenon in EEG signals has not been widely studied. The use of deep learning in EEG applications is growing in popularity because such models can learn from EEG data without extensive pre-processing. In contrast to typical classification models, the input-perturbation network prediction technique used here visualizes the features learned by the trained model as correlations between changes in input frequency and changes in network prediction, giving insight into the features the model uses for decision making. In this study, we recorded EEG signals from three subjects who were asked to perform a sound imagery task. In the first phase, subjects listened to and remembered a generated sound; in the second phase, they imagined a sound of the same pitch. In one-fourth of trials no sound was generated, and the corresponding EEG signals were labeled with the no-imagery class; EEG signals from the remaining trials were labeled with the sound imagery class for model training. The best accuracy, 71.41%, was obtained by the shallow model for subject 1, and an average accuracy of 61.00% was achieved across subjects. The model's decision to classify EEG data into the sound imagery class was based on decreases in the delta, theta, and low beta bands in the frontal lobe and corresponding increases in the right temporal lobe.
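The input-perturbation idea in the abstract — perturb the input spectrum, then correlate each perturbation with the resulting change in the network's output — can be sketched as below. This is a minimal illustration only, not the authors' implementation: the trained convolutional network is replaced by a placeholder `predict` function, perturbations are Gaussian noise added to spectral amplitudes, and all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(eeg):
    """Placeholder for a trained classifier's output score.
    eeg: array of shape (channels, samples)."""
    return eeg.mean()

def input_perturbation_correlation(trials, predict, n_perturb=200, scale=0.5):
    """Correlate random amplitude perturbations at each (channel, frequency)
    with the mean change in network prediction, across repetitions."""
    n_trials, n_ch, n_samp = trials.shape
    spectra = np.fft.rfft(trials, axis=-1)          # per-trial spectra
    n_freq = spectra.shape[-1]
    amps, phases = np.abs(spectra), np.angle(spectra)

    amp_perturbs = np.empty((n_perturb, n_ch, n_freq))
    pred_deltas = np.empty(n_perturb)
    base_preds = np.array([predict(t) for t in trials])

    for i in range(n_perturb):
        # One shared amplitude perturbation applied to every trial.
        noise = rng.normal(0.0, scale, size=(n_ch, n_freq))
        perturbed = np.fft.irfft((amps + noise) * np.exp(1j * phases),
                                 n=n_samp, axis=-1)
        new_preds = np.array([predict(t) for t in perturbed])
        amp_perturbs[i] = noise
        pred_deltas[i] = (new_preds - base_preds).mean()

    # Pearson correlation between each perturbation entry and the
    # prediction change, computed over the perturbation repetitions.
    x = amp_perturbs.reshape(n_perturb, -1)
    x = x - x.mean(axis=0)
    y = pred_deltas - pred_deltas.mean()
    corr = (x * y[:, None]).sum(axis=0) / (
        np.sqrt((x ** 2).sum(axis=0)) * np.sqrt((y ** 2).sum()) + 1e-12)
    return corr.reshape(n_ch, n_freq)
```

The returned (channel, frequency) correlation map is the kind of object the paper visualizes: a positive entry means increasing power at that channel and frequency pushes the network toward the sound imagery class, a negative entry the opposite.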