Identification of Discriminative Features in EEG


Authors: Peter Meinicke, Thomas Hermann, Holger Bekel, Horst Müller, Sabine Weiss and Helge Ritter
Submitted to ICANN 2002, August 2002, Madrid, Spain

Sonification of EEG Data Using ICA-Based Feature Selection

The following sonifications are rendered using a modified Spectral Mapping Sonification, as introduced in HermannMeinickeRitter2002.ps.gz.

Spectral Mapping Sonification for EEG Data

Graphical displays are the usual choice for exploratory data analysis. In the case of EEG data, however, the data are time series and the potentially interesting structures are rhythmical patterns. Our auditory system is particularly strong at perceiving rhythms and rhythmical changes. In addition, it is highly specialized to operate in noisy contexts and to follow several auditory streams at the same time. We therefore decided to use sonification to investigate the results of the feature selection method.

The technique applied for rendering an acoustic representation is Spectral Mapping Sonification, as described in HermannMeinickeBekelRitterMuellerWeiss2002-SOE. In Spectral Mapping Sonification, time-variant oscillators (TVOs) are controlled to mediate between the data and the acoustic signal. Each TVO has a frequency and an amplitude parameter, while its waveform is kept invariant. In addition, the sound of a TVO can be positioned in the space of the listener.
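To make the rendering concrete, the following minimal Python/NumPy sketch shows one way such a TVO could be implemented: a sine waveform that is kept invariant, with frequency and amplitude updated once per spectral frame. The function and parameter names are our own illustration, not the authors' implementation.

import numpy as np

def render_tvo(freqs, amps, sr=44100, frame_dur=0.015):
    # One frequency and one amplitude value per spectral frame;
    # the waveform itself (a sine) is kept invariant, as described above.
    n = int(sr * frame_dur)
    f = np.repeat(np.asarray(freqs, dtype=float), n)  # piecewise-constant controls
    a = np.repeat(np.asarray(amps, dtype=float), n)
    phase = 2.0 * np.pi * np.cumsum(f) / sr           # integrate frequency to get phase
    return a * np.sin(phase)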
In our case, we use the electrode position on the scalp to determine the spatialization, while the amplitude and frequency controls represent the variation of the signal over time. However, as most sound systems are limited to stereo, the accompanying sound examples simply route channels from the left (right) hemisphere to the left (right) audio channel. The frequency is determined by the spectral band, such that higher frequency bands are represented by higher-pitched tones. In practice, a constant musical interval (e.g. a fifth) separates adjacent bands.
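A sketch of these two mappings; the base frequency of 220 Hz is an arbitrary assumption of ours, while the interval ratio 3:2 corresponds to the fifth mentioned above.

def band_pitch(band, base_freq=220.0, ratio=1.5):
    # Adjacent bands are separated by a constant musical interval,
    # here a fifth (frequency ratio 3:2); base_freq is an assumption.
    return base_freq * ratio ** band

def stereo_gains(hemisphere):
    # Stereo reduction: left-hemisphere electrodes feed the left
    # audio channel, right-hemisphere electrodes the right one.
    return (1.0, 0.0) if hemisphere == 'left' else (0.0, 1.0)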
The energy within a feature then determines the amplitude, and thus the level, of the sound, so that high activations are audible as loud contributions. In principle, all 114 features may be superimposed to obtain an overall representation of the available data, as demonstrated in HermannMeinickeBekelRitterMuellerWeiss2002-SOE. However, overlaying such a large number of TVOs can mask the discriminative features behind the many channels/bands that are not, or only weakly, affected by the condition. For that reason, we apply the ICA feature selection to mute the majority of features.
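The muting step could be sketched as follows; the score vector stands for whatever discriminativity measure the ICA feature selection provides, and k = 10 matches the sound examples below.

import numpy as np

def mute_weak_features(energies, ica_scores, k=10):
    # energies: array of shape (n_features, n_frames), one band-energy
    # envelope per electrode/band feature. Keep the k features with the
    # highest ICA-based scores and zero out all others.
    keep = np.argsort(ica_scores)[-k:]
    mask = np.zeros(len(ica_scores))
    mask[keep] = 1.0
    return energies * mask[:, None]

Each remaining feature then drives one TVO, with pitch from its band, panning from its electrode position, and amplitude from its energy envelope; the stereo signal is simply the sum over all features.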
The following sound examples are rendered from the data of six subjects. The sonification is a sequence of per-subject sonifications, each consisting of the acoustic presentation of condition 1 followed by condition 2. A marker sound signals the beginning of the second condition to the listener. The subject sonifications are separated by a short pause.
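A sketch of this sequencing, assuming stereo signals of shape (2, n_samples) and, as our own choice, a short noise burst as the marker sound:

import numpy as np

def subject_sonification(cond1, cond2, sr=44100, marker_dur=0.05, pause_dur=0.5):
    # Condition 1, a marker click signalling the start of condition 2,
    # condition 2, and a short pause before the next subject.
    click = np.random.uniform(-0.5, 0.5, (2, int(sr * marker_dur)))
    pause = np.zeros((2, int(sr * pause_dur)))
    return np.concatenate([cond1, click, cond2, pause], axis=1)

# The full sonification is the concatenation over all six subjects:
# full = np.concatenate([subject_sonification(c1, c2)
#                        for c1, c2 in subject_pairs], axis=1)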

Results and Sound Examples

Spectral Mapping Sonification was applied to the features selected by the presented ICA method. The first sonification presents the EEGr and the pseudospeech condition for one subject in sequence, with a click sound as a separator. The sound was rendered with 10 ICA features using 15 ms per spectral frame.
In sound example S1 (EEG_eyps10ICA15ms_sub2.wav) it can be perceived that the sound differs between the conditions: the activation in the left hemisphere increases in an upper frequency band. Using such features, the other conditions can be compared pairwise in the same way by listening. Sound examples S2 and S3 present the data for the same subject for the pseudospeech/speech and the EEGr/speech comparisons. In S2 (EEG_pssp10ICA15ms_sub2.wav) a decrease in energy within the high frequency bands over the right hemisphere can be heard; in S3 (EEG_eysp10ICA15ms_sub2.wav) a significant increase in the higher bands is audible.

This method is limited to the serial inspection of data from a single subject, and the change in sound varies strongly between subjects. However, such an exploratory data analysis may provide additional insight into brain activity and can particularly assist in detecting rhythmical patterns in the data.