sonification.de: Thomas Hermann's research on Sonification, Data Mining & Ambient Intelligence


Vocal Sonification of Pathologic EEG Features

Thomas Hermann, Gerold Baier, Ulrich Stephani, Helge Ritter. (2006)
Proc. Int. Conf. Auditory Display (ICAD 2006).
BibTeX Entry, Download (PDF, 1.4 MB), External Website


[Figure: VocalEEGSon illustration]

Summary

We introduce a novel approach to EEG data sonification for process monitoring and for exploratory as well as comparative data analysis. The approach uses an excitatory/articulatory speech model and a specifically selected parameter mapping to obtain auditory gestalts (or auditory objects) that correspond to features in the multivariate signals. The sonification is adaptable to patient-specific data patterns, so that only characteristic deviations from background behavior (pathologic features) enter the sonification rendering. The approach thus combines data mining techniques and case-dependent sonification design into an application-specific solution with high potential for clinical use. We explain the sonification technique in detail and present sound examples from clinical data sets.
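
To make the patient-adaptive feature extraction concrete, here is a minimal Python sketch (not the authors' code) of one way such features could be derived as deviations from a patient-specific background model: per-channel short-time RMS values are z-scored against baseline statistics, and sub-threshold values are suppressed so that only characteristic deviations drive the sonification. The function names, window length, and threshold are illustrative assumptions.

import numpy as np

def background_model(eeg_baseline, win=256):
    # Estimate per-channel background statistics (mean/std of short-time RMS)
    # from a seizure-free baseline segment of shape (n_channels, n_samples).
    n_ch, n = eeg_baseline.shape
    n_win = n // win
    rms = np.sqrt(
        (eeg_baseline[:, : n_win * win].reshape(n_ch, n_win, win) ** 2).mean(axis=2)
    )
    return rms.mean(axis=1), rms.std(axis=1)

def pathologic_features(eeg, mu, sigma, win=256, z_thresh=3.0):
    # Per-window z-scores of short-time RMS; values below z_thresh are zeroed
    # so only characteristic deviations from background reach the renderer.
    n_ch, n = eeg.shape
    n_win = n // win
    rms = np.sqrt((eeg[:, : n_win * win].reshape(n_ch, n_win, win) ** 2).mean(axis=2))
    z = (rms - mu[:, None]) / (sigma[:, None] + 1e-12)
    z[np.abs(z) < z_thresh] = 0.0
    return z

# Example with synthetic data (a stand-in for clinical recordings):
rng = np.random.default_rng(0)
baseline = rng.normal(size=(19, 60 * 256))    # 19 channels, 60 s at 256 Hz
mu, sigma = background_model(baseline)
segment = rng.normal(size=(19, 10 * 256))
segment[3, 1000:1400] += 8.0                  # injected burst as a toy "pathologic" event
features = pathologic_features(segment, mu, sigma)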

Media Files / Sonification Examples

Project: See this Sound (Linz)

The "Vocal Sonification of epileptic EEG" has been selected for presentation within the project "See this Sound", Ludwig Boltzmann Institute, Linz, 2009. For that purpose we have selected some fresh sound examples. The following abstract gives an overview for readers that are new to sonification.

Vocal Sonification of epileptic EEG - Abstract

Sonification is the scientific method of representing data by sound. In the case of medical data, sonification can be used to make pathologic changes in the human body audible. In the present case, epileptic activity recorded by EEG can be perceived as rhythmic vocal sound patterns. Listening thereby allows the dynamics of epileptic activity to be understood and differentiated. Technically, the method first computes generic features from the raw multivariate EEG data, abstracting from the details of the recording situation. These features are then mapped to parameters of an articulatory speech synthesizer that creates vowel sounds. The pathologic features of the EEG data thereby lead to transitions between vowels, resulting in audible vocal shapes such as the rhythms of a 'pathologic speech'. An important motivation for using vocal sonifications is that humans (a) are highly sensitive and trained to process, memorize, and recognize speech-like signals and (b) can easily reproduce similar sounds using their own vocal tract. The accompanying figure depicts the data flow from recorded EEG data to the vocal sonification; the user can interactively select data segments and adjust parameters. Details of the sonifications are provided on this website.
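
The work uses an articulatory speech synthesizer; as a simplified stand-in for illustration only, the following Python sketch maps a normalized feature trajectory to vowel transitions by interpolating two formant frequencies between /a/ and /i/ and filtering a glottal pulse train. The formant values, pitch, and function names are assumptions for this sketch, not the synthesizer or parameter mapping used in the paper.

import numpy as np
from scipy.signal import lfilter

FS = 16000
VOWEL_A = (730.0, 1090.0)   # rough F1/F2 of /a/ in Hz
VOWEL_I = (270.0, 2290.0)   # rough F1/F2 of /i/ in Hz

def formant_filter(x, f, bw=80.0, fs=FS):
    # Second-order resonator centered at formant frequency f (Hz).
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * f / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    return lfilter([1 - r], a, x)

def vocal_sonification(feature, dur_per_frame=0.05, f0=110.0, fs=FS):
    # feature: 1D array in [0, 1]; 0 maps to /a/, 1 maps to /i/ (vowel transition).
    n = int(dur_per_frame * fs)
    out = []
    for v in np.clip(feature, 0.0, 1.0):
        src = np.zeros(n)
        src[:: int(fs / f0)] = 1.0                    # simple glottal impulse train
        f1 = (1 - v) * VOWEL_A[0] + v * VOWEL_I[0]
        f2 = (1 - v) * VOWEL_A[1] + v * VOWEL_I[1]
        out.append(formant_filter(formant_filter(src, f1), f2))
    y = np.concatenate(out)
    return y / (np.abs(y).max() + 1e-12)

# Example: a bursty feature trajectory yields audible /a/-/i/ rhythms.
feat = np.clip(np.sin(np.linspace(0, 20, 200)) ** 4, 0, 1)
signal = vocal_sonification(feat)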

Links


