All in the mind: decoding brainwaves to determine what music we’re listening to

Summary: By combining fMRI with EEG, the researchers recorded the subjects’ neural activity while they listened to a piece of music. Using machine learning, the researchers then translated the data to reconstruct and identify the specific piece of music the test subjects were listening to.

Source: University of Essex

A new technology for monitoring brain waves can identify the music a person is listening to.

Researchers at the University of Essex hope the project will help people with severe communication impairments, such as those with locked-in syndrome or those who have suffered a stroke, by decoding the language signals inside their brains through non-invasive techniques.

Dr Ian Daly, from the School of Computer Science and Electronic Engineering at Essex, who led the research, said: “This method has many potential applications. We have shown that we can decode music, which suggests that we may one day be able to decode language from the brain.”

The Essex scientists wanted to find a less invasive way to decode audio information from signals in the brain to identify and reconstruct a piece of music someone was listening to.

While there have been successful previous studies monitoring and reconstructing acoustic information from brain waves, many have used more invasive methods such as electrocorticography (ECoG), which involves placing electrodes inside the skull to monitor the actual surface of the brain.

The research, published in the journal Scientific Reports, used a combination of two non-invasive methods: functional magnetic resonance imaging (fMRI), which measures blood flow through the entire brain, and electroencephalography (EEG), which measures what is happening in the brain in real time. Together, these were used to monitor a person’s brain activity while they listened to a piece of music.

Using a deep learning neural network model, the researchers then translated the data to reconstruct and identify the piece of music.

Music is a complex audio signal that shares many similarities with natural language, so the model could likely be adapted to translate speech. The ultimate goal of this research thread is the translation of thought, which could in future provide vital help for people who struggle to communicate, such as those with locked-in syndrome.


Dr Daly added: “One application is a brain-computer interface (BCI), which provides a direct communication channel between the brain and a computer. Obviously this is a long way off, but ultimately we hope that if we can successfully decode language, we can use this to build a means of communication, which is another important step toward the ultimate goal of BCI research and could, one day, provide a lifeline for people with severe communication impairments.”

The research involved reusing fMRI and EEG data originally collected as part of a previous project at the University of Reading, in which participants listened to a series of 40-second clips of simple piano music drawn from an array of 36 different pieces that varied in pitch, harmony and rhythm. Using these combined data sets, the model was able to accurately identify the piece of music with a success rate of 71.8%.

About this music and neuroscience research news

Author: Ben Hall
Source: University of Essex
Contact: Ben Hall – University of Essex
Image: The image is in the public domain

Original Research: Open access.
“Neural decoding of music from EEG” by Ian Daly et al. Scientific Reports

Summary

Neural decoding of music from EEG

Neural decoding paradigms can be used to decode neural representations of visual, audio, or semantic information. Recent studies have demonstrated neural decoders capable of decoding acoustic information from a variety of types of neural signals, including electrocorticography (ECoG) and electroencephalography (EEG).

In this study, we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an audio decoder. Specifically, we first used a combined EEG-fMRI paradigm to record brain activity while participants listened to music.

We then used fMRI-informed EEG source localization and a bidirectional long short-term memory (biLSTM) deep learning network to first extract neural information related to music listening from the EEG and then to decode and reconstruct the individual pieces of music the participant was listening to. We also validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings.
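As a purely illustrative sketch, the following minimal PyTorch model shows the general shape of such a decoder: a bidirectional LSTM that maps a window of source-space EEG activity to spectrogram frames of the music being heard. The layer sizes, input dimensions, and mean-squared-error objective are assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of a bidirectional LSTM decoder that
# maps source-localised EEG time series to music spectrogram frames.
# All dimensions and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class EEGToSpectrogramBiLSTM(nn.Module):
    def __init__(self, n_sources=20, n_freq_bins=64, hidden_size=128):
        super().__init__()
        # The bidirectional LSTM reads the EEG sequence forwards and backwards.
        self.bilstm = nn.LSTM(
            input_size=n_sources,
            hidden_size=hidden_size,
            num_layers=2,
            batch_first=True,
            bidirectional=True,
        )
        # Linear read-out from both directions to spectrogram frequency bins.
        self.readout = nn.Linear(2 * hidden_size, n_freq_bins)

    def forward(self, eeg):
        # eeg: (batch, time_steps, n_sources) -> (batch, time_steps, n_freq_bins)
        features, _ = self.bilstm(eeg)
        return self.readout(features)

# Toy training step on random tensors standing in for fMRI-informed EEG
# source activity and the target music spectrogram.
model = EEGToSpectrogramBiLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg_batch = torch.randn(8, 200, 20)    # 8 trials, 200 time steps, 20 sources
target_spec = torch.randn(8, 200, 64)  # matching spectrogram frames
loss = nn.functional.mse_loss(model(eeg_batch), target_spec)
loss.backward()
optimiser.step()
```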

We were able to reconstruct music, with an EEG source analysis approach informed by fMRI information, with a mean rank accuracy of 71.8% (n = 18, p < 0.05). Using only EEG data, without fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% (n = 19, p < 0.05).
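For readers unfamiliar with the metric, rank accuracy can be read roughly as follows: each reconstruction is compared with every candidate piece, the rank of the piece actually heard is recorded, and that rank is converted into a score between 0 and 1 before averaging across trials. The sketch below computes such a score; the Pearson-correlation similarity and the rank-to-score mapping are illustrative assumptions, not the paper's exact definition.

```python
# Hedged sketch of a mean rank-accuracy computation for music identification.
import numpy as np

def rank_accuracy(reconstructions, candidates, true_indices):
    """reconstructions: (n_trials, n_features) decoded feature vectors.
    candidates: (n_pieces, n_features) reference features, one per piece.
    true_indices: (n_trials,) index of the piece each trial actually heard."""
    scores = []
    for recon, true_idx in zip(reconstructions, true_indices):
        # Pearson correlation between the reconstruction and every candidate.
        sims = np.array([np.corrcoef(recon, cand)[0, 1] for cand in candidates])
        # Rank of the true piece (1 = best match among all candidates).
        rank = 1 + np.sum(sims > sims[true_idx])
        # Map the rank to a score: 1.0 for first place, 0.0 for last place.
        scores.append(1.0 - (rank - 1) / (len(candidates) - 1))
    return float(np.mean(scores))

# Toy example: 36 candidate pieces, 18 trials with noisy reconstructions.
rng = np.random.default_rng(0)
cands = rng.normal(size=(36, 500))
truth = rng.integers(0, 36, size=18)
recons = cands[truth] + rng.normal(scale=2.0, size=(18, 500))
print(f"mean rank accuracy: {rank_accuracy(recons, cands, truth):.3f}")
```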

This demonstrates that our decoding paradigm can use fMRI-informed source analysis to help decode and reconstruct acoustic information from EEG-based brain activity, taking a step towards building EEG-based neural decoders for other complex information domains, such as other audio, visual, or semantic information.
