A programmer and artist used brainwave data to turn a seizure into song
Brian Foo is on a mission to make data audible.
If you listen to “Rhapsody in Grey” without knowing how it was created, you might find yourself recognizing bits and pieces of popular music. You’ll hear Imogen Heap, the band Swans, and a few string bass instruments. And then you might start to hear a pattern—faint, indiscernible, but present.
But listen to the music while watching a video of a child’s brainwaves scrolling across the screen. These visuals represent her electroencephalogram (EEG) data during a seizure. You’ll realize “Rhapsody” is more than just a song.
Brian Foo, a New York City–based programmer and artist, created “Rhapsody” out of brainwave data from an anonymous pediatric epilepsy patient to show what brain activity looks like before, during, and after a seizure. The four-minute song is broken up into equal parts. As the brainwaves change, so does the music.
The music starts off soft and instrumental, but when the seizure hits, more voices and percussion are added to coincide with the increased frequency, amplitude, and synchrony of the brainwaves. After the episode, it slows down.
“Because it’s a very human and intimate dataset, I wanted the song to heighten the empathy for someone and give them a more intuitive understanding of what might be going on in someone’s brain when they’re having a seizure,” Foo said in an interview with the Daily Dot.
The brain data was compiled from anonymous information available on PhysioNet, provided by researchers at the Children’s Hospital Boston and the Massachusetts Institute of Technology, Foo explained. The EEG data was then spliced up into parts containing equal time intervals surrounding a seizure.
To create the song, Foo calculated the size of the brain waves at different points throughout the seizure, and correlated them to bass, percussion, and vocal recordings from the Philharmonia sound sample library, the rock band Swans, and Imogen Heap, respectively.
Foo created an algorithm that enabled him to combine all the pre-determined EEG data and music samples, to generate the song as output.
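The pipeline described above can be sketched in miniature: measure the size of the signal over equal time windows, then map louder windows to more layered instruments. This is an illustrative sketch only, not Foo's actual code; the EEG trace here is synthetic stand-in data (his project reads annotated recordings from PhysioNet), and the function names and thresholds are hypothetical.

```python
import math
import random

def windowed_amplitude(signal, window_size):
    """Split the signal into equal windows and compute RMS amplitude per window."""
    amps = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        window = signal[start:start + window_size]
        amps.append(math.sqrt(sum(x * x for x in window) / window_size))
    return amps

def layers_for_amplitude(amp, thresholds=(0.2, 0.5, 0.8)):
    """Map amplitude to a number of instrument layers: bigger, busier
    brainwaves pull in more voices and percussion."""
    return 1 + sum(1 for t in thresholds if amp >= t)

# Synthetic "EEG": quiet baseline, a high-amplitude seizure burst, then quiet.
random.seed(0)
quiet = [random.uniform(-0.1, 0.1) for _ in range(400)]
burst = [random.uniform(-1.0, 1.0) for _ in range(400)]
signal = quiet + burst + quiet

for i, amp in enumerate(windowed_amplitude(signal, 200)):
    print(f"window {i}: amplitude={amp:.2f}, layers={layers_for_amplitude(amp)}")
```

Run on this toy signal, the quiet windows stay at a single layer while the seizure windows cross several thresholds and stack on extra voices, mirroring how "Rhapsody in Grey" thickens during the episode.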
“The first challenge was learning how to read it, and how to read it as accurately as I could,” Foo said. “And the second challenge was, now that I know how to read it, how do I translate it in a way in which it is true to the data, but also true to how any person is feeling at any given moment.”
Foo called it a subjective exercise: It’s impossible to know precisely how a person feels, but the annotated data let him pinpoint when the seizure started, so he could swell the music into a cacophony of sounds matching the excessive neural activity in the brain.
Foo open-sourced the project and published step-by-step instructions for replicating it.
“Rhapsody in Grey” is the second song in a project called “Data-Driven DJ.” Foo will spend the year creating music out of data in an effort to add an intimate perspective to topics such as climate change, social issues, and, in the case of the pediatric seizure, health information.
“At the end of the day, what I’m most interested in is bringing out a human quality to given data,” he said. “The problem is that a lot of data get abstracted into these quantifiable, cold numbers. I wanted to bring it back to something more human because a lot of this data represents human activity.”
Each month Foo will release a new recording, splicing vocals and instruments to coincide with different datasets. For each project, he will create an entirely new computer program that will compile the data and generate a unique, data-driven song.
In February, Foo released his first song, “Two Trains,” which analyzes the median household income in New York City along the subway’s 2 train, a line that cuts through Brooklyn, Manhattan, and the Bronx.
Foo plans to rotate among four or five general themes, most of which focus on social issues. But, he said, he won’t rule out more lighthearted datasets, like those based on online dating or language. Even after the 12 songs are finished, Foo plans to continue the data visualization and analysis, turning social issues into easy listening.
“Because I just started this, I wanted to have it be more exploratory, and when I’m done with these 12, focus on a particular issue more deeply, depending on what I felt resonated with me or other people,” he said.
Illustration by Fernando Alfonso III
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.