A programmer and artist used brainwave data to turn a seizure into song
Brian Foo is on a mission to make data audible.
If you listen to “Rhapsody in Grey” without knowing how it was created, you might find yourself recognizing bits and pieces of popular music. You’ll hear Imogen Heap, the band Swans, and a few string bass instruments. And then you might start to hear a pattern—faint, indiscernible, but present.
But listen to the music while watching a video of a child’s brainwaves scrolling across the screen. These visuals represent her electroencephalogram (EEG) data during a seizure. You’ll realize “Rhapsody” is more than just a song.
Brian Foo, a New York City–based programmer and artist, created “Rhapsody” out of brainwave data from an anonymous pediatric epilepsy patient to show what brain activity looks like before, during, and after a seizure. The four-minute song is broken up into equal parts. As the brainwaves change, so does the music.
The music starts off soft and instrumental, but when the seizure hits, more voices and percussion are added to coincide with the increased frequency, amplitude, and synchrony of the brainwaves. After the episode, it slows down.
“Because it’s a very human and intimate dataset, I wanted the song to heighten the empathy for someone and give them a more intuitive understanding of what might be going on in someone’s brain when they’re having a seizure,” Foo said in an interview with the Daily Dot.
The brain data was compiled from anonymous information available on PhysioNet, provided by researchers at the Children’s Hospital Boston and the Massachusetts Institute of Technology, Foo explained. The EEG data was then spliced into segments of equal duration surrounding a seizure.
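The splicing step can be sketched roughly as follows. This is a hypothetical illustration, not Foo's actual code: the real CHB-MIT recordings on PhysioNet are EDF files with annotated seizure onsets, so this sketch uses a synthetic NumPy array as a stand-in, and the function name, window size, and segment count are assumptions for the example.

```python
import numpy as np

def split_around_seizure(eeg, onset, window, n_segments):
    """Slice a 1-D EEG signal into n_segments equal-length windows
    centered on an annotated seizure-onset sample index."""
    half = (n_segments * window) // 2
    start = max(0, onset - half)
    return [eeg[start + i * window : start + (i + 1) * window]
            for i in range(n_segments)]

# Synthetic stand-in for real EEG data (illustrative only):
# 4 minutes of noise sampled at 256 Hz, "seizure" annotated at the 2-minute mark.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 240)
segments = split_around_seizure(eeg, onset=256 * 120,
                                window=256 * 10, n_segments=24)
print(len(segments), len(segments[0]))  # 24 segments of 10 seconds each
```

Equal-length windows make the eventual song's structure regular, matching the article's description of a four-minute piece broken into equal parts.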
To create the song, Foo calculated the size of the brain waves at different points throughout the seizure, and correlated them to bass, percussion, and vocal recordings from the Philharmonia sound sample library, the rock band Swans, and Imogen Heap, respectively.
Foo created an algorithm that enabled him to combine all the pre-determined EEG data and music samples, to generate the song as output.
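The core idea of such an algorithm can be sketched as a mapping from each segment's amplitude to a set of instrument layers, with louder activity triggering more layers. This is a minimal, hypothetical sketch: the threshold values and layer names are invented for illustration, and Foo's actual mapping lives in his open-sourced project code.

```python
import numpy as np

def layers_for_segment(segment, thresholds=(0.5, 1.0, 1.5)):
    """Map a segment's mean absolute amplitude to sample layers.
    Thresholds and layer names are hypothetical placeholders."""
    amplitude = np.mean(np.abs(segment))
    layers = ["strings"]            # baseline instrumental bed, always present
    if amplitude > thresholds[0]:
        layers.append("percussion")  # added as brainwave amplitude rises
    if amplitude > thresholds[1]:
        layers.append("vocals")
    if amplitude > thresholds[2]:
        layers.append("full_choir")  # seizure peak: maximum density
    return layers

quiet = np.full(256, 0.2)  # pre-seizure: low amplitude
spike = np.full(256, 1.8)  # mid-seizure: high amplitude
print(layers_for_segment(quiet))  # ['strings']
print(layers_for_segment(spike))  # all four layers
```

Stacking layers rather than swapping tracks mirrors what the article describes: the music starts soft and instrumental, then gains voices and percussion as the brainwaves' frequency, amplitude, and synchrony increase.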
“The first challenge was learning how to read it, and how to read it as accurately as I could,” Foo said. “And the second challenge was, now that I know how to read it, how do I translate it in a way in which it is true to the data, but also true to how any person is feeling at any given moment.”
Foo called it a subjective exercise: It’s impossible to know precisely how people feel, but the annotated data gave him a way to discern when the seizure started, allowing him to swell the music into a cacophony of sounds correlating with the excessive neural activity in the brain.
Foo open-sourced the project and provided step-by-step instructions on how to replicate it on the Data-Driven DJ website.
“Rhapsody in Grey” is the second song in a project called “Data-Driven DJ.” Foo will spend the year creating music out of data in an effort to add an intimate perspective to topics such as climate change, social issues, and, in the case of the pediatric seizure, health.
“At the end of the day, what I’m most interested in is bringing out a human quality to given data,” he said. “The problem is that a lot of data get abstracted into these quantifiable, cold numbers. I wanted to bring it back to something more human because a lot of this data represents human activity.”
Each month Foo will release a new recording, splicing vocals and instruments to coincide with different datasets. For each project, he will create an entirely new computer program that will compile the data and generate a unique, data-driven song.
In February, Foo released his first song, “Two Trains,” which analyzes the median household income in New York City along the subway’s 2 train, a line that cuts through Brooklyn, Manhattan, and the Bronx.
Foo plans to rotate among four or five general themes, most of which focus on social issues. But, he said, he won’t rule out more lighthearted datasets, like those based on online dating or language. Even after the 12 songs are finished, Foo plans to continue the data visualization and analysis, turning social issues into easy listening.
“Because I just started this, I wanted to have it be more exploratory, and when I’m done with these 12, focus on a particular issue more deeply, depending on what I felt resonated with me or other people,” he said.
Illustration by Fernando Alfonso III
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.