What Can AI Decode From Human Brain Activity?

Brain scan. Photo by Wikimedia Commons.

Research exploring the ability of artificial intelligence (AI) to interpret and translate brain activity has been appearing more and more frequently

by Lauren Richards

August 28, 2023

By combining neuroimaging data with AI models, recent studies have explored AI’s ability to decode brain activity and reconstruct the images an individual saw, the sounds they heard, or even the stories they imagined, by generating comparable images, streams of text, and even tunes.

One of the neuroimaging methods often used in this field of research to record and map brain activity is functional magnetic resonance imaging (fMRI). This non-invasive imaging technique measures brain activity by detecting small changes in blood flow in the areas of the brain that are activated.

As individuals perform different cognitive or motor tasks, different areas of the brain appear brighter on the fMRI scan. AI models can then be trained to decode this fMRI data and produce generated outputs such as images, text, or audio.
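To give a rough sense of what “training a model to decode fMRI data” can mean in practice, here is a minimal, hypothetical sketch (not taken from any of the studies below). A regularised linear model is fitted to map voxel responses onto a feature vector describing the stimulus, and the predicted features are then matched against candidate stimuli; all names and data in the example are invented, and real pipelines are considerably more sophisticated.

```python
# Hypothetical sketch: decode a stimulus feature vector from fMRI voxel data.
# All data here is synthetic; real studies use recorded brain responses and
# rich stimulus features (e.g., image or text embeddings).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_voxels, n_features = 500, 2000, 64
true_map = rng.normal(size=(n_features, n_voxels))           # unknown brain "encoding"
stimulus_features = rng.normal(size=(n_trials, n_features))   # e.g., embeddings of images
voxels = stimulus_features @ true_map + rng.normal(scale=5.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(voxels, stimulus_features, random_state=0)

# Fit a regularised linear decoder: voxel responses -> stimulus features.
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = decoder.predict(X_test)

# Identification test: is each predicted feature vector closest to the correct stimulus?
def best_match(pred, candidates):
    sims = candidates @ pred / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(pred) + 1e-9)
    return int(np.argmax(sims))

correct = sum(best_match(p, y_test) == i for i, p in enumerate(predicted))
print(f"identified {correct}/{len(y_test)} held-out stimuli correctly")
```

With real data, the stimulus features would typically be embeddings produced by a pretrained image, text, or audio model rather than random vectors.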

Suggested applications of this technology include aiding communication for people unable to speak, controlling robotic prosthetic limbs, and integration into virtual reality headsets.

What’s more, researchers have also highlighted the vital importance of protecting privacy by proactively discussing the associated issues and developing guidelines and policies to help tackle them.

Many studies are underway in this area, but here are just a few examples.

Reconstructing Images

Scientists in Singapore have developed an AI framework able to decode visual stimuli from brain activity and reconstruct images resembling those viewed by an individual.

Known as “MinD-Vis,” this technology can reconstruct “highly plausible” images with matching details such as texture and shape. 

Participants were shown pictures of things like people, vehicles, landscapes, food, animals and buildings, and MinD-Vis then decoded their brain activity and reconstructed the images.

Details such as “water and waves” as well as “drawings on the bowling ball” and “wheels of the carriage” were generated.

Reconstructing Language

Researchers at The University of Texas at Austin (UT Austin) have developed an AI language decoding system able to translate brain activity into intelligible streams of text that recover the meaning of stories an individual listened to or imagined. 

The researchers conducted several different experiments to test the capabilities of their decoder in different ways. 

Participants listened to stories from podcasts such as “Modern Love;” imagined telling short, one-minute stories; and watched clips from animated films that largely lacked language.

Though not word-for-word translations, the text sequences generated by the AI decoder captured the general meaning of the stories listened to and imagined, as well as the silent videos. 
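The paper itself describes the decoding procedure in detail; as a loose, hypothetical illustration of one idea commonly used in this kind of work, the sketch below scores a handful of candidate phrases by how well an encoding model’s predicted brain response for each phrase matches the recorded response, and keeps the best-scoring candidate. The phrases, features, and encoding weights are all invented; this is not the study’s actual method or code.

```python
# Toy illustration (not the study's code): pick, among candidate phrases, the
# one whose *predicted* brain response best matches the recorded response.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

# Hypothetical feature vectors for a few candidate phrases, plus a random
# linear "encoding model" standing in for a fitted one.
phrase_features = {
    "she heard waves on the shore": rng.normal(size=32),
    "the train left the station":   rng.normal(size=32),
    "he opened the old letter":     rng.normal(size=32),
}
encoding_weights = rng.normal(size=(32, n_voxels))

def predict_response(features):
    return features @ encoding_weights

# Simulate a recording evoked by one of the phrases (plus noise).
true_phrase = "the train left the station"
recorded = predict_response(phrase_features[true_phrase]) + rng.normal(scale=2.0, size=n_voxels)

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Score every candidate by how well its predicted response matches the recording.
scores = {p: correlation(predict_response(f), recorded) for p, f in phrase_features.items()}
best = max(scores, key=scores.get)
print("decoded phrase:", best)
```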

The group also conducted experiments to test how participant attention and cooperation may affect the success of the technology. They found that the decoded text was significantly more similar to a story when participants were actively attending to it, and that participant cooperation was required for successful decoding.

They also highlighted the importance of addressing the potential privacy implications of this kind of technology, stating that “brain-computer interfaces should respect mental privacy.”

Reconstructing Music

Researchers from Google and Osaka University have developed an AI pipeline able to decode brain activity and reconstruct music that resembles what individuals listened to.

Known as “Brain2Music,” the pipeline utilises Google’s “MusicLM” — a text-to-music AI tool able to generate music from text descriptions — to generate the music. 
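How exactly the brain data is connected to the generator is detailed in the paper; as a rough, hypothetical sketch, one plausible bridging step is to predict an intermediate music-representation vector from the fMRI features and hand that vector to the generator as its conditioning input. The example below shows only that regression step with invented data; it is not the Brain2Music code, and no real MusicLM call is made.

```python
# Hypothetical sketch of a bridging step: predict a music-embedding vector from
# fMRI features with a regularised linear model. In a real pipeline the
# predicted embedding would condition a music generator; that step is omitted
# here because MusicLM is not available as a public library call.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_clips, n_voxels, embed_dim = 300, 1500, 128

# Invented stand-ins for music-clip embeddings and the fMRI responses they evoke.
music_embeddings = rng.normal(size=(n_clips, embed_dim))
fmri = music_embeddings @ rng.normal(size=(embed_dim, n_voxels)) \
       + rng.normal(scale=4.0, size=(n_clips, n_voxels))

# Fit on most clips, hold the rest out.
train, test = slice(0, 250), slice(250, None)
model = Ridge(alpha=5.0).fit(fmri[train], music_embeddings[train])
predicted_embeddings = model.predict(fmri[test])

# Each row is the vector that would be passed on to the music generator.
print(predicted_embeddings.shape)  # (50, 128)
```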

Participants listened to music clips from a range of genres, such as blues, jazz, disco, and classical, among others. Features of the music such as instrumentation (e.g., brass, woodwind, plucked string, percussion) and mood (e.g., happy, sad, tender, angry) were also taken into account.

The music reconstructed by the Brain2Music pipeline was found to resemble the music stimuli listened to by participants in terms of genre, instrumentation, and mood.

In terms of future work, the group state in the study: “An exciting next step is to attempt the reconstruction of music or musical attributes from a subject’s imagination.”

This article was originally published on IMPAKTER. Read the original article.
