Texas Connect


Alex Huth and his team create an AI system that can translate brain activity into text

Illustration by Midjourney

Science fiction has been heralding the advancement of artificial intelligence for decades. Through characters ranging from Samantha in “Her” to C-3PO in “Star Wars” and Data in “Star Trek,” creatives have been showcasing both fear and excitement around these technological advancements. 

Alex Huth grew up consuming these films and TV shows as a self-proclaimed sci-fi nerd. Those ideas sparked a long-running quest to find out just how smart computers could become. As the distant future comes rushing into the present, he is now a creator of AI himself, and he intends to show how the technology can be used for the greater good. 

Huth and his team have developed an artificial intelligence system that can translate a person’s thoughts and brain activity into a continuous stream of text. To train the noninvasive system, participants attended 16 one-hour sessions inside an MRI scanner while listening to podcasts. As they listened, an encoding model, a standard neuroscience tool, learned to predict how their brains would respond to the words. 
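The core idea described above can be sketched in miniature: an encoding model predicts the brain response a candidate word sequence would evoke, and the decoder picks the candidate whose prediction best matches what the scanner actually measured. The toy Python below is purely illustrative; every function, name and number is an assumption for the sketch, not part of Huth’s actual system, which learns its encoding model from real fMRI recordings.

```python
# Toy sketch of encoding-model decoding. Everything here is illustrative:
# a real system learns predict_response from fMRI data and scores
# candidates probabilistically over continuous brain activity.

def predict_response(words):
    """Stand-in encoding model: map a word sequence to a fake
    'brain response' vector (one number per word)."""
    return [len(w) % 5 for w in words]

def score(predicted, measured):
    """Similarity between predicted and measured responses
    (negative squared error here; real systems use likelihoods)."""
    return -sum((p - m) ** 2 for p, m in zip(predicted, measured))

def decode(measured, candidates):
    """Return the candidate word sequence whose predicted
    response best matches the measured brain response."""
    return max(candidates, key=lambda c: score(predict_response(c), measured))

measured = [3, 4, 4]  # pretend fMRI measurement
candidates = [["I", "went", "home"], ["the", "long", "road"]]
print(decode(measured, candidates))  # prints ['the', 'long', 'road']
```

The key design point the sketch preserves is that the model runs forward, from words to predicted brain activity, and decoding is a search over candidate word sequences rather than a direct readout of text from the brain.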

“(The decoder) works on the ideas,” says Huth, an assistant professor of neuroscience and computer science. “What is the idea of the thing you’re trying to say or what’s the idea of the thing you’re hearing?”

Similar decoders have been used to restore communication to people with motor impairments, such as stroke patients who are cognitively intact but have lost the motor function needed for speech. Huth’s team hopes its system will help people with disorders such as Broca’s aphasia, a condition in which people retain full motor function but can’t convert ideas into words. This can result in broken grammar or difficulty forming complete, clear sentences. 

“We might be able to use this in situations where the existing methods wouldn’t work,” Huth says. “We’re very interested in the possibility that our system could actually restore communication for people with (these disorders). … If we can get at the idea, then we can jump over the point that’s broken.” 

Jerry Tang, a computer science graduate student and member of Huth’s team, spent his high school years working as a counselor at a summer camp for kids with special needs. He spent much of that time playing with, coaching and getting to know kids who had conditions such as cerebral palsy and struggled to communicate.

“The kids are awesome, and it was really cool to learn more about their life,” Tang says. “That was just a really formative experience that kind of stuck with me. When (I was) applying to colleges, one thing I was really interested in was thinking about ways to use technology to help people with these types of disorders.” 

Alex Huth, Shailee Jain and Jerry Tang prepare to collect data in the Biomedical Imaging Center. Photo by Nolan Zunk

Some team members are drawn to the project purely for the technology, which opens a new portal into scientific exploration.

“I think the active discovery is one of the coolest things (about this project), especially when you’re doing research like this and there’s a moment in time where you’re like the one person that knows this piece of information,” says Amanda LeBel, a former research assistant in Huth’s lab and a Ph.D. candidate at the University of California, Berkeley.

Huth has burned the midnight oil for nearly 15 years to bring the brain decoder to fruition. The lab where he did his Ph.D. work had achieved something similar with vision: he spent hours decoding the images and videos people watched there before setting out on a mission to do the same thing with language. It turned out to be much harder than expected.

Ten to 15 years ago, Huth could only imagine the project; the technology needed a decade to catch up. He waited patiently, and once it did, he and his team were able to build even better models of the brain using AI.

“(It’s) kind of humbling that whatever comes next, whatever is going to make this much better … is not going to be somebody brilliant coming up with a clever idea,” Huth says. “It’s going to be some other technology developing and then us taking that and applying it here.”

After multiple trials came accurate results. Team members were excited at first, then immediately began weighing the ramifications. Ethics came next: knowing full well the technology will only advance from here, they consider it important to create a code of conduct and fail-safes to prevent misuse of future iterations.

“If anything, maybe us (building this technology) as just a proof of concept … will get people talking about what we should do from this point on,” LeBel says. “Should we start thinking about regulating it? What can this technology do in the long run?”

The decoder is still a ways off from being practical, requiring participants to stay in MRI machines for hours at a time. However, Huth and his team are committed to improving designs to help meet people’s needs while keeping their technology transparent along the way in terms of what it can achieve and when it should be used.

“(What I care) a lot about is making sure that this type of technology is understandable, accessible to the public,” Tang says. “I think it’s important that everyone kind of understands how our technology works and what the limitations are.”

Huth says that 10 years ago, there was essentially no AI involvement in neuroscience. Now, there’s an almost overwhelming amount. As AI gets smarter, it increasingly points experts in the right direction, reaching beyond the limits of human imagination and theory. 

“It’s been really exciting to see how AI has been able to help here,” Huth says. “In a lot of ways, we were just kind of hunting in the dark. … I certainly think these modern AI methods are going to make appearances everywhere. And I think that’s really exciting.”