Dr. Jaimie Henderson, a scientist and neurosurgeon at Stanford Medicine, had a heartfelt childhood wish: to enable his father, who was paralyzed, to communicate with him. Now, as an accomplished researcher, he and his team are making strides in developing brain implants that could fulfill similar dreams for others facing paralysis or speech impairments.
In a pair of groundbreaking studies published in the journal Nature, researchers showed that brain implants known as neuroprostheses can capture a person’s neural activity during attempts to speak naturally. That neural activity can then be translated into words displayed on a computer screen, conveyed audibly, or even communicated via a digital avatar.
During a recent news briefing about his research, Henderson shared his personal motivation, saying, “When I was 5 years old, my dad was involved in a devastating car accident that left him barely able to move or speak. I remember laughing at the jokes he tried to tell, but his speech ability was so impaired that we couldn’t understand the punchline. So I grew up wishing that I could know him and communicate with him. And I think that early experience sparked my personal interest in understanding how the brain produces movement and speech.”
Henderson collaborated with fellow researchers from Stanford and other U.S. institutions to explore the potential of implanted brain sensors in a patient named Pat Bennett, a 68-year-old diagnosed with amyotrophic lateral sclerosis (ALS) in 2012. ALS, a rare neurological disorder affecting nerve cells in the brain and spinal cord, had severely impacted Bennett’s ability to speak.
In a surgery conducted by Henderson in March 2022, arrays of electrodes were implanted in specific brain regions of Bennett. These implants recorded neural activity as Bennett attempted facial movements, vocalizations, and single-word speech. The recorded neural activity was then decoded using software, instantly converting it into words displayed on a computer screen. Bennett could finalize the decoding by pressing a button after speaking.
The researchers evaluated the brain-computer interface with Bennett, testing her ability to vocalize and merely “mouth” words without sound. Results showed that with a vocabulary of 50 words, decoding errors were 9.1% on vocalizing days and 11.2% on silent days. When using a larger vocabulary of 125,000 words, the error rates were 23.8% and 24.7% on vocalizing and silent days, respectively.
Frank Willett, an author of the study and a staff scientist affiliated with the Neural Prosthetics Translational Lab, emphasized the achievement, saying, “In our work, we show that we can decipher attempted speech with a word error rate of 23% when using a large set of 125,000 possible words. This means that about three in every four words are deciphered correctly.”
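Both teams report accuracy as word error rate, the standard speech-recognition metric: the minimum number of word insertions, deletions, and substitutions needed to turn the decoded output into the reference sentence, divided by the number of words in the reference. A 23% word error rate therefore corresponds to roughly three in four words decoded correctly, as Willett notes. The sketch below illustrates that computation in a generic way; it is not the studies' actual evaluation code, and the function name and example sentences are only for illustration.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (or exact match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("a") and one insertion ("please") against a
# six-word reference give a word error rate of 2/6, about 33%.
wer = word_error_rate("i want a glass of water",
                      "i want glass of water please")
```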
The research revealed impressive speeds as well. Bennett spoke at an average pace of 62 words per minute, more than tripling the speed of prior brain-computer interfaces, which averaged around 18 words per minute.
Despite these promising findings, the researchers acknowledge that their work is still in the early stages and is considered a “proof of concept.” They emphasize the need for more extensive testing on a larger pool of participants before considering the approach for clinical use.
Another study featured in the same journal detailed the case of Ann Johnson, a patient who faced speech difficulties due to paralysis following a stroke in 2005. An electrode device implanted in her brain allowed neural activity to be translated into text on a screen. This method achieved rapid and accurate decoding, with a median rate of 78 words per minute and a median word error rate of 25%. Additionally, Johnson’s attempts at silent speech were synthesized into speech sounds, accompanied by an animated facial avatar.
These two studies, conducted independently by researchers from the University of California, San Francisco, among other institutions, are aligned in their goals of restoring communication for individuals with paralysis. Dr. Edward Chang, a neurosurgeon and author of one of the studies, highlighted the significance of the findings, stating, “The results from both studies — between 60 to 70 words per minute in both of them — is a real milestone for our field, in general, and we’re really excited about it because it’s coming from two different patients, two different centers, two different approaches.”
While the devices detailed in the studies are currently in the proof-of-concept phase and not available for commercial use, they pave the way for future scientific advancements and the potential development of commercial devices. Henderson expressed his excitement about the progress in brain-computer interfaces, reflecting on his personal journey from childhood to scientific breakthrough: “I’m actually very excited about the commercial activity in the brain-computer interface area. I’ve come full circle from wishing I could communicate with my dad as a kid to seeing this actually work. It’s indescribable.”