An experimental brain implant can read people’s minds, translating their inner thoughts into text.
In an early test, scientists from Stanford University used a brain-computer interface (BCI) device to decipher sentences that were thought, but not spoken aloud. The implant was accurate up to 74 per cent of the time.
BCIs work by connecting a person’s nervous system to devices that can interpret their brain activity, allowing them to take action – like using a computer or moving a prosthetic hand – with only their thoughts.
They have emerged as a possible way for people with disabilities to regain some independence.
Perhaps the most famous is Elon Musk’s Neuralink implant, an experimental device that is in early trials testing its safety and functionality in people with specific medical conditions that limit their mobility.
The latest findings, published in the journal Cell, could one day make it easier for people who cannot speak to communicate, the researchers said.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking,” said Erin Kunz, one of the study’s authors and a researcher at Stanford University in the United States.
Working with four study participants, the research team implanted microelectrodes – which record neural signals – into the motor cortex, the part of the brain responsible for speech.
The researchers asked participants to either attempt to speak or to imagine saying a set of words. Both actions activated overlapping parts of the brain and elicited similar types of brain activity, though to different degrees.
They then trained artificial intelligence (AI) models to interpret words that the participants thought but did not say aloud. In a demonstration, the brain chip could translate the imagined sentences with an accuracy rate of up to 74 per cent.
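The article does not detail the team’s actual decoding pipeline, so the following is only a rough, hypothetical sketch of the general approach described: a classifier is trained to map recorded neural activity to vocabulary items. The vocabulary, feature sizes, and data below are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each trial yields one feature vector summarising
# microelectrode activity while a participant imagines saying a word.
rng = np.random.default_rng(0)
vocab = ["hello", "water", "help", "yes", "no"]   # invented vocabulary
n_features = 128        # invented: e.g. channels x binned firing rates
trials_per_word = 40

# Synthetic data: each word gets its own made-up mean activity pattern,
# and trials are noisy samples around that pattern.
X = np.vstack([
    rng.normal(loc=rng.normal(size=n_features), scale=1.0,
               size=(trials_per_word, n_features))
    for _ in vocab
])
y = np.repeat(np.arange(len(vocab)), trials_per_word)

# Train a simple linear decoder from neural features to word labels.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode one trial (here a training sample, purely for demonstration).
print("decoded word:", vocab[decoder.predict(X[:1])[0]])
```

The real system decodes whole sentences from imagined speech; a per-word linear classifier like this is only a toy stand-in for that far more sophisticated pipeline.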
In another test, the researchers set a password to prevent the BCI from decoding people’s inner speech unless they first thought of the code. The system recognised the password with around 99 per cent accuracy.
The password? “Chitty chitty bang bang”.
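The article does not say how this gate is implemented; as a purely hypothetical sketch of the design pattern – decode nothing until a designated phrase is confidently recognised – it might look something like this, where decode_phrase stands in for a real neural decoder and the threshold is an invented parameter:

```python
# Hypothetical "wake phrase" gate for an inner-speech decoder.
PASSWORD = "chitty chitty bang bang"
CONFIDENCE_THRESHOLD = 0.95   # assumed: unlock only on a confident match

def gated_decode(neural_stream, decode_phrase):
    """Yield decoded inner speech only after the password is recognised."""
    unlocked = False
    for segment in neural_stream:
        phrase, confidence = decode_phrase(segment)
        if not unlocked:
            # Stay silent until the wake phrase is confidently detected.
            unlocked = (phrase == PASSWORD
                        and confidence >= CONFIDENCE_THRESHOLD)
        else:
            yield phrase

# Toy usage with a fake decoder that is always 99 per cent confident.
stream = ["noise", "chitty chitty bang bang", "i would like some water"]
print(list(gated_decode(stream, lambda s: (s, 0.99))))
# -> ['i would like some water']
```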
For now, brain chips cannot interpret inner speech without significant guardrails. But the researchers said more advanced models may be able to do so in the future.
Frank Willett, one of the study’s authors and an assistant professor of neurosurgery at Stanford University, said in a statement that BCIs could also be trained to ignore inner speech.
“This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech,” he said.