The UCSF team made some surprising progress, and today the New England Journal of Medicine is reporting that they used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers call “Bravo-1,” who lost his ability to form intelligible words after a severe stroke and can only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technique involves measuring neural signals in the part of the motor cortex associated with Bravo-1’s attempts to move his tongue and vocal tract as he imagines speaking.
To reach that result, Chang’s team asked Bravo-1 to try to say one of 50 common words nearly 10,000 times, feeding the patient’s neural signals to a deep-learning model. After training the model to match words with neural signals, the team was able to correctly determine which word Bravo-1 was trying to say 40% of the time (chance results would be about 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out as “Hungry how are you.”
But the scientists improved performance by adding a language model, a program that judges which word sequences are likely in English. That brought the accuracy up to 75%. With this combined approach, the system can predict that Bravo-1’s sentence “I fix my nurse” really means “I like my nurse.”
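The rescoring idea described above can be sketched in miniature: a per-word classifier proposes candidate words from neural signals, and a language model reweights entire sequences by how plausible they are in English. The following toy sketch is purely illustrative (the probabilities, the bigram table, and the search routine are all invented for this example; this is not the UCSF system’s code):

```python
# Toy sketch: combine a noisy per-word classifier with a bigram
# language model to rescore a decoded sentence. All numbers invented.
import math

# Classifier output: for each word position, P(word | neural signal).
# Taking the top guess at each position alone gives "hungry how are you".
classifier_probs = [
    {"hungry": 0.40, "hello": 0.35, "how": 0.25},
    {"how": 0.90, "are": 0.10},
    {"are": 0.80, "am": 0.20},
    {"you": 0.95, "are": 0.05},
]

# Toy bigram language model: P(word | previous word). "<s>" = sentence start.
bigram = {
    ("<s>", "hello"): 0.6, ("<s>", "hungry"): 0.1, ("<s>", "how"): 0.3,
    ("hello", "how"): 0.7, ("hungry", "how"): 0.2, ("how", "how"): 0.1,
    ("how", "are"): 0.9, ("how", "am"): 0.1,
    ("are", "you"): 0.9, ("am", "you"): 0.8,
}

def best_sentence(probs, lm, floor=1e-4):
    """Search every candidate word sequence, scoring each by
    log P(words | signals) + log P(word sequence under the LM)."""
    best, best_score = None, -math.inf

    def search(pos, prev, words, score):
        nonlocal best, best_score
        if pos == len(probs):
            if score > best_score:
                best, best_score = list(words), score
            return
        for w, p in probs[pos].items():
            lm_p = lm.get((prev, w), floor)  # unseen bigrams get a floor
            search(pos + 1, w, words + [w],
                   score + math.log(p) + math.log(lm_p))

    search(0, "<s>", [], 0.0)
    return " ".join(best)

print(best_sentence(classifier_probs, bigram))  # → hello how are you
```

Even though the classifier slightly prefers “hungry” at the first position, the language model’s prior that sentences rarely start with “hungry … how” pulls the decoded sequence to the more plausible “hello how are you”, which is the kind of correction the article describes.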
As remarkable as the result is, there are more than 170,000 words in English, so performance would fall off sharply outside Bravo-1’s restricted vocabulary. That means the technique, while potentially useful as a medical aid, is not close to what Facebook had in mind. “We’re looking at applications in clinical assistive technology in the near future, but that’s not where our business is,” says Facebook’s Chevillet. “We’re focused on consumer applications, and there’s a very long way to go for that.”
Researchers studying these techniques were not surprised by Facebook’s decision to abandon brain reading. “I can’t say I was surprised, because they had hinted that they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for its project. “Just speaking from the speech-decoding experience, the goal is a big challenge. We’re still a long way from a practical, all-encompassing kind of solution.”
Still, Slutzky says the UCSF project is an “impressive next step” that shows both the significant possibilities and some limitations of brain-reading science. He says that if artificial-intelligence models could be trained for longer, and on more than one person’s brain, they could improve rapidly.
While UCSF was conducting its research, Facebook was also paying other centers, such as the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons non-invasively. Like MRI, those techniques rely on sensing reflected light to measure the amount of blood flow to regions of the brain.
It is these optical techniques that remain the bigger obstacle. Even with recent improvements, including some by Facebook, they are not able to pick up neural signals with sufficient resolution. Another issue, Chevillet says, is that the blood flow these methods detect occurs five seconds after a group of neurons fire, making it far too slow to control a computer.