Remarkable. Here's what seems to be the key point (reordered a bit for clarity):
The study supported our hypothesis [...] that the premotor cortex represents intended speech as an 'auditory trajectory,' that is, as a set of key frequencies (formant frequencies) that vary with time in the acoustic signal we hear as speech. [...] In an intact brain, these frequency trajectories are sent to the primary motor cortex where they are transformed into motor commands to the speech articulators. [...We] had to interpret these frequency trajectories in order to translate them into speech. [...] In other words, we could predict the intended sound directly from neural activity in the premotor cortex, rather than try to predict the positions of all the speech articulators individually and then try to reconstruct the intended sound [...]
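To make the quoted idea a bit more concrete: decoding an 'auditory trajectory' essentially means learning a mapping from recorded firing rates to a pair of formant frequencies (F1, F2), frame by frame. Below is a minimal sketch of one such mapping using plain ridge regression on synthetic data; the study's actual decoder, variable names, and data shapes are all more involved than this, and everything here is hypothetical.

    import numpy as np

    # Minimal sketch (not the study's actual decoder): learn a linear mapping
    # from binned neural firing rates to formant frequencies (F1, F2), then
    # decode one frame at a time. All names, shapes, and data are hypothetical.

    def fit_linear_decoder(rates, formants, ridge=1e-3):
        """rates: (n_frames, n_units) firing rates; formants: (n_frames, 2) F1/F2 in Hz."""
        X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # append a bias column
        # Ridge-regularized least squares: W = (X'X + lambda*I)^-1 X'Y
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ formants)

    def decode_frame(W, rate_vector):
        """Predict (F1, F2) for a single frame of firing rates."""
        return np.append(rate_vector, 1.0) @ W

    # Toy demo: two "units" whose rates covary (noisily) with F1 and F2.
    rng = np.random.default_rng(0)
    true_formants = np.column_stack([rng.uniform(300, 800, 200),     # F1 range (Hz)
                                     rng.uniform(800, 2300, 200)])   # F2 range (Hz)
    rates = true_formants / 100.0 + rng.normal(0, 0.5, true_formants.shape)
    W = fit_linear_decoder(rates, true_formants)
    print(decode_frame(W, rates[0]))  # roughly the first frame's (F1, F2)

In a real-time system you'd run something like decode_frame on each new bin of spike counts and feed the predicted formants straight into a synthesizer.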
Also remarkable (but maybe this is old hat to people who know about this stuff?) is that the signals they're interpreting come from neurites that actually started growing into the electrode months after it had been implanted.
I suppose there is a big difference between being able to interpret pre-speech frequencies in a normal brain (i.e. of a person who hasn't used this device before), versus someone being able to train themselves to communicate using this device over time. Given how adaptable the brain is, it's the latter that would seem to be the big win (and the article does vaguely imply this). Of course the device presumably wouldn't work at all if it weren't rooted in normal speech function.
In the current study, only three vowel sounds were tested. The test subject's average hit rate increased from 45% to 70% across sessions, reaching a high of 89% in the last session.
Holy cow! Is this 1st April? Unbelievable.
They implanted an electrode in a disabled guy's brain, powered it wirelessly, and it sent back, also wirelessly, signals representing the audio frequencies of what the guy wanted to say. Those signals were decoded on a computer with 50 ms latency and 89% accuracy on vowels.
I work as an EEG technician, with a background in electronics and computer programming. This technology is not just possible; it is in fact a reality.
The results usually come about through a combination of classical conditioning and cognitive neurology.
http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interfac...
This is not a huge leap from other devices like cochlear implants. Some of the newer implants use coils to avoid having wires pass through the skull.
http://en.wikipedia.org/wiki/Cochlear_implant
The physorg article links to the original research paper, which is published in an open access journal that anyone can view. Perhaps you would be interested in video S1 in the supporting information section:
It's not that wild really. You don't need a whole lot of information to get different vowels. Consonants are harder, but only in the sense that you need more electrodes and training time. The synthesis of speech (as opposed to the sampling of phonemes you usually hear in computer speech) is well understood and can be managed with (IIRC) about 16 parameters.
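As a rough illustration of how little it takes to get recognizable vowels, here's a minimal sketch of cascade formant synthesis: an impulse-train 'glottal' source pushed through one second-order resonator per formant. The formant/bandwidth numbers are rough textbook values for /a/, not anything from the paper, and a full Klatt-style synthesizer uses more parameters than this.

    import numpy as np
    from scipy.signal import lfilter
    from scipy.io import wavfile

    # Minimal sketch of cascade formant synthesis for a steady vowel.
    fs = 16000                    # sample rate (Hz)
    f0 = 120                      # pitch (Hz)
    dur = 0.5                     # duration (s)
    formants = [(730, 90), (1090, 110), (2440, 160)]  # (freq, bandwidth) ~ /a/

    # Glottal source: a simple impulse train at the pitch period.
    n = int(fs * dur)
    source = np.zeros(n)
    source[::fs // f0] = 1.0

    # Cascade one second-order resonator per formant.
    signal = source
    for freq, bw in formants:
        r = np.exp(-np.pi * bw / fs)
        theta = 2 * np.pi * freq / fs
        a = [1.0, -2 * r * np.cos(theta), r * r]      # resonator poles
        b = [sum(a)]                                  # normalize DC gain to 1
        signal = lfilter(b, a, signal)

    signal /= np.max(np.abs(signal))
    wavfile.write("vowel_a.wav", fs, (signal * 32767).astype(np.int16))

Swap in different formant values and you get different vowels, which is basically all a decoder has to control for the vowel case.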
Unfortunately I can't read the story from my iPhone. They redirect a perfectly good site to a minimal site that doesn't follow links properly, so I end up at the current story list instead. Clicking on Full Site switches to the proper display, but then the story isn't in the list.