
Machine Translates Thoughts into Speech in Real Time


chaosmachine


Right on the heels of this comes this:

 

By implanting an electrode into the brain of a person with locked-in syndrome, scientists have demonstrated how to wirelessly transmit neural signals to a speech synthesizer. The "thought-to-speech" process takes about 50 milliseconds - the same amount of time for a non-paralyzed, neurologically intact person to speak their thoughts. The study marks the first successful demonstration of a permanently installed, wireless implant for real-time control of an external device.

 

http://www.physorg.com/news180620740.html/r:t/
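
 

Purely to get a feel for what that 50 millisecond figure means, here's a toy Python sketch of the loop such a system has to run every frame: read a window of neural activity, decode it into a couple of formant frequencies, and synthesize a short chunk of audio before the next window arrives. Everything in it is made up for illustration - the random "neural" data, the fixed linear decoder, the two-sine "synthesizer" - so treat it as a cartoon of the pipeline, not the researchers' actual method.

```python
import time
import numpy as np

FRAME_MS = 50            # target per-frame latency, as quoted in the article
FS = 8000                # audio sample rate in Hz
N_UNITS = 40             # hypothetical number of recorded neural channels
SAMPLES = FS * FRAME_MS // 1000

rng = np.random.default_rng(0)

# Hypothetical linear decoder: firing rates -> (F1, F2) formant frequencies in Hz.
# In a real system this mapping would be learned from data, not drawn at random.
W = rng.normal(scale=5.0, size=(2, N_UNITS))
BIAS = np.array([500.0, 1500.0])     # rough centre of the vowel formant space


def read_neural_frame():
    """Stand-in for the wireless implant: one frame of simulated firing rates."""
    return rng.poisson(lam=10.0, size=N_UNITS).astype(float)


def decode_formants(rates):
    """Map firing rates to two formant frequencies, clipped to a plausible range."""
    f1, f2 = W @ rates + BIAS
    return float(np.clip(f1, 200, 1000)), float(np.clip(f2, 800, 2800))


def synthesize(f1, f2):
    """Crude vowel-like audio chunk: two sinusoids at the decoded formants."""
    t = np.arange(SAMPLES) / FS
    return 0.5 * np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)


if __name__ == "__main__":
    for _ in range(5):                   # five 50 ms frames, ~250 ms of "speech"
        start = time.perf_counter()
        rates = read_neural_frame()
        f1, f2 = decode_formants(rates)
        audio = synthesize(f1, f2)       # a real system would send this chunk to the sound card
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"F1={f1:7.1f} Hz  F2={f2:7.1f} Hz  decoded+synthesized in {elapsed_ms:.2f} ms")
```

The point is just how tight the per-frame budget is: whatever decoding and synthesis the real system does has to finish comfortably inside those 50 milliseconds to feel like natural speech.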

 

The future is moving fast...

 

PS: This would make a fun midi controller.


Guest ezkerraldean
The future is moving fast...

 

PS: This would make a fun midi controller.

lol!

 

using this technology we should wire up octopus brains to drum machines, and dance to them funky mollusc beats.


Guest Enter a new display name

Before reading Chaosmachine's post, my first thought was "would it be possible to listen to music if one's mind is read by a machine?"


My first thought was Stephen Hawking, no more eye blinking to text \o/

 

if we hooked Stephen Hawking up to this thing, we would only discover that he spends about 2% of his day thinking about physics, and ~98% thinking about hardstyle mouth-fucking


so would this help determine if people who are in comas are really "there", but unable to communicate, or if they are truly brain dead? might be useful to know this if you are about to pull the plug on someone.


wait, wait, read the article dudes

 

... in the current study, only three vowel sounds were tested ...

 

They're at an early stage; so far they can only discern three different vowel sounds.

 

They do seem to have cracked part of the neural code though:

 

“The study supported our hypothesis (based on the DIVA model, our neural network model of speech) that the premotor cortex represents intended speech as an ‘auditory trajectory,’ that is, as a set of key frequencies (formant frequencies) that vary with time in the acoustic signal we hear as speech,” Guenther said. “In other words, we could predict the intended sound directly from neural activity in the premotor cortex, rather than try to predict the positions of all the speech articulators individually and then try to reconstruct the intended sound (a much more difficult problem given the small number of neurons from which we recorded). This result provides our first insight into how neurons in the brain represent speech, something that has not been investigated before since there is no animal model for speech.”

 

So they are saying that the speech intention in the patient's brain carries a sort of frequency information that can be mapped directly onto particular speech sounds - that's interesting.
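
 

To see why predicting two formant frequencies is so much easier than reconstructing every articulator from "the small number of neurons" they recorded, here's a rough illustration. All of the data is simulated and the decoder is plain least-squares (the study's actual decoder may well be different); the only point is that mapping a few dozen units onto a two-dimensional formant trajectory is an ordinary, well-posed regression problem.

```python
import numpy as np

rng = np.random.default_rng(1)

N_UNITS, N_FRAMES = 50, 2000      # a few dozen recorded units, simulated frames

# Simulated "ground truth" in which the formants really are a noisy linear
# readout of the firing rates (this is the assumption being illustrated,
# not data from the study).
true_W = rng.normal(size=(N_UNITS, 2))
rates = rng.poisson(lam=8.0, size=(N_FRAMES, N_UNITS)).astype(float)
formants = rates @ true_W + rng.normal(scale=5.0, size=(N_FRAMES, 2))

# Fit a linear decoder with ordinary least squares: firing rates -> (F1, F2).
train, test = slice(0, 1500), slice(1500, None)
W_hat, *_ = np.linalg.lstsq(rates[train], formants[train], rcond=None)

# Evaluate on held-out frames.
pred = rates[test] @ W_hat
for i, name in enumerate(("F1", "F2")):
    r = np.corrcoef(pred[:, i], formants[test, i])[0, 1]
    print(f"{name} decoding correlation on held-out frames: {r:.3f}")
```

Contrast that with trying to recover the full set of articulator positions from the same handful of neurons - exactly the "much more difficult problem" Guenther describes.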

 

Also, the implant apparently took a few months to integrate with the brain, and real-time testing didn't start until three years after implantation:

 

Five years ago, when the volunteer was 21 years old, the scientists implanted an electrode near the boundary between the speech-related premotor and primary motor cortex (specifically, the left ventral premotor cortex). Neurites began growing into the electrode and, in three or four months, the neurites produced signaling patterns on the electrode wires that have been maintained indefinitely.

 

Three years after implantation, the researchers began testing the brain-machine interface for real-time synthetic speech production.
