How we can have our pets speak: research to be done (from Admin's blog)

How can we have our pets speak? Let's look at advances in neuroscience that could plausibly lead us toward developing speech in any creature that has a brain.

What happens in the brain when we hear something? Researchers recorded the brain activity of a patient who is a skilled pianist while he played.

Then they turned the sound off and repeated the session: this time the music existed only in the brain. The striking results may suggest a way of developing a certain area of the brain that can compose music. A human, or any other creature with a brain, could then play music by mental effort alone: the sound could be read from the brain directly, converted into sound waves, and heard by others.

Another way of advancing this result is to try to develop the areas of the brain that reflect ideas we usually express through words, that is, speech, and then process the internally generated activity in a similar way: convert or translate it into sound that can be heard.

It is worth working in this direction to build a device that is able to read brain activity in a creature and translate it straightaway into speech understandable by others.
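Such a reader-translator can be framed as a decoding problem: learn a mapping from neural features to an audio representation (such as a spectrogram), then turn that representation into sound. The sketch below is purely illustrative: the data are synthetic, the linear ridge decoder is a common baseline in this literature, and none of the numbers come from the study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": T time points, E electrodes, F spectrogram bins.
T, E, F = 500, 32, 16
W_true = rng.normal(size=(E, F))                 # hidden neural-to-sound mapping
X = rng.normal(size=(T, E))                      # neural features (e.g. band power)
S = X @ W_true + 0.1 * rng.normal(size=(T, F))   # observed audio spectrogram

# Ridge regression: closed-form linear decoder from neural features to spectrogram.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(E), X.T @ S)

# Reconstruct the spectrogram and score the decoding quality.
S_hat = X @ W_hat
r = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

In a real system the reconstructed spectrogram would still have to be inverted into a waveform, and the neural features would come from actual recordings rather than random numbers; this only shows the shape of the pipeline.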

We would want this kind of reader-translator to please everyone who keeps a pet, whatever pet it is: dogs, cats, parrots, snakes, lizards, tortoises, even fish!

The goal is to make pet owners even happier by giving their pets the ability to sing songs, speak, and hold conversations.

Let's get back to the study, which, however, was done on humans.

First, the participant played two piano pieces on an electronic piano with the keyboard's sound on. Second, the participant replayed the same pieces without auditory feedback and was asked to imagine hearing the music in his mind. The result: similarities between perception and imagery were found.
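The similarity between the two conditions can be quantified, for example, by correlating activity at each recording site across the perception and imagery sessions. A toy sketch on synthetic data (this is not the paper's analysis; the shared-signal model and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# T time points, E electrodes; both conditions share a common underlying signal.
T, E = 400, 24
shared = rng.normal(size=(T, E))
perception = shared + 0.5 * rng.normal(size=(T, E))
imagery = 0.8 * shared + 0.5 * rng.normal(size=(T, E))   # weaker but similar

# Per-electrode correlation between the perception and imagery conditions.
r = np.array([np.corrcoef(perception[:, e], imagery[:, e])[0, 1]
              for e in range(E)])
print(f"mean perception-imagery correlation: {r.mean():.2f}")
```

Electrodes with a high correlation would be candidates for the shared perception-imagery representation; electrodes with a low one would point at condition-specific mechanisms.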

The experiment was a fairly simple one. What interests us are ways to advance it.

The first thing we need to do is determine what makes the two conditions different, that is, to identify the different mechanisms in the brain involved in a) generating imagined, unheard music and b) actually hearing the sound in physical form. We are going to support this research and get many different teams involved in it.

The original paper is called "Neural Encoding of Auditory Features during Music Perception and Imagery: Insight into the Brain of a Piano Player". The authors lay out the research task this way:

"It remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception."

This is the first turning point for further research.

The second is to replace recording electrocorticographic signals from electrodes implanted in the brain with a more suitable non-invasive technique, moving toward devices for the mass user that do the scanning and translation.

And the third turning point in the research to be done: we first focus on music, on auditory information that is represented by high- and low-frequency sounds. In more advanced research we will work on linguistic information, on its representation in the brain, and on ways to scan and translate it so that it can be recreated and heard.
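Separating high- and low-frequency content is the most basic step in this kind of analysis. A minimal sketch, assuming a made-up signal and arbitrary band boundaries, of extracting band power with an FFT:

```python
import numpy as np

fs = 1000                                # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
# Synthetic trace: a slow 8 Hz rhythm plus a faster, weaker 120 Hz one.
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Power spectrum via the real-valued FFT.
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Sum power inside two illustrative bands.
low_power = spec[(freqs >= 1) & (freqs < 30)].sum()
high_power = spec[(freqs >= 70) & (freqs < 170)].sum()
print(f"low-band power: {low_power:.0f}, high-band power: {high_power:.0f}")
```

The same band-power idea, applied to neural recordings rather than audio, is what typically feeds the decoders discussed above.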
