Neuroscientists at MIT have developed a machine learning system, a deep neural network, that evaluates music and speech in much the same way humans do.
Using a deep neural network, the MIT researchers built the first model that can mimic how humans perform auditory tasks such as identifying the genre of a piece of music.
The model, which consists of many layers of data-processing units that can be trained on large amounts of data to perform these tasks, is being used by the scientists to shed light on how the human brain processes the same information.
The study, which appeared in Neuron on April 19, offers evidence that the human auditory system is hierarchically organized, much like the visual cortex. In such a hierarchy, sensory data passes through successive stages of processing: basic acoustic information is extracted first, followed by more advanced information, such as the meaning of words.
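The idea of successive processing stages can be illustrated with a minimal sketch. The stages, sizes, and weights below are entirely hypothetical and are not taken from the study; they only show how each layer of a deep neural network transforms the output of the previous one, moving from raw input toward more abstract, task-level features.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    """One processing stage: a linear transform followed by a ReLU nonlinearity."""
    return np.maximum(w @ x, 0.0)

# Toy "sound": a one-second sine waveform sampled at 100 Hz.
waveform = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))

# Hypothetical weights; successive stages produce increasingly abstract features.
w1 = rng.normal(size=(50, 100))  # early stage: basic acoustic features
w2 = rng.normal(size=(20, 50))   # intermediate stage
w3 = rng.normal(size=(5, 20))    # late stage: task-level outputs (e.g., genre scores)

h1 = stage(waveform, w1)
h2 = stage(h1, w2)
genre_scores = stage(h2, w3)
print(genre_scores.shape)  # (5,)
```

In a trained network the weights would be fitted on thousands of labeled examples rather than drawn at random; the sketch only conveys the hierarchical flow of information.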
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. At the time, however, the technology did not exist to build models large enough for real-world tasks such as speech and object recognition.
With today’s technology, and after training on thousands of examples, the newly developed model can process this information much as the human brain does.
To test whether the model’s stages mimic how the human auditory cortex processes sound, the researchers used fMRI (functional magnetic resonance imaging) to measure activity in different regions of the auditory cortex while the brain processed real-world speech and other sounds.
The brain data were then compared with the deep neural network’s internal representations to check how similar the processed information is. The authors added that they plan to extend the model to other tasks in the future, such as determining where a sound is coming from.
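One common way to compare model representations with brain measurements, sketched below with made-up data, is to predict each fMRI voxel’s response from a model layer’s activations and score the predictions by correlation. This is an illustrative assumption about the comparison, not the study’s exact analysis; all array sizes and the synthetic "voxel" data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sounds = 40
# Hypothetical model-layer activations: one row per sound presented.
layer_act = rng.normal(size=(n_sounds, 20))
# Synthetic voxel responses, linearly related to the activations plus noise
# (standing in for real fMRI measurements).
voxels = layer_act @ rng.normal(size=(20, 8))
voxels += 0.1 * rng.normal(size=voxels.shape)

# Fit a linear map from activations to each voxel with least squares,
# then score each voxel's prediction by correlation with its measured response.
coef, *_ = np.linalg.lstsq(layer_act, voxels, rcond=None)
pred = layer_act @ coef
r = [np.corrcoef(pred[:, v], voxels[:, v])[0, 1] for v in range(voxels.shape[1])]
print(f"median prediction correlation: {np.median(r):.2f}")
```

If a given model layer predicts responses in a given cortical region well, that is taken as evidence the two perform similar processing; in practice such analyses use held-out data rather than the in-sample fit shown here.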