
A Gestalt inference model for auditory scene segregation

What is being modeled?
Segregation of sounds in the auditory system
Description & purpose of resource

The auditory stream segregation model leverages the multiplexed, non-linear representation of sounds along the auditory hierarchy and learns the local and global statistical structure that emerges naturally in complex, natural sounds. The architecture has three key components: (1) a stochastic RBM layer that encodes the two-dimensional input spectrogram into localized spectro-temporal bases through short-term feature analysis; (2) a dynamic aRBM layer that captures long-term temporal dependencies across the spectro-temporal bases, characterizing the transformation of sound from fast-changing details to slower dynamics; and (3) a temporal coherence layer that mimics a Hebbian process of binding local and global details together, mediating the mapping from the feature space to the formation of auditory objects.
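
The sketch below illustrates the three-stage flow described above in NumPy: an RBM-style encoding of a spectrogram patch, a dynamic (conditional) hidden layer whose state also depends on recent past activations, and a temporal coherence stage that correlates channel activations over time. All layer sizes, weights, and function names are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of the three-stage architecture described above (NumPy).
# All shapes, weights, and names are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- (1) Stochastic RBM layer: encode a short spectrogram patch into
#         localized spectro-temporal bases (hidden-unit probabilities).
def rbm_encode(patch, W, b_hid):
    # patch: flattened spectrogram patch (frequency x time window)
    return sigmoid(W @ patch + b_hid)

# --- (2) Dynamic RBM layer: hidden activations depend on the current bases
#         and on recent past frames, capturing slower temporal dynamics.
def dynamic_rbm_encode(h_now, h_past, V, A, c_hid):
    # h_past: concatenated hidden states from the previous frames
    return sigmoid(V @ h_now + A @ h_past + c_hid)

# --- (3) Temporal coherence layer: channels whose activations co-vary over
#         time are bound together (a Hebbian-like grouping rule).
def temporal_coherence(activations):
    # activations: (n_channels, n_frames); returns pairwise correlations
    z = activations - activations.mean(axis=1, keepdims=True)
    z /= (z.std(axis=1, keepdims=True) + 1e-8)
    return (z @ z.T) / activations.shape[1]

# Toy run on random data (all dimensions arbitrary).
n_input, n_bases, n_slow, n_frames, context = 128, 32, 16, 50, 3
W = rng.normal(0, 0.1, (n_bases, n_input)); b = np.zeros(n_bases)
V = rng.normal(0, 0.1, (n_slow, n_bases)); c = np.zeros(n_slow)
A = rng.normal(0, 0.1, (n_slow, n_bases * context))

h_hist, slow_states = [], []
for t in range(n_frames):
    patch = rng.normal(size=n_input)          # stand-in spectrogram patch
    h = rbm_encode(patch, W, b)
    h_hist.append(h)
    past = (np.concatenate(h_hist[-context - 1:-1])
            if len(h_hist) > context else np.zeros(n_bases * context))
    slow_states.append(dynamic_rbm_encode(h, past, V, A, c))

coherence = temporal_coherence(np.array(slow_states).T)
print("coherence matrix shape:", coherence.shape)
```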

Has this resource been validated?
No