Guinea Pigs, Machine Learning and How Brains Recognize Communication Sounds

Auditory neuroscientists at the University of Pittsburgh published a paper in Communications Biology describing a machine learning model that helps explain how the brain recognizes the meaning of communication sounds. The algorithm modeled how social animals use sound-processing networks in their brains to distinguish between sounds and act on them. Despite the noise pollution that surrounds us, humans and other animals can communicate and understand one another, regardless of differences in the pitch of a speaker's voice or their accent.

The team started from similarities between the way the human brain recognizes and captures the meaning of communication sounds and the way it recognizes faces. Instead of matching every face we encounter against a stored template, our brain picks out useful features, such as the eyes, nose, and mouth and their relative positions, and creates a mental map of these small characteristics that define a face.
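
As a loose illustration of this feature-based idea (not code from the study), the sketch below summarizes a face by the relative distances between a few hypothetical landmarks. Because the features are relative rather than absolute, the same face shifted or rescaled still maps to nearly the same vector, which raw pixel matching would not achieve.

```python
import numpy as np

def feature_vector(landmarks):
    """Distances between landmark pairs, normalized by inter-eye distance."""
    eye_dist = np.linalg.norm(landmarks["left_eye"] - landmarks["right_eye"])
    pairs = [("left_eye", "nose"), ("right_eye", "nose"), ("nose", "mouth")]
    return np.array([np.linalg.norm(landmarks[a] - landmarks[b]) / eye_dist
                     for a, b in pairs])

# Hypothetical landmark coordinates for one face.
face_a = {"left_eye": np.array([30.0, 40.0]),
          "right_eye": np.array([70.0, 40.0]),
          "nose": np.array([50.0, 60.0]),
          "mouth": np.array([50.0, 80.0])}

# The same face, shifted and slightly enlarged.
face_b = {k: v * 1.1 + 5.0 for k, v in face_a.items()}

# The relative-feature vectors are essentially identical.
print(np.allclose(feature_vector(face_a), feature_vector(face_b)))  # True
```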

The team showed that communication sounds may likewise be built from such small characteristics. The researchers first built a machine-learning model of sound processing that learned to recognize the different sounds made by social animals. To test whether brain responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kin's communication sounds. When the animals heard a noise containing features characteristic of specific types of these sounds, neurons lit up with a flurry of electrical activity, mirroring the machine learning model's responses.

The guinea pigs were then exposed to different categories of sounds and trained to walk to different corners of their enclosure to receive fruit rewards, depending on which category of sound was played. Finally, the researchers ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.
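
The article does not name the sound-altering software the researchers used; as a minimal sketch, the kinds of alterations described could be produced in Python with librosa and NumPy, as below. The file `call.wav` and all parameter values are hypothetical.

```python
import numpy as np
import librosa

# Load a (hypothetical) recording of a guinea pig call.
y, sr = librosa.load("call.wav", sr=None)

# Speed the call up or slow it down without changing its pitch.
faster = librosa.effects.time_stretch(y, rate=1.5)
slower = librosa.effects.time_stretch(y, rate=0.7)

# Raise or lower the pitch (in semitones) without changing its duration.
higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=+4)
lower = librosa.effects.pitch_shift(y, sr=sr, n_steps=-4)

# Mix in broadband noise at an arbitrary level.
noisy = y + 0.05 * np.random.randn(len(y)).astype(y.dtype)

# Add a crude echo: a delayed, attenuated copy of the signal.
delay = int(0.15 * sr)  # 150 ms
echoed = np.copy(y)
echoed[delay:] += 0.5 * y[:-delay]
```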

The animals performed the task as consistently as when the calls they heard were unaltered, continuing to respond correctly despite the artificial echoes or noise. The machine learning model described their behavior (and the underlying activation of sound-processing neurons in the brain) just as accurately.
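
The study's actual model was built on brain-inspired sound-processing networks, which the article does not detail. As a simplified stand-in, a robustness check in the same spirit could train a conventional classifier on features of clean calls and compare its accuracy on altered versions; in the sketch below, `clean_calls`, `labels`, and the `perturb()` helper are assumed placeholders.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def mfcc_features(y, sr):
    """Summarize a call as its mean MFCC vector across time."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

# clean_calls: list of (waveform, sample_rate) pairs; labels: call categories.
X_clean = np.stack([mfcc_features(y, sr) for y, sr in clean_calls])
clf = LogisticRegression(max_iter=1000).fit(X_clean, labels)

# perturb() stands in for any of the alterations above (speed, pitch, noise, echo).
X_altered = np.stack([mfcc_features(perturb(y), sr) for y, sr in clean_calls])

print("clean accuracy:  ", accuracy_score(labels, clf.predict(X_clean)))
print("altered accuracy:", accuracy_score(labels, clf.predict(X_altered)))
```

A robust model, like the guinea pigs themselves, would show little drop between the two accuracy figures.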
