How does our brain make it possible for us to have conversations? What brain regions are the most important for speaking or for understanding speech? And how do we connect what we know about the human mind to what we know about the brain? During the first COBRA training event, Prof. Peter Hagoort, of the Max Planck Institute for Psycholinguistics and Radboud University, explored questions like these.
Language: It Takes a Whole Network
Early on in his Neuroscience workshop, it became clear that, in our current understanding of the role of the brain in language use, different linguistic functions cannot be strictly attributed to specific brain regions. In the 19th century, scientists identified the so-called Broca’s and Wernicke’s areas, which they proposed were responsible for speech articulation and comprehension, respectively. More recent findings, however, suggest that it takes a whole neural network to enable us to use language. What’s more, when we use language, we rely not only on cognitive functions strictly related to linguistic processing, but we also activate brain regions not directly dedicated to language.
For example, inferring the intentions of your conversation partner cannot be done by language regions alone; you need to activate other neurocognitive processes such as theory of mind. The theory of mind network allows you to “access” other people’s minds, helping you infer what their intentions and expectations are. Picture this: you are having lunch, and somebody comes up to you and asks, “What are we eating today?” You will interpret the question in different ways, depending on a number of factors. Are you having this conversation at home or at work? Who asked you the question: a spouse or a colleague? Are you expected to invite them to eat with you or to make small talk? Answering these questions will involve different types of information, and it will require much more than the linguistic processing of the sentence.
Remember, Unify, and Control!
Prof. Hagoort’s MUC (Memory, Unification, Control) model helps us understand some of these brain dynamics. He suggests that different regions of the brain are specialized in each of these three functions—see the figure below. The memory region is where our linguistic knowledge gets encoded and consolidated: we must access it every time we need to remember the meaning of a word, for example. However, speaking (or listening) is much more complicated than simply remembering the meanings of individual words; when we put words together, we create new and complex meanings. This happens through the unification of different pieces of knowledge about language, the conversation, and its context. Finally, the control component of the model deals with decisions we make during a conversation that are not directly related to language: what to say, when to say it, and what language to say it in (after all, more than half of the world’s population speaks more than one language!). It is important to remember, though, that none of these regions works in isolation: they can only be understood as part of a network, and they influence and are influenced by other areas.
Further and Beyond
The Neuroscience workshop went on to delve deeper into some of the consequences of these discussions: how the neural processing of language depends on the way information is presented to you; how hearing or reading certain words may change your brain’s activation, impacting how you feel and behave; and even how we can use certain neuroimaging techniques (EEG, fMRI, MEG…) to investigate these topics further. All in all, Prof. Hagoort’s course was an insightful introduction to the neuroscience of language: it gave you (it certainly gave me) that good old feeling of “where can I read more about this?”
Author: Tom Offrede, ESR5, @TomOffrede
If you want to know more about our projects and the ESRs working on them, please look under the Training tab.
Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4.
Featured image by holdentrill via Pixabay