Dr. Kristof Strijkers is a researcher in the Laboratoire Parole et Langage (LPL) at Aix-Marseille University (AMU). During the first CoBra training event, Dr. Strijkers gave three lectures on neurolinguistics, exploring the brain regions that enable human language. While it has long been assumed that certain brain regions are specifically dedicated to language functions, Dr. Strijkers presented recent studies pointing towards more complex and interactive models of language.
Spoken word perception and comprehension
Traditionally, the production and the perception of language have been studied separately. This separation stems from early studies of brain-damaged patients: it was first described that patients with a damaged Broca’s area have problems with speech production, while patients with a damaged Wernicke’s area show impaired comprehension of spoken and written language. Yet more recent empirical evidence points towards a model more complex than this classical dichotomy between the two areas.
The study of a larger number of patients with damaged Broca’s and Wernicke’s areas showed that language production and comprehension are more interlinked than previously thought. For example, some patients with Wernicke’s aphasia are better able to comprehend nouns than produce them. Moreover, we have to bear in mind that brain lesions are never exactly comparable between patients, and a brain lesion does not always result in a specific cognitive deficit.
Recent neuroimaging experiments showed that Wernicke’s and Broca’s areas are both activated during language comprehension tasks. Contrary to what was previously thought, understanding and using language require the activation of a broader network of brain regions. To reconcile these findings, Dr. Strijkers presented two main models that attempt a more comprehensive and integrative approach to language.
Dual-stream of speech
The first approach characterizes the comprehension and production of language in terms of streams rather than brain areas (Hickok & Poeppel, 2004). When listening to someone talking, we are not passively receiving auditory information; rather, we use our knowledge of the world to identify what we hear. This model describes the functional anatomy of language and states that speech perception begins with auditory cortex activity, which then diverges into two processing streams:
- a ventral stream: involved in mapping sound onto meaning (speech comprehension),
- a dorsal stream: involved in mapping sound onto articulatory-based representations (speech production).
In contrast, other researchers argue in favor of a dynamic view according to which words are processed by distributed neuronal assemblies whose cortical topographies reflect word semantics. This idea comes from the finding that similar brain activity was observed when participants performed physical movements and when they passively read action words corresponding to these movements. For example, the brain region activated when participants read the word “kick” overlapped with the brain region activated for foot movements (see the image below; Hauk et al., 2004). In this model, it is argued that motor and premotor regions of the brain are activated because they form part of the word’s meaning.
From seriality towards interactivity
In speech production, a classical view holds that producing language requires going through different steps in a sequential and hierarchical manner. We first conceptualize what we want to utter, then select the corresponding lexical item and its semantic representation, next retrieve its phonological form, finally arriving at a sequence of articulatory movements. This view assumes that each linguistic component is activated independently, with no interaction between components, and it does not account for the phonological and word-substitution errors often observed in language.
In contrast, parallel processing of the building blocks of language (semantic, lexical, syntactic and phonological information) is supported by multiple neuroimaging studies. For example, a recent EEG study found rapid and parallel dynamics of language processing in a naming task compared to a passive listening task. The data suggest that lexico-semantic and phonological word knowledge are manifested simultaneously in the brain and recruit the same integrated cell assemblies across production and perception (Fairs et al., 2021). This parallel and distributed model of language production and perception is based on the well-known Hebbian theory that “Cells that fire together, wire together”.
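The Hebbian principle mentioned above can be illustrated with a minimal sketch (illustrative only, not the model from the lectures; the update rule, unit labels, and learning rate are assumptions for the example). Units that are repeatedly co-active strengthen their mutual connections, binding them into one distributed assembly:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """One Hebbian step: connections between co-active units grow
    in proportion to the product of their activities."""
    return weights + lr * np.outer(post, pre)

# Toy "word assembly": units 0-1 stand for a word's phonological form,
# units 2-3 for its meaning; all fire together whenever the word is used.
activity = np.array([1.0, 1.0, 1.0, 1.0])
w = np.zeros((4, 4))
for _ in range(5):
    w = hebbian_update(w, activity, activity)

# After repeated co-activation, "sound" and "meaning" units are
# strongly interconnected, forming a single distributed assembly.
print(w[0, 2])  # weight from a meaning unit to a sound unit
```

In such a scheme, the phonological and semantic parts of the assembly are not separate serial stages: activating any part tends to ignite the whole assembly, which is one way to picture the parallel dynamics reported by Fairs et al. (2021).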
Production-Perception Integration and Conversational Dynamics
Recently, a growing number of studies have investigated the production and perception of language together, as the two are used simultaneously during our daily interactions. In conversation, we automatically align with our interlocutor at different linguistic levels (see the blog post on Prof. Martin Pickering’s lecture). As such, a complete model of language should be able to explain speaking and listening processes concurrently.
The take-home message from Dr. Strijkers’ lectures is that a cognitive ability is rarely confined to a specific brain region. Language production and comprehension provide a telling example of how widespread the neural networks underlying human cognitive functions can be.
Author: Salomé Antoine, ESR4, @SalomeAntoine15
If you want to know more about our projects and the ESRs working on them, please look under the Training tab.
Fairs, A., Michelas, A., Dufour, S., & Strijkers, K. (2021). The same ultra-rapid parallel brain dynamics underpin the production and perception of speech. bioRxiv.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41(2), 301-307.
Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition, 92(1-2), 67-99.
Featured image by Salomé Antoine