Neuroscience
What happens when two brains talk to each other?

In our early studies of simultaneous cross-brain interactions during face-to-face communication between pairs of healthy adults, dyads alternated between talking and listening about pictures of objects in 15-second epochs. These epochs were structured and determined the turns between speaking and listening. Neural activity from signals acquired under monologue/dialogue and face-to-face/occluded conditions was compared for pairs of subjects. Run times of three minutes were partitioned into twelve 15-second epochs. The hypothesis was that regions of the brain associated with talking and listening would show increased within-brain and cross-brain synchrony during dialogue compared to monologue. For each epoch, a single picture of an object was presented on a monitor and viewed by each subject. Tasks were run under face-to-face conditions and occluded-face conditions (in which subjects had no view of their partner).

Structured Monologue Task: In the first instance, subject 1 identifies the pictured object and provides a spoken narrative that relates to the object. Subject 2 listens but does not respond. The next epoch is cued by the presentation of a new picture. Subject 2 names the object and provides a spoken narrative about it while subject 1 listens. This exchange of talking and listening continues for 3 min and is illustrated in Fig. 5.

Structured Dialogue Task: The structured dialogue task is identical to the structured monologue task except that the speaker includes a response to the narrative of the previous speaker. The expectation is that the dialogue condition will reveal upregulation of language systems during the face-to-face condition due to variations in the intensity of the dynamic interactions.
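As a purely illustrative sketch of this timing structure (not code from the study), the alternating 15-second epochs can be written as a pair of boxcar regressors; the sampling rate and variable names below are assumptions.

```python
import numpy as np

# Twelve 15-second epochs make up one 3-minute run; the speaking role
# alternates between subject 1 and subject 2 on successive epochs.
fs = 7.8                                   # sampling rate in Hz (assumed)
epoch_s, n_epochs = 15, 12
samples_per_epoch = int(epoch_s * fs)

# 1.0 while subject 1 speaks (subject 2 listens), 0.0 on the reverse epochs.
s1_speaks = np.concatenate(
    [np.full(samples_per_epoch, float(epoch % 2 == 0)) for epoch in range(n_epochs)]
)
s2_speaks = 1.0 - s1_speaks                # the partner's role is the mirror image
print(s1_speaks.size / fs)                 # ~180 s, i.e. the 3-minute run
```

Regressors of this form can serve as the task term in the single-brain GLM and PPI analyses described below, or simply mark the epochs over which talking and listening signals are compared.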
Fig. 6 illustrates the anti-correlation between signals originating from regions sensitive to talking and listening for both subjects, S1 and S2. The deOxyHb and OxyHb signals are anti-correlated within each channel for each subject, as expected for fNIRS signals (see the absorption spectra and example above).
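That expectation is easy to check numerically; the sketch below uses synthetic traces and hypothetical variable names, not data from the study.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two time courses; values near -1
    indicate the expected anti-correlation."""
    return np.corrcoef(a, b)[0, 1]

# Synthetic, roughly mirror-image traces standing in for the OxyHb and
# deOxyHb time courses of one channel in one subject.
rng = np.random.default_rng(1)
oxy = np.sin(np.linspace(0, 12 * np.pi, 1400)) + 0.3 * rng.standard_normal(1400)
deoxy = -0.4 * oxy + 0.2 * rng.standard_normal(1400)
print(pearson_r(oxy, deoxy))   # strongly negative, as expected for fNIRS
```

The same correlation applied to homologous channels of the two subjects, rather than to the two chromophores of one subject, indexes the between-subject anti-correlation produced by their alternating roles.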
Within-brain functional connectivity during dialogue > monologue: General Linear Model (GLM) and Psychophysiological Interaction (PPI) analyses.

Analyses aimed at understanding conventional task-related, single-brain functional connectivity effects confirm the neural salience of dialogue in face-to-face interaction. A measure of functional connectivity between remote regions of the brain shows that synchrony is increased during dialogue compared to monologue. In particular, a psychophysiological interaction (PPI; Friston et al., 1997) analysis, in which the fusiform gyrus, a face-sensitive region of the brain (Kanwisher et al., 1997), is selected as the seed, confirms that dialogue during face-to-face gaze increases the strength of neural covariations between Wernicke's and Broca's Areas (Fig. 7). The findings confirm expectations of the canonical language system, with increased connectivity between Broca's and Wernicke's Areas during face-to-face dialogue.
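The core of a PPI analysis can be stated in a few lines of regression. The following is a minimal sketch of that idea, not the pipeline used in the study; the function name, the plain least-squares fit, and the omission of HRF convolution and nuisance regressors are all simplifying assumptions.

```python
import numpy as np

def ppi_interaction_beta(target, seed, task):
    """Minimal PPI regression: does seed-target coupling change with the task?

    target : 1-D time course of a target channel (e.g. over Broca's Area)
    seed   : 1-D time course of the seed channel (e.g. over fusiform gyrus)
    task   : 1-D psychological regressor (e.g. 1 = dialogue, 0 = monologue)
    Returns the coefficient on the interaction (PPI) regressor.
    """
    seed_c = seed - seed.mean()          # physiological regressor, centred
    task_c = task - task.mean()          # psychological regressor, centred
    ppi = seed_c * task_c                # the psychophysiological interaction
    X = np.column_stack([np.ones_like(task_c), task_c, seed_c, ppi])
    betas, *_ = np.linalg.lstsq(X, target, rcond=None)
    return betas[-1]
```

A reliably positive interaction coefficient for a fusiform seed and targets over Broca's and Wernicke's Areas, tested across subject pairs, corresponds to the dialogue > monologue connectivity increase reported here.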
Coherence Across Brains during dialogue > monologue: Cross-brain coherence (using wavelet comparisons) to investigate brain-to-brain interactions.

The internal (within-brain) functional connectivity findings predict that these regions will also resonate across brains during face-to-face conditions. Cross-brain coherence (Fig. 8A) for the dialogue (red) and monologue (blue) conditions is plotted against wavelet kernels from the decomposed signals acquired at each channel. All possible pairs of brain regions across the two brains were considered in an unbiased manner. Significant differences in brain-to-brain coherence between the dialogue and monologue conditions were found only for the Broca-Wernicke pair of regions, for kernel ranges centered around 6.34 s (x-axis). Cross-brain coherence between putative functions of language production (Broca's Area) and language reception (Wernicke's Area) is consistent with these findings and with expectations based on current understanding of these areas (Fig. 8B) (Jiang et al., 2012).
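For readers who want a concrete picture of what the coherence measure computes, here is a deliberately simplified, numpy-only sketch of wavelet transform coherence between one channel from each subject. It is not the analysis code used in the study: the Morlet parameter, smoothing window, sampling rate, and signal names are all assumptions.

```python
import numpy as np

def morlet_cwt(x, periods, dt, w0=6.0):
    """Complex Morlet wavelet transform; for w0 near 6 the scale is close to
    the Fourier period, so `periods` (in seconds) is used directly as the scale."""
    out = np.empty((len(periods), x.size), dtype=complex)
    for i, s in enumerate(periods):
        half = int(4 * s / dt)                      # ~4 standard deviations of support
        t = np.arange(-half, half + 1) * dt
        psi = (np.pi ** -0.25) * np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2)
        psi *= np.sqrt(dt / s)                      # keep energy comparable across scales
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

def smooth_time(a, width):
    """Boxcar smoothing along time (without it, coherence is trivially 1)."""
    k = np.ones(width) / width
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = np.convolve(a[i], k, mode="same")
    return out

def wavelet_coherence(x, y, periods, dt, width=64):
    """Magnitude-squared wavelet coherence between two signals, per period and time."""
    Wx, Wy = morlet_cwt(x, periods, dt), morlet_cwt(y, periods, dt)
    Sxy = smooth_time(Wx * np.conj(Wy), width)
    Sxx = smooth_time(np.abs(Wx) ** 2, width)
    Syy = smooth_time(np.abs(Wy) ** 2, width)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

# Toy example: two noisy signals sharing a component with a ~6.34 s period,
# standing in for one subject's Broca-area channel and the partner's
# Wernicke-area channel (all names and the sampling rate are assumptions).
dt = 0.128
t = np.arange(0, 180, dt)
rng = np.random.default_rng(0)
s1 = np.sin(2 * np.pi * t / 6.34) + rng.standard_normal(t.size)
s2 = np.sin(2 * np.pi * t / 6.34 + 0.4) + rng.standard_normal(t.size)
periods = np.linspace(2.0, 20.0, 25)
wtc = wavelet_coherence(s1, s2, periods, dt)
print(periods[wtc.mean(axis=1).argmax()])   # peak coherence should sit near 6.34 s
```

Averaging such coherence values over dialogue versus monologue epochs for every cross-brain channel pair, and testing the difference, is the spirit of the comparison summarized in Fig. 8.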
Fig. 5 Monologue and dialogue paradigms
Fig. 6 fNIRS signals for single channels: 12, dorsolateral prefrontal cortex (DLPFC, top row), and 18, frontopolar cortex (bottom row), show anti-correlated signals from homologous locations of two interacting subjects, S1 and S2, during speaking and listening. The graphs show average signals for OxyHb (middle column) and deOxyHb (right column) and demonstrate the expected anti-correlation between subjects in the pairs, corresponding to the alternating roles of the subjects (listening versus talking), and the anti-correlation between OxyHb and deOxyHb signals.

Fig. 7 Within-brain functional connectivity (PPI) during dialogue > monologue and mutual face-to-face gaze. The seed is the fusiform gyrus (green) and connected regions (p ≤ 0.05) are Broca's Area (-55, 20, 16) and Wernicke's Area (-48, -36, 40), deOxyHb signals. (Hirsch, J., Noah, A., Zhang, X., Yahil, S., Lapborisuth, P., & Biriotti, M. (2014, October). Neural specialization for interpersonal communication within dorsolateral prefrontal cortex: A NIRS investigation. Presentation at the Annual Meeting of the Society for Neuroscience, Chicago, Illinois, USA.)