Brain and Language

Volume 114, Issue 3, September 2010, Pages 180–192

The neural correlates of highly iconic structures and topographic discourse in French Sign Language as observed in six hearing native signers

https://doi.org/10.1016/j.bandl.2010.05.003

Abstract

“Highly iconic” structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing-space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. In both languages, the Topo texts activated several regions involved in mental navigation and spatial working memory. No correlate specific to LSF spatial discourse was found. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, consistent with the presence of signing-space structure in both conditions. Motion-processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers.

Introduction

The neural bases of the signed languages used by deaf communities around the world have been studied for several years, probing many different aspects, such as the similarities and differences between spoken and signed languages during comprehension and generation (Bavelier et al., 1998, Braun et al., 2001, Emmorey et al., 2007, MacSweeney, Woll, Campbell, McGuire, et al., 2002a), the effect of the syntactic use of space and other specific features of signed languages (Campbell, 2003, Emmorey et al., 2002, Emmorey et al., 2004, MacSweeney et al., 2002b), the relationships with the neural networks involved in action observation or non-linguistic gesture comprehension (Corina and Knapp, 2008, Husain et al., 2009, MacSweeney et al., 2004), or the plastic changes associated with deafness and sign-language expertise (Newman et al., 2002, Sadato et al., 2004). The present functional magnetic resonance imaging (fMRI) study explores the neural bases of two particular aspects of Sign Language discourse that are unavailable to spoken languages: (1) the use of signing-space and spatial-classifier signs to show the topographical relationships between objects and (2) highly iconic structures, such as situational and personal transfers, which mainly occur during narratives and allow the narrator to represent a previously experienced or fictional event in the signing-space via a “transfer” process. These transfers are “the visible traces of cognitive operations, which consist of transferring the signer’s conceptualization of the real world into the four-dimensional world of signed discourse (the three dimensions of space plus the dimension of time)” (translated from Sallandre, 2007, p. 108).

The first developmental linguistics studies that took iconicity into account did not find an effect of iconicity on the acquisition of vocabulary (Orlansky & Bonvillian, 1984), pronouns (Petitto, 1987), or grammar (Bellugi & Klima, 1982). However, more recent research (and theories, e.g., Taub, 2001) has led to a reappraisal of this question. Vinson and collaborators (2008) found that the age of acquisition of vocabulary correlated with iconicity. Several different groups have reported an effect of vocabulary iconicity on cognitive processes (Courtin, 1997, Ormel et al., 2009, Thompson et al., 2009, Vigliocco et al., 2005). Furthermore, the effect of iconicity may not be restricted to vocabulary (Schick, 2006): effects of iconicity have also been reported on the emergence of classifier constructions (Slobin et al., 2003 – classifiers involve a kind of iconicity, see below) and of verb agreement (Casey, 2003).

Some authors have also looked for the neural bases of iconicity as present in vocabulary, classifier signs, topographic representations of space, and at the sentence level (Emmorey et al., 2002, Emmorey et al., 2004, MacSweeney et al., 2002b; for a case study of Sign Language aphasia with preserved pantomime production in a left-lesioned deaf signer, see Corina et al., 1992). Thus, iconicity is now widely described and addressed in the linguistic, cognitive, and neuroscience literature.

In the present paper, we focus on the neuroanatomy of iconicity at the discourse level, which, to the best of our knowledge, has not yet been addressed. We first outline a linguistic theory, proposed by Cuxac (2000), that accounts for iconicity at the sentence and discourse levels.

Highly iconic structures, initially described by the French linguist Christian Cuxac, show similarities across different Sign Languages and are employed when people using different Sign Languages happen to interact (Cuxac, 1997). They are also frequently used in daily conversation. More precisely, Cuxac and his colleagues, working on French Sign Language (LSF, Langue des Signes Française), have distinguished up to 20 different linguistic structures (Cuxac, 1993, Sallandre, 2003), which they gathered under the generic label of “highly iconic structures” (English for “structures de grande iconicité”), with different components such as personal transfers, situational transfers, or double transfers (Sallandre, 2003).

Within the highly iconic structures framework, personal transfers occur during discourse, after the construction of the signing-space and spatial mapping processes, to focus on parts of the telling that the signer wants to stress or to present in action. Spatial mapping, which is “the cohesive use of signing-space at a discourse level rather than (…) the topographic use of space to describe a spatial scene” (Emmorey 2002, p. 69), is often a prerequisite for personal transfers.

A personal transfer in Cuxac’s theory corresponds to a “referential shift”, “role taking”, or “role shift” in other linguistic frameworks (Sallandre, 2006). Referential shift is described as a narrative technique used to express direct quotation or to convey action from a particular point of view (Bahan and Petitto, 1980, Emmorey, 2002, Padden, 1986). A related process can occur in spoken languages when quoting a character, with vocal and facial/body imitation. In Sign Languages, however, as Poulin and Miller (1995, p. 121) state, “the use of referential shift is not limited to reported speech. In effect, with referential shifting, the signer can also report actions (…), states (…), or thoughts.” For example, Liddell and Metzger (1998) write of reported actions to refer to actions that are described during referential shifts, that is, from the characters’ point of view. Several authors have provided examples illustrating the various uses of referential shift (Engberg-Pedersen, 1995, Lillo-Martin, 1995, Mather and Winston, 1998, Quer, 2005, Roy, 1989, Sallandre, 2003, Sallandre, 2007). However, a major difference between Cuxac’s theory and others lies in the status of genuine linguistic devices that Cuxac attributed to highly iconic structures as early as 1983, whereas many other authors treat these devices as gestures, whose “significance (…) as part of discourse has been minimized in linguistic theory” (Liddell & Metzger, 1998, p. 658).

During personal transfers, the signer expresses the “state of mind” of the character (living entities such as a human or an animal – as already exemplified in Roy, 1989 – or personified inanimate objects such as a planet or a golf ball, cf. Cuxac, 2000). Adopting the perspective of the character during a personal transfer involves a shift of the signing-space. The narrator indicates this shift by adopting a slightly different orientation or by a quick change in gaze direction and a different facial expression (Emmorey and Reilly, 1998, Engberg-Pedersen, 1995, Roy, 1989). The “signing style” is modified accordingly. In particular, an amplification of the different elements that contribute to Sign Language prosody can be observed, including modifications of the rhythmic patterns of sign production (Braem, 1999), body movements (van der Kooij, Crasborn, & Emmerik, 2006), and linguistic and emotional or attitudinal facial expressions (Nespor & Sandler, 1999).

A “situational transfer” could be used to express, in Sign Language, a sentence like “the horse leaps over the fence”. The situational transfer is achieved by first pointing to a given place in sign space (“here, a fence”) and then moving the forearm to this place: the fence is symbolized by, or transferred to, the narrator’s forearm. The horse is then represented by the other hand using a classifier shape, and that hand “jumps” over the other arm (the fence): the action is reported in a “highly iconic” way (for thorough details on highly iconic structure theory, see Cuxac, 2000). Historically, highly iconic structures for depicting constructed dialogs and actions have received more attention in linguistic analyses of LSF than of other Sign Languages. This difference between ASL or BSL and LSF linguistic analyses (Sallandre, 2003) likely explains why, to the best of our knowledge, the neural correlates of highly iconic structures, as detailed by Cuxac and colleagues, have not yet been studied with neuroimaging.

Another important, although more familiar, aspect of Sign Languages is that environments are seldom described with spatial prepositions such as “in front of”, “close to”, or “on the right of”. Instead, the signer structures the signing-space topographically so as to directly represent the locations of the different objects. These objects can be represented using classifiers: linguistic structures whose hand shapes specify an object category (e.g., a flat surface, such as a book). The position of the classifier in the signing-space represents the spatial relation between objects (e.g., a book lying to the right of an overturned glass); for this reason, classifiers are also iconic, though in a different way than the highly iconic structures presented above (for more details on classifiers, see Emmorey, 2003). Taylor and Tversky (1992, 1996) reported that English speakers tended to use a survey perspective when describing a large-scale environment (the plan of a town, in their experiment), while they used a route perspective for small-scale environments (a convention center). Emmorey and Falgier (1999), using the same experimental setup, reported that when signers adopt a survey perspective, the signing-space becomes a diagrammatic spatial format (also labeled token space in Liddell, 1995; model space in Schick, 1990). Spatial formats are “the topographic structure of signing-space used to express locations and spatial relations between objects” (Emmorey 2002, p. 92).

Although signed languages share their essential linguistic aspects with spoken languages, they also possess discourse-level linguistic structures (e.g., personal transfers, situational transfers) that are essentially alien to spoken languages and whose neural correlates remain largely unknown. For instance, it is not yet known whether the neural bases of understanding a topographical description differ between spoken and signed languages. So far, the neuroimaging literature on signed language comprehension and generation has shown that signed languages mainly rely on the same set of left-hemispheric supramodal areas as spoken languages (e.g., Broca’s and Wernicke’s areas; Neville et al., 1998, Sakai et al., 2005), but with a modality-specific input pathway consisting of a network of visual processing areas specialized in object, hand, face, and motion analysis, in addition to a bilateral fronto-parietal network of spatial areas (Campbell et al., 2008, MacSweeney et al., 2008). Highly iconic structures (or iconically depicted constructed actions and dialogs) and topographic Sign Language are each likely to modulate the activity of brain networks involved in Sign Language comprehension, and may also recruit additional regions, through their respective reliance on personal/situational transfers and on spatial formats and classifiers.

The neural correlates of the use of sign space in signed languages have already been explored. MacSweeney et al. (2002b) described a continuum in the use of sign space, from abstract referential location to concrete “real world” representations of spatial relationships with the use of classifiers. They compared the fMRI correlates of the comprehension of “topographical” phrases such as “the man put on the hat from the top shelf” versus “non-topographical” phrases, which still included arbitrary referential locations, and found that “topographical” sentences, compared with “non-topographical” sentences, were associated with increased activation in the left inferior parietal lobule, posterior middle temporal gyrus (V5/MT area), inferior frontal gyrus, and parieto-occipital sulcus (MacSweeney et al., 2002b). The involvement of the supra-marginal gyrus was interpreted as reflecting “the necessary precise coordinate mapping of hand shapes, locations and movements”. The orientation, position, and shape information embedded in the classifier sign would necessitate increased processing of hand shape and position (inferior parietal lobule) and motion (V5/MT area), compared with a condition in which the signing-space and classifiers do not reflect the exact topographic layout of the real environment. Regarding the use of spatial-classifiers to express spatial relationships, a comparison of naming spatial relationships with classifier signs against naming spatial relationships with spatial prepositions was carried out in a concomitant PET study by Emmorey et al. (2002). A rightward-asymmetrical activation of the intra-parietal sulcus was found when comparing the expression of spatial relations with classifiers against spatial preposition signs, along with a bilateral activation of the inferior parietal lobule, in a similar locus as the left-hemispheric one found by MacSweeney et al. (2002b). 
The reason for the absence of involvement of the right parietal lobe in this particular study may have been the simplicity of the spatial relationships expressed in the isolated sentences.

In this context, we aimed to record brain activity in such a situation: the verbal depiction of an actual, complex topographical environment. Such a task involves adding successive elements to a constant mental representation as they are uttered (e.g., city maps). This has not yet been studied in the context of signed languages, although the neural correlates of such tasks have already been studied in spoken languages (e.g., Mellet et al., 1996, Mellet et al., 2002). It is not known whether the networks involved in topographical signed and spoken language differ, despite the obvious difference between the two languages when conveying topographical information. Furthermore, highly iconic structures, used when reporting actions, especially during narratives, often depend on a prior spatial mapping process, which bears similarities to the topographic use of sign space (i.e., spatial formats) and thus may involve similar neural systems. In the case of highly iconic structures, narrative comprehension may rely directly on a network of visual areas not involved in standard Sign Language.

To date, no imaging study has specifically looked into the correlates of highly iconic structures or referential shifts, despite the fact that they are frequently encountered. The question of the influence of imagic or diagrammatic iconic phenomena (formational and functional iconicity of signs, discourse-level iconicity with highly iconic structures or spatial formats) on the neural correlates of Sign Languages is an important one (MacSweeney et al., 2008). So far, few studies have described the neural correlates of spatially organized discourse in Sign Languages (Campbell, 2003), and none has tackled the topic of highly iconic structures. Furthermore, personal transfers are a form of third-person perspective-taking, a function that has been the object of several functional imaging studies (David et al., 2006, Ruby and Decety, 2004, Vogeley et al., 2001). They also involve role-taking. Imaging these important functions in the natural context of verbal communication, rather than in artificial experimental situations, may also prove to be of interest.

Here, we used fMRI to compare the networks involved during the comprehension of three different types of LSF texts (videos of a lecture, a tale, and a topographical depiction) that differed with respect to the use of highly iconic structures or topographic language.

Section snippets

Paradigm

The three texts consisted of: (1) an excerpt from a lecture (“Lect LSF” condition), with as little topographic spatialization and as few highly iconic structures as possible (i.e., “frozen” Sign Language); (2) a tale, or text with iconically depicted constructed dialog and action, designed to include a large number of highly iconic structures with both situational transfers and personal transfers (role shifts) between several characters (“Tale LSF” condition); and (3) topographical …

Post-imaging session questionnaires

In the CODAs, the repeated-measures ANOVA on the comprehension scores detected no significant interaction or main effect of Language, but found a significant main effect of verbal content (p = 0.02; mean ± SD: Lect LSF = 7.50 ± 1.96, Tale LSF = 9.16 ± 1.46, Topo LSF = 4.58 ± 2.92; Lect French = 6.90 ± 2.46, Tale French = 7.50 ± 2.34, Topo French = 6.25 ± 3.06). This main effect was primarily driven by lower scores on the Topo LSF and Topo French texts compared with the other texts. Monolinguals tended to have lower …
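The repeated-measures design behind these scores can be illustrated with a minimal one-way repeated-measures ANOVA (main effect of verbal content across the three text types). The per-subject scores below are hypothetical, invented for illustration only; they mimic the pattern of the reported group means but are not the study's data, and only the design (6 subjects × 3 conditions) matches the experiment.

```python
import numpy as np

def rm_anova_one_way(data):
    """One-way repeated-measures ANOVA on a (subjects x conditions) array.

    Returns the F statistic and its degrees of freedom for the
    within-subject factor (here, verbal content).
    """
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    # Between-subjects variability is partitioned out of the error term,
    # which is what makes the design "repeated measures".
    ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_conditions = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_conditions
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_conditions / df_cond) / (ss_error / df_err)
    return f_stat, df_cond, df_err

# Hypothetical comprehension scores (out of 10) for 6 subjects in the
# Lect, Tale, and Topo conditions -- illustrative values, not study data.
scores = np.array([
    [7, 9, 4],
    [8, 10, 5],
    [6, 8, 3],
    [9, 10, 6],
    [7, 9, 5],
    [8, 9, 4],
], dtype=float)

f_stat, df1, df2 = rm_anova_one_way(scores)
print(f"Main effect of verbal content: F({df1}, {df2}) = {f_stat:.2f}")
```

With 6 subjects and 3 conditions, the test has (2, 10) degrees of freedom; the actual study crossed this factor with Language (LSF vs. French) in a 2 × 3 design, which this sketch does not reproduce.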

Discussion

This is the first report of an influence of highly iconic and topographical structures on signed-language networks. We show that in CODAs these structures were associated with a modulation of visual and topographical memory areas. The effect of highly iconic structures (or iconically depicted constructed dialogs and actions) we report is not due to differing types of verbal or visual content, independent of their linguistic value. This is shown by: (1) the absence of an effect of Tale LSF in …

Conclusion

The present study showed that the comprehension of highly iconic structures and topographic discourse, frequently encountered in signed language, is supported by the additional recruitment of visuo-spatial and topographic memory areas in bilingual native signers. The present results also show that the involvement of visual areas by highly iconic structures is specific to signed language, since no activation was observed using a similar narrative in spoken French. The fact that a spatial WM …

Acknowledgments

The authors are grateful to Mélanie Dubois, Stéphane Gorzkowski, Goulven Josse, and Jimmy Leix for their help throughout this research project, and to Guy Perchey for data acquisition.

References (86)

  • T. Ino et al. (2002). Mental navigation in humans is processed in the anterior bank of the parieto-occipital sulcus. Neuroscience Letters.
  • J. Kassubek et al. (2004). Involvement of classical anterior and posterior language areas in Sign Language production, as investigated by 4 T functional magnetic resonance imaging. Neuroscience Letters.
  • E. van der Kooij et al. (2006). Explaining prosodic body leans in Sign Language of the Netherlands: Pragmatics required. Journal of Pragmatics.
  • S.K. Liddell et al. (1998). Gesture in Sign Language discourse. Journal of Pragmatics.
  • M. MacSweeney et al. (2004). Dissociating linguistic and nonlinguistic gestural communication in the brain. NeuroImage.
  • M. MacSweeney et al. (2008). The signing brain: The neurobiology of Sign Language. Trends in Cognitive Sciences.
  • S. McCullough et al. (2005). Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research.
  • E. Mellet et al. (2000). Neural correlates of topographic mental exploration: The impact of route versus survey perspective learning. NeuroImage.
  • D. Papathanassiou et al. (2000). A common language network for comprehension and production: A contribution to the definition of language epicenters with PET. NeuroImage.
  • S. Park et al. (2009). Different roles of the parahippocampal place area (PPA) and retrosplenial cortex (RSC) in panoramic scene perception. NeuroImage.
  • L.A. Petitto (1987). On the autonomy of language and gesture: Evidence from the acquisition of personal pronouns in American Sign Language. Cognition.
  • G. Repovs et al. (2006). The multi-component model of working memory: Explorations in experimental cognitive psychology. Neuroscience.
  • C. Roy. Features of discourse in an American Sign Language lecture.
  • R.I. Rumiati et al. (2004). Neural basis of pantomiming the use of visually presented objects. NeuroImage.
  • D. Schmidt et al. (2007). Visuospatial working memory and changes of the point of view in 3D space. NeuroImage.
  • H.A. Taylor et al. (1992). Spatial mental models derived from survey and route descriptions. Journal of Memory and Language.
  • H.A. Taylor et al. (1996). Perspective in spatial descriptions. Journal of Memory and Language.
  • K. Vogeley et al. (2001). Mind reading: Neural mechanisms of theory of mind and self-perspective. NeuroImage.
  • M. Wallentin et al. (2008). Frontal eye fields involved in shifting frame of reference within working memory for scenes. Neuropsychologia.
  • L. Zago et al. (2002). Distinguishing visuospatial working memory and complex mental calculation areas within the parietal lobes. Neuroscience Letters.
  • Bahan, B., & Petitto, L. (1980). Aspects of rules for character establishment in ASL storytelling. Unpublished, The...
  • L.W. Barsalou (2008). Grounded cognition. Annual Review of Psychology.
  • D. Bavelier et al. (1998). Hemispheric specialization for English and ASL: Left invariance–right variability. NeuroReport.
  • D. Bavelier et al. (2001). Impact of early deafness and early exposure to Sign Language on the cerebral organization for motion processing. Journal of Neuroscience.
  • U. Bellugi et al. (1982). The acquisition of three morphological systems in American Sign Language. Papers and Reports on Child Language Development.
  • P.B. Braem (1999). Rhythmic temporal patterns in the signing of deaf early and late learners of Swiss German Sign Language. Language and Speech.
  • A.R. Braun et al. (2001). The neural organization of discourse: An H₂¹⁵O-PET study of narrative production in English and American Sign Language. Brain.
  • R. Campbell et al. (2008). Sign Language and the brain: A review. Journal of Deaf Studies and Deaf Education.
  • Casey, S. (2003). “Agreement” in gestures and sign languages: The use of directionality to indicate referents involved...
  • L.L. Chao et al. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience.
  • D.P. Corina et al. (2008). Signed language and human action processing. Annals of the New York Academy of Sciences.
  • C. Courtin (1997). Does Sign Language provide deaf children with an abstraction advantage? Evidence from a categorization task. Journal of Deaf Studies and Deaf Education.
  • C. Cuxac (1993). Iconicité des Langues des Signes. Faits de Langues.
¹ Authors contributed equally to this work.
