The fully developed semantic system contains a wealth of information about the objects with which we interact each day. For example, semantic knowledge about an apple might include information about its physical attributes (e.g., red, hard, crunchy), functional attributes (e.g., can be eaten, grows on trees), and encyclopedic facts (e.g., was poisoned to cause Snow White to fall into a deep sleep). While this more general knowledge certainly interacts with personally experienced episodic memories (e.g., “I once found a worm in an apple”) to create a comprehensive representation of “appleness” (e.g., Funnell, 2001; Graham et al., 1999; Moscovitch et al., 2005; Snowden et al., 1994, 1995), a fully integrated concept is one that can be accessed independent of any particular context.

Researchers who study semantic development and semantic disorders have typically focused on either children or adults. As such, examinations of semantic processing in children and adults have not necessarily followed parallel paths. For example, investigations of semantic processing in children fractionate depending on the age of the child. Examinations of prelinguistic children have centered on debates about how concepts develop (e.g., Are concepts innate or learned through experience? Are perceptual and conceptual development really a single entity?), whereas work that examines semantic development in older children naturally includes more information about the role of language. In contrast, investigation of semantic knowledge in adults has been informed by findings of semantic domain-specific impairments in brain-injured patient populations, in combination with findings from behavioral and functional neuroimaging work with healthy adults. The frequent demonstration of disproportionately deficient semantic processing of living versus nonliving common objects (e.g., animals vs. artifacts; Capitani, Laiacona, Mahon & Caramazza, 2003) has focused questions in the cognitive neuropsychological literature, and in work on healthy adults’ semantic processing, on particular categories of objects and the features that distinguish objects within a category, in an effort to discern the nature of semantic organization. Differences between the child and adult literatures are evident not only at the theoretical level, but also at the methodological level. Tasks used to probe questions about semantic processing in the adult literature are “language-laden,” even when testing neuropsychological populations. Picture-based tasks, such as confrontation naming, word–picture matching, and categorization, are often supplemented by asking participants to provide lengthy verbal descriptions of objects, to name objects from definitions, and to make verbal–semantic associations. These tasks are designed to gauge systematically the full breadth of adult semantic knowledge about concepts, with less emphasis on being engaging. Conversely, tasks designed for use with children, particularly young children and infants, must necessarily be less linguistically complex and more activity- or play-based. Even within the pediatric literature, the specific methodologies used to examine these questions must differ based on children’s age and development. For example, preliterate children cannot engage in tasks that involve orthography, and thus all words must be presented aurally.

Despite these differences in viewpoint and methodology, humans tend to develop along a continuum, a fact reflected in commonalities across the child and adult semantic processing literatures, such as the shared emphasis on distinctions between living and nonliving concept processing. A better understanding of the pattern and sequence of semantic development has the potential to increase our knowledge at a theoretical level and to contribute ideas for lexical–semantic therapy across the age spectrum. Therefore, in this review, we endeavor to bring together the potentially complementary research regarding semantic knowledge of objects in children and adults to determine what we can learn from each literature as we move toward the common goal of a comprehensive understanding of semantic organization.

It seemed critical to focus our review with a set of guiding hypotheses in order to provide a rubric under which to examine the evidence. Informing our hypotheses is the set of theories postulating that semantic representations of concrete objects are “grounded” in sensory/motor experience (e.g., Barsalou, 1999). We believe that this is a particularly appropriate starting point for examining semantic development, which begins in infancy, the point in life at which we have our initial experiences with objects. The observation that the foundation of object concepts might be built upon sensory and motor processes began as early as the 1900s with Karl Wernicke (as cited in Eggert, 1977). The more recent contributions of parallel distributed processing models (Allport, 1985; McClelland & Rogers, 2003) and neuroimaging techniques (see Martin, 2007, and Thompson-Schill, 2003, for reviews) have re-enlivened this discussion (Caramazza & Mahon, 2006). Such theories appear to exist along a continuum. The strongest proponents of the view that semantic knowledge is built upon sensory/motor experience propose that semantic processing is “embodied” within sensory/motor systems (Gallese & Lakoff, 2005; Mahon & Caramazza, 2005), which are themselves capable of generating conceptual representations. According to this view, sensory/motor systems construct their own complete concepts without the need for higher-order amodal processing, and these representations can be constructed not only for concrete concepts but also for more abstract concepts, such as love, through the creation of conceptual metaphors that are predicated on concrete concepts (Gallese & Lakoff, 2005). The sensory/motor model of semantic representations of objects proposed by Martin and colleagues (Martin, Ungerleider & Haxby, 2000) offers a moderate formulation of this viewpoint (see Meteyard, Rodriguez Cuadrado, Bahrami & Vigliocco, in press, for a recent comprehensive review of adult-focused findings), in that while semantic representations of concrete objects are considered “grounded” in sensory/motor content, this content is not considered adequate to represent all that we know about objects, such as verbally mediated encyclopedic knowledge (Martin, 2007, p. 304), which is likely particularly salient for abstract concepts. Martin and colleagues proposed that the “core properties” or “semantic primitives” of objects that provide for “implicit and automatic” (Martin, 2007, p. 304) processing of meaning are built upon the sensory/motor processes engaged during our earliest experiences with those objects. This view generates relatively specific hypotheses about the neuroanatomical substrates of sensory/motor-based semantic features, typically regarded as information about the form and function/action of objects. With respect to the organization of concepts, this model elaborates on the “differential weighting hypothesis” of Warrington and Shallice (1984), which posits specific relationships between sensory/motor-based semantic features and those concepts (e.g., living vs. nonliving) for which they might be most salient (e.g., Gainotti, 2007; Martin & Chao, 2001; Martin et al., 2000).

It is not our intention to provide a complete accounting of all aspects of semantic development, an endeavor we believe is better served in a textbook or even a multivolume series. Neither is it our aim to pit the various permutations of grounded theories of semantic organization against amodal theories; this has been considered quite comprehensively elsewhere (e.g., Chatterjee, 2010; Gainotti, 2007; Mahon & Caramazza, 2008; Meteyard et al., in press). We are particularly interested in the featural basis of the semantic development of concrete concepts as an active process of constructing and maintaining knowledge of object meaning, which provides the foundation for asking applied questions about semantic knowledge and how this knowledge can facilitate word learning, word retrieval, and lexical–semantic rehabilitation across the lifespan. For this purpose, we will examine four hypotheses, utilizing both behavioral and neuroanatomical evidence: (1) Children build semantic representations based on their early sensory/motor experiences with concrete objects. (2) Knowledge of “core” semantic features supports word learning. (3) Relationships between semantic knowledge and concept names set down in childhood continue to be salient in the adult system. (4) Degradation of semantic knowledge can impair name retrieval in patterns that can be predicted on the basis of “core” lexical–semantic relationships. Understanding the evidence in light of these hypotheses helps us formulate new questions about semantic processing at each stage of development.

A sensory/motor perspective on the development of concrete concepts

Hypothesis 1: children build semantic representations based on their early sensory/motor experiences with concrete objects

One point that continues to be the focus of much debate is whether early semantic distinctions are innate or are built through early experiences that become permanent traces within the sensory/motor system. This debate is important because it sets the stage for how we presume human beings organize semantic information. The existence of innate concepts would imply that humans have predetermined neuroanatomical structures dedicated to semantics and would lessen the need for learning through sensory/motor experiences. Although it is difficult to confirm the presence of innate concepts in infants, there is evidence that they have innate perceptual abilities. Some researchers argue that an infant should be able to use perceptual (Quinn & Eimas, 2000) or perceptual and conceptual (Mandler, 2000) abilities to make semantic distinctions. One basic semantic distinction is that between nonliving and living entities, the latter likely further fractionating into the tripartite distinction animal/plant/human (e.g., Caramazza & Mahon, 2006). Evidence suggests that this distinction is made at the earliest stages of development, in infancy. For example, infants do not categorize plant life with animals (Mandler, 2000). It is not clear, however, why this is so. Are these innate categories, created through evolutionary imperatives, as hypothesized in domain-based models of semantic organization (Caramazza & Shelton, 1998; Mandler, 2002; Santos & Caramazza, 2002), or could experience really be driving categorical differentiation in infants?

Even if infants are hardwired to discriminate living from nonliving entities, the package they come with is not flawless, and they appear to rely on maturation or additional experience to form more accurate conceptual representations. Infants will sometimes make a human/other distinction, even when a principle should apply to both humans and the other group. For example, Kuhlmeier, Bloom and Wynn (2004) investigated how 5-month-old infants classify stimuli on the basis of the principle of continuous motion. The infants showed an expectation for continuous motion in nonliving, inanimate objects, but not for human beings, to whom the principle should apply. The researchers interpreted the findings as evidence that infants were driven to make a human/other distinction, even at the expense of incorrectly applying the principle of continuous motion. The authors acknowledged that it was not clear exactly what distinction the infants were making between humans and others. However, they suggested that the distinction might be between social, goal-oriented humans and objects that may be able to move but are not alive per se.

Perhaps, then, rather than referencing an inherent living/nonliving distinction, infants are making perceptual/motor distinctions about these different objects. In other words, the movement patterns of living things are perceptually (and functionally) distinct from the movement patterns of nonliving things, a notion that is echoed in both neuropsychological and neuroanatomical models of adult semantic processing (e.g., Beauchamp, Lee, Haxby & Martin, 2002; Tyler, Moss, Durrant-Peatfield & Levy, 2000). In fact, in a recent study using a picture-matching task, both children and adults demonstrated faster reaction times when identifying “contextual/functional” relationships for manipulable objects, whereas reaction times were faster when identifying “perceptual similarity-based” relations for nonmanipulable objects, particularly those from living categories (Kalénine & Bonthoux, 2008).

Additional associations exist as well: most things that move in a certain way also share gross perceptual characteristics, such as eyes (Tyler et al., 2000). This perspective would explain why children tend to limit the living/nonliving distinction to animals, and not plants, which share neither visual nor motor features with animals (Mandler, 2000). Rakison and Poulin-Dubois (2001) reviewed data for evidence of physical and psychological causality (including types of movement) that might give children insight into an animate/inanimate distinction. Infants did not begin to show evidence of using most of these properties until at least 6 months of age. The researchers concluded that children develop the ability to make these distinctions over time, providing evidence that knowledge of the core properties is developed with experience.

If perceptual processes are invoked as infants develop semantic representations, is this perceptual processing sufficient (Quinn & Eimas, 2000), or is this a dual process, active in tandem with conceptual processes (Mandler, 2000)? Mandler (1988) argued that semantic organization likely begins with a commingling of perceptual input and conceptual growth, although the growth is largely parallel. Using the semantic features associated with the concept “apple” as an example, we know that a newborn infant is not capable of incorporating semantic features like “crunchy” or “edible,” due to both conceptual and perceptual constraints. Simply put, a newborn with limited visual processing abilities, an inability to eat solid food, and the lack of motor coordination to grasp an apple cannot perceive much about this fruit, and is unlikely to have even the construct of “apple” or “fruit” ready for sorting when his or her perceptual abilities come online. However, while the infant’s perceptual abilities are increasing, so are his/her conceptual abilities. Theoretically, the infant will use perceptual analysis skills and informational schema to organize perceptual information about the apple, including its color, scent, and shape. However, according to Mandler (1988), in order to truly have a concept of “apple,” the infant must learn the function of the apple—in this case, as something to be eaten. In addition, as Mandler (2000) stated, true concepts are those that can be called to mind even without the presence of the actual object.

The single-processing view, as defended by Quinn and Eimas (2000), is different primarily in its repudiation of a separate conceptual system. Quinn and Eimas disagreed with the premise that the perceptual analysis process that Mandler described as linking perceptual and conceptual information need be a separate and distinct process from the rest of the perceptual system. They also disagreed with Mandler on what, precisely, constitutes a concept. Whereas Mandler was more likely to argue that early concepts include information that is not easily perceivable and related to function, Quinn and Eimas argued that early concepts are built primarily on observable information. One example of this phenomenon is the development of categorical prototypes. Though Carey (2000) and others have argued for core concepts within categories that are inherent, with properties that are “not perceptually available” (p. 38), evidence suggests that prototypes for many concepts are based on environmentally specific experience, hence are not inherent. Clark (2004) suggested that a universal ability to perceive concepts is trained to more culturally specific ways of looking at the world. For example, the reason that a child from Wisconsin has an apple as a prototype for “fruit,” but a child from Mexico envisions a mango as her prototype, is based on experience.

Additional support for the “experiential basis” of conceptual processing comes from Eimas and Quinn (1994), who provided evidence that children can both broaden and narrow their perceptual sensitivity given experience. This flexibility may contribute to the development of categorical organizational levels and the relations among them. Typically, concepts are thought to fall into three categories: subordinate, basic-level, and superordinate (e.g., Golden Retriever, dog, and animal). Evidence seems to converge on the fact that children learn both conceptually based and perceptually based categories starting at the superordinate level. Relative to feature processing, one could conclude that less discrimination is required at the superordinate level than at the subordinate level. To continue with our dog example, one only needs to notice that both cats and dogs have faces and move on their own to group them as “animals.” To make the distinction between cats and dogs requires somewhat more refined observation of differences in movement patterns, behavior, and facial characteristics. At the finest-grained level, one needs to notice quite subtle details of coloring, shape, and texture to discriminate a Golden Retriever from a Yellow Lab. Quinn (2004) found that 3- to 4-month-olds who were presented with two types of cats and dogs were able to make distinctions at the basic level (e.g., cats vs. dogs), but not at the subordinate level (e.g., different types of cats). However, 6- to 7-month-olds were able to make this subordinate-level distinction. Given his earlier (Quinn & Eimas, 2000) evidence that 2-month-olds can make broad-based categorical distinctions (e.g., living/nonliving), Quinn claimed that perceptual development proceeds from broad to narrow categories and that children as young as (but not younger than) 6 months are capable of making subordinate-level distinctions.

The sensory/motor theory’s focus on experience is bolstered by this evidence of environment-specific differences in concept formation. Such models do not suggest that people have innate prototypes for any particular category. What might be considered “innate” is the relationship between the neural/cognitive perceptual and motor capabilities and the formation of concepts. This would be particularly true for those “core” features that are directly linked with perceptual/motor properties (e.g., form and function), as opposed to more verbally mediated, encyclopedic features (e.g., the lion is the king of beasts). Experience, then, is what drives early concept formation, such that semantic features can ultimately be activated for semantic processing in the absence of an actual physical stimulus once the representation has stabilized.

Experience may also dictate how concepts are related to one another. A large literature has addressed how children first group items: taxonomically (e.g., apple–banana) or thematically (e.g., apple–pie). Several researchers have pointed out that taxonomic organization might be based more on perceptual input, while thematic organization might be based more on conceptual input (e.g., Kalénine & Bonthoux, 2008). Organization and enrichment of one’s semantic network likely happens through learning based on both perceptual and conceptual organization schemes from within the first year of life. Hammer, Diesendruck, Weinshall and Hochstein (2009) have shown, through computer modeling, evidence for a developmental continuum of semantic organization across the lifespan. The system is loaded with perceptual information early on, which is most tied to concrete objects. However, Hammer et al. pointed out functional differences between perceptual and conceptual organization: mainly, that perceptual information is more readily observable but tends to provide less information, whereas conceptual information is less readily available but carries more detailed information. Thus, it may be that concrete objects are easier to perceive or categorize taxonomically early on, whereas thematic associations and concepts that are more readily distinguished on the basis of less observable information (e.g., abstract concepts) are filled in later, with the emergence of language.
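
To make the taxonomic/thematic contrast concrete, the sketch below computes simple feature-overlap similarity over a handful of invented feature sets. It is a minimal illustration of the idea that readily observable perceptual features can support taxonomic groupings on their own, not a reimplementation of Hammer et al.’s (2009) model; all concepts, features, and values are hypothetical.

```python
# Minimal sketch: perceptual feature overlap supports taxonomic pairings
# (apple-banana), but thematic pairings (apple-pie) depend on relational
# knowledge that simple feature overlap does not capture.
# All feature assignments here are invented for illustration.

concepts = {
    "apple":  {"red", "round", "sweet", "edible", "grows_on_trees", "has_skin"},
    "banana": {"yellow", "curved", "sweet", "edible", "grows_on_trees", "has_skin"},
    "pie":    {"round", "sweet", "edible", "baked", "dessert"},
}

def overlap(a, b):
    """Jaccard similarity over binary feature sets."""
    return len(concepts[a] & concepts[b]) / len(concepts[a] | concepts[b])

print(round(overlap("apple", "banana"), 2))  # 0.5  -> taxonomic pairing wins
print(round(overlap("apple", "pie"), 2))     # 0.38 -> the thematic link (apples
                                             # go into pies) is invisible to
                                             # perceptual overlap alone
```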

Hypothesis 2: knowledge of “core” semantic features supports word learning

Semantic knowledge is intertwined with language early on, with words first being understood within the first year of life, as evidenced by lexical priming and semantic integration, which have been observed at 14 months (Friedrich & Friederici, 2005). Friedrich and Friederici (2005) recorded ERPs from infants who were exposed to pictures of known objects along with auditorily presented words that either named the picture or were incongruous with it. The researchers found differing activation in expected brain areas between congruous and incongruous conditions, which they interpreted as evidence of priming. Specifically, the picture generated lexical expectations for the word. These infants also showed evidence of “N400-like incongruities” in activation, which the researchers interpreted as evidence for semantic integration (Friedrich & Friederici, 2005, p. 655).

Other researchers have documented the interaction between language and how children perceive objects (Smith, 2003; Yoshida & Smith, 2005), which contributes to category development. For example, Smith (2003) showed that young children (17–25 months) changed in their ability to recognize shapes, moving from perceiving more concrete images to being able to perceive three-dimensional caricatures of objects. The ability to perceive more abstract images was linked to more advanced conceptual representations. This increased ability was also related to the number of object names in a child’s vocabulary, and Smith proposed that the ability to better perceive the category might have helped spur the linguistic growth. This relationship appears to work in the other direction, as well. Yoshida and Smith (2005) found that 2-year-olds were able to learn novel lexical categories (e.g., objects matched on shape or material) more quickly when presented with informative linguistic cues than when the linguistic information was uninformative. For example, children were faster to learn a material-based category if linguistic information cued the child to whether the word was a count or mass noun (e.g., Here is some ______ vs. Here is a ______) than when the linguistic information was uninformative (e.g., Here is _____). Yoshida and Smith went so far as to say that “by teaching associations between words and perceptual properties, one will change not only what is known about the words, but also what is known about the correlations among the perceptual properties” (p. 94), thus forming the basis for enhanced semantic development.

Other empirical evidence supports the role of language as a driving force in conceptual organization. Booth, Waxman and Huang (2005) found that children extended words to novel objects differently, depending on whether or not they had been told that the first object was animate. However, the ability to use language to influence conceptual development emerges gradually. Nazzi and Gopnik (2001) found that while 20-month-olds could use language to help categorize objects, 16-month-olds were only able to rely on visual information. Additional work has shown a relation between the size of children’s vocabularies and their ability to categorize (Smith, 2003). The general trend is that children with better vocabularies can make more advanced categorizations.

Whereas Smith’s (2003) work has shown how language can influence perceptual, and thus semantic, development, Capone and McGregor (2005) showed that depth of semantic encoding facilitates word retrieval even in toddlers. They contrasted novel word learning in 2-year-olds in conditions that varied the amount and type of semantic cues children were given in the initial encoding phase. Semantic cues were provided through the use of gestures. The toddlers showed superior fast mapping (learning items after 1–3 exposures), slow mapping (learning items after >3 exposures), and word retrieval for items that were presented with enhanced semantic cues, as compared to items in the control condition. Although children showed more facility with learning when exposed to cues that focused on shape as compared to functional characteristics, either type of information was preferable to the control condition. McGregor, Friedman, Reilly and Newman (2002a) directly investigated the relationship between semantic representations and naming in older children. In their studies, 4- and 5-year-old children were shown pictures of objects and asked to name the objects. They were later asked to both describe and draw the objects. There was a relationship between how fully elaborated a child’s semantic knowledge of the object was and the child’s accuracy in naming. The range of semantic knowledge associated with a word’s label was thought to be representative of the process of fully encoding a word. In other words, although a child could say a word, he or she might not fully understand the word. The child might be adding levels of semantic knowledge with each new experience with the word and/or the object/concept it represents. This idea is further supported by a case study of an adolescent with Williams syndrome who had unimpaired vocabulary but distinctly impaired semantic knowledge, particularly for nonmanipulable objects (Robinson & Temple, 2009).

Relative to the notion of what Martin and colleagues refer to as “core properties” (e.g., visual/perceptual, visual motion, manipulability; see, e.g., Martin, 2007; Martin et al., 2000), which typically refer to the semantic features associated with sensory/motor experience with the concept, the majority of information provided by children in the McGregor et al. (2002a) study described “functional” (use or manipulability) or “physical” (including visual/perceptual properties such as color, size, or shape) features. The authors suggested that these findings coincide with Mandler’s (2000) view that physical and functional properties serve as the foundation of categorization for infants, and they suggested that this “conceptual core” remains a particularly salient aspect of children’s semantic representations into the early school years (McGregor et al., 2002a, p. 341; but cf. Hughes, Woodcock & Funnell, 2005). The importance of sensory/motor features in grounding semantic development has also been demonstrated in neural network models of language acquisition (Howell, Jankowicz & Becker, 2005).

The key point is that once children have access to language, their formation of concepts includes sensory/motor, conceptual, and linguistic strategies. It is less clear how all of these strategies intertwine, but some evidence is available. Gopnik and Sobel (2000) conducted a series of experiments that spoke to the type of organization that occurs when children add language to the equation. Their data showed clear evidence for the use of causal reasoning as early as 2 years of age. A group of 2- to 4-year-old children participated in a series of experiments to determine what types of strategies they used for categorization. The children were asked to determine a causal relationship between labeled three-dimensional objects. All of the children were able to use causal information and to override perceptual and associative information. Gopnik and Sobel posited that children used a specialized causal reasoning module to achieve this categorization and that language played a unique role. The preschool participants preferred to use labels, when possible, and were more accurate at categorization when they used labels (e.g., which ones were “blickets”) rather than relying on perceptual information (e.g., which ones were the same color or shape). This trait was stronger in the older children, suggesting that linguistic cues were more salient for concept organization for this group. In this case, language allowed the older children to move from a more taxonomic (perceptually based) to a more thematic (linguistically based) organization system.

It is important to note, however, that the road to integration of language with semantic development is not without some bumps along the way. For example, Bowerman (1978) proposed that the language errors of her children were indicative of increased semantic processing. She found that her young children began to make lexical errors with words that had previously been used correctly. She determined that these errors were due to increased semantic incorporation among the words, and she found such evidence in children as early as age 2;4. This work echoes findings of the semantic interference effect in adults, in which the presence of semantically related distractors (i.e., competitors) can increase the latency of target responses (see Mortensen, Meyer & Humphreys, 2006, for a review).

Hypothesis 3: relationships between semantic knowledge and naming set down in childhood continue to be salient in adult semantic processing

Connectionist theories of intact word retrieval in adults suggest that fully developed lexical retrieval requires both feedforward semantic information to activate the appropriate lexical label for a concept and lexical feedback to support selection of the most relevant network of semantic features for accurate conceptual processing (Dell, 1988; Dell & Reich, 1981; Dell, Schwartz, Martin, Saffran & Gagnon, 1997; Foygel & Dell, 2000; Gagnon, Schwartz, Martin, Dell & Saffran, 1997; Garrett, 1992). Hence, as observed in children, semantic and linguistic/lexical forms of knowledge continue to have a codependent relationship in adulthood, such that in a fully functioning, integrated system, semantic knowledge underlies lexical knowledge.
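
As a rough illustration of the feedforward/feedback dynamic these models describe, the sketch below iterates activation between a layer of semantic feature units and a layer of lexical units. It is a minimal toy, not an implementation of Dell’s or Garrett’s models; the units, weights, normalization step, and iteration count are all illustrative assumptions.

```python
# Toy interactive activation between semantic features and lexical labels.
# Feedforward: active features excite candidate words; feedback: word
# activation reinforces the features that support it. All weights invented.

import numpy as np

features = ["red", "round", "edible", "grows_on_trees"]
words = ["apple", "orange", "wagon"]

# Hypothetical feature-to-word weights (used in both directions).
W = np.array([
    [0.8, 0.2, 0.6],  # red            -> apple, orange, wagon
    [0.7, 0.8, 0.0],  # round
    [0.9, 0.9, 0.0],  # edible
    [0.8, 0.7, 0.0],  # grows_on_trees
])

feat_act = np.ones(len(features))  # the concept "apple" activates its features
word_act = np.zeros(len(words))

for _ in range(5):
    word_act = W.T @ feat_act       # feedforward: features -> words
    word_act /= word_act.max()      # crude normalization as a stand-in for
    feat_act = W @ word_act         # lexical competition
    feat_act /= feat_act.max()      # feedback: words -> features

print(dict(zip(words, word_act.round(2))))
# "apple" wins the lexical competition, and its feedback keeps the relevant
# feature network most active, illustrating the codependence described above.
```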

Attempts to discern more fully the feature-based content and organization of adult semantic knowledge have employed a variety of behavioral methods, including feature verification, semantic priming, and large-scale feature generation studies in non-brain-injured adults. This variability across task types is mirrored by variability within tasks of the same type. Some studies provide for distinctions across different categories (e.g., animals vs. plants, vehicles vs. tools), whereas others provide only for distinctions across domains (i.e., living vs. nonliving). In feature generation studies, which perhaps provide for the most thorough exploration of semantic feature knowledge in non-brain-injured adults, there are considerable differences in the ways that semantic feature information is elicited, as well as in how feature types are defined, which can affect the ways in which data are interpreted as evidence for or against particular theories of semantic organization.

Some “normal” patterns do emerge, which are consistent with patterns laid down in childhood, and these appear to be relatively stable across the healthy adult lifespan [e.g., similarities between the results of Garrard, Lambon Ralph, Hodges & Patterson, 2001a, mean age of participants = 67.4 years (SD = 3.9), and Zannino, Perri, Pasqualetti, Caltagirone & Carlesimo, 2006, mean age of participants = 24.3 years (SD = 3.5)]. For example, categorical differences between living and nonliving objects remain salient. For living objects, healthy adults consistently provide more features in general (Garrard et al., 2001a; McRae & Cree, 2002; McRae, de Sa & Seidenberg, 1997; Zannino et al., 2006) and more features that are shared among category members (Garrard et al., 2001a; McRae et al., 1997; Zannino et al., 2006). In contrast, for nonliving objects, participants tend to produce fewer features in general (Garrard et al., 2001a; McRae & Cree, 2002; McRae et al., 1997; Zannino et al., 2006), and the features are more likely to be distinctive to individual category members (Cree & McRae, 2003; Garrard et al., 2001a; Zannino et al., 2006). There is also a relative consistency across studies, such that for living things, visual-perceptual features are particularly salient (defined sometimes as number of features produced, sometimes as relative weighting of feature types) (Cree & McRae, 2003; Farah & McClelland, 1991; Garrard et al., 2001a; McRae & Cree, 2002; Vigliocco, Vinson, Lewis & Garrett, 2004; Vinson, Vigliocco, Cappa & Siri, 2003), whereas for nonliving things, the saliency of functional and motoric features comes more to the fore (Cree & McRae, 2003; Farah & McClelland, 1991; Garrard et al., 2001a; Laws, Humber, Ramsey & McCarthy, 1995; McRae & Cree, 2002; Vigliocco et al., 2004; Vinson et al., 2003).
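
The shared/distinctive contrast these studies report can be made concrete with a small computation over feature norms. The sketch below uses a common operationalization of feature distinctiveness, the inverse of the number of concepts listing that feature (in the spirit of, e.g., Garrard et al., 2001a); the mini feature sets are invented stand-ins for real norms such as McRae et al. (2005).

```python
# Quantifying shared vs. distinctive features over toy feature norms.
# A feature's distinctiveness is 1 / (number of concepts that list it);
# a feature unique to one concept scores 1.0. Feature sets are invented.

from collections import Counter

norms = {
    "dog":    {"has_legs", "has_eyes", "has_fur", "barks"},
    "cat":    {"has_legs", "has_eyes", "has_fur", "meows"},
    "horse":  {"has_legs", "has_eyes", "has_fur", "neighs"},
    "hammer": {"has_handle", "made_of_metal", "used_to_pound"},
    "saw":    {"has_handle", "made_of_metal", "used_to_cut"},
}

# Count how many concepts list each feature.
counts = Counter(f for feats in norms.values() for f in feats)

def mean_distinctiveness(concept):
    feats = norms[concept]
    return sum(1 / counts[f] for f in feats) / len(feats)

for concept in norms:
    print(concept, round(mean_distinctiveness(concept), 2))
# Living concepts (dog, cat, horse: 0.5) share most of their features;
# artifacts (hammer, saw: 0.67) carry proportionally more distinctive
# features, echoing the living/nonliving asymmetry described above.
```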

The binary dichotomy between living things/visual-perceptual features and nonliving things/functional features does not reflect the full range of adult semantic knowledge, nor is it sufficient to account for all of the variants of category-specific impairments that have been documented (Cree & McRae, 2003; McRae & Cree, 2002). In fact, taking patterns observed in the cognitive neuropsychological literature as a starting point for work with healthy adults, Cree and McRae used a feature production paradigm that demonstrated more complex relationships between feature types and categories of concepts, which may reflect extension of a developmental pattern (Hughes et al., 2005). For example, a division was shown between animals and plants, which are often considered within the single domain of living concepts. “Creatures” elicited high proportions of visual-motion and, to a lesser degree, visual-form features, with the lowest proportion of functional features. Plants (i.e., fruits and vegetables) elicited a very high proportion of visual-color features, but (not surprisingly) lower proportions of visual-motion features relative to creatures. Nonvisual-perceptual features such as taste and tactile information were also found to be distinguishing for fruits/vegetables. Fruits/vegetables also elicited a greater proportion of function features, a feature type elicited most prominently by nonliving concepts (which, in turn, elicited a high proportion of visual-parts features).

Two particularly interesting, intuitively nonliving object categories are foods and musical instruments, both of which were high in perceptual attributes associated with living object categories. Foods were distinguished by perceptual features such as taste and smell, and musical instruments, distinguished primarily by sound features, were also high in visual-color features. Ultimately, it appears that all categories elicited a high proportion of at least one type of visual-perceptual feature: creatures, visual motion; fruits/vegetables, visual color; nonliving, visual parts. Utilizing evidence from a task in which participants judged the saliency of different “sources of knowledge” (color, shape, action, smell, taste, etc.) about concepts, Gainotti, Ciaraffa, Silveri and Marra (2009) proposed that what actually distinguishes living from nonliving entities is the interplay between visual-perceptual and other perceptual features, for living things, versus that between visual-perceptual and function/action features, for nonliving things—an account that they have also examined with respect to the neural substrates for processing living versus nonliving concepts (Gainotti, 2011). In sum, while adults’ semantic organization is more complicated than a binary living concepts/visual-perceptual features versus nonliving concepts/functional features dichotomy, this evidence supports the importance of features as an organizing principle, such that semantic categories may surface as a function of the similarity of the underlying feature structure among groups of concepts (e.g., Caramazza, Hillis, Rapp & Romani, 1990; Cree & McRae, 2003; Garrard et al., 2001a; Martin et al., 2000; McRae, Cree, Seidenberg & McNorgan, 2005; Tyler et al., 2000; Warrington & Shallice, 1984).

The idea that feature diagnosticity develops through sensory/motor experience during concept acquisition finds support along several lines of investigation. For example, the “experience-based” aspect of the relative salience of visual-color features for fruits and vegetables has been supported by Connolly, Gleitman and Thompson-Schill (2007), who provided evidence for the diagnosticity of color in implicit similarity judgments of fruits and vegetables relative to nonliving things (i.e., household items) for sighted participants, but not for those who were congenitally blind. Relative to conceptual processing of nonliving objects (i.e., tools), evidence from right- versus left-handers has suggested a relationship between the experience of manipulating tools (with one’s dominant hand) and the neural substrates involved in conceptually distinguishing tools from animals (Lewis, Phinney, Brefczynski-Lewis & DeYoe, 2006). Personal experience may also influence relative familiarity with different conceptual categories. Gender effects have been observed such that men have an advantage for processing nonliving objects, which reflects a general trend toward greater familiarity that begins with young boys’ earlier acquisition of names for tools and vehicles (Barbarotto, Laiacona & Capitani, 2008). Within the domain of living concepts, men show an advantage for animals, whereas women show an advantage for plants (i.e., fruits and vegetables; e.g., Albanese, Capitani, Barbarotto & Laiacona, 2000; Barbarotto, Laiacona, Macchi & Capitani, 2002; Laws, 2004). One explanation is that these contrasts reflect differences in relative experience between men and women with these categories, which has a basis in social-role-based familiarity (Gainotti, 2005; Gainotti, Ciaraffa, Silveri & Marra, 2010). Interestingly, findings of a female advantage for living categories are less consistent in children (Barbarotto, Laiacona & Capitani, 2005, 2008), suggesting that this is not innate knowledge, but rather is developed with experience that perhaps comes later in life than men’s experience with tools and vehicles. It will be informative to continue this line of inquiry as social roles evolve and we are able to compare new generations of younger versus older men and women (e.g., Cameron, Wambaugh & Mauszycki, 2008).

Finally, a different kind of experience-based effect may also be operating in semantic “development” from younger to older adulthood. We note that across studies, the definition of who qualifies as an “older adult” varies, as does the method for grouping participants by age (e.g., binary—“young” vs. “old”; by decade—20s–80s; etc.); however, having attained 60 or more years of life appears to consistently qualify one as an older adult. Older adults typically demonstrate larger vocabularies and greater lexical diversity than younger adults (Horton, Spieler & Shriberg, 2010; Verhaeghen, 2003). This likely contributes to the fact that whereas older adults perform more poorly on structured naming tasks requiring a single, specific response (e.g., confrontation naming), they perform better in connected speech, wherein they have the opportunity to choose their own vocabulary (e.g., Hough, 2007; Kavé, Samuel-Enoch & Adiv, 2009). Older adults’ access to larger vocabularies and longer experience using these vocabularies has been proposed as one explanation for the observation that older adults’ connected speech contains a larger proportion of low-frequency words than that of younger adults (Kavé et al., 2009). Older adults also systematically provide higher ratings for semantically based psycholinguistic factors, such as typicality and familiarity, of both living and nonliving concepts, as well as identifying fewer concepts as “unknown” (Morrow & Duffy, 2005). One likely explanation for this is what Morrow and Duffy termed the “expert theory of semantic representations” (see also Horton et al., 2010; Mayr & Kliegl, 2000). The greater magnitude of ratings for familiarity is considered to come from older adults’ longer and more diverse experience with concepts, while typicality differences may reflect not only increased frequency of contact with object concepts, but also “better defined category structures” (Morrow & Duffy, 2005, p. 615), which come as a result of greater experience with a diversity of category members. Morrow and Duffy also likened these effects to similar effects seen in experts versus novices with a particular category (e.g., Johnson, 2001), as well as to developmental trends observed from childhood to adulthood (e.g., Berman, Friedman, Hamberger & Snodgrass, 1989; Bjorklund, Thompson & Ornstein, 1983).

Hypothesis 4: degradation of semantic knowledge can impair name retrieval in patterns that can be predicted on the basis of “core” lexical–semantic relationships

It stands to reason that if semantic feature knowledge supports word learning, then deficient semantic processing should play a role in language impairment. Brackenbury and Pye (2005) noted that semantic deficits contribute to word learning, access, and retrieval problems for children with specific language impairment, but that, due to the challenges of measuring semantic knowledge, these problems are often overlooked. Recent work in child word learning has pointed to semantic deficits in children with specific language impairment (Alt & Plante, 2006; Alt, Plante & Creusere, 2004; McGregor et al., 2002b). These children have been found to encode fewer semantic features of novel words (Alt & Plante, 2006; Alt et al., 2004) and to have shallower semantic encodings for the words that they do know (McGregor et al., 2002b) than do typically developing peers.

Evidence derived from studies of adults with semantic impairment resulting from neurological insult also suggests that there is a relationship between those concepts for which semantic knowledge has degraded and the items that are most difficult for patients to name (e.g., Lambon Ralph, McClelland, Patterson, Galton & Hodges, 2001). Such central semantic deficits are distinguished from lexical–semantic access or phonological–lexical production impairments through systematic cognitive neuropsychological assessment, which includes verbal production and comprehension tasks, as well as nonverbal tasks, such as drawing. Central semantic impairment is characterized by the presence of multimodal naming impairments (i.e., present in both spoken and written production), predominance of semantic naming errors (e.g., naming an apple as “orange” or “fruit”), concomitant comprehension deficits, even at the single-word level, and in some cases impaired representation of concepts in nonlinguistic form (e.g., drawing). This degradation of nonlinguistic representations mirrors the poorer drawing skills of children with specific language impairment, who have concomitant vocabulary problems (McGregor et al., 2002b). Semantic dementia (SD), a progressive degenerative disorder that results in circumscribed atrophy of the anterolateral and ventral temporal lobes (e.g., Brambati et al., 2009; Mummery et al., 2000; Snowden, Goulding & Neary, 1989; Snowden, Neary & Mann, 2002), provides a relatively pure model of central semantic impairment in the context of well-preserved episodic memory and phonological and visuospatial processing (e.g., Hodges, Graham & Patterson, 1995; Patterson & Hodges, 2000). Examination of the trajectory of semantic decline in SD provides a window into how the feature structure that underlies semantic processing influences performance (Rogers et al., 2004). Utilizing convergent evidence from computational modeling and patient performance, Rogers et al. proposed that semantic degradation results in an increasing overgeneralization of conceptual knowledge as the ability to distinguish among salient features of concepts is lost. In other words, conceptual knowledge regresses toward increasingly generic information. Naming errors proceeded first to more general-level superordinate errors (e.g., dog as “animal”) and then to no-response errors, seemingly reflecting a reversal of the developmental trend observed in childhood, which progresses from the superordinate level to more distinctive levels of processing. Similarly, patients with SD demonstrated increasingly disproportionate difficulty sorting pictures by more specific (e.g., land vs. sea animals) relative to more general (e.g., living vs. nonliving object) categories. In addition, degradation of distinguishing feature knowledge had a different effect on living versus nonliving stimuli relative to their respective feature structures (Rogers et al., 2004). Living concepts, characterized by a larger proportion of shared features among category members, elicited more commission errors (e.g., semantic errors in naming, intrusion of inappropriate features in drawing tasks). Conversely, nonliving concepts, which tend to have more distinctive features relative to other category members, elicited more omission errors (e.g., no-responses in naming and omission of features in drawing tasks).
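
The overgeneralization trajectory Rogers et al. describe can be caricatured in a few lines of code. The sketch below is a deliberately simple toy, not their trained connectionist model: a concept is reduced to invented shared and distinctive feature sets, distinctive features are assumed to be lost first, and “naming” returns the most specific label the surviving features support.

```python
# Toy degradation sketch: as distinctive features are lost, naming regresses
# from the basic level ("dog") to the superordinate level ("animal") and
# finally to omission, echoing the SD error trajectory described above.
# Feature sets and the loss schedule are illustrative assumptions.

concept = {
    "distinctive": ["barks", "wags_tail", "fetches"],              # support "dog"
    "shared":      ["has_legs", "has_eyes", "moves", "breathes"],  # support "animal"
}

def name(conc):
    """Return the most specific name the remaining features can support."""
    if conc["distinctive"]:
        return "dog"
    if conc["shared"]:
        return "animal"        # superordinate error
    return "<no response>"     # omission error

# Distinctive features, weakly reinforced by category neighbors, degrade first.
while concept["distinctive"] or concept["shared"]:
    print(name(concept))
    pool = "distinctive" if concept["distinctive"] else "shared"
    concept[pool].pop()
print(name(concept))
# Output: dog, dog, dog, animal, animal, animal, animal, <no response>
```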

As we have observed, the evidence suggests that the distinction between living and nonliving objects continues to be salient in adult semantic organization. Whereas there is some evidence of experience-based differential processing of living versus nonliving concepts in healthy men and women, substantial disproportionate deficiencies are indicative of impairment. In fact, one of the most frequently reported sequelae of semantic impairment is disproportionate difficulty processing living objects or “biological kinds” across a wide range of tasks (Capitani et al., 2003). Such difficulty has been observed consequent to a number of neurological disorders, including herpes simplex encephalitis (e.g., Barbarotto, Capitani & Laiacona, 1996; Moss, Tyler, Durrant-Peatfield & Bunn, 1998; Sartori, Job & Coltheart, 1993a; Sartori, Job, Miozzo, Zago & Marchiori, 1993b; Tyler & Moss, 1997; Warrington & Shallice, 1984), cerebrovascular accident (CVA; e.g., Caramazza & Shelton, 1998), traumatic brain injury (e.g., Farah, McMullen & Meyer, 1991; Farah, Meyer & McMullen, 1996), progressive aphasia (e.g., Basso, Capitani & Laiacona, 1988), and Alzheimer’s disease (e.g., Gonnerman, Andersen, Devlin, Kempler & Seidenberg, 1997; participant D.B. in Lambon Ralph, Howard, Nightingale & Ellis, 1998). Less frequently reported, the converse—disproportionate difficulty with nonliving objects—has been primarily associated with left hemisphere CVA (Capitani et al., 2003; Gainotti, 2000). In addition, the domain of living things sometimes appears to fractionate into animate (animals) and inanimate (plants) categories, as compared to nonliving objects (Caramazza & Shelton, 1998). In cases in which age-matched healthy older adult controls have been tested, such substantial disproportionate performance differences have not been observed (e.g., Bird, Howard & Franklin, 2000; Caramazza & Shelton, 1998; Garrard, Lambon Ralph, Watson, Powis, Patterson & Hodges, 2001b; Garrard, Patterson, Watson & Hodges, 1998; Gonnerman et al., 1997).

Consistent with the category/feature relationships observed in feature production tasks with healthy participants, it has been proposed that categorical fractionation may be attributable to patterns of feature-based similarities and differences across categories (Caramazza et al., 1990; Cree & McRae, 2003; Garrard et al., 2001b; Gonnerman et al., 1997; Tyler et al., 2000; Warrington & Shallice, 1984). One such theory is the “differential weighting hypothesis” proposed by Warrington and Shallice. Warrington and her colleagues were among the first to suggest that there might be a relationship between categorical deficits and impairment in the feature knowledge most salient for different types of objects. The feature types considered to be of greatest relevance were those known within the parlance of Martin’s sensory/motor model as “core properties” or “semantic primitives.” In this view, category-specific deficits for living things, which are proposed to be primarily differentiated based on visual-perceptual features, emerge as a consequence of damage to underlying visual-perceptual feature knowledge. Conversely, it was predicted that category-specific deficits for nonliving objects would emerge from impairment to those features that are most salient for their differentiation—namely, functional features (Warrington & McCarthy, 1983, 1987; Warrington & Shallice, 1984). Because certain categories of nonliving objects, such as musical instruments, might be supposed to be differentiated on the basis of visual-perceptual features, whereas body parts (attached to living creatures) may be differentiated more on the basis of action-related features, this account also explained oft-cited cross-category impairments (for an alternative view, see Barbarotto, Capitani & Laiacona, 2001). A number of neuropsychological studies with adults have upheld the predicted association between impairments for living categories and visual-perceptual features (Antonucci, Beeson, Labiner & Rapcsak, 2008; Basso et al., 1988; De Renzi & Lucchelli, 1994; Forde, Francis, Riddoch & Rumiati, 1997; Gainotti & Silveri, 1996; participant K.H. in Lambon Ralph, Patterson, Garrard & Hodges, 2003; Silveri & Gainotti, 1988), and some theoretical models have also supported this claim (Bird et al., 2000; Farah & McClelland, 1991). Conversely, neuropsychological evidence for the association between nonliving objects and functional features has been reported less frequently (see Capitani et al., 2003, for a review).

The methodological disparities discussed earlier come into play in the interpretation of these findings. Criticisms have been leveled at some of the earlier neuropsychological case reports for not controlling relative difficulty across living versus nonliving objects by balancing for psycholinguistic variables (see Funnell & Sheridan, 1992, for discussion) or for testing the feature-type dichotomy using different items (see Hillis, Rapp, Romani & Caramazza, 1990, for discussion). In addition, neuropsychological reports have documented evidence contrary to the predicted double dissociations (e.g., Barbarotto et al., 2001; Barbarotto, Capitani, Spinnler & Trivelli, 1995; Caramazza & Shelton, 1998; Laiacona & Capitani, 2001; Lambon Ralph, Graham, Patterson & Hodges, 1999; Lambon Ralph et al., 1998; Lambon Ralph et al., 2003; see also Capitani et al., 2003, and Caramazza & Mahon, 2006, for reviews), as well as demonstrating that primary sensory deficits (e.g., blindness) do not necessarily lead to deficient processing of visually based semantic features (Noppeney, Friston & Price, 2003). Ultimately, the differential weighting hypothesis in its original, dichotomous form has proven incomplete; however, it served as the impetus for examination of the featural basis of semantic knowledge and for sensory/motor models of semantic organization (see Gainotti, 2006, and Mahon & Caramazza, 2009, for discussion). While there is clearly fractionation beyond the binary category/feature model observed both with non-brain-injured participants (e.g., Gainotti et al., 2009; McRae & Cree, 2002; McRae et al., 2005) and patient populations (e.g., Borgo & Shallice, 2003; Carroll & Garrard, 2005), when a finer-grained approach is taken there does seem to be a relationship between semantic categories and the sensory/motor experience-based “core properties” described in the sensory/motor model (Martin et al., 2000).

Finally, as observed above for healthy individuals, experience-based familiarity is likely relevant to analysis of category-specific deficits. This phenomenon, in combination with lesion location, also appears to contribute to category-specific deficits in patient populations; patients demonstrate greater proficiency processing the categories with which they are putatively more familiar (i.e., for men, animals > plants; for women, plants > animals; for a review, see Gainotti, 2010). The potentially protective effect of familiarity has also been put forth as an explanation for cases in which a category expected to be impaired based on previously observed patterns is spared—for example, preservation of musical instrument information in a professional musician who demonstrated deficient processing of living concepts (Patient C in Wilson, Baddeley & Kapur, 1995; see also Gainotti, 2005, 2010, and Thompson-Schill, Kan & Oliver, 2006, for discussions of this case, as well as Capitani & Laiacona, 2011, and Laiacona, Barbarotto & Capitani, 2006, for alternative accounts).

A sensory/motor perspective on the development of concrete concepts: the bottom line

Despite the variety in participants and methodologies related to concrete concept development, several trends do emerge. There is converging evidence that experience, and particularly sensory/motor experience, contributes significantly to concept development and organization from infancy onward. Certain types of information (e.g., form, as opposed to function) lend themselves to more direct observation, which facilitates early categorization. Language starts to play a role in conceptual development within the first year of life. This relationship is complicated and bidirectional, and it should be considered carefully in future experimental designs. There is a relatively large body of evidence that the relationships between semantic features and concepts that are built on sensory/motor experience are maintained into adulthood, forming salient aspects of adults’ semantic representations of concrete objects and influencing how those semantic representations break down in adult neuropsychological disorders. Some of the most compelling behavioral evidence supporting the notion that semantic categories may be differentiated according to feature-type salience has been presented in the context of a feature-type taxonomy that reflects knowledge about the brain regions that support processing of each type of feature (Cree & McRae, 2003; McRae et al., 2005). In addition, influential sensory/motor models of semantic processing have been largely informed by evidence from functional neuroimaging and lesion studies (e.g., Gainotti, 2000, 2006; Martin et al., 2000). As such, we now turn to a more thorough and explicit examination of the neuroanatomical substrates of semantic feature and concept processing.

Neural substrates of semantic processing of “core” features of concrete concepts

Neuroimaging provides myriad advantages for investigation of the neural substrates of cognitive–linguistic processes. Investigation of semantic organization is no exception. Multiple techniques allow for noninvasive study of in vivo processing in both healthy and brain-injured populations. Functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), as well as statistical analyses of high-resolution structural lesion scans, provide windows into the spatial mapping of brain regions involved in semantic processing, whereas event-related potentials (ERPs) provide temporal information about these events. We have taken the “converging evidence” approach advocated by a number of imaging researchers (Fellows et al., 2005; Rorden & Karnath, 2004; Sidtis, 2007); rather than undertaking a review of the sum of the evidence regarding the neural substrates of semantic processing, we examine those brain regions implicated by a convergence of evidence from functional imaging, lesion studies, and ERP work. Specifically, we will review evidence relevant to predictions of the sensory/motor model of semantic processing, which state that “core properties” are represented in the same brain regions that were active when those properties were first acquired (Martin, 2007) and that, when these regions are damaged, semantic impairment will reflect deficient processing of the associated features and of the concepts that rely on them for accurate processing (Gainotti, 2006).

Fusiform gyrus and visual-perceptual feature processing

Much of the work that has examined the neural substrates of semantic processing in children has employed variants of semantic judgment tasks in which participants of various ages are asked to judge category membership or similarity among words. Recent evidence has suggested that, in children as in adults, cortical activity during semantic processing tends to be left lateralized (Balsamo et al., 2002; Balsamo, Xu & Gaillard, 2006; Binder, Desai, Graves & Conant, 2009; Binder et al., 1997). Though evidence shows that language networks for semantic processing are less specialized in children (Brauer & Friederici, 2007), one region that has been implicated in semantic processing in both children and adults is the fusiform gyrus, in the ventral temporal lobe. Activity in the fusiform gyrus has been correlated with increased accuracy on semantic tasks in children as young as 5 years old (Balsamo et al., 2006; Schmithorst, Holland & Plante, 2007), with some evidence suggesting that this is the result of a developmental shift to more adult-like semantic processing (Schmithorst et al., 2007). It may be that the emergence of fusiform recruitment for semantic processing represents the beginnings of children’s establishment of more stable associations between object concepts and their associated visual-perceptual semantic features.

The fusiform gyrus is part of the “ventral stream” of visual processing of the form of objects (Ungerleider & Mishkin, 1982), and functional neuroimaging work with healthy adults supports the notion that processing of visual-perceptual semantic features, particularly color and form, is subserved by posterior-inferior/ventral temporal structures (Goldberg, Perfetti & Schneider, 2006; Ishai, Ungerleider & Haxby, 2000; Ishai, Ungerleider, Martin, Schouten & Haxby, 1999; Kellenbach, Brett & Patterson, 2001; Martin & Chao, 2001; Martin, Haxby, Lalonde, Wiggs & Ungerleider, 1995; Thompson-Schill, 2003; Thompson-Schill, Aguirre, D’Esposito & Farah, 1999; Thompson-Schill et al., 2006). Within the fusiform gyrus, evidence suggests that the lateral fusiform is more attuned to biological color and form attributes, whereas the medial fusiform responds to the color and form of artifacts (Beauchamp et al., 2002). This functional neuroimaging evidence converges with electrophysiological (ERP) findings. The N400 component, associated with semantic processing in both children and adults (e.g., Friedrich & Friederici, 2006), has been shown to be sensitive to the visual-perceptual semantic feature of “form” in comparisons of word pairs that share visual-perceptual semantic features versus unrelated pairs (Kellenbach, Wijers & Mulder, 2000). Some authors have also suggested that the topography of N400 attenuations supports localizations derived from functional neuroimaging (e.g., Kiefer, 2005). In both object categorization (Kiefer, 2001) and lexical decision tasks (Kiefer, 2005), living categories, relative to nonliving categories, have elicited attenuation of the N400 response across “occipito-temporal and parietal scalp regions” (Kiefer, 2005, p. 200), which Kiefer (2005) suggests is the result of greater reliance of living categories on the visual-semantic knowledge represented in these regions (see also Sitnikova, West, Kuperberg & Holcomb, 2006).
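
As a concrete illustration of what an N400 “attenuation” amounts to analytically, the sketch below computes the conventional dependent measure: mean amplitude in a 300–500 ms post-stimulus window, contrasted between related and unrelated word pairs. The simulated data, sampling rate, electrode, and window boundaries are placeholder assumptions for illustration, not parameters taken from Kellenbach et al. (2000).

    # Schematic quantification of an N400 effect: mean ERP amplitude
    # in the 300-500 ms window, compared across conditions. All values
    # below are simulated placeholders at one centro-parietal channel.
    import numpy as np

    sfreq = 250                                # sampling rate in Hz (assumed)
    times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to 800 ms

    rng = np.random.default_rng(0)
    # erp[condition]: trials x timepoints (simulated single-channel data)
    erp = {
        "unrelated": rng.normal(0, 1, (40, times.size)),
        "related":   rng.normal(0, 1, (40, times.size)),
    }
    # Simulate a larger (more negative) N400 for unrelated pairs.
    n400_window = (times >= 0.3) & (times <= 0.5)
    erp["unrelated"][:, n400_window] -= 2.0

    def mean_n400(trials):
        """Mean amplitude (in microvolts) within the N400 window."""
        return trials[:, n400_window].mean()

    effect = mean_n400(erp["unrelated"]) - mean_n400(erp["related"])
    print(f"N400 effect (unrelated - related): {effect:.2f} microvolts")

A smaller (less negative) difference for pairs sharing a feature is the “attenuation” reported in such studies; the topography question is then simply where across the scalp channels this difference is largest.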

Additional support for this brain–behavior relationship comes from neuropsychological lesion studies of adults with neurological impairment, in whom damage to bilateral or left ventral temporal cortex often results in selectively impaired knowledge of visual-perceptual features (Lambon Ralph et al., 1999; Lambon Ralph et al., 2003; participant I.O.C. in Miceli et al., 2001). Disproportionate loss of visual-perceptual feature information owing to bilateral temporal atrophy has been considered an explanation for the somewhat unusual phenomenon of the “reverse concreteness effect,” which manifests as greater difficulty processing concrete relative to abstract concepts. Concrete concepts are presumed to rely much more than abstract concepts on visual-perceptual features, such that degradation of those features has a disproportionately detrimental effect (e.g., Breedin, Saffran & Coslett, 1994; Grossman & Ash, 2004; Macoir, 2009; Reilly & Peelle, 2008; Yi, Moore & Grossman, 2007; but see Jefferies, Patterson, Jones & Lambon Ralph, 2009, for an alternative view). Convergent evidence from functional neuroimaging shows that the anterior ventral fusiform is engaged in processing of known and new concrete words, but not abstract words (Mestres-Missé, Münte & Rodriguez-Fornells, 2008). This feature-processing deficit also frequently co-occurs with domain-specific deficits in naming living versus nonliving objects (e.g., Antonucci et al., 2008; De Renzi & Lucchelli, 1994; Warrington & Shallice, 1984). As the behavioral data examined above suggest, living concepts are not distinguished solely by visual color and form; that these two deficits do not always co-occur may be due in part to the fact that the neural substrates for additional perceptual features salient for processing living things (e.g., visual motion for animals, taste and scent for plants) lie beyond the fusiform.

Beyond the fusiform: processing other types of sensory/motor semantic features

Evidence of reliance on sensory/motor-based features comes from the fMRI data of Ciesielski, Lesnik, Savoy, Grant and Ahlfors (2006). These authors compared nine 6-year-old children, eight 10-year-old children, and ten adults in a categorical n-back task in which participants were asked to remember, when presented with a picture of a raccoon, whether previously presented pictures contained at least two animals. Although the n-back task is traditionally a measure of working memory, in this instance, participants were asked to draw directly on semantic, taxonomic knowledge to complete the task. In terms of accuracy, 6-year-old children, but not 10-year-old children, performed more poorly than adults. This difference in accuracy could be related both to working memory capacity and to less mature taxonomic categorization strategies. However, for both the 6- and 10-year-old children, the authors found distinctly different patterns of brain activation when compared to adults. In children, accuracy was related to activation in the dorsal visual and premotor/sensory-based networks, whereas adult activation was observed predominantly in bilateral ventral prefrontal and inferior temporal cortex. The authors interpreted these differences as developmental and based not only on physiology, but on different task strategies. Specifically, the authors proposed that the children were basing their judgments on animation, whereas adults were attuned to more static semantic features. The authors noted that these differences were not due to task difficulty: although the 10-year-olds did not differ significantly from the adults in accuracy, and one 6-year-old was 100% accurate, these children’s neural patterns looked more like one another’s than like adult patterns. Taken together with evidence from Balsamo et al. (2006) and Schmithorst et al. (2007), these data suggest that children in general demonstrate less intense and less extensive activation of ventral temporal cortex than do adults, and that, when observed, more adult-like participation of ventral temporal cortex facilitates semantic processing.
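
For clarity about what this task demanded of participants, the following is a schematic reconstruction in Python of the decision rule as we read Ciesielski et al.’s (2006) description; the stimulus labels, category set, and function name are hypothetical, and this is not the authors’ actual stimulus-presentation code.

    # Schematic decision rule for the categorical n-back task: on
    # seeing the probe picture (a raccoon), respond "yes" if at least
    # two of the previously presented pictures were animals. Note that
    # the judgment requires taxonomic (semantic) knowledge of which
    # pictures depict animals, not just item memory.
    ANIMALS = {"dog", "raccoon", "cat", "horse", "bird"}  # hypothetical set

    def probe_response(preceding_pictures, threshold=2):
        """True if at least `threshold` remembered pictures were animals."""
        n_animals = sum(pic in ANIMALS for pic in preceding_pictures)
        return n_animals >= threshold

    remembered = ["chair", "dog", "lamp", "cat"]  # example trial history
    print(probe_response(remembered))             # True: two animals seen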

It is important to note, however, that the “dorsal stream” (Goodale & Milner, 1992; Goodale, Milner, Jakobson & Carey, 1991) and premotor cortex continue to contribute to adult semantic processing. In functional imaging and neuropsychological lesion studies of adults, dorsal frontoparietal networks have been associated with conceptualization of actions (Rizzolatti, Fogassi & Gallese, 2002; Tranel, Kemmerer, Adolphs, Damasio & Damasio, 2003) and action naming (Cappa, Sandrini, Rossini, Sosta & Miniussi, 2002; Martin, Haxby, Lalonde, Wiggs & Ungerleider, 1995; Tranel, Adolphs, Damasio & Damasio, 2001), such that regions critical for action planning may provide a foundation for semantic representations of those actions (see Caramazza & Mahon, 2006; Gainotti, 2006, for critical reviews). In fact, an increasing number of studies have provided evidence for a “somatotopic” semantic organization, such that concepts representing actions executed by different parts of the body have neuroanatomical substrates proximal to the brain regions that control the relevant body part (Aziz-Zadeh, Wilson, Rizzolatti & Iacoboni, 2006; Hauk, Johnsrude & Pulvermüller, 2004; Oliveri et al., 2004; Pulvermüller, Hauk, Nikulin & Ilmoniemi, 2005; Tettamanti et al., 2005; see also Raposo, Moss, Stamatakis & Tyler, 2009, for discussion regarding how such activations may be affected by semantic context). Further evidence that these representations are built on sensory/motor experience comes from work demonstrating differential contralateral activation of frontoparietal networks during semantic tasks in right- versus left-handed participants (Lewis et al., 2006; Willems, Hagoort & Casasanto, 2010). Such differences suggest that the experience of using one’s dominant hand to manipulate objects (i.e., tools) or carry out a manual action becomes salient to their semantic representations, even in the absence of actual object use or action performance during semantic processing (Lewis et al., 2006; Willems et al., 2010).

Electrophysiological evidence also supports a sensory/motor-model-based assertion of somatotopic organization of semantic features representing actions. A growing number of ERP studies (Kiefer, 2001, 2005; Sitnikova et al., 2006) have demonstrated that nonliving categories, relative to living categories, elicit a “frontocentral” distribution of attenuation of the N400 response, supporting the greater salience of action information, represented near motor areas in the frontal lobe, for processing of nonliving categories (Kiefer, 2005; see also Kellenbach, Wijers, Hovius, Mulder & Mulder, 2002). To assess further the notion that semantic knowledge has its foundation in sensory/motor experience, Kiefer and colleagues have also provided ERP evidence that conceptual representations are affected by the manner in which the concept is acquired, even when that acquisition occurs in adulthood (Kiefer, Sim, Liebich, Hauk & Tanaka, 2007). Following novel object training based either on a function/action feature (pantomime of object use) or on a nonfunctional feature (pointing to a specific form feature of the object), only those participants who were trained with the function/action feature demonstrated an early ERP response in frontocentral sites proximal to premotor cortex, whereas those trained on the nonfunctional feature demonstrated greater activity in right occipital cortex, which the authors attributed to early visual feature analysis (Kiefer et al., 2007). The authors suggested that these findings converge with functional imaging studies of concept acquisition (e.g., James & Gauthier, 2003; Weisberg, van Turennout & Martin, 2007) in supporting the notion that the weighting of object properties in concept acquisition depends on the earliest sensory/motor experiences with the object (Barsalou, Simmons, Barbey & Wilson, 2003; Gallese & Lakoff, 2005; Martin et al., 2000). In fact, recent fMRI evidence has demonstrated that the process of learning new concepts results in a shift from widespread fusiform activation to property-specific recruitment of those brain regions presumed to be the neural substrates of the features most salient for that concept (Martin, 2007, p. 320).

A considerable number of studies have demonstrated category-specific activations for living versus nonliving concepts, sometimes related to differences in the neuroanatomical processing of their underlying semantic features, although some studies have provided evidence of a distributed neural network for semantic processing that does not fractionate along category boundaries (Bright, Moss & Tyler, 2004; Taylor, Moss & Tyler, 2007; Tyler et al., 2003a; Tyler et al., 2000; Tyler et al., 2003b). Processing of nonliving objects recruits dorsolateral structures implicated in the processing of nonbiological visual motion (e.g., left posterior middle temporal gyrus) and object use (e.g., left premotor cortex) (Beauchamp & Martin, 2007; Cappa et al., 2002; Chao, Haxby & Martin, 1999; Martin & Chao, 2001; Martin, Wiggs, Ungerleider & Haxby, 1996). The feature of object use or “manipulation” has also been contrasted with the feature of “function” (i.e., the purpose the object serves for humans). For example, Canessa et al. (2008) demonstrated that elicitation of manipulation or “action knowledge” activates the “left frontoparietal network” (Canessa et al., 2008, p. 740), whereas elicitation of “function knowledge” results in activation of more inferior regions in temporal cortex. Evidence also suggests that for manipulable objects, such as tools, activation of neural substrates for action-related information is more central to conceptual representations, whereas for nonmanipulable nonliving objects (e.g., furniture), functional features may be more salient (Kellenbach, Brett & Patterson, 2003; see also Kemmerer, Gonzalez Castillo, Talavage, Patterson & Wiley, 2008, for a review of interaction among the semantic components of verbs); this dichotomy may have its roots in concept acquisition (e.g., Kalénine & Bonthoux, 2008; Saccuman et al., 2006; Warrington & McCarthy, 1987).

The importance of these distinctions is highlighted in neuropsychological studies of patient populations, which have demonstrated that action and function knowledge can be independently spared or impaired as the result of damage to different brain regions. For example, patients with ideomotor apraxia, characterized by spatiomotor errors in the pantomime and actual use of objects (Buxbaum & Saffran, 2002), often demonstrate impaired knowledge of manipulation, with sparing of knowledge about object function, as the result of damage to left frontoparietal cortex (Buxbaum & Saffran, 2002; Buxbaum, Veramonti & Schwartz, 2000), which can include the motor hand area/pathway (Arevalo et al., 2007). Consistent with sensory/motor models of semantic memory, Buxbaum and colleagues suggested that this behavioral pattern results from the fact that the same structures responsible for “spatiomotor coding for action” are also associated with storing the manipulation-related semantic features, so that lesion of these structures will impair not only actions for actual object use, but also semantic knowledge about object use (Buxbaum & Saffran, 2002, p. 195). Evidence completing the double dissociation has also been reported: a patient who was able to manipulate, and to describe how to manipulate, objects for which he could not describe the function (Sirigu, Duhamel & Poncet, 1991; see also Mahon & Caramazza, 2009, for discussion of this dichotomy). Thus, growing evidence indicates that, into adulthood, processing of core semantic features continues to require participation of cortical regions closely aligned with those responsible for processing the initial sensory/motor experiences. In fact, a study combining fMRI and ERP techniques has demonstrated that adult conceptual object representations are formed through “semantic feature maps” (Kiefer et al., 2007), integrating information from visual, motion-related, and motor brain regions, which are “flexibly recruited,” depending on the context in which interaction with the object takes place (Hoenig, Sim, Bochev, Herrnberger & Kiefer, 2008, p. 1799). It may be that what ultimately drives adult-like semantic processing is not only adequate representation of sensory/motor (and associative) semantic features, but also the ability to fully integrate these core features in a context-independent way, which seems critical for creating concepts that can exist independent of any particular sensory/motor experience.

How is semantic information integrated?

In contrast to adult-like conceptual representations, children’s greater reliance on sensory/motor-based features may not only be, as previously noted, a product of lack of experience with which to build a fully developed repertoire of semantic features, but may also be the result of difficulty fully integrating semantic features into a coherent conceptual whole. Booth and colleagues have shown in several studies that accuracy and age-related differences in semantic processing may be associated with the ability to recruit middle temporal structures. As noted above, older children, as well as younger children who are more accurate in making semantic judgments, tend to recruit left inferior and middle temporal structures (e.g., BA 21) during semantic processing (Blumenfeld, Booth & Burman, 2006; Chou et al., 2006a; Chou et al., 2006b), conforming more closely to adult activation patterns. The relative infrequency with which such activations are reported in studies with children suggests that children may just be beginning to develop the ability to integrate the multiple components of semantic knowledge into distinct and distinguishable concepts. A great deal more evidence is available in the adult-focused literature; despite this accumulation of evidence, however, a consistent explanation of the mechanism through which semantic features are integrated to represent concepts and the relationships between them (i.e., categories) has remained elusive. Deliberation regarding the neuroanatomical substrates of this process is also ongoing.

Consistent with sensory/motor models of semantic representation, it has been proposed that semantic integration is achieved through the representation of concepts as “activity patterns” distributed throughout a network composed of regions that store perceptual- and motor-based conceptual features (e.g., Gainotti, 2011; Gainotti et al., 2009; McNorgan, Reid & McRae, 2011). Based primarily on work with SD patients, some authors have advanced a “distributed-plus-hub” hypothesis, positing that semantic integration ultimately converges within those regions observed to facilitate more accurate semantic processing in children, the anterior temporal lobes (ATLs), in which modal representations are “abstracted away” from original perceptual or motor forms (e.g., Patterson, Nestor & Rogers, 2007; Rogers et al., 2004). Within this framework, it is proposed that the high connectivity of the ATLs not only with primary sensory and motor cortices, but also with medial temporal structures that process affective responses (e.g., the amygdala) and that form new memories (e.g., hippocampus), ideally situates the ATLs for instantiating relations among concepts for all semantic categories (Patterson et al., 2007, p. 982). Others have proposed that integration results from cascading activation proceeding from unimodal to multimodal convergence zones distributed throughout the brain (e.g., Damasio, 1989a, 1989b; Gainotti, 2011; Simmons & Barsalou, 2003). Likely neuroanatomical regions contributing to this network include anterolateral and anteromedial temporal cortex, as well as portions of the parietal and frontal lobes (e.g., angular and supramarginal gyri, ventromedial prefrontal cortex, and the inferior frontal gyrus) (Binder et al., 2009; Damasio, 1989a, 1989b; Gainotti, 2011; Gainotti et al., 2009; McNorgan et al., 2011).

The relative preservation of this more posterior network, relative to the anterior prefrontal region, in age-related cortical atrophy (e.g., Haug & Eggers, 1991; West, 1996) has been put forth by Mayr and Kliegl (2000) as a neuroanatomical explanation for behavioral evidence that semantic knowledge is relatively preserved in older adults concomitant with deficient executive processes, such as selection and retrieval, subserved by prefrontal cortex (e.g., Mayr & Kliegl, 2000; Mortensen et al., 2006; Newman & German, 2005; Wingfield, Lindfield & Kahana, 1998). This supposition is supported by converging functional imaging evidence of the relative contributions of prefrontal versus more posterior sites during older versus younger adults’ completion of semantic tasks (Meinzer et al., 2009; Obler et al., 2010; Wierenga et al., 2008; Wingfield & Grossman, 2006).

Included in this network are primary perceptual and motor cortices, and functional neuroimaging has suggested that repositories for their respective semantic representations lie directly anterior to these primary cortical sites (McNorgan et al., 2011; Thompson-Schill, 2003). In this view, concepts within the same category would engage similar convergence zone activation patterns specific to those features salient to their acquisition. For example, living concepts, distinguished by visual-color and -form features, in combination with other sensory features (e.g., taste, smell), would be subserved by coordinated activation of rostral brain regions where processing of these sensory features converges. In contrast, nonliving concepts would emerge from the coordinated activation of dorsal-visual-stream structures with regions important for processing the somatosensory and motor information relevant to their handling and use (Gainotti, 2011; Gainotti et al., 2009; McNorgan et al., 2011). As McNorgan et al. (2011) pointed out, the consistent and coordinated activity of the convergence zones is what integrates concepts into what we experience as a reliable and “coherent” whole, rather than as “a jumble of features, disjointed in time and space . . .” (p. 212). The oft-observed left lateralization of activation in semantic tasks likely reflects the coordination of sensory/motor conceptual features with more verbally mediated encyclopedic features (particularly salient for abstract relative to concrete concepts) and linguistic structures, wherein conceptual information becomes semantic information (e.g., Binder et al., 2009).
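
The contrast between a shared-hub architecture and a purely distributed convergence-zone architecture can be made concrete with a toy example. The sketch below is a deliberately minimal, hypothetical Python illustration of the “distributed-plus-hub” routing claim, with random weights standing in for learned mappings; it is not the implemented model of Rogers et al. (2004). Its only purpose is to show why damage to a shared hub degrades cross-modal mappings for concepts of every category at once, the pan-category degradation pattern cited in support of the hub account.

    # Hypothetical hub-and-spoke sketch: modality-specific "spokes"
    # communicate only through a shared "hub" layer, so zeroing hub
    # units ("lesioning") perturbs every cross-modal mapping.
    import numpy as np

    rng = np.random.default_rng(1)
    N_VISUAL, N_MOTOR, N_HUB = 8, 8, 6               # toy layer sizes

    W_vis_hub = rng.normal(size=(N_HUB, N_VISUAL))   # visual spoke -> hub
    W_hub_mot = rng.normal(size=(N_MOTOR, N_HUB))    # hub -> motor spoke

    def visual_to_motor(visual_features, hub_lesion=0.0):
        """Map visual to motor features via the hub; optionally zero a
        proportion of hub units to simulate anterior temporal damage."""
        hub = np.tanh(W_vis_hub @ visual_features)
        hub[:int(hub_lesion * N_HUB)] = 0.0          # simulated lesion
        return np.tanh(W_hub_mot @ hub)

    vis = rng.normal(size=N_VISUAL)      # one concept's visual input
    intact = visual_to_motor(vis)
    damaged = visual_to_motor(vis, hub_lesion=0.5)
    print("output change after 50% hub lesion:",
          round(float(np.linalg.norm(intact - damaged)), 2))

In a pure convergence-zone architecture without a single amodal hub, by contrast, damage to any one zone would be expected to perturb mappings selectively, according to the features that converge there.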

Neuroanatomical development of semantic substrates for concrete concepts: The bottom line

The convergence of evidence suggests that the neural substrates of semantic processing in children are largely similar to those in adults. However, there are some notable differences, and it is not yet clear whether those differences are related to age or to ability. Specific differences include children’s lower accuracy on behavioral tasks, delayed and prolonged ERP latencies for incongruous semantic stimuli, and different areas of activation in both ERP and fMRI tasks. Children tend to be less left lateralized than adults and to recruit frontal areas more often, with the fusiform gyrus another notable site of activation. In general, children appear to rely more heavily than adults on areas that support visual-form processing and attentional resources. Given that different tasks have different requirements, changes in semantic processing may not be driven only by semantic skills, but are likely influenced by an individual’s aptitude in attention, lexical, and phonological processing, at a minimum. There is also a large imbalance in the amount of neurobiological evidence available regarding the semantic organization of adults as compared to children. Given the relatively small number of imaging studies performed with young children, as well as the divergence in the tasks used, divergent findings across studies may take time to be fully understood as they become integrated with future findings. Relative to the potential trajectories of change in the relationships among concepts and semantic features across the adult lifespan, we note that the overwhelming majority of imaging studies with healthy participants (fMRI, PET, and ERP) have provided evidence collected from younger adults (mean ages ranging from 21 to 30 years). Even when a larger range of participant ages is represented (e.g., Lewis et al., 2006, age range 21–52 years), age-related analyses are not performed. This situation represents a gap in the literature that must be addressed to allow adequate understanding of how semantic representations are maintained across the lifespan.

Implications and future directions

We have presented a new step in the ongoing effort to establish a lifespan perspective on semantic processing. The purpose of this review has been to examine the featural basis of semantic organization in order to discern whether a common thread exists between the developmental and adult literatures that can provide the foundation for future lifespan-based investigation of semantic processing. Sensory/motor-based models provided a good point of departure, as they lend themselves well to making predictions about semantic development based on active learning processes. Evidence from the developmental literature has shown that children are able to incorporate sensory/motor experiences and to build semantic representations from that input. This characterization of semantic development fits well with models of implicit learning, in which humans (adults and infants) and some animals have been shown to be able to analyze input implicitly and derive many types of patterns (phonological, visual, grammatical, etc.; see, e.g., Reber, 1967). Semantic organization, however, relies on a more sophisticated system that can combine and compare information, as well as incorporate aspects of concepts that are not readily observable through sensory/motor mechanisms. This characterization is supported by neuropsychological findings from studies of semantic organization. There is no single “semantic organization” area. Rather, we see a developmental progression that implicates the use of sensory/motor information and, with maturity and increased accuracy, increasing association of information. Clearly, language is one of the key factors that influence mature concept development, and studies have shown the influence of vocabulary on semantics even before a child turns 2. Although there is a rich word-learning literature within research on typical language development, there is still much to confirm regarding the relationship between lexical knowledge and semantic knowledge. For example, in a study of typically developing 8-year-olds, McGregor, Sheng and Ball (2007) found some dissociation between lexical and semantic learning: better semantic knowledge did not always lead to better lexical learning. In addition, the existing literature does not fully address how children code multiple semantic features of words, or what effect learning a word’s label has on the encoding of semantic features. For a comprehensive understanding of the development of the semantic system, the best convergence of evidence has come from work designed to examine similar questions with similar methodologies, as highlighted by the similarities between the cognitive neuropsychologically based work of McGregor and colleagues with children and that of Lambon Ralph and colleagues with adults. Such convergence to date has been largely coincidental. We advocate that future studies in this vein be designed a priori to examine semantic processing from a developmental perspective, testing hypotheses equally relevant across the lifespan with methods that are well suited for use with both children and adults. We see several ways forward.

Both cross-sectional studies utilizing a common methodology and longitudinal studies have the potential to achieve several goals. As evidenced by the structure of our hypotheses, there is a prominent gap in our understanding of when and how semantic representations develop. The bulk of the evidence, relative to both normal and disordered processing, jumps from early school age to adulthood, with less evidence from older children and adolescents. Taking a finer-grained approach to sampling the lifespan would inform us about how semantic representations develop over time. While there is evidence that certain types of “core features” are more salient for particular types of concepts, there is less evidence with respect to how that relative salience changes over time as additional linguistic skills are acquired. For example, how does sensory/motor-based knowledge interact with knowledge of thematic associations and grammar (see Meteyard et al., in press; Meteyard & Vigliocco, 2008), and what is the developmental timeline for any changes in relative salience? Do we even have clear definitions of what a fully developed, adult-like semantic representation must be? If so, how would we go about operationalizing such a definition (e.g., counting the number of features known? recognizability of a concept description across listeners?) and norming it for different stages of development? What is the developmental trajectory for changes in the relative salience of these features for acquisition of more abstract concepts (e.g., Chatterjee, 2010; Meteyard et al., in press)? How do these behavioral trajectories manifest or depend on maturational changes in the brain? Does the healthy aging process influence the relationship between features and concepts, and the neural substrates thereof?
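
By way of illustration, the first of these operationalizations could be instantiated as simply as in the following sketch, which scores a feature repertoire for a concept by its size and by its overlap with an adult reference set. The feature lists, the choice of Jaccard overlap, and the function names are our own hypothetical choices, offered only to suggest that the question is tractable, not as an established metric.

    # Hypothetical operationalization of "counting features known":
    # score a participant's produced features for a concept by (a) how
    # many distinct features they produce and (b) their Jaccard overlap
    # with an adult reference feature set. All feature lists invented.
    adult_reference = {
        "apple": {"is red", "is round", "grows on trees", "is eaten",
                  "has seeds", "tastes sweet"},
    }

    def richness(features):
        """Number of distinct features produced for a concept."""
        return len(features)

    def adult_likeness(features, concept):
        """Jaccard overlap with the adult reference feature set."""
        ref = adult_reference[concept]
        return len(features & ref) / len(features | ref)

    child_apple = {"is red", "is eaten", "is round"}
    print(richness(child_apple))                           # 3
    print(round(adult_likeness(child_apple, "apple"), 2))  # 0.5

Norming such scores for different stages of development would, of course, require reference sets collected at each age, which is precisely the kind of finer-grained lifespan sampling advocated above.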

Conclusion

Evidence continues to grow in support of the notion that human beings use sensory/motor information to form semantic representations as soon as they have access to that information. This information alone is not enough to form fully developed concepts, but as humans mature and become able to associate information and to bring in knowledge derived from language, semantic organization is enriched. These early associations remain salient, both in the way in which the normally functioning semantic system accesses information and in predicting deficit patterns in individuals with brain damage. While a great deal has been accomplished in elucidating how semantic knowledge is represented both cognitively and neuroanatomically, there is still work to be done. A more precise understanding of the interaction among sensory/motor systems, semantic organization, and lexical retrieval has the potential to inform theoretical debate as well as to guide the design of cognitive neuropsychological interventions that take advantage of brain–behavior relationships.