
Perceptual-semantic features of words differentially shape early vocabulary in American Sign Language and English

Published online by Cambridge University Press: 06 October 2025

Erin E. Campbell*
Affiliation:
Boston University, Boston, MA, USA
Jennie Pyers
Affiliation:
Wellesley College, Wellesley, MA, USA
Naomi Caselli
Affiliation:
Boston University, Boston, MA, USA
Amy Lieberman
Affiliation:
Boston University, Boston, MA, USA
Arielle Borovsky
Affiliation:
Purdue University, West Lafayette, IN, USA
Corresponding author: Erin E. Campbell; Email: eecamp@bu.edu

Abstract

How do sensory experiences shape the words we learn first? Most studies of language have focused on hearing children learning spoken languages, making it challenging to know how sound and language modality might contribute to language learning. This study investigates how perceptual and semantic features influence early vocabulary acquisition in deaf children learning American Sign Language and hearing children learning spoken English. Using vocabulary data from parent-report inventories, we analyzed 214 nouns common to both languages to compare the types of meanings associated with earlier Age of Acquisition. Results revealed that while children in both groups were earlier to acquire words that were more strongly related to the senses, the specific types of sensory meaning varied by language modality. Hearing children learned words with sound-related features earlier than other words, while deaf children learned words with visual and touch-related features earlier. This suggests that the easiest words to learn are words with meanings that children can experience first-hand, which varies based on children’s own sensory access and experience. Studying the diverse ways children acquire language, in this case deaf children, is key to developing language learning theories that reflect all learners.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
The Author(s), 2025. Published by Cambridge University Press

Introduction

As infants and toddlers explore their environment, they encounter perceptual information—such as how objects look, sound, feel, or move—as well as labels for those objects. Experimental evidence suggests that these perceptual experiences boost children’s learning and retention of words (Seidl et al., Reference Seidl, Indarjit and Borovsky2023). The influence of perceptual information on children’s vocabulary is evident in the semantic properties of early words. Words more strongly connected to perceptual dimensions of meaning (i.e. words that are associated with visual, auditory, tactile, and other properties) tend to be learned earlier than those with fewer such connections (Hills et al., Reference Hills, Maouene, Maouene, Sheya and Smith2009; Peters & Borovsky, Reference Peters and Borovsky2019).

Language learners nevertheless vary in their perceptual experience. Sighted deaf children who are acquiring a signed language have reduced access to auditory information but unrestricted perceptual access to visual and tactile information. Here we ask: How does the learner’s perceptual and linguistic experience shape their language-learning trajectory? As a test case, we examine the semantic properties of early-acquired words in deaf American Sign Language (ASL) signers with exposure to ASL from birth and hearing English speakers, investigating whether differences in perceptual experience (deaf vs. hearing) and language modality (spoken vs. signed) affect the semantic composition of early vocabulary.

Why might early vocabularies differ for children with varying perceptual experiences?

Just as blind toddlers are less likely to produce highly visual words (e.g., “blue,” “see”) than their sighted peers (Campbell et al., Reference Campbell, Casillas and Bergelson2024), deaf toddlers may be less likely than hearing peers to produce words related to sound. Moreover, deafness enhances some aspects of visual attention (Lieberman et al., Reference Lieberman, Hatrak and Mayberry2014; Gioiosa Maurno et al., Reference Gioiosa Maurno, Phillips-Silver and Daza González2024). These changes in visual attention may allow deaf learners to attend more to visual information in the environment than hearing learners do, and this increased attention to visual features could make visual-semantic features of a referent more salient and easier to learn for deaf children.

Differences in language modality may also affect vocabulary. Spoken languages are transmitted orally and perceived auditorily, whereas signs are transmitted in the visual-motor modality and perceived visually. The visual-spatial nature of signed languages may allow for the better encoding of some sensory features (e.g., visual features and tactile features) of a word relative to others (e.g., auditory features) (Perlman et al., Reference Perlman, Little, Thompson and Thompson2018). These possible affordances of signed languages may make certain perceptual features of referents more salient for deaf children in particular. Thus, the sensory experience of deaf children and the properties of the sign language that deaf children are learning may lead to a differential weighting of perceptual features in vocabulary.

Why might early vocabulary composition be unaffected by individuals’ perceptual experience?

Perceptual cues are just one route to word learning (Gleitman, Reference Gleitman1990), so differences in perceptual experience may not substantially alter vocabulary composition. As evidence for this account, blind children still acquire words for visual things (Campbell et al., Reference Campbell, Casillas and Bergelson2024; Landau & Gleitman, Reference Landau and Gleitman1985). Moreover, despite differences in sensory access, deaf children reason about others’ hearing and visual access equally well (Schmidt & Pyers, Reference Schmidt and Pyers2014). Additionally, abstract words (e.g., “hi” and “more”) are also early-learned (Frank et al., Reference Frank, Braginsky, Yurovsky and Marchman2017), despite having no consistent connection to a particular perceptual experience (Casey et al., Reference Casey, Potter, Lew-Williams and Wojcik2023). In this light, we might expect early vocabulary to be robust to differences in children’s perceptual experiences.

Languages around the world exhibit striking cross-linguistic consistency in mechanisms that drive early vocabulary composition (Braginsky et al., Reference Braginsky, Yurovsky, Marchman and Frank2019). Even across language modalities, the trajectory of language acquisition is largely similar: among deaf children who are exposed to language from early in life, sign language vocabulary development parallels that of spoken language acquisition (Caselli et al., Reference Caselli, Lieberman and Pyers2020; Thompson et al., Reference Thompson, Vinson, Woll and Vigliocco2012). Given the strong likenesses between spoken and signed language in vocabulary development, alongside broad cross-linguistic similarity in the composition of early vocabulary, we might expect perceptual experience and language modality to have little bearing on the semantic properties of deaf children’s early vocabularies.

The present study

We investigated whether the early vocabularies of deaf signers and hearing English speakers differ in the perceptual-semantic features of words. If Age of Acquisition (AoA) varies by learning context, we would expect stronger effects of auditory features in spoken English than in ASL and stronger effects of visual and tactile features in ASL than in spoken English. Alternatively, if other mechanisms of word learning carry more weight, then we would observe no differences in AoA for words based on their perceptual salience to deaf and hearing children.

Methods

Measures

This study focused on the acquisition of a set of nouns from the MacArthur-Bates Communicative Development Inventory (CDI), a parent-report inventory of children’s vocabulary (Fenson et al., Reference Fenson, Marchman, Thal, Dale, Reznick and Bates2007). Nouns were selected for this study because young children’s expressive vocabularies tend to be dominated by nouns (relative to other parts of speech) in the first two years (Gentner, Reference Gentner and Kuczaj1982), and because a complete set of semantic feature norms for CDI nouns is available (Borovsky et al., Reference Borovsky, Peters, Cox and McRae2024). Only the 214 nouns that had a translation equivalent on the ASL adaptation of the CDI (ASL-CDI 2.0; Caselli et al. Reference Caselli, Lieberman and Pyers2020) were included to enable comparisons across the same set of concepts. For each of the 214 nouns per language, we estimated typical AoA, tallied the number of semantic features, and obtained a measure of frequency.

Deriving AoA for early-acquired nouns

Age of Acquisition (AoA): To calculate AoA, we used data from two language-specific vocabulary assessments: the MacArthur-Bates Communicative Development Inventory (English CDI; Fenson et al., Reference Fenson, Marchman, Thal, Dale, Reznick and Bates2007) and the American Sign Language Communicative Development Inventory 2.0 (ASL CDI; Caselli et al., Reference Caselli, Lieberman and Pyers2020). Caregivers are asked to report which vocabulary items on the inventory their child understands and/or produces; in the present study, we drew from the production measure. We pooled the data from these checklists from many participants, using openly available datasets for English (N English = 5,450 hearing, monolingual children; Wordbank database; Frank et al., 2016) and ASL (N ASL = 120 deaf signing children with deaf parents; Caselli et al., Reference Caselli, Lieberman and Pyers2020). AoA was then computed by calculating the proportion of children who were reported to produce each word at each age (months) over which the assessment was measured (English CDI: 16–30 months; ASL CDI: 9–73 months) and then fitting a logistic curve to those proportions to determine the first age when the curve crossed 50%. This AoA estimation method controls for some variability in measurement across time (Frank et al., Reference Frank, Braginsky, Yurovsky and Marchman2021). Because the English and ASL datasets differ in size and in age range (with the ASL sample being smaller and covering a broader age range), we z-scored AoA within languages for all analyses.
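For illustration, the sketch below implements this AoA procedure in R: fit a logistic curve to the proportion of children reported to produce each word at each age, then solve for the age at which the curve crosses 50%. The data frame `cdi_long` and its columns (`word`, `age`, `n_producing`, `n_children`) are assumptions for the example, not the variable names in our analysis code; the same steps would be run separately for the English and ASL datasets before z-scoring within language.

```r
# Minimal sketch of the AoA estimation described above (assumed data layout).
# `cdi_long` has one row per word x age: word, age (months), n_producing, n_children.
library(dplyr)
library(purrr)
library(tibble)

estimate_aoa <- function(df) {
  # Fit a logistic curve to the proportion of children producing the word at each age
  fit <- glm(cbind(n_producing, n_children - n_producing) ~ age,
             family = binomial, data = df)
  # The curve crosses 50% where the linear predictor equals 0:
  # 0 = intercept + slope * age  =>  age = -intercept / slope
  unname(-coef(fit)[1] / coef(fit)[2])
}

aoa_by_word <- split(cdi_long, cdi_long$word) |>
  map_dbl(estimate_aoa) |>
  enframe(name = "word", value = "aoa") |>
  # z-score AoA within language before any cross-language comparison
  mutate(aoa_z = as.numeric(scale(aoa)))
```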

Calculating perceptual features

Semantic and Perceptual Features: We drew information about the number of semantic and perceptual features from the Language Learning and Meaning Acquisition lab Noun Norms (Borovsky et al., Reference Borovsky, Peters, Cox and McRae2024). This feature set was generated using a typical procedure for collecting semantic features: asking multiple individuals (typically 30 or more; here, N raters = 33–126) to list semantic features that come to mind for various concepts, and retaining features mentioned by a reasonable proportion of individuals (typically at least 10–20%; here, 16.67%), resulting in a list of features that constitutes a canonical understanding and representation of concepts across individuals. These semantic features were categorized according to the Wu and Barsalou (Reference Wu and Barsalou2009) knowledge-type taxonomy (perceptual, functional, taxonomic, or encyclopedic), and then the perceptual features were subcategorized according to Cree & McRae’s (Reference Cree and McRae2003) brain region knowledge-type taxonomy (Auditory, Tactile, Gustatory, Olfactory, Visual-Color, Visual-Motion, and Visual-Form and Surface; see Table 1 for examples). For further details on how these semantic feature norms were generated, see Borovsky et al., Reference Borovsky, Peters, Cox and McRae2024.

Table 1. Example features for different semantic feature types and perceptual feature subtypes from Borovsky et al., Reference Borovsky, Peters, Cox and McRae2024

Note. Semantic features can be categorized into four main categories: perceptual, functional, taxonomic, and encyclopedic, following the Wu and Barsalou (Reference Wu and Barsalou2009) knowledge-type taxonomy. Perceptual features can be further broken down into subtypes (Cree & McRae, Reference Cree and McRae2003).

For each noun on the English and ASL CDIs, we then counted the number of perceptual features in each feature category. For example, under this metric, lollipop (see Figure 1B) would have one visual-color feature (<different_colors>), two tactile features (<is_hard>, <is_sticky>), two taste features (<tastes_sweet>, <different_flavors>), three visual-form and surface features (<comes_on_a_stick>, <is_round>, <has_a_wrapper>), and zero features in each of the other perceptual categories, for a total of eight perceptual features.
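This per-noun tally can be illustrated with a short R sketch. The long-format table `noun_norms` and its column names (`concept`, `feature_type`, `perceptual_subtype`) are hypothetical stand-ins for the published norms' layout, not the actual variable names.

```r
# Count perceptual features per noun, split by perceptual subtype (assumed layout:
# one row per concept x feature, with a broad feature_type and, for perceptual
# features, a perceptual_subtype such as Auditory, Tactile, Visual-Color, ...).
library(dplyr)
library(tidyr)

feature_counts <- noun_norms |>
  filter(feature_type == "perceptual") |>
  count(concept, perceptual_subtype) |>
  pivot_wider(names_from = perceptual_subtype, values_from = n, values_fill = 0)

# e.g., the row for "lollipop" would show 1 Visual-Color, 2 Tactile, 2 Gustatory,
# and 3 Visual-Form and Surface features, with 0 in the remaining subtypes.
```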

Figure 1. Visualizing the perceptual-semantic features of early acquired nouns. The panels progress from left to right, offering an increasingly broad view of dataset variability. (a) A conceptual “feature list” highlights the perceptual-semantic properties of a single noun (frog), categorized by feature type (e.g., visual motion, tactile, and auditory). (b) Polar plots display the feature composition of four selected nouns (balloon, friend, frog, and lollipop), chosen to represent variation in feature subtype distribution. For example, balloon exhibits the most sound features, frog emphasizes visual and motion features, and friend has no perceptual features. Each filled rung of the circle represents one feature in that category. (c) A stacked bar chart showing the overall distribution of perceptual features across a subset of thirty nouns. Nouns are sorted by total feature count, with color coding indicating feature type. The full list of 214 nouns can be viewed in Supplementary Materials.

Deriving measures of frequency in ASL and English

Frequency: To control for the possibility that higher-frequency words simply tend to have more perceptual features in different modalities, we included lexical frequency as a control variable in our analyses. We calculated lexical frequency separately for English and ASL. In English, following procedures in Peters and Borovsky (Reference Peters and Borovsky2019), we calculated the natural log of each English CDI noun’s frequency in speech directed to children within the English CDI age range (30 months or younger) drawn from the North American English CHILDES corpus (MacWhinney, 2002, childes-db-version-0.1.0; Sanchez et al., Reference Sanchez, Meylan, Braginsky, MacDonald, Yurovsky and Frank2019).

For ASL, because corpus-derived frequency estimates are not available, we used subjective frequency estimates from ASL-LEX 2.0 (Caselli et al., Reference Caselli, Sehyr, Cohen-Goldberg and Emmorey2017). These estimates were generated by aggregating subjective frequency ratings made by deaf ASL signers (N raters = 25–35; see Sehyr et al., Reference Sehyr, Caselli, Cohen-Goldberg and Emmorey2021), and are generally well correlated with corpus frequency counts (e.g., Chen & Dong, Reference Chen and Dong2019; Alonso et al., Reference Alonso, Fernandez and Díez2011; Fenlon et al., Reference Fenlon, Schembri, Rentelis, Vinson and Cormier2014; Vinson et al., Reference Vinson, Cormier, Denmark, Schembri and Vigliocco2008; Balota et al., Reference Balota, Pilotti and Cortese2001).
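As a rough illustration, the two frequency predictors could be assembled as follows. The input data frames and column names (`cds_counts`, `asl_lex`, `frequency_mean`, etc.) are placeholders rather than the actual variable names; the English counts would come from the North American English CHILDES corpora via childes-db, and the ASL ratings from ASL-LEX 2.0.

```r
library(dplyr)

# English: natural log of each CDI noun's count in child-directed speech
# (CHILDES, children aged 30 months or younger), following Peters & Borovsky (2019)
english_freq <- cds_counts |>
  mutate(log_freq = log(count))

# ASL: mean subjective frequency ratings from ASL-LEX 2.0, aggregated across raters
asl_freq <- asl_lex |>
  select(word = entry_id, subj_freq = frequency_mean)
```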

Approach to multivariate linear regression analysis

Our goal was to examine whether and how perceptual information across concepts contributed differentially to AoA as a function of language modality (i.e., English and ASL). We explored this question by measuring how the number of features across all noun concepts in the English and ASL CDIs predicts AoA while controlling for frequency. Then, to explore the independent contribution of each perceptual feature type while controlling for other feature types and frequency, we conducted multivariate linear regression analyses in three models: English-only, ASL-only, and then English+ASL together. In the models reported below, categorical predictors (language) were sum-coded, and continuous variables (frequency and number of perceptual features) were centered and scaled to allow for relative comparisons of the influence of each variable across our English and ASL models. Since the ASL measure is not a direct measure of frequency, for models that include both ASL and English together, we included only the frequency measures derived from corpora of North American English. All analyses were conducted in R (v4.2.2), figures were created with Canva (Figure 1.A) and ggplot (all others), and data and code are available on OSF. Each analysis is reported in greater detail below.
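A minimal sketch of this setup in R is shown below, using the language-specific broad-feature-type models (reported first in the Results) as an example. The data frame `words` and its column names are assumptions for illustration, not our analysis code; `freq` stands for the language-appropriate frequency measure.

```r
# Sum-code the language factor, center and scale continuous predictors,
# then fit the language-specific models of AoA from broad feature types.
library(dplyr)

words <- words |>
  mutate(
    language = factor(language),
    across(c(freq, perceptual, functional, taxonomic, encyclopedic),
           ~ as.numeric(scale(.x)))   # centered and scaled for comparability
  )
contrasts(words$language) <- contr.sum(2)

m_eng <- lm(aoa_z ~ freq + perceptual + functional + taxonomic + encyclopedic,
            data = filter(words, language == "English"))
m_asl <- lm(aoa_z ~ freq + perceptual + functional + taxonomic + encyclopedic,
            data = filter(words, language == "ASL"))
summary(m_eng); summary(m_asl)
```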

Results

Relations between semantic feature types and AoA

In our first multiple regression analysis, we explored how AoA for each concept associates with the number of broad feature types (Perceptual, Functional, Taxonomic, and Encyclopedic) for each language individually while controlling for lexical frequency in each language.

In English, this analysis directly replicates a previously reported analysis (Peters & Borovsky, Reference Peters and Borovsky2019) with a subset of 214 of their 359 noun concepts that have a translation equivalent in ASL. As expected, this English analysis revealed a strong association between the number of perceptual features attributed to a noun concept and AoA, after controlling for other semantic feature types; see Table 2. The model results align with evidence that children learning an auditorily presented language acquire words with referents that are frequent and perceptually accessible (e.g., Seidl et al., Reference Seidl, Indarjit and Borovsky2023).

Table 2. Effects of broad feature subtypes on AoA in English and ASL

We carried out the same analysis in ASL. As with the analysis in English, this analysis revealed a strong association between the number of perceptual features comprising a concept and AoA (β = −0.21, p<.001), even after controlling for other feature subtypes (not significant; all ps>.05) and sign frequency (higher frequency associated with earlier AoA; β = −0.37, p<.001); see Table 2. Together, these analyses indicate that irrespective of linguistic modality or sensory experience of the learner, learners prioritize the acquisition of concepts that have perceptually accessible components.

Relations between perceptual feature sub-types and AoA across language modalities

Given the clear perceptual differences between ASL and English in how they are produced and perceived by young learners, we next asked which perceptual modality subtypes contribute most strongly to AoA.

We first explored how different perceptual feature subtypes predicted AoA in English while controlling for lexical frequency in child-directed speech. The model included as predictors: frequency and the number of perceptual features in each of seven feature subtypes (Auditory, Gustatory, Olfactory, Tactile, Visual-Color, Visual-Form and Surface, and Visual-Motion). The results indicated that perceptual features were not all weighted equally: in particular, having more Auditory, Tactile, and Gustatory features was associated with earlier AoA in English (see Table 3); we did not observe relationships with olfactory features or any of the visual feature subtypes.

Table 3. Effect of perceptual feature subtypes on AoA in English and ASL

The parallel analysis in ASL revealed that a different subset of perceptual features contributed to variance in AoA. Here, Tactile and all three Visual feature subtypes (Color, Form and Surface, and Motion) were significant predictors of ASL AoA; concepts with more Tactile and Visual features are earlier-acquired for children learning ASL (see Table 3). We found no significant relationships between AoA and Olfactory, Auditory, or Gustatory features.

Finally, we directly compared whether perceptual feature subtypes interact with language modality while controlling for frequency. To maximize power in our model to detect interactions, we dropped two terms (Olfactory and Gustatory features) and focused our comparison on the perceptual feature subtypes of interest (Sound, Tactile, and the three Visual feature subtypes). In this combined model, we found a main effect of frequency (β = −2.46, p < 0.001) as well as two significant interactions between feature subtype and language modality: Language x Visual-Motion features and Language x Tactile features. Full model results are described in Table 4 below.

Table 4. Comparing influence of different perceptual feature types across languages. Significant interactions indicate that feature subtypes differentially influence AoA across languages
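For concreteness, the combined model takes roughly the following form in R; predictor names are illustrative, and `words` is the scaled, sum-coded data frame sketched in the Methods. Per the text above, only the English corpus-derived frequency measure enters this model.

```r
# Combined English + ASL model (cf. Table 4): Language x feature-subtype interactions
# test whether a subtype's relation to AoA differs across the two languages.
m_combined <- lm(
  aoa_z ~ log_freq_eng + language * (auditory + tactile + visual_color +
            visual_form_surface + visual_motion),
  data = words
)
summary(m_combined)
```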

As illustrated in Figures 2A and 2B, these significant interactions are characterized by increased effects of tactile (β = 0.80, p = 0.038) and visual-motion features (β = 1.27, p = 0.002) on AoA in ASL vs. English. In other words, deaf children learning ASL—a visual and tactile language—learn words that have visual and tactile semantic features earlier (β visual-motion = −1.60, p visual-motion < 0.001; β tactile = −1.42, p tactile < 0.001); these effects are absent (visual) or weaker (tactile) for children learning English. At the baseline level (ASL), there was no significant effect of auditory features on AoA (β = 0.12, p = 0.738). While the effect was numerically stronger in English (and significant in the English-only model [β = −0.43, p = 0.042]), there was no significant difference between the languages’ slopes (β = −0.47, p = 0.245; Figure 2C).

Figure 2. Illustrating the influence of Visual-Motion (A), Tactile (B), and Sound (C) features on AoA across ASL and English. To highlight differences in effect of features across languages, AoA has been mean-centered within languages, so that differences in slope reflect differences in the influence of each feature on AoA, with steeper slopes indicating a stronger relation to AoA; negative slopes indicate that the feature is associated with earlier word production, and positive slopes indicate that the feature is associated with later word production.
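A Figure 2-style panel can be approximated in ggplot2 as follows (again with illustrative variable names): AoA is mean-centered within each language and plotted against the number of features in a given subtype, with one fitted slope per language.

```r
library(dplyr)
library(ggplot2)

words |>
  group_by(language) |>
  mutate(aoa_centered = aoa_z - mean(aoa_z)) |>   # center AoA within language
  ggplot(aes(x = visual_motion, y = aoa_centered, colour = language)) +
  geom_jitter(width = 0.15, alpha = 0.4) +
  geom_smooth(method = "lm", se = TRUE) +         # one fitted slope per language
  labs(x = "Number of Visual-Motion features",
       y = "AoA (mean-centered within language)")
```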

Discussion

In this study, we asked whether children’s perceptual experiences and first language modality shape the composition of their early vocabulary. We compared the effects of different semantic features on the AoA of words for deaf ASL learners and hearing English learners. We replicated the effect of perceptual semantic features on AoA in the vocabulary development of English-speaking children (Peters & Borovsky, Reference Peters and Borovsky2019) and extended that finding to deaf children learning ASL. Across groups, words with more perceptual-semantic features were learned earlier, an effect not observed for other semantic features. Yet we found that perceptual feature subtypes exerted different effects across the two languages. For hearing English learners only, the number of sound-related perceptual features (e.g., <croaks> for a frog) and taste-related perceptual features were associated with earlier word production. By contrast, for deaf signing children only, the number of visual features predicted earlier AoA. Tactile features exerted stronger effects on AoA in ASL than in English. We consider two possible non-competing explanations for this observed difference in perceptual types: (1) differences between ASL and English, and (2) differences between deaf and hearing learners.

Differences in the language

Prior research shows minimal differences between spoken and signed languages in acquisition (Newport & Meier, Reference Newport and Meier1985; Caselli & Pyers, Reference Caselli and Pyers2017) and in behavioral and neural processing (Emmorey, Reference Emmorey2021; MacSweeney et al., Reference MacSweeney, Woll, Campbell, McGuire, David, Williams, Suckling, Calvert and Brammer2002), so observing any cross-linguistic difference is notable.

One possible explanation is that children’s language input is tailored to their hearing status. Interlocutors are sensitive to their conversation partner’s sensory abilities and adjust their communication accordingly (Grigoroglou & Papafragou, Reference Grigoroglou and Papafragou2016; Hazan & Baker, Reference Hazan and Baker2011). For deaf children, adults may emphasize words with visual or tactile features while deemphasizing sound-related words. It is possible that deaf children receive fewer auditory features in their language input not solely because of their own hearing status but also because of the hearing status of their primary caregivers and the perceptual features salient to those caregivers: hearing caregivers may use words with auditory features more frequently than deaf caregivers, and deaf caregivers may be more likely to use words with visual features. While lexical frequency in ASL was controlled, such measures cannot fully capture potential differences in child-directed input, especially given the current lack of a large corpus of parent-child interactions in ASL.

Another possible explanation for differences in AoA is the languages’ origins: ASL and English emerged to meet the needs of language users who differ in hearing status; ASL evolved organically through generations of North American deaf signers, whereas the vast majority (>95%) of English speakers are hearing (National Deaf Center on Postsecondary Outcomes, 2023). Despite these differences, ASL and other signed languages have a diverse lexicon for sound-related words (Emmorey et al., Reference Emmorey, Nicodemus and O’Gradyin press; Spread the Sign). Nevertheless, such differences may shape how words for sensory experience enter the lexicon and are used—in the same way that culture might affect the semantic organization of the lexicon (McGregor et al., Reference McGregor, Munro, Chen, Baker and Oleson2018).

Lastly, and perhaps most compellingly, the differences in iconic affordances of signed vs. spoken languages may drive our observed effects. Sensory features can be represented in a word through iconicity, a structured alignment between word form and meaning. For example, “moo” approximates the lowing of a cow, and the ASL sign for DRINK resembles holding and tipping a glass toward the lips. Iconic mappings may facilitate learning by highlighting perceptual similarities between word-forms and their meanings (Imai & Kita, Reference Imai and Kita2014; Laing, Khattab, Sloggett & Keren-Portnoy, Reference Laing, Khattab, Sloggett and Keren-Portnoy2025). Indeed, in both signed and spoken languages, iconic words are produced earlier than non-iconic words (Caselli & Pyers, Reference Caselli and Pyers2017; Perry et al., Reference Perry, Perlman and Lupyan2015; Sidhu et al., Reference Sidhu, Williamson, Slavova and Pexman2021; Thompson et al., Reference Thompson, Vinson, Woll and Vigliocco2012), although how learners access iconicity may change with age and experience (Caselli & Pyers, Reference Caselli and Pyers2017; Magid & Pyers, Reference Magid and Pyers2017; Occhino et al., Reference Occhino, Anible, Wilkinson and Morford2017; Thompson et al., Reference Thompson, Vinson, Woll and Vigliocco2012).

The semantic characteristics of iconic words differ across modalities. Iconic mappings in sign language rarely represent auditory features of a referent, and more frequently align with tactile and visual features of form and meaning (Perlman et al., Reference Perlman, Little, Thompson and Thompson2018). In ASL, concepts with auditory features are often depicted iconically using visual or temporal properties (e.g., volume depicted by the degree of opening of the fingers; Emmorey et al., Reference Emmorey, Nicodemus and O’Gradyin press). Such iconic affordances may amplify the salience of modality-specific sensory features in child-directed input.

Beyond salience, iconic words may be overrepresented in the input to children (Motamedi et al., Reference Motamedi, Murgiano, Perniss, Wonnacott, Marshall, Goldin-Meadow and Vigliocco2021; Perry et al., Reference Perry, Perlman, Winter, Massaro and Lupyan2018) or may be modified during child-directed speech in ways that highlight the iconic mapping of the sensory properties (Fuks, Reference Fuks2020; Perniss et al., Reference Perniss, Lu, Morgan and Vigliocco2018; although cf. Gappmayr et al., Reference Gappmayr, Lieberman, Pyers and Caselli2022). A more systematic analysis of corpus data would be a useful step toward answering this question.

Finally, through iconicity, phonological features may systematically convey semantic information (Campbell et al., Reference E.E., Sehyr, Pontecorvo, Cohen-Goldberg, Emmorey and Caselli2025). In ASL, systematic phonological features such as the location of the sign on the body may highlight specific perceptual features of the referent (e.g., the sign for FLOWER is located at the nose and is associated with smell; Cates et al., Reference Cates, Gutiérrez, Hafer, Barrett and Corina2013; many signs related to vision are produced at the eyes; Östling, Börstell & Courtaux, Reference Östling, Börstell and Courtaux2018). This systematic association may make it easier for children to learn new words with that same phonological and perceptual-semantic relationship.

Differences in the learner

Experimental work shows that toddlers learn words better when they can directly experience the referents of the word through the senses (Seidl et al., Reference Seidl, Indarjit and Borovsky2023). Accordingly, auditory features—which are largely inaccessible to deaf children—may not support the acquisition of words for deaf children. The nature of deaf children’s early experiences might, in turn, lead to an upweighting of visual, tactile, and motion features relative to auditory ones. If early word learning initially relies on perceptual salience (e.g., Pruden et al., Reference Pruden, Hirsh-Pasek, Golinkoff and Hennon2006), then these types of words might be more salient and thus more easily learned for deaf children, who are (by definition) less sensitive to auditory stimuli and possibly more sensitive to certain types of visual and tactile stimuli (Dhanik et al., Reference Dhanik, Pandey, Mishra, Keshri and Kumar2024; Gioiosa Maurno et al., Reference Gioiosa Maurno, Phillips-Silver and Daza González2024).

These results could also be viewed through a more active lens, wherein learners’ preferences shape their vocabulary. Experimental findings show that children more robustly learn words that interest them (Ackermann et al., Reference Ackermann, Hepach and Mani2020). Deaf children may gravitate toward visuo-motor or tactile experiences during play or interaction, prompting additional linguistic input related to these referents. This increased exposure and interest may support the encoding and retention of words associated with visual and tactile features. More systematic analyses of naturalistic interactions are needed to better understand how children’s exploratory behaviors shape their vocabulary acquisition.

Limitations and future directions

Because fewer data are presently available for ASL than for English, our estimates of ASL AoA are likely less precise than the English AoA estimates. It is possible that this noise masked real patterns that we were unable to detect (e.g., effects of sound). However, despite this limitation, we observed consistent and significant patterns in the ASL data, suggesting that these findings are robust.

By comparing deaf signers with deaf parents and hearing English speakers with hearing parents, language modality and the perceptual access of the learner and caregiver are all conflated. Future studies comparing the vocabulary development of hearing English-learners to that of deaf children learning spoken language (same language modality, but groups differ in auditory access) or hearing children learning ASL (different language modality, but groups have similar auditory access), as well as comparisons by caregiver hearing status, would better tease apart the effects of language modality, child hearing status, and caregiver hearing status.

Additionally, we substituted English semantic feature norms for ASL-specific semantic feature norms. In a recent study comparing semantic features collected for English words and Spanish words, researchers found that the norms were semantically similar, not language-specific (Vivas et al., Reference Vivas, Kogan, Romanelli, Lizarralde and Corda2020), suggesting that English norms could be a reasonable substitute in this context. However, understanding the semantic features that deaf signers associate with these signs may further elucidate the mechanisms underlying the observed effects on vocabulary composition.

Conclusions

Studying diverse language acquisition experiences is essential for understanding how variation in sensory and linguistic experiences shapes learning. This study shows that, across languages and learners, children were earliest to learn words whose meanings aligned with their sensory and linguistic experience. For deaf ASL learners, these were words linked to visual and tactile features, whereas for hearing English learners, they were words tied to auditory features. This study documents a rare modality difference between deaf and hearing learners of signed and spoken languages, and in doing so, our findings illustrate one way learners’ experience with the world can fundamentally change language learning.

Replication package

All analyses, data, and code are available at: https://osf.io/m7v6k/.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/S0142716425100210

Competing interests

The authors have no conflicts of interest to declare.

Financial support

This research was supported by R01DC018593 and R21HD108730 to AB; NIH DC015272 to AL; NIH DC018279, NIH DC016104, NSF BCS-1918252, NIH DC018279-04S, and NSF BCS 2234787 to NC; James S. McDonnell Foundation to JP.

Artificial intelligence

We did not use AI in conducting this research study. All content has been carefully written, reviewed, and edited by human authors.

Ethics

This research received approval from the Boston University and Purdue University Institutional Review Boards.

References

Ackermann, L., Hepach, R., & Mani, N. (2020). Children learn words easier when they are interested in the category to which the word belongs. Developmental Science, 23(3), e12915. https://onlinelibrary.wiley.com/doi/full/10.1111/desc.12915
Alonso, M. A., Fernandez, A., & Díez, E. (2011). Oral frequency norms for 67,979 Spanish words. Behavior Research Methods, 43(2), 449–458. https://doi.org/10.3758/s13428-011-0062-3
Balota, D. A., Pilotti, M., & Cortese, M. J. (2001). Subjective frequency estimates for 2,938 monosyllabic words. Memory & Cognition, 29(4), 639–647. https://doi.org/10.3758/BF03200465
Borovsky, A., Peters, R. E., Cox, J. I., & McRae, K. (2024). Feats: A database of semantic features for early produced noun concepts. Behavior Research Methods, 56(4), 3259–3279. https://doi.org/10.3758/s13428-023-02242-x
Braginsky, M., Yurovsky, D., Marchman, V. A., & Frank, M. C. (2019). Consistency and variability in children’s word learning across languages. Open Mind, 3, 52–67. https://doi.org/10.1162/opmi_a_00026
Campbell, E. E., Casillas, R., & Bergelson, E. (2024). The role of vision in the acquisition of words: Vocabulary development in blind toddlers. Developmental Science, 27(4), e13475. https://doi.org/10.1111/desc.13475
Campbell, E. E., Sehyr, Z. S., Pontecorvo, E., Cohen-Goldberg, A., Emmorey, K., & Caselli, N. (2025). Iconicity as an organizing principle of the lexicon. Proceedings of the National Academy of Sciences of the United States of America, 122(16), e2401041122. https://doi.org/10.1073/pnas.2401041122
Caselli, N. K., Lieberman, A. M., & Pyers, J. E. (2020). The ASL-CDI 2.0: An updated, normed adaptation of the MacArthur Bates Communicative Development Inventory for American Sign Language. Behavior Research Methods, 52(5), 2071–2084. https://doi.org/10.3758/s13428-020-01376-6
Caselli, N. K., & Pyers, J. E. (2017). The road to language learning is not entirely iconic: Iconicity, neighborhood density, and frequency facilitate acquisition of sign language. Psychological Science, 28(7), 979–987. https://doi.org/10.1177/0956797617700498
Caselli, N. K., Sehyr, Z. S., Cohen-Goldberg, A. M., & Emmorey, K. (2017). ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods, 49(2), 784–801. https://doi.org/10.3758/s13428-016-0742-0
Casey, K., Potter, C. E., Lew-Williams, C., & Wojcik, E. H. (2023). Moving beyond “nouns in the lab”: Using naturalistic data to understand why infants’ first words include uh-oh and hi. Developmental Psychology, 59(11), 2162–2173. https://doi.org/10.1037/dev0001630
Cates, D., Gutiérrez, E., Hafer, S., Barrett, R., & Corina, D. (2013). Location, location, location. Sign Language Studies, 13(4), 433–461. https://doi.org/10.1353/sls.2013.0014
Chen, X., & Dong, Y. (2019). Evaluating objective and subjective frequency measures in L2 lexical processing. Lingua, 230, 102738. https://doi.org/10.1016/j.lingua.2019.102738
Cree, G. S., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132(2), 163–201. https://doi.org/10.1037/0096-3445.132.2.163
Dhanik, K., Pandey, H. R., Mishra, M., Keshri, A., & Kumar, U. (2024). Neural adaptations to congenital deafness: Enhanced tactile discrimination through cross-modal neural plasticity - an fMRI study. Neurological Sciences, 45(11), 5489–5499. https://doi.org/10.1007/s10072-024-07615-4
Emmorey, K. (2021). New perspectives on the neurobiology of sign languages. Frontiers in Communication, 6, 748430. https://doi.org/10.3389/fcomm.2021.748430
Emmorey, K., Nicodemus, B., & O’Grady, L. (in press). The language of perception in American Sign Language. PsyArXiv. https://doi.org/10.31234/osf.io/ed9bf
Fenlon, J., Schembri, A., Rentelis, R., Vinson, D., & Cormier, K. (2014). Using conversational data to determine lexical frequency in British Sign Language: The influence of text type. Lingua, 143, 187–202. https://doi.org/10.1016/j.lingua.2014.02.003
Fenson, L., Marchman, V., Thal, D. J., Dale, P. S., Reznick, J. S., & Bates, E. (2007). MacArthur-Bates Communicative Development Inventories: User’s guide and technical manual (2nd ed.). Brookes. https://fpg.unc.edu/publications/macarthur-bates-communicative-development-inventories-users-guide-and-technical-manual
Frank, M. C., Braginsky, M., Yurovsky, D., & Marchman, V. A. (2017). Wordbank: An open repository for developmental vocabulary data. Journal of Child Language, 44(3), 677–694. https://doi.org/10.1017/S0305000916000209
Frank, M. C., Braginsky, M., Yurovsky, D., & Marchman, V. A. (2021). Variability and consistency in early language learning: The Wordbank project. MIT Press. https://doi.org/10.7551/mitpress/11577.001.0001
Fuks, O. (2020). Developmental path in input modifications and the use of iconicity in early hearing infant–deaf mother interactions. American Annals of the Deaf, 165(4), 418–435. https://doi.org/10.1353/aad.2020.0028
Gappmayr, P., Lieberman, A. M., Pyers, J., & Caselli, N. K. (2022). Do parents modify child-directed signing to emphasize iconicity? Frontiers in Psychology, 13, 920729. https://doi.org/10.3389/fpsyg.2022.920729
Gentner, D. (1982). Why nouns are learned before verbs: Linguistic relativity versus natural partitioning. In S. A. Kuczaj (Ed.), Language development: Vol. 2. Language, thought and culture (pp. 301–334). Hillsdale, NJ: Erlbaum.
Gioiosa Maurno, N., Phillips-Silver, J., & Daza González, M. T. (2024). Research of visual attention networks in deaf individuals: A systematic review. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1369941
Gleitman, L. (1990). The structural sources of verb meanings. Language Acquisition, 1(1), 3–55. https://doi.org/10.1207/s15327817la0101_2
Grigoroglou, M., & Papafragou, A. (2016). Are children flexible speakers? Effects of typicality and listener needs in children’s event descriptions. Proceedings of the Annual Meeting of the Cognitive Science Society, 38, 782–787. https://escholarship.org/uc/item/3s13v7sg
Hazan, V., & Baker, R. (2011). Acoustic-phonetic characteristics of speech produced with communicative intent to counter adverse listening conditions. The Journal of the Acoustical Society of America, 130(4), 2139–2152. https://doi.org/10.1121/1.3623753
Hills, T. T., Maouene, M., Maouene, J., Sheya, A., & Smith, L. (2009). Longitudinal analysis of early semantic networks: Preferential attachment or preferential acquisition? Psychological Science, 20(6), 729–739. https://doi.org/10.1111/j.1467-9280.2009.02365.x
Imai, M., & Kita, S. (2014). The sound symbolism bootstrapping hypothesis for language acquisition and language evolution. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651), 20130298. https://doi.org/10.1098/rstb.2013.0298
Laing, C., Khattab, G., Sloggett, S., & Keren-Portnoy, T. (2025). Size sound symbolism in mothers’ speech to their infants. Journal of Child Language, 52(4), 739–761. https://doi.org/10.1017/S0305000921000799
Landau, B., & Gleitman, L. R. (1985). Language and experience: Evidence from the blind child. Harvard University Press.
Lieberman, A. M., Hatrak, M., & Mayberry, R. I. (2014). Learning to look for language: Development of joint attention in young deaf children. Language Learning and Development, 10(1). https://doi.org/10.1080/15475441.2012.760381
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C. R., Suckling, J., Calvert, G. A., & Brammer, M. J. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125(7), 1583–1593. https://doi.org/10.1093/brain/awf153
Magid, R. W., & Pyers, J. E. (2017). “I use it when I see it”: The role of development and experience in Deaf and hearing children’s understanding of iconic gesture. Cognition, 162, 73–86. https://doi.org/10.1016/j.cognition.2017.01.015
McGregor, K., Munro, N., Chen, S. M., Baker, E., & Oleson, J. (2018). Cultural influences on the developing semantic lexicon. Journal of Child Language, 45(6), 1309–1336. https://doi.org/10.1017/S0305000918000211
Motamedi, Y., Murgiano, M., Perniss, P., Wonnacott, E., Marshall, C., Goldin-Meadow, S., & Vigliocco, G. (2021). Linking language to sensory experience: Onomatopoeia in early language development. Developmental Science, 24(3), e13066. https://doi.org/10.1111/desc.13066
National Deaf Center on Postsecondary Outcomes. (2023, January 16). A new look at 2022 Census data about deaf people. University of Texas at Austin, National Deaf Center on Postsecondary Outcomes. https://nationaldeafcenter.org/2022CensusData
Newport, E. L., & Meier, R. P. (1985). The acquisition of American Sign Language. In The crosslinguistic study of language acquisition, Vol. 1: The data; Vol. 2: Theoretical issues (pp. 881–938). Lawrence Erlbaum Associates.
Occhino, C., Anible, B., Wilkinson, E., & Morford, J. P. (2017). Iconicity is in the eye of the beholder: How language experience affects perceived iconicity. Gesture, 16(1), 100–126. https://doi.org/10.1075/gest.16.1.04occ
Östling, R., Börstell, C., & Courtaux, S. (2018). Visual iconicity across sign languages: Large-scale automated video analysis of iconic articulators and locations. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.00725
Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.01433
Perniss, P., Lu, J. C., Morgan, G., & Vigliocco, G. (2018). Mapping language to the world: The role of iconicity in the sign language input. Developmental Science, 21(2), e12551. https://doi.org/10.1111/desc.12551
Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLOS ONE, 10(9), e0137147. https://doi.org/10.1371/journal.pone.0137147
Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21(3), e12572. https://doi.org/10.1111/desc.12572
Peters, R., & Borovsky, A. (2019). Modeling early lexico-semantic network development: Perceptual features matter most. Journal of Experimental Psychology: General, 148(4), 763–782. https://doi.org/10.1037/xge0000596
Pruden, S. M., Hirsh-Pasek, K., Golinkoff, R. M., & Hennon, E. A. (2006). The birth of words: Ten-month-olds learn words through perceptual salience. Child Development, 77(2), 266–280. https://doi.org/10.1111/j.1467-8624.2006.00869.x
Sanchez, A., Meylan, S. C., Braginsky, M., MacDonald, K. E., Yurovsky, D., & Frank, M. C. (2019). childes-db: A flexible and reproducible interface to the Child Language Data Exchange System. Behavior Research Methods, 51(4), 1928–1941. https://doi.org/10.3758/s13428-018-1176-7
Schmidt, E., & Pyers, J. (2014). First-hand sensory experience plays a limited role in children’s early understanding of seeing and hearing as sources of knowledge: Evidence from typically hearing and deaf children. British Journal of Developmental Psychology, 32(4), 454–467. https://doi.org/10.1111/bjdp.12057
Sehyr, Z. S., Caselli, N., Cohen-Goldberg, A. M., & Emmorey, K. (2021). The ASL-LEX 2.0 project: A database of lexical and phonological properties for 2,723 signs in American Sign Language. The Journal of Deaf Studies and Deaf Education, 26(2), 263–277. https://doi.org/10.1093/deafed/enaa038
Seidl, A. H., Indarjit, M., & Borovsky, A. (2023). Touch to learn: Multisensory input supports word learning and processing. Developmental Science, 27(1), e13419. https://doi.org/10.1111/desc.13419
Sidhu, D., Williamson, J., Slavova, V., & Pexman, P. (2021). An investigation of iconic language development in four datasets. Journal of Child Language, 49(2), 382–396. https://doi.org/10.1017/S0305000921000040
Sign language dictionary | SpreadTheSign. (n.d.). Retrieved April 16, 2021, from https://www.spreadthesign.com/en.us/search/
Thompson, R. L., Vinson, D. P., Woll, B., & Vigliocco, G. (2012). The road to language learning is iconic: Evidence from British Sign Language. Psychological Science, 23(12), 1443–1448. https://doi.org/10.1177/0956797612459763
Vinson, D. P., Cormier, K., Denmark, T., Schembri, A., & Vigliocco, G. (2008). The British Sign Language (BSL) norms for age of acquisition, familiarity, and iconicity. Behavior Research Methods, 40(4), 1079–1087. https://doi.org/10.3758/BRM.40.4.1079
Vivas, J., Kogan, B., Romanelli, S., Lizarralde, F., & Corda, L. (2020). A cross-linguistic comparison of Spanish and English semantic norms: Looking at core features. Applied Psycholinguistics, 41(2), 285–297. https://doi.org/10.1017/S0142716419000523
Wu, L., & Barsalou, L. W. (2009). Perceptual simulation in conceptual combination: Evidence from property generation. Acta Psychologica, 132(2), 173–189. https://doi.org/10.1016/j.actpsy.2009.02.002
