Introduction
As infants and toddlers explore their environment, they encounter perceptual information (such as how objects look, sound, feel, or move) as well as labels for those objects. Experimental evidence suggests that these perceptual experiences boost children’s learning and retention of words (Seidl et al., 2023). The influence of perceptual information on children’s vocabulary is evident in the semantic properties of early words. Words more strongly connected to perceptual dimensions of meaning (i.e., words that are associated with visual, auditory, tactile, and other properties) tend to be learned earlier than those with fewer such connections (Hills et al., 2009; Peters & Borovsky, 2019).
Language learners nevertheless vary in their perceptual experience. Sighted deaf children who are acquiring a signed language have reduced access to auditory information but unrestricted perceptual access to visual and tactile information. Here we ask: How does the learner’s perceptual and linguistic experience shape their language-learning trajectory? As a test case, we examine the semantic properties of early-acquired words in deaf American Sign Language (ASL) signers with exposure to ASL from birth and hearing English speakers, investigating whether differences in perceptual experience (deaf vs. hearing) and language modality (spoken vs. signed) affect the semantic composition of early vocabulary.
Why might early vocabularies differ for children with varying perceptual experiences?
Just as blind toddlers are less likely to produce highly visual words (e.g., “blue,” “see”) than their sighted peers (Campbell et al., 2024), deaf toddlers may be less likely than hearing peers to produce words related to sound. Moreover, deafness enhances some aspects of visual attention (Lieberman et al., 2014; Gioiosa Maurno et al., 2024). These changes in visual attention may allow deaf learners to attend more to visual information in the environment than hearing learners do, and this increased attention to visual features could make visual-semantic features of a referent more salient and easier to learn for deaf children.
Differences in language modality may also affect vocabulary. Spoken languages are transmitted orally and perceived auditorily, whereas signed languages are transmitted in the visual-motor modality and perceived visually. The visual-spatial nature of signed languages may better support the encoding of some sensory features of a word (e.g., visual and tactile features) than others (e.g., auditory features) (Perlman et al., 2018). These possible affordances of signed languages may make certain perceptual features of referents especially salient for deaf children. Thus, both the sensory experience of deaf children and the properties of the sign language they are learning may lead to a differential weighting of perceptual features in early vocabulary.
Why might early vocabulary composition be unaffected by individuals’ perceptual experience?
Perceptual cues are just one route to word learning (Gleitman, 1990), so differences in perceptual experience may not substantially alter vocabulary composition. Consistent with this account, blind children still acquire words for visual concepts (Campbell et al., 2024; Landau & Gleitman, 1985). Moreover, despite differences in sensory access, deaf children reason equally well about others’ hearing and visual access (Schmidt & Pyers, 2014). Additionally, abstract words (e.g., “hi” and “more”) are also learned early (Frank et al., 2017), despite having no consistent connection to a particular perceptual experience (Casey et al., 2023). In this light, we might expect early vocabulary to be robust to differences in children’s perceptual experiences.
Languages around the world exhibit striking cross-linguistic consistency in the mechanisms that drive early vocabulary composition (Braginsky et al., 2019). Even across language modalities, the trajectory of language acquisition is largely similar: among deaf children exposed to language from early in life, sign language vocabulary development parallels that of spoken language acquisition (Caselli et al., 2020; Thompson et al., 2012). Given these strong similarities between spoken and signed language vocabulary development, alongside broad cross-linguistic similarity in the composition of early vocabulary, we might expect perceptual experience and language modality to have little bearing on the semantic properties of deaf children’s early vocabularies.
The present study
We investigated whether the early vocabularies of deaf signers and hearing English speakers differ in the perceptual-semantic features of words. If Age of Acquisition (AoA) varies by learning context, we would expect stronger effects of auditory features in spoken English than in ASL and stronger effects of visual and tactile features in ASL than in spoken English. Alternatively, if other mechanisms of word learning carry more weight, then we would observe no differences in AoA for words based on their perceptual salience to deaf and hearing children.
Methods
Measures
This study focused on the acquisition of a set of nouns from the MacArthur-Bates Communicative Development Inventory (CDI), a parent-report inventory of children’s vocabulary (Fenson et al., 2007). Nouns were selected because young children’s expressive vocabularies tend to be dominated by nouns (relative to other parts of speech) in the first two years (Gentner, 1982), and because a complete set of semantic feature norms for CDI nouns is available (Borovsky et al., 2024). To enable comparisons across the same set of concepts, we included only the 214 nouns that have a translation equivalent on the ASL adaptation of the CDI (ASL-CDI 2.0; Caselli et al., 2020). For each of the 214 nouns, in each language, we estimated a typical AoA, tallied the number of semantic features, and obtained a measure of lexical frequency.
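The item-selection step can be sketched as follows. This is a minimal illustration rather than the authors’ code, and the data frame and column names (english_cdi_nouns, asl_cdi_items, english_equivalent) are hypothetical.

```r
# Minimal sketch of the item-selection step (hypothetical data frames and
# column names; not the authors' code).
library(dplyr)

# english_cdi_nouns: one row per English CDI noun (column `word`)
# asl_cdi_items: one row per ASL-CDI 2.0 sign, with its English translation
#                equivalent in `english_equivalent`
shared_nouns <- english_cdi_nouns %>%
  inner_join(asl_cdi_items, by = c("word" = "english_equivalent")) %>%
  distinct(word)

nrow(shared_nouns)  # 214 nouns in the authors' dataset
```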
Deriving AoA for early-acquired nouns
Age of Acquisition (AoA): To calculate AoA, we used data from two language-specific vocabulary assessments: the MacArthur-Bates Communicative Development Inventory (English CDI; Fenson et al., 2007) and the American Sign Language Communicative Development Inventory 2.0 (ASL CDI; Caselli et al., 2020). Caregivers report which vocabulary items on the inventory their child understands and/or produces; in the present study, we drew on the production measure. We pooled checklist data from many participants, using openly available datasets for English (N = 5,450 hearing, monolingual children; Wordbank database; Frank et al., 2016) and ASL (N = 120 deaf signing children with deaf parents; Caselli et al., 2020). AoA was then computed by calculating the proportion of children reported to produce each word at each age (in months) covered by the assessment (English CDI: 16–30 months; ASL CDI: 9–73 months) and fitting a logistic curve to those proportions to determine the first age at which the curve crossed 50%. This AoA estimation method controls for some variability in measurement across time (Frank et al., 2021). Because the English and ASL datasets differ in size and age range (the ASL sample is smaller and covers a broader age range), we z-scored AoA within each language for all analyses.
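As a minimal sketch of this estimation procedure (not the authors’ exact code), assume a pooled data frame `cdi` with hypothetical columns `word`, `age` (in months), and `produces` (TRUE/FALSE parent report):

```r
# Sketch of the AoA estimation described above (hypothetical columns;
# not the authors' exact code).
library(dplyr)

estimate_aoa <- function(cdi) {
  cdi %>%
    group_by(word) %>%
    group_modify(~ {
      # Proportion of children reported to produce this word at each age
      props <- .x %>%
        group_by(age) %>%
        summarise(n = n(), k = sum(produces), .groups = "drop")
      # Fit a logistic curve to those proportions over age
      fit <- glm(cbind(k, n - k) ~ age, family = binomial, data = props)
      b <- coef(fit)
      # Age at which the fitted curve first crosses 50% production
      data.frame(aoa = unname(-b[1] / b[2]))
    }) %>%
    ungroup() %>%
    # z-score AoA within a language before cross-language comparison
    mutate(aoa_z = as.numeric(scale(aoa)))
}

# Applied separately to each language's dataset, e.g.:
# english_aoa <- estimate_aoa(english_cdi_data)
```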
Calculating perceptual features
Semantic and Perceptual Features: We drew information about the number of semantic and perceptual features from the Language Learning and Meaning Acquisition lab Noun Norms (Borovsky et al., 2024). This feature set was generated using a typical procedure for collecting semantic features: multiple individuals (typically 30 or more; here, N = 33–126 raters) list the semantic features that come to mind for each concept, and features mentioned by a reasonable proportion of individuals (typically at least 10–20%; here, 16.67%) are retained, yielding a list of features that reflect how concepts are canonically understood and represented across individuals. These semantic features were categorized according to the Wu and Barsalou (2009) knowledge-type taxonomy (perceptual, functional, taxonomic, or encyclopedic), and the perceptual features were then subcategorized according to Cree and McRae’s (2003) brain region knowledge-type taxonomy (Auditory, Tactile, Gustatory, Olfactory, Visual-Color, Visual-Motion, and Visual-Form and Surface; see Table 1 for examples). For further details on how these semantic feature norms were generated, see Borovsky et al. (2024).
Table 1. Example features for different semantic feature types and perceptual feature subtypes from Borovsky et al. (2024)

Note. Semantic features can be categorized into four main categories: perceptual, functional, taxonomic, and encyclopedic, following the Wu and Barsalou (2009) knowledge-type taxonomy. Perceptual features can be further broken down into subtypes (Cree & McRae, 2003).
For each noun on the English and ASL CDIs, we then counted the number of perceptual features in each feature category. For example, under this metric lollipop (see Figure 1B) would score as having one visual-color feature (<different_colors>), two tactile features (<is_hard>, <is_sticky>), two gustatory features (<tastes_sweet>, <different_flavors>), three visual-form and surface features (<comes_on_a_stick>, <is_round>, <has_a_wrapper>), and zero features in each of the other perceptual categories, for a total of eight perceptual features.
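A minimal sketch of this tallying step (not the authors’ code), assuming the norms are in long format with hypothetical columns `word`, `feature`, and `percept_category` (the Cree & McRae subtype, or NA for non-perceptual features):

```r
# Sketch of the per-category feature counts (hypothetical column names).
library(dplyr)
library(tidyr)

feature_counts <- norms %>%
  filter(!is.na(percept_category)) %>%          # keep perceptual features only
  count(word, percept_category) %>%             # features per word per subtype
  pivot_wider(names_from = percept_category,
              values_from = n, values_fill = 0) %>%
  mutate(total_perceptual = rowSums(across(-word)))

# The row for "lollipop" would then show, e.g., 1 visual-color, 2 tactile,
# 2 gustatory, 3 visual-form-and-surface, and 8 total perceptual features.
```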

Figure 1. Visualizing the perceptual-semantic features of early-acquired nouns. The panels progress from left to right, offering an increasingly broad view of dataset variability. (a) A conceptual “feature list” highlights the perceptual-semantic properties of a single noun (frog), categorized by feature type (e.g., visual-motion, tactile, and auditory). (b) Polar plots display the feature composition of four selected nouns (balloon, friend, frog, and lollipop), chosen to represent variation in feature subtype distribution. For example, balloon exhibits the most sound features, frog emphasizes visual and motion features, and friend has no perceptual features. Each filled rung of the circle represents one feature in that category. (c) A stacked bar chart shows the overall distribution of perceptual features across a subset of thirty nouns. Nouns are sorted by total feature count, with color coding indicating feature type. The full list of 214 nouns can be viewed in the Supplementary Materials.
Deriving measures of frequency in ASL and English
Frequency: To control for the possibility that higher-frequency words simply tend to have more perceptual features, we included lexical frequency as a control variable in our analyses, calculated separately for English and ASL. For English, following procedures in Peters and Borovsky (2019), we calculated the natural log of each English CDI noun’s frequency in speech directed to children within the English CDI age range (30 months or younger), drawn from the North American English CHILDES corpus (MacWhinney, 2002; childes-db version 0.1.0; Sanchez et al., 2019).
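A minimal sketch of this frequency measure (not the authors’ exact pipeline), assuming `childes_tokens` holds child-directed tokens from the North American English collections with hypothetical columns `gloss` (word form) and `target_child_age` (in months):

```r
# Sketch of the English frequency measure (hypothetical column names).
library(dplyr)

english_freq <- childes_tokens %>%
  filter(target_child_age <= 30) %>%        # match the English CDI age range
  count(gloss, name = "count") %>%          # tokens per word form
  mutate(log_freq = log(count))             # natural log of raw frequency
```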
For ASL, because corpus-derived frequency estimates are not available, we used subjective frequency estimates from ASL-LEX 2.0 (Caselli et al., 2017). These estimates were generated by aggregating subjective frequency ratings made by deaf ASL signers (N = 25–35 raters; see Sehyr et al., 2021), and are generally well correlated with corpus frequency counts (e.g., Chen & Dong, 2019; Alonso et al., 2011; Fenlon et al., 2014; Vinson et al., 2008; Balota et al., 2001).
Approach to multiple linear regression analyses
Our goal was to examine whether and how perceptual information across concepts contributed differentially to AoA as a function of language modality (i.e., English and ASL). We explored this question by measuring how the number of features across all noun concepts in the English and ASL CDIs predicts AoA while controlling for frequency. Then, to explore the independent contribution of each perceptual feature type while controlling for other feature types and frequency, we conducted multiple linear regression analyses in three models: English-only, ASL-only, and English+ASL combined. In the models reported below, the categorical predictor (language) was sum-coded, and continuous variables (frequency and number of perceptual features) were centered and scaled to allow relative comparisons of the influence of each variable across our English and ASL models. Because the ASL measure is not a direct corpus-based measure of frequency, models that included both ASL and English used only the frequency measure derived from corpora of North American English. All analyses were conducted in R (v4.2.2), figures were created with Canva (Figure 1A) and ggplot (all others), and data and code are available on OSF. Each analysis is reported in greater detail below.
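A minimal sketch of this model setup (hypothetical column names; not the authors’ exact code), assuming a data frame `d` with one row per noun per language containing the z-scored AoA, feature counts, and English log frequency:

```r
# Sketch of the regression setup described above (hypothetical columns).
library(dplyr)

d <- d %>%
  mutate(language = factor(language, levels = c("ASL", "English")),
         # center and scale the continuous predictors
         across(c(freq, perceptual, functional, taxonomic, encyclopedic,
                  auditory, tactile, visual_color, visual_form, visual_motion),
                ~ as.numeric(scale(.x))))
contrasts(d$language) <- contr.sum(2)   # sum-code the language factor

# Broad feature types, one language at a time (here: English)
m_english <- lm(aoa_z ~ freq + perceptual + functional + taxonomic + encyclopedic,
                data = filter(d, language == "English"))

# Combined model: do perceptual feature subtypes interact with language?
# (English corpus frequency only, per the text above)
m_combined <- lm(aoa_z ~ freq + (auditory + tactile + visual_color +
                                   visual_form + visual_motion) * language,
                 data = d)
summary(m_combined)
```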
Results
Relations between semantic feature types and AoA
In our first multiple regression analysis, we explored how AoA for each concept is associated with the number of features in each broad feature type (Perceptual, Functional, Taxonomic, and Encyclopedic), for each language individually, while controlling for lexical frequency in that language.
In English, this analysis directly replicates a previously reported analysis (Peters & Borovsky, 2019) with the subset of 214 of their 359 noun concepts that have a translation equivalent in ASL. As expected, this English analysis revealed a strong association between the number of perceptual features attributed to a noun concept and AoA, after controlling for the other semantic feature types; see Table 2. The model results align with evidence that children learning a spoken language readily acquire words that are frequent and have perceptually accessible referents (e.g., Seidl et al., 2023).
Table 2. Effects of broad feature types on AoA in English and ASL

We carried out the same analysis in ASL. As in English, this analysis revealed a strong association between the number of perceptual features associated with a concept and AoA (β = −0.21, p < .001), even after controlling for the other feature types (none significant; all ps > .05) and sign frequency (higher frequency was associated with earlier AoA; β = −0.37, p < .001); see Table 2. Together, these analyses indicate that, irrespective of linguistic modality or the sensory experience of the learner, learners prioritize the acquisition of concepts that have perceptually accessible components.
Relations between perceptual feature sub-types and AoA across language modalities
Given the clear perceptual differences between ASL and English in how they are produced and perceived by young learners, we next asked which perceptual feature subtypes contribute most strongly to AoA.
We first explored how different perceptual feature subtypes predicted AoA in English while controlling for lexical frequency in child-directed speech. The model included as predictors frequency and the number of perceptual features in each of seven feature subtypes (Auditory, Gustatory, Olfactory, Tactile, Visual-Color, Visual-Form and Surface, and Visual-Motion). The results indicated that perceptual features were not all weighted equally: having more Auditory, Tactile, and Gustatory features was associated with earlier AoA in English (see Table 3); we did not observe reliable relationships with Olfactory features or any of the Visual feature subtypes.
Table 3. Effects of perceptual feature subtypes on AoA in English and ASL

The parallel analysis in ASL revealed that a different subset of perceptual features contributed to variance in AoA. Here, Tactile and all three Visual feature subtypes (Color, Form and Surface, and Motion) were significant predictors of ASL AoA: concepts with more Tactile and Visual features were acquired earlier by children learning ASL (see Table 3). We found no significant relationships between AoA and Olfactory, Auditory, or Gustatory features.
Finally, we directly tested whether perceptual feature subtypes interact with language modality while controlling for frequency. To maximize the power of our model to detect interactions, we dropped two terms (Olfactory and Gustatory features) and focused our comparison on the perceptual feature subtypes of interest (Auditory, Tactile, and the three Visual feature subtypes). In this combined model, we found a main effect of frequency (β = −2.46, p < 0.001) as well as two significant interactions between feature subtype and language modality: Language × Visual-Motion features and Language × Tactile features. Full model results are reported in Table 4 below.
Table 4. Comparing the influence of different perceptual feature subtypes across languages. Significant interactions indicate that feature subtypes differentially influence AoA across languages

As illustrated in Figures 2A and 2B, these significant interactions reflect stronger effects of tactile (β = 0.80, p = 0.038) and visual-motion features (β = 1.27, p = 0.002) on AoA in ASL than in English. In other words, deaf children learning ASL, a visual and tactile language, learn words with visual and tactile semantic features earlier (visual-motion: β = −1.60, p < 0.001; tactile: β = −1.42, p < 0.001); these effects are absent (visual) or weaker (tactile) for children learning English. At the baseline level (ASL), there was no significant effect of auditory features on AoA (β = 0.12, p = 0.738). Although the auditory effect was numerically stronger in English (and significant in the English-only model: β = −0.43, p = 0.042), the difference between the languages’ slopes was not significant (β = −0.47, p = 0.245; Figure 2C).

Figure 2. Illustrating the influence of Visual-Motion (A), Tactile (B), and Sound (C) features on AoA across ASL and English. To highlight differences in the effect of each feature across languages, AoA has been mean-centered within languages, so that differences in slope reflect differences in the influence of each feature on AoA. Steeper slopes indicate a stronger relation to AoA; negative slopes indicate that the feature is associated with earlier word production, and positive slopes indicate that the feature is associated with later word production.
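For illustration, a panel of this kind could be drawn as follows (hypothetical column names; not the authors’ plotting code):

```r
# Sketch of a Figure 2-style panel: within-language mean-centered AoA
# against the number of visual-motion features (hypothetical columns).
library(dplyr)
library(ggplot2)

d %>%
  group_by(language) %>%
  mutate(aoa_centered = aoa_z - mean(aoa_z)) %>%   # mean-center within language
  ungroup() %>%
  ggplot(aes(x = visual_motion, y = aoa_centered, colour = language)) +
  geom_jitter(width = 0.1, alpha = 0.4) +
  geom_smooth(method = "lm", se = TRUE) +          # one fitted slope per language
  labs(x = "Number of visual-motion features",
       y = "AoA (mean-centered within language)")
```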
Discussion
In this study, we asked whether children’s perceptual experiences and first-language modality shape the composition of their early vocabulary. We compared the effects of different semantic features on the AoA of words for deaf ASL learners and hearing English learners. We replicated the effect of perceptual-semantic features on AoA in the vocabulary development of English-speaking children (Peters & Borovsky, 2019) and extended that finding to deaf children learning ASL. Across groups, words with more perceptual-semantic features were learned earlier, an effect not observed for other semantic feature types. Yet perceptual feature subtypes exerted different effects across the two languages. For hearing English learners only, the number of sound-related (e.g., <croaks> for a frog) and taste-related perceptual features was associated with earlier word production. By contrast, for deaf signing children only, the number of visual features predicted earlier AoA. Tactile features exerted stronger effects on AoA in ASL than in English. We consider two possible, non-competing explanations for this observed difference in perceptual feature types: (1) differences between ASL and English, and (2) differences between deaf and hearing learners.
Differences in the language
Prior research shows minimal differences between spoken and signed languages in acquisition (Newport & Meier, 1985; Caselli & Pyers, 2017) and in behavioral and neural processing (Emmorey, 2021; MacSweeney et al., 2002), so observing any cross-linguistic difference is notable.
One possible explanation is that children’s language input is tailored to their hearing status. Interlocutors are sensitive to their conversation partner’s sensory abilities and adjust their communication accordingly (Grigoroglou & Papafragou, 2016; Hazan & Baker, 2011). For deaf children, adults may emphasize words with visual or tactile features while deemphasizing sound-related words. Children may also receive fewer auditory features in their language input not solely because of their own hearing status but also because of the hearing status of their primary caregivers and the perceptual features that are salient to those caregivers; that is, hearing caregivers may use words with auditory features more frequently than deaf caregivers, and deaf caregivers may be more likely to use words with visual features. Although we controlled for lexical frequency in ASL, such measures cannot fully capture potential differences in child-directed input, especially given the current lack of a large corpus of parent-child interactions in ASL.
Another possible explanation for differences in AoA lies in the languages’ origins: ASL and English emerged to meet the needs of language users who differ in hearing status. ASL evolved organically through generations of North American deaf signers, whereas the vast majority (>95%) of English speakers are hearing (National Deaf Center on Postsecondary Outcomes, 2023). Despite these differences, ASL and other signed languages have a diverse lexicon of sound-related signs (Emmorey et al., in press; Spread the Sign). Nevertheless, such differences in origin may shape how words for sensory experience enter the lexicon and are used, in the same way that culture might affect the semantic organization of the lexicon (McGregor et al., 2018).
Lastly, and perhaps most compellingly, differences in the iconic affordances of signed vs. spoken languages may drive our observed effects. Sensory features can be represented in a word through iconicity, a structured alignment between word form and meaning. For example, “moo” approximates the lowing of a cow, and the ASL sign for DRINK resembles holding and tipping a glass toward the lips. Iconic mappings may facilitate learning by highlighting perceptual similarities between word forms and their meanings (Imai & Kita, 2014; Laing, Khattab, Sloggett & Keren-Portnoy, 2025). Indeed, in both signed and spoken languages, iconic words are produced earlier than non-iconic words (Caselli & Pyers, 2017; Perry et al., 2015; Sidhu et al., 2021; Thompson et al., 2012), although how learners access iconicity may change with age and experience (Caselli & Pyers, 2017; Magid & Pyers, 2017; Occhino et al., 2017; Thompson et al., 2012).
The semantic characteristics of iconic words differ across modalities. Iconic mappings in sign languages rarely represent auditory features of a referent and more frequently align the form of a sign with tactile and visual features of its meaning (Perlman et al., 2018). In ASL, concepts with auditory features are often depicted iconically using visual or temporal properties (e.g., volume depicted by the degree of opening of the fingers; Emmorey et al., in press). Such iconic affordances may amplify the salience of modality-specific sensory features in child-directed input.
Beyond salience, iconic words may be overrepresented in the input to children (Motamedi et al., 2021; Perry et al., 2018) or may be modified during child-directed speech in ways that highlight the iconic mapping of sensory properties (Fuks, 2020; Perniss et al., 2018; although cf. Gappmayr et al., 2022). A more systematic analysis of corpus data would be a useful step toward answering this question.
Finally, through iconicity, phonological features may systematically convey semantic information (Campbell et al., 2025). In ASL, systematic phonological features such as the location of the sign on the body may highlight specific perceptual features of the referent (e.g., the sign for FLOWER is located at the nose and is associated with smell; Cates et al., 2013; many signs related to vision are produced at the eyes; Östling, Börstell & Courtaux, 2018). This systematic association may make it easier for children to learn new words with that same phonological and perceptual-semantic relationship.
Differences in the learner
Experimental work shows that toddlers learn words better when they can directly experience the referents of those words through the senses (Seidl et al., 2023). Accordingly, auditory features, which are largely inaccessible to deaf children, may not support deaf children’s word learning. The nature of deaf children’s early experiences might, in turn, lead to an upweighting of visual, tactile, and motion features relative to auditory ones. If early word learning initially relies on perceptual salience (e.g., Pruden et al., 2006), then words with these features might be more salient, and thus more easily learned, for deaf children, who are by definition less sensitive to auditory stimuli and possibly more sensitive to certain types of visual and tactile stimuli (Dhanik et al., 2024; Gioiosa Maurno et al., 2024).
These results could also be viewed through a more active lens, wherein learners’ preferences shape their vocabulary. Experimental findings show that children more robustly learn words that interest them (Ackermann et al., 2020). Deaf children may gravitate toward visuo-motor or tactile experiences during play or interaction, prompting additional linguistic input related to these referents. This increased exposure and interest may support the encoding and retention of words associated with visual and tactile features. More systematic analyses of naturalistic interactions are needed to better understand how children’s exploratory behaviors shape their vocabulary acquisition.
Limitations and future directions
Because fewer data are presently available for ASL than for English, our estimates of ASL AoA are likely less precise than the English AoA estimates. It is possible that this noise masked real patterns (e.g., effects of auditory features) that we were unable to detect. Despite this limitation, we observed consistent and significant patterns in the ASL data, suggesting that these findings are robust.
Because we compared deaf signers with deaf parents to hearing English speakers with hearing parents, language modality and the perceptual access of the learner and the caregiver are conflated. Future studies comparing the vocabulary development of hearing English learners to that of deaf children learning spoken language (same language modality, but groups differ in auditory access) or hearing children learning ASL (different language modality, but groups have similar auditory access), as well as comparisons by caregiver hearing status, would better tease apart the effects of language modality, child hearing status, and caregiver hearing status.
Additionally, we substituted English semantic feature norms for ASL-specific semantic feature norms. In a recent study comparing semantic features collected for English and Spanish words, researchers found that the norms were semantically similar rather than language-specific (Vivas et al., 2020), suggesting that English norms are a reasonable substitute in this context. However, understanding the semantic features that deaf signers associate with these signs may further elucidate the mechanisms underlying the observed effects on vocabulary composition.
Conclusions
Studying diverse language acquisition experiences is essential for understanding how variation in sensory and linguistic experience shapes learning. This study shows that, across languages and learners, children are most likely to learn words whose meanings align with their sensory and linguistic experience. For deaf ASL learners, these were words linked to visual and tactile features, whereas for hearing English learners, they were words tied to auditory features. This study documents a rare example of a modality difference between deaf and hearing learners of signed and spoken languages, illustrating one way that learners’ experience of the world can fundamentally shape language learning.
Replication package
All analysis, data, and code are available at: https://osf.io/m7v6k/.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S0142716425100210
Competing interests
The authors have no conflicts of interest to declare.
Financial support
This research was supported by R01DC018593 and R21HD108730 to AB; NIH DC015272 to AL; NIH DC018279, NIH DC016104, NSF BCS-1918252, NIH DC018279-04S, and NSF BCS 2234787 to NC; James S. McDonnell Foundation to JP.
Artificial intelligence
We did not use AI in conducting this research study. All content has been carefully written, reviewed, and edited by human authors.
Ethics
This research received approval from the Boston University and Purdue University Institutional Review Boards.