December 2, 2016

Ongoing Projects

Neurocognitive Plasticity in Young Deaf Adults: Effects of Cochlear Implantation and Sign Language Experience

Funding Source: NIH R01DC016346 (PI: Matthew Dye)

According to the latest data from the NIDCD, approximately 2 to 3 out of every 1,000 children born in the United States have a measurable hearing loss at birth. For some of these children, that hearing loss is profound and can preclude typical acquisition of spoken language. As of 2012, around 38,000 children in the United States had received a cochlear implant (CI). For many of these children, the implant has permitted access to spoken language. What is perhaps most striking about spoken language outcomes following cochlear implantation, however, is their variability. Understanding this variability is the first step in developing effective interventions that move a greater number of children toward more successful outcomes.

The research proposed here will be one of the first large-scale studies to examine spoken language outcomes in young deaf adults who received their implants in childhood and are now enrolled at the National Technical Institute for the Deaf (RIT/NTID) in Rochester, NY. The majority of these students were born with profound hearing losses, and they vary in whether they use a CI, the age at which they received it, and their primary mode of communication. This project aims to characterize cognitive deficits in young deaf adults as a function of their atypical central auditory development, determine the impact of cochlear implantation on the remediation of those deficits, and carefully examine the impact of communication mode (signed versus spoken) on cognitive deficits and spoken language outcomes.

In a large sample of 480 young deaf adults: (i) high-density EEG will be used to document the effect of congenital profound deafness on central auditory cortical development by recording cortical responses to auditory stimuli (cortical auditory evoked potentials; CAEPs) and visual stimuli (visual evoked potentials; VEPs), and (ii) domain-general measures of cognitive (sequence processing, executive function) and language outcomes will be obtained. Mediation analyses will be used to determine whether it is atypical auditory cortical development or cross-modal recruitment of auditory brain areas by vision that best predicts cognitive deficits and subsequent spoken language development. We will then test the hypothesis that one source of variability in CI outcomes is the extent to which age of implantation modulates auditory cortical maturation and thereby remediates cognitive deficits. Finally, the unique sample of young adults at RIT/NTID, many of whom learned a natural sign language in infancy and wear a cochlear implant, affords the possibility of examining the role of early exposure to American Sign Language (ASL) in mitigating deficits in sequence processing and executive control, potentially boosting spoken language outcomes.
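As a concrete illustration of the mediation analyses mentioned above, the sketch below implements a simple product-of-coefficients mediation with a bootstrapped confidence interval. The variable names (caep_amp, seq_proc, spoken_lang) and the simulated data are hypothetical stand-ins, not the project's actual measures or analysis pipeline.

```python
# Minimal sketch of a product-of-coefficients mediation analysis with a
# bootstrapped confidence interval. All variable names are hypothetical:
# 'caep_amp' (auditory cortical response), 'seq_proc' (sequence processing
# as mediator), 'spoken_lang' (spoken language outcome).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def indirect_effect(df):
    """a*b: predictor -> mediator (a), mediator -> outcome given predictor (b)."""
    a = smf.ols("seq_proc ~ caep_amp", data=df).fit().params["caep_amp"]
    b = smf.ols("spoken_lang ~ seq_proc + caep_amp", data=df).fit().params["seq_proc"]
    return a * b

# Simulated stand-in data (N = 480, as in the proposed sample) so the
# sketch runs end to end; real data would be loaded from participant records.
n = 480
caep = rng.normal(size=n)
seq = 0.5 * caep + rng.normal(size=n)
df = pd.DataFrame({
    "caep_amp": caep,
    "seq_proc": seq,
    "spoken_lang": 0.4 * seq + 0.1 * caep + rng.normal(size=n),
})

point = indirect_effect(df)

# Nonparametric bootstrap over participants for a 95% CI on a*b.
boot = np.array([
    indirect_effect(df.sample(frac=1.0, replace=True, random_state=i))
    for i in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```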

Development of Temporal Visual Selective Attention in Deaf Children

Funding Source: NSF BCS 1550988 (PI: Matthew Dye)

Determining the effect of hearing loss on the development of visual functions is complicated by its corresponding impact on language development. To date, developmental studies of visual functions have recruited either deaf children with delayed exposure to (spoken) language or deaf children who are native users of a signed language. With just one exception, those studies have been cross-sectional, and they have often lacked the sample sizes needed to draw developmental conclusions. The study proposed here overcomes many of these limitations. The deaf children to be recruited will vary in both their degree of hearing loss (from severe to profound) and their exposure to language (from native to late learners of American Sign Language). A longitudinal design maximizes statistical power and will allow a moderation analysis that permits statistical conclusions about the influence of hearing loss and language background on the development of temporal visual selective attention. In addition, the recruitment of multiple cohorts in an accelerated longitudinal design will provide developmental data covering the 6- to 15-year-old age range in just 2-3 years. Consequently, the proposed research will advance our understanding of how early sensory and linguistic experience shapes cognitive development.
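To make the moderation analysis concrete, here is a minimal sketch of a linear mixed model for an accelerated longitudinal design, with the developmental (age) slope moderated by hearing loss and language background. All column names and the simulated three-cohort data are hypothetical stand-ins; the study's actual measures and models may differ.

```python
# Sketch of a moderation analysis in an accelerated longitudinal design:
# a linear mixed model with random intercepts per child, where the age
# slope (development) interacts with hearing-loss severity and age of ASL
# exposure. Data are simulated: 3 cohorts entering at ages 6, 9, and 12,
# each followed for 3 annual visits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

rows = []
for child in range(120):
    entry_age = rng.choice([6, 9, 12])   # accelerated-design cohort
    hl = rng.normal()                    # hearing-loss severity (standardized)
    asl = rng.normal()                   # age of ASL exposure (standardized)
    for visit in range(3):
        age = entry_age + visit
        score = 0.3 * age - 0.1 * age * hl + rng.normal()
        rows.append(dict(child_id=child, age=age, hl_severity=hl,
                         asl_onset=asl, tvsa=score))
df = pd.DataFrame(rows)

# Center age so the intercept refers to the middle of the 6-15 year range.
df["age_c"] = df["age"] - df["age"].mean()

model = smf.mixedlm("tvsa ~ age_c * hl_severity + age_c * asl_onset",
                    data=df, groups=df["child_id"])
result = model.fit()
# The age_c:hl_severity and age_c:asl_onset terms test whether hearing loss
# and language background moderate the developmental slope.
print(result.summary())
```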

Multimethod Investigation of Articulatory and Perceptual Constraints on Natural Language Evolution

Funding Source: NSF BCS 1749376 (PI: Matthew Dye)

Languages change over time, such that the way we speak English now is very different from the speech patterns of earlier generations and of our distant ancestors. This project will exploit the visual nature of sign languages, in which the body parts producing language are highly visible, to determine whether languages change to become easier to produce or easier to understand. In doing so, the project will address fundamental theoretical questions about language change that cannot be addressed by analyzing historical samples of spoken languages. To this end, the researchers will develop computational tools that allow 3D human body poses to be automatically extracted from 2D video. Such tools will be useful for the development of automated sign language recognition, promoting accessibility for deaf and hard-of-hearing people, and for developing automated systems for recognizing and classifying human gestures. The research will involve deaf and hard-of-hearing students, helping to increase diversity in the nation's scientific workforce.
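To give a sense of what the 2D-video-to-3D-pose step looks like in practice, the sketch below uses an off-the-shelf estimator (MediaPipe Pose) as a stand-in for the project's novel algorithms; the input file name is hypothetical.

```python
# Illustrative stand-in for extracting 3D articulator positions from 2D
# video, using the off-the-shelf MediaPipe Pose estimator rather than the
# project's own algorithms. Collects per-frame 3D right-wrist coordinates.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
WRIST = mp_pose.PoseLandmark.RIGHT_WRIST

trajectory = []  # (x, y, z) per frame, in metric world coordinates
cap = cv2.VideoCapture("signer.mp4")  # hypothetical input video
with mp_pose.Pose(static_image_mode=False, model_complexity=2) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            lm = results.pose_world_landmarks.landmark[WRIST]
            trajectory.append((lm.x, lm.y, lm.z))
cap.release()
print(f"extracted {len(trajectory)} frames of 3D wrist positions")
```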

It is well documented that sign languages change over time, and it is a commonly held belief that those changes result from successive generations making the language easier to perceive. However, most of this evidence has been anecdotal and descriptive, and has not quantified changes in the ease of perception and production of ASL over time. The research team will take advantage of the fully visible articulators of sign languages to develop novel pose estimation algorithms that can automatically extract information from 2D video to create accurate 3D models of articulator movement during language production.

The recent birth and rapid evolution of Nicaraguan Sign Language (NSL) have allowed researchers to study language change, from its beginning, on a compressed time-scale. By applying these novel pose estimation algorithms to an existing NSL database comprising 2D videos from four generations of Nicaraguan signers, the researchers will be able to empirically assess the extent to which linguistic changes are driven by perceptual constraints imposed by the human visual system and/or articulatory constraints imposed by the musculoskeletal system. The researchers will also query lexical databases of American Sign Language to test predictions about the perceptual form of modern-day ASL, and conduct behavioral studies with deaf and hearing users of ASL to test hypotheses about how visual attention is allocated as a result of both deafness and the acquisition of a sign language. In doing so, the research will provide valuable information about how the human brain changes the tools we use (in this case, language) and how those tools in turn shape the function of the human brain, yielding a richer understanding of language change that illuminates the complex interaction between languages and the people who use them.
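As one illustration of how articulatory constraints might be quantified from such 3D trajectories, the sketch below computes two simple effort proxies, total path length and mean speed, for a wrist trajectory. The cohort data here are simulated stand-ins, not NSL measurements, and these metrics are one plausible choice among many.

```python
# Illustrative articulatory-effort proxies computed from a 3D articulator
# trajectory (e.g., the wrist positions extracted above): total path length
# and mean speed. Comparing these across signer cohorts is one simple way
# to ask whether signs have drifted toward easier production.
import numpy as np

def effort_metrics(traj, fps=30.0):
    """traj: (n_frames, 3) array of 3D positions in meters."""
    traj = np.asarray(traj, dtype=float)
    steps = np.diff(traj, axis=0)          # frame-to-frame displacement
    dists = np.linalg.norm(steps, axis=1)  # meters moved per frame
    path_length = dists.sum()              # total distance traveled
    mean_speed = dists.mean() * fps        # meters per second
    return path_length, mean_speed

# Hypothetical usage: one random-walk trajectory per simulated cohort.
rng = np.random.default_rng(2)
cohort1 = rng.normal(scale=0.05, size=(300, 3)).cumsum(axis=0)
cohort4 = rng.normal(scale=0.03, size=(300, 3)).cumsum(axis=0)
for name, traj in [("cohort 1", cohort1), ("cohort 4", cohort4)]:
    length, speed = effort_metrics(traj)
    print(f"{name}: path length {length:.2f} m, mean speed {speed:.2f} m/s")
```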