Neurocognitive Plasticity in Young Deaf Adults: Effects of Cochlear Implantation and Sign Language Experience
Funding Source: NIH R01DC016346 (PI: Matthew Dye)
According to the latest data from the NIDCD, approximately 2 to 3 out of every 1,000 children born in the United States have a measurable hearing loss at birth. For some of these children, that hearing loss is profound and can preclude typical acquisition of spoken language. As of 2012, around 38,000 children in the United States had received a cochlear implant (CI). For many of these children, the implant has permitted access to spoken language. However, what is perhaps most striking about spoken language outcomes following cochlear implantation is their variability. Understanding this variability is the first step in developing effective interventions to move a greater number of children towards a more successful outcome.
The research proposed here will be one of the first large-scale studies to examine spoken language outcomes in young deaf adults who received their implants in childhood and are now enrolled at the National Technical Institute for the Deaf (RIT/NTID) in Rochester, NY. The majority of these students were born with profound hearing losses, and they vary in whether or not they use a CI, the age at which they received a CI, and their primary mode of communication. This project aims to characterize cognitive deficits in young deaf adults as a function of their atypical central auditory development, determine the impact of cochlear implantation on the remediation of those deficits, and carefully examine the impact of communication mode (signed versus spoken) on cognitive deficits and spoken language outcomes. In a large sample of 480 young deaf adults: (i) high-density EEG will be used to document the effect of congenital profound deafness on central auditory cortical development by recording cortical responses to both auditory stimuli (CAEPs) and visual stimuli (VEPs), and (ii) domain-general measures of cognition (sequence processing, executive function) and of language outcomes will be obtained.
Mediation analyses will be used to determine whether it is atypical auditory cortical development or cross-modal recruitment of auditory brain areas by vision that best predicts cognitive deficits and subsequent spoken language development. We will then test the hypothesis that one source of variability in CI outcomes stems from the extent to which age of implantation modulates auditory cortical maturation to remediate cognitive deficits. Finally, the unique sample of young adults at RIT/NTID, many of whom learned a natural sign language in infancy and wear a cochlear implant, affords the possibility of examining the role of early exposure to American Sign Language (ASL) in mitigating deficits in sequence processing and executive control, potentially boosting spoken language outcomes.
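The mediation logic described above can be illustrated with a minimal product-of-coefficients sketch. All variables and effect sizes below are simulated, hypothetical stand-ins for a predictor, mediator, and outcome; this is not project data or the project's actual analysis pipeline.

```python
import numpy as np

def mediation_effect(x, m, y):
    """Indirect (mediated) effect of x on y via m, estimated with the
    classic product-of-coefficients approach:
    a = slope of m ~ x; b = slope of y ~ m, controlling for x."""
    # a path: regress the mediator on the predictor
    a = np.polyfit(x, m, 1)[0]
    # b path: regress the outcome on mediator and predictor jointly
    X = np.column_stack([np.ones_like(x), m, x])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = coefs[1]
    return a * b  # indirect effect = a * b

# Toy data: x influences y almost entirely through the mediator m
rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = 0.8 * x + rng.normal(scale=0.1, size=500)   # a path ~ 0.8
y = 0.5 * m + rng.normal(scale=0.1, size=500)   # b path ~ 0.5
indirect = mediation_effect(x, m, y)             # ~ 0.8 * 0.5 = 0.4
```

In practice such an indirect effect would be tested with bootstrapped confidence intervals; the sketch shows only the point estimate.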
Development of Temporal Visual Selective Attention in Deaf Children
Funding Source: NSF BCS 1550988 (PI: Matthew Dye)
Determining the effect of hearing loss on the development of visual functions is complicated by its corresponding impact on language development. To date, developmental studies of visual functions have recruited either deaf children with delayed (spoken) language exposure or deaf children who are native users of a signed language. Those studies have also been cross-sectional (with just one exception), and have often lacked the sample sizes needed to draw developmental conclusions. The study proposed here overcomes many of the limitations of this previous work. The deaf children to be recruited will vary in both their hearing loss (from severe to profound) and their exposure to language (from native to late learners of American Sign Language). A longitudinal design maximizes statistical power and will allow a moderation analysis that permits statistical conclusions about the influence of hearing loss and language background on the development of temporal visual selective attention. In addition, the recruitment of multiple cohorts in an accelerated longitudinal design will provide developmental data covering the 6- to 15-year-old age range in just 2-3 years. Consequently, the proposed research will advance our understanding of how early sensory and linguistic experience impacts cognitive development.
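The arithmetic behind an accelerated longitudinal design can be sketched briefly. The cohort entry ages and follow-up duration below are illustrative assumptions, not the actual recruitment protocol:

```python
# Accelerated longitudinal design: overlapping cohorts let a short study
# span a wide age range. Assumed illustrative parameters: three cohorts
# enter at ages 6, 9, and 12, each followed for 3 years after baseline.
cohort_entry_ages = [6, 9, 12]
follow_up_years = 3

ages_covered = set()
for entry_age in cohort_entry_ages:
    # baseline visit plus one visit per follow-up year
    for offset in range(follow_up_years + 1):
        ages_covered.add(entry_age + offset)

print(sorted(ages_covered))  # ages 6 through 15, from only 3 years of testing
```

The overlap between adjacent cohorts (e.g., both the age-6 and age-9 cohorts contribute data at age 9) is what allows the separate cohort trajectories to be stitched into a single developmental curve.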
Mapping the Ontogenesis of Deaf Vision
Funding Source: Swiss National Science Foundation (PI: Roberto Caldara)
We are living in a world of rich, dynamic multisensory signals, which need to be rapidly integrated to create an effective and seamless unified internal representation of the environment. The human visual system is equipped with sophisticated machinery that allows it to adapt effectively to the visual world. Where, when, and how the eyes move to gather information from the visual environment is a question that has fascinated scientists for more than a century. We move our eyes to navigate, to identify dangers, objects, and people, and to manage a wide range of social interactions.
Technological advances have increased the precision, ease, and affordability of modern non-invasive fixed and wearable eye-tracking devices. As a result, eye-movement recording has become a method of choice across a wide variety of disciplines investigating how the mind achieves visual perception. In parallel, understanding how the brain achieves visual perception remains one of the major challenges in cognitive neuroscience. A particular quirk of nature may help in meeting this challenge: the occurrence of congenital deafness.
Deafness allows the identification of the consequences of deprivation in one important sensory channel (i.e., audition) for adaptation to our world. On this issue, two contrasting points of view have been put forward in the literature. The first hypothesis postulates that the loss of one sensory channel results in deficits in the development of the spared sensory channels. The alternative hypothesis suggests that the loss of one sensory channel enhances the sensitivity and efficiency of the spared channels to compensate for the hearing impairment. After more than 30 years of research, this debate remains open and topical. Here, we aim to clarify this question by tracking the eye movements of congenitally deaf children and adults with innovative techniques and analyses. Moreover, most of the evidence on this question has been gathered in laboratories, raising questions about the transferability of those findings to everyday life situations and interactions.
This research proposal addresses this gap in the literature by using fixed and wearable eye trackers in ecologically valid situations. The overarching aim of the research program is to identify perceptual strategies used by deaf observers during face recognition, categorization of facial expressions of emotions, and natural social interactions.
The Validity of Avatar Stimuli for Psycholinguistic Research on ASL
Funding Source: National Technical Institute for the Deaf (PI: Matthew Dye)
Over the past few decades, studies have revealed that there are “sign sounds” in sign languages that are theoretically equivalent to “speech sounds” in spoken languages. For example, rather than combining vowels and consonants to make spoken words, sign languages combine handshapes and movement to create signs. However, significant progress has often been hampered by technological limitations. Within the speech realm, synthesizing speech stimuli on computers has allowed psycholinguists to explore speech comprehension in ways that were not possible using analog audio recordings. To date, no one has used computer animations of sign language stimuli to conduct research into how sign languages are perceived and comprehended.
The ability to use such stimuli – which we can loosely term “avatars” – would allow researchers to overcome methodological limitations imposed by the use of digital video recordings of actual human signers. For example, cross-linguistic studies that compare processing of two different sign languages must either use two different signers in the two experimental conditions, or have a single signer produce signs in both languages. In both cases there will be confounds, due to different sign models and different degrees of fluency, respectively. The use of an avatar would allow such studies to be conducted using carefully controlled and fluent stimuli. It is also impossible for a human signer to produce an ASL sentence multiple times with the entire performance exactly the same, except for the one variable being modified in an experiment. Again, the use of avatars would overcome this limitation.
The goal of this project proposal is to demonstrate the validity of avatar stimuli in psycholinguistic research. We will do this by (i) developing human and avatar stimuli for lexical items and sentences in ASL, (ii) deploying these stimuli in “classic” and well-characterized psycholinguistic studies, and (iii) comparing effect sizes in the data generated across these different stimulus conditions.
We will generate three types of ASL word and sentence stimuli: human, avatar-rough, and avatar-smooth (the rough and smooth avatar variants will differ in their degree of realism, such as eye and torso movements, and in the smoothness of transitions from one sign to another). These stimuli will be deployed in three experimental paradigms: (i) a short-term memory study to assess the phonological similarity effect, (ii) a priming study to look at the semantic facilitation effect, and (iii) an eye-tracking study to measure the face fixation effect. Statistical analyses will explore the magnitude of these effects as a function of stimulus type (human, avatar-rough, or avatar-smooth).
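Comparing effect magnitudes across stimulus types could rest on a standardized effect size such as Cohen's d. The sketch below uses simulated reaction times with hypothetical means and condition labels, not study data:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical recall reaction times (ms) for one stimulus type:
# phonologically similar lists are assumed slower than dissimilar ones.
rng = np.random.default_rng(1)
similar = rng.normal(loc=820, scale=60, size=40)
dissimilar = rng.normal(loc=760, scale=60, size=40)
d = cohens_d(similar, dissimilar)  # a positive d indicates the similarity effect
```

Computing d separately for the human, avatar-rough, and avatar-smooth conditions would then allow a direct comparison of how strongly each stimulus type elicits the classic effect.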