Scholarly Colloquia and Events

  • 3/2 Physics Colloquium, Dr. Emily Myers

    UConn Physics Colloquium


    Dr. Emily Myers

    University of Connecticut

    Department of Speech, Language, and Hearing Sciences


    Dr. Emily Myers is an Associate Professor in the Department of Speech, Language, and Hearing Sciences and the Department of Psychological Sciences at the University of Connecticut. She completed graduate and postdoctoral training at Brown University before joining the faculty of the University of Connecticut in 2010. Dr. Myers’ work focuses on a fundamental question in human behavior: how do listeners perceive the speech signal in order to map it to meaning? Using neuroimaging methods (fMRI, ERP) together with standard psycholinguistic measures, work in her lab aims to understand the neural and behavioral mechanisms that underlie this process. Her research with unimpaired and language-disordered populations (aphasia, dyslexia, language impairment) informs functional models of speech and language processing and adds to our understanding of how language processing breaks down. Dr. Myers’ work is currently funded by the National Institutes of Health (NIDCD) and the National Science Foundation.


    “There’s many a slip twixt ear and lip:

    Distortions, discontinuities, and disasters in speech perception”


    The sounds of speech are as complex as they are biologically important. Fine-grained differences in the acoustics of speech allow the listener to understand the difference between bears and pears, and between uncles and ankles. Over a lifetime of experience with our native language, we develop exquisite sensitivity to some acoustic contrasts yet lose the ability to perceive other distinctions, including some speech contrasts in other languages. The result is that our perceptual experience does not veridically reflect the acoustics of the speech stream. Work from my lab and others suggests that the brain copes with this complexity by integrating contextual information online to guide perception when the speech signal is ambiguous. Further, with sufficient training, listeners can overcome the perceptual barriers established during development and make inroads in distinguishing novel speech sound contrasts.


    Date: Friday, March 2, 2018

    Time: 3:30 p.m.

    Location: Gant Science Complex, Physics, Room PB-38


    Coffee and tea will be served prior to the talk at 3:00 p.m. in Room P-103.


    For more information, contact: Anna Huang at anna.huang@uconn.edu