My research investigates the perceptual strategies listeners use to recognize speech. Speech communication is extraordinarily resistant to distortion: intelligibility is preserved when a substantial portion of the spectrum is eliminated by filtering, when large segments of the waveform are deleted or replaced by silence, or when the signal is embedded in background noise. Research in my lab has centered on a series of experimental investigations of distortions such as competing voices, narrow bandpass filtering, spectral flattening, local time reversal, and frequency scaling. We are evaluating theoretical and computational models of speech perception to account for the extraordinary resilience of human speech communication to such distortions. Our studies of frequency-scaled speech have revealed the operation of perceptual mechanisms that help listeners cope with the enormous variability in the acoustic patterns of speech across talkers. Much of this variability is a direct consequence of size differences in the larynx and vocal tract as a function of age and sex.
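One of the distortions mentioned above, local time reversal, is simple to state: the waveform is divided into successive fixed-length segments and each segment is played backwards. The sketch below illustrates the manipulation under assumed parameter values (a 16 kHz sampling rate and 50 ms segments chosen for illustration only); it is not the stimulus-generation code used in these experiments.

```python
import numpy as np

def locally_time_reverse(signal, segment_len):
    """Reverse each successive fixed-length segment of a waveform.

    In listening experiments, intelligibility typically survives short
    segment durations and degrades as the segments grow longer.
    """
    out = np.copy(signal)
    for start in range(0, len(signal), segment_len):
        out[start:start + segment_len] = signal[start:start + segment_len][::-1]
    return out

# Assumed values for illustration: 16 kHz sampling rate, 50 ms segments.
fs = 16000
segment = int(0.050 * fs)   # 800 samples per segment
x = np.random.randn(fs)     # 1 s of noise standing in for a speech waveform
y = locally_time_reverse(x, segment)
```

Note that the manipulation is its own inverse: reversing each segment a second time restores the original waveform, which makes it a convenient distortion to parameterize by segment duration alone.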
Supported by a grant from the National Science Foundation, we are studying the detailed pattern of these changes in a database of speech recordings from adults and from children ranging in age from 5 to 18 years. These recordings provide materials for constructing stimuli in listening experiments, acoustic parameters for realistic voice synthesis and voice conversion, and normative data for studies of speech perception and production in adults and children. This project is providing valuable information on the nature of speech development and the acoustic scaling transformations that take place as children grow into adults. Acoustic measurements from the recorded samples are incorporated into statistical pattern recognition models to predict the responses of listeners to natural and synthesized speech. These models provide a basis for testing and refining hypotheses about the perceptual transformations that listeners apply to cope with acoustic variability, and the processes by which they extract phonetic and indexical information in speech perception.
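To give a concrete, simplified picture of how a statistical pattern recognition model can predict listener responses, the sketch below classifies a vowel from its first two formant frequencies under a Gaussian model with equal priors. The formant means, the shared diagonal covariance, and the three vowel categories are all hypothetical placeholders; the actual models are fit to measurements from the recorded database.

```python
import numpy as np

# Hypothetical mean F1/F2 values (Hz) for three vowel categories.
# Real models would estimate these from the recorded speech samples.
vowel_means = {
    "i": np.array([300.0, 2300.0]),
    "a": np.array([750.0, 1300.0]),
    "u": np.array([350.0, 900.0]),
}
# Assumed shared diagonal standard deviations (Hz), standing in for
# the across-talker variability measured in the database.
sigma = np.array([80.0, 200.0])

def predict_response_probs(formants):
    """Return P(vowel | formants) under a Gaussian model with equal priors."""
    log_likes = {
        v: -0.5 * np.sum(((formants - mu) / sigma) ** 2)
        for v, mu in vowel_means.items()
    }
    m = max(log_likes.values())            # subtract max for numerical stability
    unnorm = {v: np.exp(ll - m) for v, ll in log_likes.items()}
    z = sum(unnorm.values())
    return {v: p / z for v, p in unnorm.items()}

# A token with low F1 and high F2 should be identified as /i/-like.
probs = predict_response_probs(np.array([320.0, 2200.0]))
```

The output is a probability distribution over response categories, which can be compared directly against the proportions of listener responses in an identification experiment.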
Listeners can extract information from speech produced under extreme conditions: for example, when the speaking rate reaches 400 words per minute, when background noise levels are high, or when the identity of the speaker is unknown. Current research in our laboratory considers how human listeners achieve this by examining the auditory, perceptual, and cognitive processes that intervene between the production of speech and its recognition. We are developing and testing models of the auditory and phonetic analysis of speech to describe how listeners extract information from speech when competing sound sources are present. When the competing sound source is another voice, listeners face the difficult problem of separating signals that are similar in their acoustic structure. This problem has serious implications for theoretical models of speech perception, and it has important practical consequences for two areas of applied speech research. First, because competing voices present difficulties for individuals with sensorineural hearing impairments, research on the perceptual processes involved in speech-source segregation may provide insights into the problems faced by these listeners, and may suggest forms of signal processing to enhance the intelligibility of speech signals corrupted by background noise. Second, because competing voices severely degrade the performance of automatic speech recognizers, a better understanding of human performance is likely to lead to improvements in the design of robust, noise-resistant devices for automatic speech recognition.
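Experiments on speech in competing sound are typically parameterized by the signal-to-noise ratio of the mixture. The sketch below shows one common way to construct such a stimulus: scale the competing signal so that the mixture has a specified RMS-based SNR relative to the target. The RMS definition of SNR and the placeholder signals are assumptions for illustration, not the laboratory's specific procedure.

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale the masker to a specified RMS signal-to-noise ratio (dB)
    relative to the target, then add it to the target.

    Returns the mixture and the scaled masker.
    """
    rms = lambda sig: np.sqrt(np.mean(sig ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    scaled = gain * masker
    return target + scaled, scaled

# Placeholder signals: noise standing in for a target utterance and a
# competing voice of the same duration.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
competitor = rng.standard_normal(16000)
mixture, scaled = mix_at_snr(speech, competitor, snr_db=0.0)
```

At 0 dB SNR the target and competitor have equal RMS levels, the condition under which two voices of similar acoustic structure are hardest to separate.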