Carlos Busso

Associate Professor - Electrical Engineering
Tags: Electrical Engineering Computer Engineering

Professional Preparation

Ph.D. - Electrical Engineering
University of Southern California - 2008
M.S. - Electrical Engineering
University of Chile - 2003
Eng. - Electrical Engineering
University of Chile - 2003
B.S. - Electrical Engineering
University of Chile - 2000

Research Areas

Research Interests
  • Modeling and synthesis of human behavior
  • Affective state recognition
  • Multimodal speaker identification
  • Sensing participant interaction


Publications

Soroosh Mariooryad and Carlos Busso, "Compensating for speaker or lexical variabilities in speech for emotion recognition," Speech Communication, vol. 57, pp. 1-12, February 2014.
Carlos Busso, Soroosh Mariooryad, Angeliki Metallinou, and Shrikanth S. Narayanan, "Iterative feature normalization scheme for automatic emotion detection from speech," IEEE Transactions on Affective Computing, in press, 2013.
Nanxiang Li, Jinesh J. Jain, and Carlos Busso, "Modeling of driver behavior in real world scenarios using multiple noninvasive sensors," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1213-1225, August 2013.
Carlos Busso, Murtaza Bulut, and Shrikanth S. Narayanan, "Toward effective automatic recognition systems of emotion in speech," in Social Emotions in Nature and Artifact: Emotions in Human and Human-Computer Interaction, S. Marsella and J. Gratch, Eds. Oxford University Press, New York, NY, USA, 2013.
Soroosh Mariooryad and Carlos Busso, "Exploring cross-modality affective reactions for audiovisual emotion recognition," IEEE Transactions on Affective Computing, vol. 4, no. 2, pp. 183-196, April-June 2013.
Juan Pablo Arias, Carlos Busso, and Nestor Becerra Yoma, "Shape-based modeling of the fundamental frequency contour for emotion detection in speech," Computer Speech and Language, in press, 2013.
Soroosh Mariooryad and Carlos Busso, "Generating human-like behaviors using joint, speech-driven models for conversational agents," IEEE Transactions on Audio, Speech and Language Processing, vol. 20, no. 8, pp. 2329-2340, October 2012.
Carlos Busso and Jinesh J. Jain, "Advances in multimodal tracking of driver distraction," in DSP for In-Vehicle Systems and Safety, J. Hansen, P. Boyraz, K. Takeda, and H. Abut, Eds., in press. Springer, New York, NY, USA, 2012.
C.-C. Lee, E. Mower, C. Busso, S. Lee, and S.S. Narayanan, "Emotion recognition using a hierarchical binary decision tree approach," Speech Communication, vol. 53, no. 9-10, pp. 1162-1171, November-December 2011.
C. Busso, M. Bulut, S. Lee, and S.S. Narayanan, "Fundamental frequency analysis for speech emotion processing," in The Role of Prosody in Affective Speech, Sylvie Hancil, Ed., pp. 309-337. Peter Lang Publishing Group, Berlin, Germany, 2009.

News Articles

Professor Is Designing Tools to Help Computers Sense Emotion
Dr. Carlos Busso hopes computers will one day sense how you're feeling. The associate professor of electrical engineering is designing speech recognition tools that understand human emotion.
To further his research, Busso has received a National Science Foundation Faculty Early Career Development (CAREER) Award, which provides nearly $500,000 in funding over the next five years.
Engineering Professor Earns Award for Influential Audiovisual Study
Dr. Carlos Busso, assistant professor of electrical engineering in the Erik Jonsson School of Engineering and Computer Science, is the inaugural recipient of a 10-Year Technical Impact Award given by the Association for Computing Machinery International Conference on Multimodal Interaction.

The award was given for Busso's work on one of the first studies of audiovisual emotion recognition. The work analyzed the limitations of detecting emotions from speech or facial expressions alone, and discussed the benefits of using both modalities together.