AI News, Machine Learning Department - School of Computer Science


His research focuses on building the computational foundations that enable computers to analyze, recognize and predict subtle human communicative behaviors during social interactions.

In particular, Dr. Morency was lead co-investigator for the multi-institution effort that created SimSensei and MultiSense, two technologies to automatically assess nonverbal behavior indicators of psychological distress.

Abstract Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context.

Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize and predict subtle human communicative behaviors in social context.

In this talk, I will present some of our recent achievements modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).

Social learning and multimodal interaction for designing artificial agents

CNRS-LTCI, Telecom-ParisTech, Paris, France. Catherine Pelachaud is a Director of Research at CNRS in the LTCI laboratory, TELECOM ParisTech. She participated in the creation of the first embodied conversational agent system, GestureJack, with Justine Cassell, Norman Badler and Mark Steedman while a post-doctorate at the University of Pennsylvania.

Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors and socio-emotional agents.

With her research team, she has been developing GRETA, an interactive virtual agent platform that can display socio-emotional and communicative behaviors.

She has been, and remains, involved in several European projects related to believable embodied conversational agents, emotion and social behaviors.

She has co-edited several books on virtual agents and emotion-oriented systems. She participated in the organization of international conferences such as IVA, ACII and the virtual agent track of AAMAS.

Dr. Morency's research focuses on building the computational foundations that enable computers to analyze, recognize and predict subtle human communicative behaviors during social interactions.

He formalized this new research endeavor with the Human Communication Dynamics framework, addressing four key computational challenges: behavioral dynamics, multimodal dynamics, interpersonal dynamics and societal dynamics.

This multi-disciplinary research topic overlaps the fields of multimodal interaction, social psychology, computer vision, machine learning and artificial intelligence, and has many applications in areas as diverse as medicine, robotics and education.

His research activities, carried out at the Institute for Intelligent Systems and Robotics, cover the areas of social signal processing and personal robotics through non-linear signal processing, feature extraction, pattern classification and machine learning.

His research focuses on Personal Robots, in particular on the modeling of human identities and behaviours, to give robots the ability to interact with humans in a closer, more customized and reliable way.

Her research interests are in the area of human-computer interaction, especially social signal analysis and expressive gesture.

During the last 8 years, she has participated in more than 30 R&D projects related to multimodal affective computing, human-machine interaction, Quality of Experience and Ambient Assisted Living.

Her research focuses on the enhancement and humanization of human-machine interaction through the use of computer vision and artificial intelligence techniques.

Since 2015, she has been working as a postdoctoral researcher at Pierre and Marie Curie University (Paris, France), bridging the gap between interpersonal synchrony and affective computing.

Her research interests lie at the crossroads of affective behavioral computing and social robotics, and include the automatic analysis and recognition of human non-verbal behavior for adaptive human-computer and human robot interaction.

Her research interests include human-robot interaction, signal processing, emotion recognition, expression and modelling, and developmental robotics.

Her main research interests include: human behavior understanding from motion, human body modelling, dynamics identification, robot control for human-robot interaction, and human affect recognition.

RI Seminar: Louis-Philippe Morency : Multimodal Machine Learning

Multimodal Machine Learning: Modeling Human Communication Dynamics Louis-Philippe Morency Assistant Professor, LTI October 09, 2015 Abstract Human ...

The Next Step in AI: Multimodal Perception | Louis-Philippe Morency | TEDxCMU

Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on their interlocutor's speech, gestures and ...

Dr. Louis-Philippe Morency: Modeling Human Communication Dynamics

Modeling Human Communication Dynamics: From Depression Assessment to Multimodal Sentiment Analysis Feb 7, 2014 Dr. Louis-Philippe Morency University ...

"Leveraging Digital Diagnostics to Enhance Mental Health Assessment" Q+A from TIPS 2017

This panel discussion was part of the 2017 Technology in Psychiatry Summit, an event sponsored by the McLean Institute for Technology in Psychiatry ...