Carnegie Mellon University, USA
Louis-Philippe Morency (https://www.cs.cmu.edu/~morency/) is a tenure-track faculty member at the CMU Language Technologies Institute, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was previously a research faculty member in the USC Computer Science Department. He received his Ph.D. in Computer Science from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. Central to this effort is the technical challenge of multimodal machine learning: the mathematical foundations for studying heterogeneous multimodal data and the contingencies often found between modalities. This multi-disciplinary research topic overlaps the fields of multimodal interaction, social psychology, computer vision, machine learning, and artificial intelligence, and has applications in areas as diverse as medicine, robotics, and education.
Princeton University, USA
Alexander Todorov (http://tlab.princeton.edu/) studies how people perceive, evaluate, and make sense of the social world. His research uses multiple methods, from behavioral experiments to the building of computational models. Todorov’s research has appeared in a variety of publications, including Science, Nature Human Behaviour, PNAS, Psychological Science, and the Journal of Neuroscience, and has received international media coverage. His most recent book is Face Value: The Irresistible Influence of First Impressions. Prior to joining Booth in 2020, Todorov was a professor of psychology at Princeton University. He earned a Ph.D. from New York University. Additionally, he holds a Research M.A. from the New School for Social Research and a B.A. from Sofia University “St. Kliment Ohridski” in Sofia, Bulgaria. During his studies, he was a visiting researcher in the Department of Experimental Psychology at Oxford University.
University of Cambridge, UK
Hatice Gunes is an Associate Professor (University Senior Lecturer) in the Department of Computer Science and Technology, University of Cambridge. Her research expertise is in affective computing and social signal processing, areas that lie at the crossroads of multimodal interaction, computer vision, signal processing, and machine learning, applied to computer/robot-mediated human-human interactions and human-robot interactions. Her work develops novel computational frameworks for analysing and understanding human behaviour, social signals, and affect from facial expressions, vocal nuances, body posture/gesture, and physiological signals, and for modelling these phenomena to create socio-emotionally intelligent games, assistive technologies, virtual agents, and robotic systems. She has published over 100 papers in these areas. Her current research vision is to embrace the challenges present in the area of health and to empower people's lives through technology, continuing her research with applications to social robotics and wellbeing. She has recently been awarded the prestigious EPSRC Early Career Fellowship (2019-2024) to investigate adaptive robotic emotional intelligence for wellbeing. Gunes is the President of the Association for the Advancement of Affective Computing (AAAC), the General Co-Chair of ACII 2019, and the Program Co-Chair of ACM/IEEE HRI 2020 and IEEE FG 2017. She is the Chair of the Steering Board of IEEE Transactions on Affective Computing, and has served as an Associate Editor of IEEE Transactions on Affective Computing, IEEE Transactions on Multimedia, and the Image and Vision Computing Journal.
Idiap Research Institute and EPFL, Switzerland
Prof. Daniel Gatica-Perez directs the Social Computing Group at the Idiap Research Institute and the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland. His work integrates methods from ubiquitous computing, social media, machine learning, and the social sciences to understand human and social behavior in everyday life. His research has studied connections between behavioral cues and human traits and states in social video and face-to-face interaction.
Rensselaer Polytechnic Institute, USA
Qiang Ji (https://www.ecse.rpi.edu/~qji/) received his Ph.D. degree in electrical engineering from the University of Washington. He is currently a Professor in the Department of Electrical, Computer, and Systems Engineering at RPI. From January 2009 to August 2010, he served as a program director at the National Science Foundation, managing NSF's machine learning and computer vision programs. Prior to joining RPI in 2001, he was an assistant professor in the Department of Computer Science, University of Nevada at Reno. He also held research and visiting positions with the Beckman Institute at the University of Illinois at Urbana-Champaign, the Robotics Institute at Carnegie Mellon University, and the US Air Force Research Laboratory. Dr. Ji currently serves as the director of the Intelligent Systems Laboratory (ISL). Prof. Ji is a fellow of the IEEE and the IAPR.
Carnegie Mellon University, USA
Yaser Sheikh (http://www.cs.cmu.edu/~yaser/) is an Associate Professor at the Robotics Institute, Carnegie Mellon University. He founded and directs Facebook Reality Labs in Pittsburgh, which pursues "metric telepresence": remote interactions in AR/VR that are indistinguishable from reality. His research focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. His research is sponsored by various government research offices, including NSF and DARPA, and several industrial partners, including the Intel Corporation, the Walt Disney Company, Nissan, Honda, Toyota, and the Samsung Group. He received his Ph.D. in 2006 from the University of Central Florida, advised by Prof. Mubarak Shah, and completed a postdoctoral fellowship in 2008 at Carnegie Mellon University under the mentorship of Takeo Kanade.
UC Santa Barbara, USA
Norah Dunbar (https://www.comm.ucsb.edu/people/norah-dunbar) is a Professor of Communication at UC Santa Barbara (UCSB). She teaches courses in nonverbal and interpersonal communication, communication theory, and deception detection. She is also Affiliate Faculty in the Center for Information, Technology & Society; the Center for Digital Games Research; and the Quantitative Methods in the Social Sciences program. She has received over $13 million in research funding from agencies such as the Intelligence Advanced Research Projects Activity, the National Science Foundation, the Department of Defense, and the Center for Identification Technology Research. She has served on the editorial boards of over a dozen disciplinary journals and as Chair of the Nonverbal Division of the National Communication Association from 2014 to 2016. She is the current Chair of the Communication Department at UCSB.