Workshop program


Nov. 16, 2020, 9 a.m. Workshop schedule

Final program

Monday, November 16th, 2020. All times are given in the Buenos Aires time zone.

Time Presentation
9:00 - 9:05

Introduction

Cristina Palmero

9:05 - 9:55

Keynote 1: "Artificial Emotional Intelligence for Well-being" + Q&A

Speaker: Hatice Gunes

Abstract: Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia provide us with a positive outlook; however, the artificial emotional intelligence of current technology is still limited. In this talk, I will present some of our research explorations in this area, including virtual reality based cognitive training with affective adaptation, and telepresence robotics that is sensitive to socio-emotional phenomena, with its potential use for well-being.

9:55 - 10:30 Full talks and Q&A for 3 selected papers
 

1. "The Emotionally Intelligent Robot: Multimodal Emotion Recognition for Improving Socially-Assistive Robot Navigation", (spotlight version here),

Aniket Bera 

 

2. "Explainable Early Stopping for Action Unit Recognition", (spotlight version here)

Ciprian Corneanu, Meysam Madadi, Sergio Escalera, Aleix Martinez

 

3. "Spectral Correspondence Framework for building a 3D Baby Face Model", (spotlight version here), 

Araceli Morales, Antonio Reyes Porras Perez, Liyun Tu, Marius George Linguraru, Gemma Piella, Federico Sukno

10:30 - 11:20

Keynote 2: "Understanding faces and gestures" + Q&A

Speaker: Aleix M. Martinez

Abstract: We now have computer vision algorithms that can successfully segment regions of interest in an image, recognize objects and scenes, and even create accurate 3D models of them. But how about higher-level, abstract concepts like the understanding of what other people do, what they are interested in, and how they feel? This talk will introduce the first algorithms to successfully solve some of these problems. I will first summarize our research uncovering the image features used by the human visual system to recognize emotion in others, which include facial muscle articulation, facial color modulations, body pose, and context. I will detail how these results can be used to define computer vision systems that can work "in the wild" (i.e., outside controlled, in-lab conditions). We will then discuss how these concepts can be used to design systems that can interpret the intent of others, and how we can develop a computational theory of mind for AI systems.

Slides available here.

11:20 - 12:30 Break
12:30 - 13:20

Keynote 3: "Human-centered computer vision in support of welfare" + Q&A

Speaker: Antonis Argyros

Abstract: Computer vision aims at developing technical systems capable of perceiving the environment through image and video processing and analysis. In this talk, we mainly focus on issues related to human-centered computer vision, which deals with the computational visual perception of aspects of human presence and the ability of a technical system to estimate the geometry and motion of the human body and to recognize its actions and behavior. In this area, we give specific examples of our research activities that span several spatiotemporal scales and levels of abstraction. We also give examples of applications developed based on these technologies in the fields of robotics and ambient intelligence environments.

Slides available here.

13:20 - 13:40 Spotlight presentations
 

1. "Vision-based Individual Factors Acquisition for Thermal Comfort Assessment in a Built Environment",

Jinsong Liu, Isak Worre Foged, Thomas B. Moeslund

 

2. "Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents",

Anna Esposito, Italia Cirillo, Antonietta Esposito, Leopoldina Fortunati, Gian Luca Foresti, Sergio Escalera, Nikolaos Bourbakis

 

3. "Introduction and Analysis of an Event-Based Sign Language Dataset",

Ajay Vasudevan, Pablo Negri, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco

 

4. "Feature Selection for Zero-Shot Gesture Recognition",

Naveen Madapana, Juan Wachs

 

5. "SILFA: Sign Language Facial Action Database for the Development of Assistive Technologies for the Deaf",

Emely P Silva, Paula Costa, Kate Kumada, José De Martino

 

6. "Seniors’ ability to decode differently aged facial emotional expressions",

Anna Esposito, Terry Amorese, Nelson Maldonato, Alessandro Vinciarelli, Maria Inés Torres, Sergio Escalera, Gennaro Cordasco

 

7. "FineHand: Learning Hand Shapes for American Sign Language Recognition",

Al Amin Hosain, Panneer Selvam Santhalingam, Parth Pathak, Huzefa Rangwala, Jana Kosecka

 

8. "The MAGIC of E-Health: A Gesture-Based Approach to Estimate Understanding and Performance in Remote Ultrasound Tasks",

Edgar J Rojas Muñoz, Juan Wachs

 

9. "Automatic stress detection evaluating models of facial action units",

Giorgos Giannakakis, Anastasios Roussos

13:40 - 13:45

Final remarks

Cristina Palmero


News


Coronavirus information.

FG2020 Postponed

Dear FG community,

IEEE has been monitoring the developing Coronavirus outbreak.

The safety and well-being of all conference participants are our priority. After studying and evaluating the announcements, guidance, and news released by the relevant national departments, we are sorry to announce that IEEE FG 2020, scheduled to be held 18-22 May in Buenos Aires, will be postponed to a later date.

We have tentatively rescheduled FG 2020 for 16-20 November. We remain committed to hosting IEEE FG 2020 in Buenos Aires; should that prove impossible, we are considering a virtual conference. The conference proceedings and registration will be delayed. We will report back to you about the proceedings once more is known.

Again, we apologize for any inconvenience this has caused. Please contact info@fg2020.org with any questions.

Regards,

FG 2020 chairs