This track will focus on automatic personality recognition of a single individual (i.e., the target person) during a dyadic interaction, captured from two individual views. Context information (e.g., information about the interaction partner, the relationship between the two participants, the difficulty of the activity) is expected to be exploited to solve the problem. The audio-visual data associated with this track, together with the self-reported Big-Five personality labels and metadata (such as gender, type of interaction, and whether the participants know each other), are already available and ready to use. Utterance-level transcriptions will also be provided so that verbal communication can be exploited.
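The modalities and annotations listed above (two views, Big-Five labels, context metadata, utterance-level transcriptions) could be organized per session roughly as follows. This is a minimal illustrative sketch; every field name, path, and value here is a hypothetical assumption, not the actual release format of the dataset:

```python
from dataclasses import dataclass, field

@dataclass
class DyadicSession:
    """One dyadic interaction, seen from the target person's side (hypothetical schema)."""
    session_id: str
    target_view: str   # path to the video of the target person
    partner_view: str  # path to the video of the interaction partner
    # Utterance-level transcriptions: start/end times, speaker, and text
    transcript: list = field(default_factory=list)
    # Self-reported Big-Five (OCEAN) labels for the target person
    big_five: dict = field(default_factory=dict)
    # Context metadata: gender, type of interaction, whether participants know each other
    metadata: dict = field(default_factory=dict)

session = DyadicSession(
    session_id="S001",
    target_view="videos/S001_target.mp4",
    partner_view="videos/S001_partner.mp4",
    transcript=[{"start": 0.0, "end": 2.1, "speaker": "target", "text": "Hello."}],
    big_five={"O": 0.62, "C": 0.48, "E": 0.71, "A": 0.55, "N": 0.33},
    metadata={"gender": "F", "interaction_type": "talk", "known_each_other": False},
)
```

Keeping the partner's view and the context metadata alongside the target's own signals makes it straightforward for a model to condition its predictions on the interaction context, as the track encourages.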