PhD Fellow: Jay Kejriwal
Alignment in both human-human and human-machine interactions has been shown to be beneficial along several social dimensions, such as likeability, task success, learning gains, and trust. ESR12 will test the central hypothesis that people (and machines) who adapt to their interlocutor(s) influence their interlocutors’ behavior and emotional state more than people/machines who do not adapt.
- Scenario design for gauging the effect of alignment on users’ states;
- Knowledge regarding the potential of speech alignment to affect users’ emotional states and decision-making preferences;
- Better understanding of the differences between machines and human interlocutors.
Based in Bratislava, Slovakia
Full-time three-year contract, starting September 2020
PhD enrolment at: Slovak Technical University, Bratislava
Main supervisor’s institution: IISAS, Bratislava
Main supervisor: Prof Štefan Beňuš
- Orange Labs, Lannion: strategies for dealing with displeased users of automatic conversational systems, identifying the markers of alignment and integrating them in the architecture of conversational systems and Internet of Things devices (5 months);
- Hong Kong Polytechnic University: ontological and linguistic models for representation and verification of alignment (specifically, alignment of emotional states, verified by checking explicit clues such as pre-emotion causes and post-emotion (re)actions) (5.5 months).
- Orange Labs, Lannion, France
- Hong Kong Polytechnic University