ESR12: The influence of alignment

PhD Fellow: Jay Kejriwal

Hi! I am tech-savvy, and that has largely shaped my journey so far. I applied to COBRA because I see it as a great opportunity that combines academic research and industry knowledge in one place. I hold a bachelor’s degree in Computer Science Engineering from MD University in Rohtak, India, and a master’s in computational linguistics from the University of Tübingen, Germany.

I also have four years of industry experience, during which I worked on various projects such as IT infrastructure solutions, surveillance technology, and chip-level solutions.

I’m incredibly excited to be a part of this network, and I look forward to meeting more Furhat robots, and humans too (maybe the robots more :P).

Objectives:

Alignment in both human-human and human-machine interactions has been shown to be beneficial along several social dimensions, such as likeability, task success, learning gains, and trust. ESR12 will test the central hypothesis that people (and machines) who adapt to their interlocutor(s) influence their interlocutors’ behavior and emotional state more than people/machines who do not adapt.


Expected results:

  • Scenario design for gauging the effect of alignment on users’ states;
  • Knowledge regarding the potential of speech alignment to affect users’ emotional states and decision-making preferences;
  • A better understanding of the differences between machine and human interlocutors.

Based in Bratislava, Slovakia

Full-time three-year contract, starting September 2020

PhD enrolment at: Slovak Technical University, Bratislava

Main supervisor’s institution: IISAS, Bratislava

Main supervisor: Prof Štefan Beňuš

Secondments:

  • Orange Labs, Lannion: strategies for dealing with displeased users of automatic conversational systems; identifying markers of alignment and integrating them into the architecture of conversational systems and Internet of Things devices (5 months);
  • Hong Kong Polytechnic University: ontological and linguistic models for the representation and verification of alignment (specifically, alignment of emotion states, verified by checking explicit cues such as pre-emotion cause and post-emotion (re)action) (5.5 months).

Co-supervisors’ institutions:

  • Orange Labs, Lannion, France
  • Hong Kong Polytechnic University
