
Code and data for our IROS paper: "Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?"


lwachowiak/LLMs-for-Social-Robotics


Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?

In this paper, we investigate how well LLMs align with people's responses in experiments from social human–robot interaction (HRI). If you use our code or data, please cite:

@INPROCEEDINGS{wachowiak2024large,
  author={Wachowiak, Lennart and Coles, Andrew and Celiktutan, Oya and Canal, Gerard},
  booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Are Large Language Models Aligned with People’s Social Intuitions for Human–Robot Interactions?}, 
  year={2024},
  pages={2520-2527},
  doi={10.1109/IROS58592.2024.10801325}
}

Results

Correlations are highest with GPT-4, as shown in the following scatterplots:

Experiment 1: correlations for Experiment 1 with GPT-4 (scatterplot)

Experiment 2: correlations for Experiment 2 with GPT-4 (scatterplots)

For full results, refer to the paper. Scatterplots for other models can be found here for Experiment 1 and here for Experiment 2.
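The correlations reported above can be illustrated with a minimal sketch. The function below computes the Pearson correlation between mean human ratings and model ratings for a set of stimuli; the rating values are invented placeholders, not the paper's data:

```python
import numpy as np

def pearson_r(human, model):
    """Pearson correlation between mean human ratings and model ratings."""
    h = np.asarray(human, dtype=float)
    m = np.asarray(model, dtype=float)
    h = h - h.mean()
    m = m - m.mean()
    return float((h @ m) / (np.linalg.norm(h) * np.linalg.norm(m)))

# Illustrative values only (hypothetical, not taken from the paper)
human_means = [4.2, 3.1, 2.5, 4.8, 3.9]
model_scores = [4.0, 3.3, 2.2, 4.9, 3.6]
print(round(pearson_r(human_means, model_scores), 3))
```

Each point in the scatterplots corresponds to one stimulus, plotting the mean human rating against the model's rating.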

Video Stimuli

The video stimuli are available in the following GitHub repository: https://github.com/lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses
