In this paper, we investigate how well large language models (LLMs) align with people's responses in experiments from social human-robot interaction (HRI). If you use this work, please cite:
@INPROCEEDINGS{wachowiak2024large,
author={Wachowiak, Lennart and Coles, Andrew and Celiktutan, Oya and Canal, Gerard},
booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Are Large Language Models Aligned with People’s Social Intuitions for Human–Robot Interactions?},
year={2024},
pages={2520--2527},
doi={10.1109/IROS58592.2024.10801325}
}
Correlations between model outputs and human responses are highest for GPT-4, as shown in the scatterplots below.
For full results, refer to the paper. Scatterplots for the other models can be found here for Experiment 1 and here for Experiment 2.
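As a rough illustration of how such a scatterplot and correlation can be produced, here is a minimal Python sketch. The file names (`human_ratings.csv`, `gpt4_ratings.csv`) and column names (`item`, `rating`) are hypothetical placeholders, not the paper's actual data layout; see the paper and repository for the real data and analysis.

```python
# Minimal sketch: correlate per-item human ratings with model ratings
# and plot them. File and column names are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

# Hypothetical per-item mean ratings: one row per survey item.
human = pd.read_csv("human_ratings.csv")  # columns: item, rating
model = pd.read_csv("gpt4_ratings.csv")   # columns: item, rating

merged = human.merge(model, on="item", suffixes=("_human", "_model"))

# Pearson correlation between human and model ratings.
r, p = pearsonr(merged["rating_human"], merged["rating_model"])
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Scatterplot: each point is one survey item / video stimulus.
plt.scatter(merged["rating_human"], merged["rating_model"])
plt.xlabel("Mean human rating")
plt.ylabel("GPT-4 rating")
plt.title(f"Human vs. GPT-4 ratings (r = {r:.2f})")
plt.show()
```

Pearson's r on per-item mean ratings is one common choice for this kind of alignment plot; Spearman's rank correlation (`scipy.stats.spearmanr`) is a drop-in alternative if only monotonic agreement matters.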
The video stimuli are available in the following GitHub repository: https://github.com/lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses