Exploring the Naturalness of Cognitive Status-Informed Referring Form Selection Models

Gabriel Del Castillo, Grace Clark, Zhao Han, Tom Williams

Paper

In Sessions:

INLG Oral Session 4: Evaluation and linguistic analysis of NLG systems (Thursday, 14:00 CEST, Sun II)

Poster


Abstract: Language-capable robots must be able to communicate efficiently and naturally about objects in their environment. A key part of such communication is Referring Form Selection (RFS): the process of selecting a form such as it, that, or the N to use when referring to an object. Recent cognitive status-informed computational RFS models have been evaluated in terms of goodness-of-fit to human data, but it remains unclear whether these models actually select referring forms that are any more natural than baseline alternatives, regardless of goodness-of-fit. Through a human-subjects study designed to assess this question, we show that even though cognitive status-informed referring form selection models achieve good fit to human data, they do not (yet) produce concrete benefits in terms of naturalness. On the other hand, our results show that human utterances also exhibited high variability in perceived naturalness, demonstrating the challenges of evaluating RFS naturalness.
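To make the idea of cognitive status-informed RFS concrete, here is a minimal, hypothetical sketch of the kind of mapping such models build on: a toy rule that picks a referring form from an object's assumed cognitive status, loosely following the Givenness Hierarchy (Gundel et al., 1993). The status labels, the mapping, and the function name are illustrative assumptions only; this is not the model evaluated in the paper.

```python
# Illustrative sketch only: a toy rule that selects a referring form from an
# object's assumed cognitive status, loosely following the Givenness Hierarchy.
# The statuses, forms, and function names are hypothetical, not the paper's model.

# Cognitive statuses ordered from most to least accessible, each paired with
# the most reduced referring form it licenses.
STATUS_TO_FORM = {
    "in_focus": "it",
    "activated": "that",
    "familiar": "that N",
    "uniquely_identifiable": "the N",
    "type_identifiable": "a N",
}

def select_referring_form(status: str, noun: str) -> str:
    """Return a referring expression for `noun` given its assumed cognitive status."""
    form = STATUS_TO_FORM.get(status, "a N")  # fall back to an indefinite form
    return form.replace("N", noun) if "N" in form else form

if __name__ == "__main__":
    # An object the listener is currently attending to -> a pronoun.
    print(select_referring_form("in_focus", "mug"))               # -> "it"
    # An object merely known to the listener -> a demonstrative noun phrase.
    print(select_referring_form("familiar", "mug"))               # -> "that mug"
    # An object identifiable only by description -> a definite noun phrase.
    print(select_referring_form("uniquely_identifiable", "mug"))  # -> "the mug"
```

The study described in the abstract asks whether selections produced by models of roughly this flavor are perceived as more natural than baseline alternatives, independent of how well they fit human data.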