ChatGPT vs. Crowdsourcing vs. Experts: Annotating Open-Domain Conversations With Speech Functions

Lidiia Ostyakova, Veronika Smilga, Kseniia Petukhova, Maria Molchanova, Daniel Kornev


In Session: SIGDIAL Oral Session 2: LLM for Dialogue (Wednesday, 15:40 CEST, Sun I)


Abstract: This paper addresses the task of annotating open-domain conversations with speech functions. We propose a semi-automated method for annotating dialogs following a topic-oriented, multi-layered taxonomy of speech functions, using hierarchical guidelines together with Large Language Models. The guidelines comprise simple questions about topic and speaker change, sentence types, and pragmatic aspects of the utterance, along with examples that help untrained annotators understand the taxonomy. We compare dialog annotations produced by experts, crowdsourcing workers, and ChatGPT. To improve ChatGPT's performance, we conducted several experiments with different prompt engineering techniques. We demonstrate that, following a multi-step, tree-like annotation pipeline, large language models can in some cases achieve human-like performance on complex discourse annotation, a task that is typically challenging, time-consuming, and costly when performed by humans.
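The multi-step, tree-like annotation pipeline described in the abstract can be sketched as a small decision tree in which each node asks the model one simple yes/no question from the hierarchical guidelines. This is a minimal illustration only: the questions, labels, and the `ask` wrapper below are hypothetical placeholders, not the paper's actual taxonomy or prompts.

```python
# Hedged sketch of a tree-like annotation pipeline: one simple
# guideline question per step, narrowing down to a label.
# All questions and labels here are illustrative placeholders.

def annotate(utterance, context, ask):
    """Walk a small decision tree of guideline questions.

    `ask(question, utterance, context)` is a hypothetical wrapper
    around an LLM (e.g. a ChatGPT call whose prompt is built from
    the hierarchical guidelines); it returns "yes" or "no".
    """
    if ask("Does the utterance open a new topic?", utterance, context) == "yes":
        if ask("Is it a question?", utterance, context) == "yes":
            return "Open.Demand"      # placeholder label
        return "Open.Give"            # placeholder label
    if ask("Does it respond to the previous speaker?", utterance, context) == "yes":
        return "React.Respond"        # placeholder label
    return "Sustain.Continue"         # placeholder label

# A trivial rule-based stub standing in for the LLM, just to run the tree:
def stub_ask(question, utterance, context):
    if "new topic" in question:
        return "yes" if not context else "no"
    if "Is it a question" in question:
        return "yes" if utterance.endswith("?") else "no"
    if "respond" in question:
        return "yes" if context else "no"
    return "no"

print(annotate("Do you like jazz?", [], stub_ask))               # → Open.Demand
print(annotate("I love it.", ["Do you like jazz?"], stub_ask))   # → React.Respond
```

Decomposing the taxonomy into a sequence of simple questions, rather than asking for a full label in one prompt, is the key design idea: each step is easy for both untrained annotators and an LLM to answer.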