Prompting, Retrieval, Training: An Exploration of Different Approaches for Task-Oriented Dialogue Generation

Gonçalo Raposo, Luísa Coheur, Bruno Martins


In Sessions:

SIGDIAL Poster Session 2 (Thursday, 14:00 CEST, Foyer)


Abstract: Task-oriented dialogue systems need to generate appropriate responses to help fulfill users' requests. This paper explores different strategies, namely prompting, retrieval, and fine-tuning, for task-oriented dialogue generation. Through a systematic evaluation, we aim to provide valuable insights and guidelines for researchers and practitioners working on developing efficient and effective dialogue systems for real-world applications. Evaluation is performed on the MultiWOZ and Taskmaster-2 datasets, and we test various versions of FLAN-T5, GPT-3.5, and GPT-4 models. Costs associated with running these models are analyzed, and dialogue evaluation is briefly discussed. Our findings suggest that when testing data differs from the training data, fine-tuning may decrease performance, favoring a combination of a more general language model and a prompting mechanism based on retrieved examples.
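The combination favored by the findings, a general language model prompted with retrieved examples, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's actual pipeline: it retrieves the most similar (user turn, system response) pairs from a small pool using bag-of-words cosine similarity and formats them as in-context examples ahead of the new user turn. The example pool, retrieval metric, and prompt template are all hypothetical choices for illustration; a real system would use a learned retriever and pass the prompt to a model such as FLAN-T5 or GPT-4.

```python
from collections import Counter
import math


def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words token counts (illustrative retriever)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(user_turn: str, example_pool: list[tuple[str, str]], k: int = 2) -> str:
    """Select the k pool entries most similar to the user turn and
    format them as few-shot examples before the new turn."""
    ranked = sorted(example_pool, key=lambda ex: similarity(user_turn, ex[0]), reverse=True)
    lines = ["Respond to the user's request as a task-oriented assistant.", ""]
    for user, system in ranked[:k]:
        lines += [f"User: {user}", f"System: {system}", ""]
    lines += [f"User: {user_turn}", "System:"]
    return "\n".join(lines)


# Hypothetical example pool in the style of MultiWOZ dialogues.
pool = [
    ("I need a cheap hotel in the north", "There are two budget hotels in the north. Do you need parking?"),
    ("Book a table for two at an Italian restaurant", "Sure, what time would you like the reservation?"),
    ("Find me a train to Cambridge on Friday", "What time would you like to depart?"),
]

print(build_prompt("I want a cheap place to stay", pool, k=1))
```

The resulting string would then be sent to the language model as its input; swapping the pool for a different dataset requires no retraining, which is the flexibility the abstract highlights when test data differs from training data.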