Fine-Tuning GPT-3 for Synthetic Danish News Generation

Mina Almasi, Anton Schiønning


In Sessions:

INLG Oral Session 3: Leveraging Large Language Models for NLG (Thursday, 10:30 CEST, Sun II)


Abstract: While GPT-3 has garnered significant attention for its capabilities in natural language generation, research on its use outside of English is still relatively limited. We focus on how GPT-3 can be fine-tuned for generating synthetic news articles in a low-resource language, namely Danish. The model's performance is evaluated on the dimensions of human and machine detection in two separate experiments. When presented with either a real or a GPT-3-generated news article, human participants achieve a 58.1% classification accuracy. In contrast, a fine-tuned BERT classifier obtains a 92.7% accuracy on the same task. This discrepancy likely stems from the fine-tuned GPT-3 model oversampling high-likelihood tokens in its text generation. Although this is undetectable to the human eye, it leaves a statistical discrepancy for machine classifiers to detect. We address how decisions in the experimental design favoured the machine classifiers over the human evaluators, and whether the produced synthetic articles are applicable in a real-world context.
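The intuition behind the statistical discrepancy mentioned in the abstract can be illustrated with a minimal sketch: text that oversamples high-likelihood tokens scores a higher average per-token log-probability under a language model than human text, which mixes in rarer tokens. The sketch below uses a toy unigram model over a hypothetical corpus (illustrative only; the paper's actual detector is a fine-tuned BERT classifier, and the corpus, token sequences, and `mean_log_likelihood` helper are invented for this example).

```python
import math
from collections import Counter

def mean_log_likelihood(tokens, unigram_probs, eps=1e-8):
    """Average per-token log-probability under a (toy) unigram model.

    Generators that oversample high-likelihood tokens tend to score
    noticeably higher on this statistic than human-written text.
    """
    return sum(math.log(unigram_probs.get(t, eps)) for t in tokens) / len(tokens)

# Hypothetical corpus statistics (illustrative only, not the paper's data).
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
probs = {w: c / total for w, c in counts.items()}

human_like = "the dog chased the cat".split()    # contains a rare token
machine_like = "the cat sat on the mat".split()  # only frequent tokens

# The "machine-like" sequence scores a higher average log-likelihood,
# leaving exactly the kind of statistical footprint a classifier can exploit.
assert mean_log_likelihood(machine_like, probs) > mean_log_likelihood(human_like, probs)
```

In practice a detector would use a neural language model's token probabilities rather than unigram counts, but the footprint it exploits is the same.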