An investigation into the validity of some metrics for automatically evaluating Natural Language Generation systems

Ehud Reiter, Anja Belz

Research output: Contribution to journal › Article › peer-review

Abstract

There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.
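A minimal sketch of the kind of validation study the abstract describes: scoring system outputs against reference texts with an automatic metric and measuring rank correlation with human ratings. It is illustrative only, not the paper's actual method or data; the forecast texts and human scores below are invented, and sentence-level BLEU from NLTK plus SciPy's Spearman correlation stand in for the metrics and statistics used in the study.

```python
# Illustrative sketch (not from the paper): correlate an automatic metric
# with human judgments of the same system outputs.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

# Hypothetical data: each system output is paired with one or more
# human-written reference forecasts and an averaged human quality rating.
references = [
    [["cloudy", "with", "light", "rain", "by", "evening"]],
    [["sunny", "spells", "and", "a", "fresh", "southerly", "wind"]],
    [["fog", "clearing", "slowly", "during", "the", "morning"]],
]
system_outputs = [
    ["cloudy", "with", "rain", "by", "evening"],
    ["sunny", "with", "a", "strong", "southerly", "wind"],
    ["fog", "lifting", "by", "midday"],
]
human_scores = [4.2, 3.1, 2.5]  # e.g. mean Likert-scale judgments (invented)

# Sentence-level BLEU for each output (smoothing avoids zero scores
# on short texts with missing higher-order n-gram matches).
smooth = SmoothingFunction().method1
bleu_scores = [
    sentence_bleu(refs, hyp, smoothing_function=smooth)
    for refs, hyp in zip(references, system_outputs)
]

# Rank correlation between metric scores and human judgments;
# a high, significant coefficient would suggest the metric tracks human opinion.
rho, p_value = spearmanr(bleu_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```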
Original language: English
Pages (from-to): 529-558
Number of pages: 30
Journal: Computational Linguistics
Volume: 35
Issue number: 4
Publication status: Published - 31 Dec 2009

