Comparing rating scales and preference judgements in language evaluation

Anja Belz, Eric Kow

Research output: Chapter in book/conference proceeding with ISSN or ISBN › Conference contribution with ISSN or ISBN › peer-review

Abstract

Rating-scale evaluations are common in NLP, but are problematic for a range of reasons, e.g. they can be unintuitive for evaluators, inter-evaluator agreement and self-consistency tend to be low, and the parametric statistics commonly applied to the results are not generally considered appropriate for ordinal data. In this paper, we compare rating scales with an alternative evaluation paradigm, preference-strength judgement experiments (PJEs), where evaluators have the simpler task of deciding which of two texts is better in terms of a given quality criterion. We present three pairs of evaluation experiments assessing text fluency and clarity for different data sets, where one of each pair of experiments is a rating-scale experiment, and the other is a PJE. We find the PJE versions of the experiments have better evaluator self-consistency and inter-evaluator agreement, and a larger proportion of variation accounted for by system differences, resulting in a larger number of significant differences being found.
Original language: English
Title of host publication: Proceedings of the 6th International Natural Language Generation Conference (INLG'10)
Place of publication: Stroudsburg, PA, USA
Publisher: Association for Computational Linguistics
Pages: 7-15
Number of pages: 9
Publication status: Published - 1 Jan 2010
Event: 6th International Natural Language Generation Conference (INLG'10) - Dublin, Ireland
Duration: 1 Jan 2010 → …

