Shared-task evaluations in HLT: lessons for NLG

Anja Belz, Adam Kilgarriff

Research output: Chapter in Book/Conference proceeding with ISSN or ISBN › Conference contribution with ISSN or ISBN › peer-review

Abstract

While natural language generation (NLG) has a strong evaluation tradition, in particular in user-based and task-oriented evaluation, it has never evaluated different approaches and techniques by comparing their performance on the same tasks (shared-task evaluation, STE). NLG is characterised by a lack of consolidation of results, and by isolation from the rest of NLP, where STE is now standard. It is, moreover, a shrinking field (state-of-the-art MT and summarisation no longer perform generation as a subtask) which lacks the kind of funding and participation that natural language understanding (NLU) has attracted.
Original language: English
Title of host publication: Proceedings of the 4th International Conference on Natural Language Generation (INLG'06)
Place of Publication: Germany
Publisher: DBLP
Pages: 133-135
Number of pages: 3
Publication status: Published - 1 Jan 2006
Event: Proceedings of the 4th International Conference on Natural Language Generation (INLG'06) - Sydney, Australia
Duration: 1 Jan 2006 → …


Keywords

  • Natural language generation

