Abstract
While natural language generation (NLG) has a strong evaluation tradition, in particular in user-based and task-oriented evaluation, it has never evaluated different approaches and techniques by comparing their performance on the same tasks (shared-task evaluation, STE). NLG is characterised by a lack of consolidation of results, and by isolation from the rest of NLP, where STE is now standard. It is, moreover, a shrinking field (state-of-the-art MT and summarisation no longer perform generation as a subtask) which lacks the kind of funding and participation that natural language understanding (NLU) has attracted.
Original language | English |
---|---|
Title of host publication | Proceedings of the 4th International Conference on Natural Language Generation (INLG'06) |
Place of publication | Germany |
Publisher | DBLP |
Pages | 133-135 |
Number of pages | 3 |
Publication status | Published - 1 Jan 2006 |
Event | Proceedings of the 4th International Conference on Natural Language Generation (INLG'06) - Sydney, Australia. Duration: 1 Jan 2006 → … |
Conference
Conference | Proceedings of the 4th International Conference on Natural Language Generation (INLG'06) |
---|---|
Period | 1/01/06 → … |
Keywords
- Natural language generation