Abstract
A regular fixture on the mid-1990s international research seminar circuit was the billion-neuron artificial brain talk. The idea behind this project was simple: in order to create artificial intelligence, what was needed first of all was a very large artificial brain; if a big enough set of interconnected modules of neurons could be implemented, then it would be possible to evolve mammalian-level behavior with current computational-neuron technology. The talk included progress reports on the current size of the artificial brain, its structure, update rate, and power consumption, and explained how intelligent behavior was going to develop by mechanisms simulating biological evolution. What the talk didn't mention was what kind of functionality the team had so far managed to evolve, and so the first comment at the end of the talk was inevitably "nice work, but have you actually done anything with the brain yet?" In human language technology (HLT) research, we currently report a range of evaluation scores that measure and assess various aspects of systems, in particular the similarity of their outputs to samples of human language or to human-produced gold-standard annotations. But are we leaving ourselves open to the same question as the billion-neuron artificial brain researchers?
Original language | English |
---|---|
Pages (from-to) | 111-118 |
Number of pages | 8 |
Journal | Computational Linguistics |
Volume | 35 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1 Jan 2009 |