Two Hidden Layers are Usually Better than One

Alan Thomas, Miltiadis Petridis, Simon Walters, Mohammad Malekshahi Gheytassi, Robert Morgan

Research output: Chapter in Book/Conference proceeding with ISSN or ISBN › Conference contribution with ISSN or ISBN › peer-review

Abstract

This study investigates whether feedforward neural networks with two hidden layers generalise better than those with one. In contrast to the existing literature, a method is proposed which allows these networks to be compared empirically on a hidden-node-by-hidden-node basis. This is applied to ten public domain function approximation datasets. Networks with two hidden layers were found to be better generalisers in nine of the ten cases, although the actual degree of improvement is case dependent. The proposed method can be used to rapidly determine whether it is worth considering two hidden layers for a given problem.
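
The paper itself does not include code; as a rough illustration of what a node-for-node comparison could look like, the sketch below pits a single hidden layer of n nodes against two hidden layers holding the same n nodes in total. The scikit-learn MLPRegressor, the synthetic make_friedman1 dataset, the even node split, and the cross-validated R^2 score are all assumptions made for illustration, not the authors' experimental protocol.

from sklearn.datasets import make_friedman1
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic function-approximation data (a stand-in for the paper's
# ten public-domain datasets, which are not reproduced here).
X, y = make_friedman1(n_samples=1000, noise=0.1, random_state=0)

for n in range(2, 11):  # total hidden-node budget
    # One hidden layer holding all n nodes.
    one = MLPRegressor(hidden_layer_sizes=(n,), max_iter=5000, random_state=0)
    # Two hidden layers holding the same n nodes in total (an even split
    # is an illustrative assumption, not the authors' allocation scheme).
    two = MLPRegressor(hidden_layer_sizes=(n - n // 2, n // 2),
                       max_iter=5000, random_state=0)
    r2_one = cross_val_score(one, X, y, cv=5, scoring="r2").mean()
    r2_two = cross_val_score(two, X, y, cv=5, scoring="r2").mean()
    print(f"{n:2d} nodes: one layer R^2={r2_one:.3f}, two layers R^2={r2_two:.3f}")
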
Original language: English
Title of host publication: EANN: International Conference on Engineering Applications of Neural Networks
Place of publication: Switzerland
Publisher: Springer International Publishing
Pages: 279-290
Number of pages: 12
Volume: 744
ISBN (Electronic): 9783319651729
ISBN (Print): 9783319651712
DOI: 10.1007/978-3-319-65172-9_24
Publication status: Published - 2 Aug 2017
Event: EANN: International Conference on Engineering Applications of Neural Networks - Athens, Greece, 25-27 August 2017
Duration: 2 Aug 2017 → …

Publication series

Name: Communications in Computer and Information Science

Conference

Conference: EANN: International Conference on Engineering Applications of Neural Networks
Period: 2/08/17 → …

Bibliographical note

The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-65172-9_24
