Abstract

This study investigates whether feedforward neural networks with two hidden layers generalise better than those with one. In contrast to the existing literature, a method is proposed which allows these networks to be compared empirically on a hidden-node-by-hidden-node basis. This is applied to ten public domain function approximation datasets. Networks with two hidden layers were found to be better generalisers in nine of the ten cases, although the actual degree of improvement is case dependent. The proposed method can be used to rapidly determine whether it is worth considering two hidden layers for a given problem.
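The comparison the study describes can be illustrated with a minimal sketch: train a one-hidden-layer and a two-hidden-layer feedforward network with the same total number of hidden nodes and compare their test error. This is not the authors' code; the `sin` target, the layer sizes (8 vs. 4+4), and the training settings are hypothetical stand-ins for the paper's ten datasets and its hidden-node-by-hidden-node protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple 1-D function approximation task (hypothetical stand-in for the
# paper's ten public-domain datasets): learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)
idx = rng.permutation(len(X))
Xtr, ytr = X[idx[:150]], y[idx[:150]]
Xte, yte = X[idx[150:]], y[idx[150:]]

def init_mlp(sizes):
    """Random init for a tanh MLP with the given layer sizes."""
    return [(rng.normal(0, np.sqrt(1.0 / n_in), (n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: tanh hidden layers, linear output. Returns all activations."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def train_mlp(sizes, X, y, lr=0.1, epochs=4000):
    """Full-batch gradient descent on mean-squared error."""
    params = init_mlp(sizes)
    for _ in range(epochs):
        acts = forward(params, X)
        delta = 2 * (acts[-1] - y) / len(X)  # dL/dz at the linear output
        for i in reversed(range(len(params))):
            W, b = params[i]
            gW = acts[i].T @ delta
            gb = delta.sum(axis=0)
            if i > 0:  # backprop through the tanh of the previous layer
                delta = (delta @ W.T) * (1 - acts[i] ** 2)
            params[i] = (W - lr * gW, b - lr * gb)
    return params

# Same total hidden-node count: 8 nodes in one layer vs. 4 + 4 in two.
one = train_mlp([1, 8, 1], Xtr, ytr)
two = train_mlp([1, 4, 4, 1], Xtr, ytr)

mse_one = float(np.mean((forward(one, Xte)[-1] - yte) ** 2))
mse_two = float(np.mean((forward(two, Xte)[-1] - yte) ** 2))
print(f"1 hidden layer (8 nodes): test MSE = {mse_one:.4f}")
print(f"2 hidden layers (4+4):    test MSE = {mse_two:.4f}")
```

On this toy target either architecture can come out ahead depending on initialisation and training budget; the paper's point is that a matched-node comparison like this, run across many node counts and datasets, lets one decide quickly whether a second hidden layer is worth pursuing.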
| Original language | English |
| --- | --- |
| Title of host publication | EANN: International Conference on Engineering Applications of Neural Networks |
| Place of Publication | Switzerland |
| Publisher | Springer International Publishing |
| Pages | 279-290 |
| Number of pages | 12 |
| Volume | 744 |
| ISBN (Electronic) | 9783319651729 |
| ISBN (Print) | 9783319651712 |
| DOIs | 10.1007/978-3-319-65172-9_24 |
| Publication status | Published - 2 Aug 2017 |
| Event | EANN: International Conference on Engineering Applications of Neural Networks - Athens, Greece, 25-27 August 2017. Duration: 2 Aug 2017 → … |
Publication series

| Name | Communications in Computer and Information Science |
| --- | --- |
Conference

| Conference | EANN: International Conference on Engineering Applications of Neural Networks |
| --- | --- |
| Period | 2/08/17 → … |
Bibliographical note

The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-65172-9_24

Cite this
Two Hidden Layers are Usually Better than One. / Thomas, Alan; Petridis, Miltiadis; Walters, Simon; Malekshahi Gheytassi, Mohammad; Morgan, Robert.
EANN: International Conference on Engineering Applications of Neural Networks. Vol. 744. Switzerland: Springer International Publishing, 2017. p. 279-290. (Communications in Computer and Information Science).

Research output: Chapter in Book/Conference proceeding with ISSN or ISBN › Conference contribution with ISSN or ISBN
TY - GEN
T1 - Two Hidden Layers are Usually Better than One
AU - Thomas, Alan
AU - Petridis, Miltiadis
AU - Walters, Simon
AU - Malekshahi Gheytassi, Mohammad
AU - Morgan, Robert
N1 - The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-65172-9_24
PY - 2017/8/2
Y1 - 2017/8/2
N2 - This study investigates whether feedforward neural networks with two hidden layers generalise better than those with one. In contrast to the existing literature, a method is proposed which allows these networks to be compared empirically on a hidden-node-by-hidden-node basis. This is applied to ten public domain function approximation datasets. Networks with two hidden layers were found to be better generalisers in nine of the ten cases, although the actual degree of improvement is case dependent. The proposed method can be used to rapidly determine whether it is worth considering two hidden layers for a given problem.
AB - This study investigates whether feedforward neural networks with two hidden layers generalise better than those with one. In contrast to the existing literature, a method is proposed which allows these networks to be compared empirically on a hidden-node-by-hidden-node basis. This is applied to ten public domain function approximation datasets. Networks with two hidden layers were found to be better generalisers in nine of the ten cases, although the actual degree of improvement is case dependent. The proposed method can be used to rapidly determine whether it is worth considering two hidden layers for a given problem.
U2 - 10.1007/978-3-319-65172-9_24
DO - 10.1007/978-3-319-65172-9_24
M3 - Conference contribution with ISSN or ISBN
SN - 9783319651712
VL - 744
T3 - Communications in Computer and Information Science
SP - 279
EP - 290
BT - EANN: International Conference on Engineering Applications of Neural Networks
PB - Springer International Publishing
CY - Switzerland
ER -