TY - GEN
T1 - Two Hidden Layers are Usually Better than One
AU - Thomas, Alan
AU - Petridis, Miltiadis
AU - Walters, Simon
AU - Malekshahi Gheytassi, Mohammad
AU - Morgan, Robert
N1 - The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-65172-9_24
PY - 2017/8/2
Y1 - 2017/8/2
AB - This study investigates whether feedforward neural networks with two hidden layers generalise better than those with one. In contrast to the existing literature, a method is proposed which allows these networks to be compared empirically on a hidden-node-by-hidden-node basis. The method is applied to ten public-domain function approximation datasets. Networks with two hidden layers were found to be better generalisers in nine of the ten cases, although the actual degree of improvement is case-dependent. The proposed method can be used to rapidly determine whether it is worth considering two hidden layers for a given problem.
DO - 10.1007/978-3-319-65172-9_24
M3 - Conference contribution with ISSN or ISBN
SN - 978-3-319-65171-2
VL - 744
T3 - Communications in Computer and Information Science
SP - 279
EP - 290
BT - EANN: International Conference on Engineering Applications of Neural Networks
PB - Springer International Publishing
CY - Cham, Switzerland
T2 - EANN: International Conference on Engineering Applications of Neural Networks
Y2 - 2 August 2017
ER -