On the Optimal Node Ratio between Hidden Layers: A Probabilistic Study

Alan Thomas, Simon Walters, Mohammad Malekshahi Gheytassi, Robert Morgan, Miltiadis Petridis

Research output: Contribution to journal › Article

Abstract

Two-hidden-layer feedforward neural networks (TLFNs) have been shown to outperform single-hidden-layer feedforward networks (SLFNs) for function approximation in many cases. However, their added complexity makes the optimal topology more difficult to find. Given a constant number of hidden nodes nh, this paper investigates how their allocation between the first and second hidden layers (nh = n1 + n2) affects the likelihood of finding the best generaliser. The experiments were carried out over a total of ten public domain datasets with nh = 8 and 16. The findings were that the heuristic n1 = 0.5nh + 1 has an average probability of at least 0.85 of finding a network with a generalisation error within 0.18% of the best generaliser. Furthermore, the worst case over all data sets was within 0.23% for nh = 8, and within 0.15% for nh = 16. These findings could be used to reduce the complexity of the search for TLFNs from quadratic to linear, or alternatively for ‘topology mapping’ between TLFNs and SLFNs, given the same number of hidden nodes, to compare their performance.
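The allocation heuristic from the abstract can be sketched as a small helper; the function name and signature below are illustrative, not taken from the paper.

```python
def allocate_nodes(nh: int) -> tuple[int, int]:
    """Split nh hidden nodes across two hidden layers using the
    heuristic n1 = 0.5*nh + 1 from the abstract (helper name is
    hypothetical, not from the paper)."""
    n1 = int(0.5 * nh + 1)  # first hidden layer
    n2 = nh - n1            # remainder goes to the second layer
    return n1, n2

# For the two configurations studied in the paper:
print(allocate_nodes(8))   # -> (5, 3)
print(allocate_nodes(16))  # -> (9, 7)
```

Under this rule the first layer always receives slightly more than half of the nodes, which is what reduces the topology search over (n1, n2) pairs from quadratic to linear in nh.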
Original language: English
Pages (from-to): 241-247
Number of pages: 7
Journal: International Journal of Machine Learning and Computing
Volume: 6
Issue number: 5
Publication status: Published - 7 Oct 2016

Keywords

  • ANN
  • optimal node ratio
  • topology mapping
  • two-hidden-layer feedforward
  • function approximation

