Catalysis of neural activation functions: Adaptive feed-forward training for big data applications

Sagnik Sarkar, Shaashwat Agrawal, Thar Baker, Praveen Kumar Reddy Maddikunta, Thippa Reddy Gadekallu

Research output: Contribution to journal › Article › peer-review

Abstract

Deep Learning in the field of Big Data has become essential for the analysis and perception of trends. Activation functions play a crucial role in the outcome of these deep learning frameworks. Existing activation functions are largely focused on data translation from one neural layer to the next. Although they have proven useful and give consistent results, they are static and mostly non-parametric. In this paper, we propose a new function for modified training of neural networks that is more flexible and adaptable to the data. The proposed catalysis function works over Rectified Linear Unit (ReLU), sigmoid, tanh, and all other activation functions to provide adaptive feed-forward training. The function uses vector components of the activation function to provide a variational flow of input. The performance of this algorithm is tested on the Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) datasets against conventional activation functions. Visual Geometry Group (VGG) blocks and Residual Neural Network (ResNet) architectures are used for experimentation. The proposed function has shown significant improvements over the traditional functions, with an accuracy of 75 ± 2.5% across activation functions. The adaptive nature of training has drastically decreased the probability of under-fitting. The parameterization has helped increase the data-learning capacity of the models. On performing sensitivity analysis, the catalysis activation shows little or no change when the initialization parameters are varied.
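
The abstract does not state the functional form of the catalysis function, so the following is only an illustrative sketch of the general idea it describes: a learnable, parametric wrapper placed over a fixed base activation (ReLU, sigmoid, tanh) so the nonlinearity can adapt to the data during feed-forward training. The class name Catalysis and the scale/shift parameterization below are assumptions, not the paper's definition.

    import torch
    import torch.nn as nn

    class Catalysis(nn.Module):
        """Hypothetical parametric wrapper over a base activation.

        NOTE: the scale/shift form used here is an assumption for
        illustration only; the paper's actual catalysis function
        may be defined differently.
        """
        def __init__(self, base=torch.relu, alpha=1.0, beta=0.0):
            super().__init__()
            self.base = base
            # Learnable parameters let the activation adapt to the
            # data instead of remaining static and non-parametric.
            self.alpha = nn.Parameter(torch.tensor(float(alpha)))
            self.beta = nn.Parameter(torch.tensor(float(beta)))

        def forward(self, x):
            # Scale the pre-activation, apply the base activation,
            # then shift the output; alpha and beta are trained by
            # backpropagation along with the layer weights.
            return self.base(self.alpha * x) + self.beta

    # Usage: drop in wherever a fixed activation would be used.
    layer = nn.Sequential(nn.Linear(784, 128), Catalysis(torch.tanh))
    out = layer(torch.randn(32, 784))

Because the wrapper takes the base activation as an argument, the same mechanism applies uniformly over ReLU, sigmoid, tanh, or any other activation, matching the abstract's claim that the method works across activation functions.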
Original language: English
Pages (from-to): 13364–13383
Number of pages: 20
Journal: Applied Intelligence
Volume: 52
Issue number: 12
DOIs
Publication status: Published - 24 Mar 2022

Keywords

  • Activation function
  • Big data
  • Catalysis function
  • Neural networks
  • Rectified linear unit (ReLU)
