Mugunthan
3 min read · Apr 4, 2021

Neural Network

Before defining “what is a neural network?”, let's discuss the why. Why the name “neural network”, and why are neural networks used?

A. Neuron B. Simple unit of Artificial Neural Network

Do you find any similarities? Yeah, that’s right. An Artificial Neural Network, or simply Neural Network, is inspired by the human nervous system. The simplest form of Artificial Neural Network is the perceptron. The cell body of a neuron is analogous to the processing unit of the perceptron.

Now, why a neural network? Classical machine learning requires feature engineering to be done by hand, which is often the most complex and important part of the task. Neural networks, by contrast, can learn hidden patterns and features directly from raw data.

What is a neural network? A neural network comprises interconnected nodes, arranged in layers, that can learn from data and mimic aspects of human behavior.

How does it work?

Neural networks must be trained so that they learn the features needed for the task at hand. Training is the process by which the weights and biases (the parameters of the model) are set. Using these parameters, the ANN predicts a value.

Let’s save training and weight updates for another episode!

Inputs (x) are multiplied by the learned weights (W), added to a bias (b), and fed to an activation function. Uh, wait, what is an activation function? The activation function decides whether a neuron is activated or not (for example, if the value is greater than a threshold, the neuron fires). Its purpose is to introduce non-linearity into the neural network.

z = Wx + b
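The forward pass described above can be sketched as a single perceptron in plain Python. The weights, bias, inputs, and the simple step threshold below are illustrative assumptions, not learned values:

```python
def step(z, threshold=0.0):
    """Threshold activation: fire (1) if z exceeds the threshold, else 0."""
    return 1 if z > threshold else 0

def perceptron(x, W, b):
    # z = Wx + b : weighted sum of the inputs plus the bias
    z = sum(w_i * x_i for w_i, x_i in zip(W, x)) + b
    return step(z)

# Example with hand-picked (not learned) parameters:
# z = 0.4*1.0 + 0.6*0.5 - 0.5 = 0.2 > 0, so the neuron fires
print(perceptron([1.0, 0.5], W=[0.4, 0.6], b=-0.5))  # → 1
```

In practice the hard step is replaced by a smooth activation such as sigmoid or ReLU, discussed below, so that gradients can flow during training.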

There are different types of activation functions.

  • Linear
  • ReLU
  • tanh
  • Sigmoid
  • Leaky ReLU
  • Parametrized ReLU
  • Softmax

Let's discuss a few of these activation functions.

ReLU:

Rectified Linear Unit, or ReLU, is a piecewise-linear activation function that returns the input when it is positive and zero otherwise.

f(x) = max(0, x)

ReLU Activation Function
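A minimal sketch of ReLU, directly from the formula above:

```python
def relu(x):
    # f(x) = max(0, x): pass positive inputs through, clamp negatives to zero
    return max(0.0, x)

print([relu(v) for v in [-2.0, -0.5, 0.0, 3.0]])  # → [0.0, 0.0, 0.0, 3.0]
```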

Sigmoid:

It is one of the most popular non-linear activation functions. Sigmoid squashes the output into the range 0 to 1.

f(x) = 1 / (1 + e^(-x))

Sigmoid Activation Function
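The sigmoid formula translates to a one-liner; note how it maps 0 to the midpoint 0.5 and saturates toward 0 and 1 at the extremes:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)): squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))       # → 0.5, the midpoint
print(sigmoid(10.0))      # close to 1
print(sigmoid(-10.0))     # close to 0
```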

tanh:

It can be seen as a scaled version of the sigmoid activation function. It transforms the output into the range -1 to 1.

f(x) = (2 / (1 + e^(-2x))) - 1, or equivalently f(x) = 2 * sigmoid(2x) - 1

tanh Activation Function
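A short sketch of tanh built from its formula, checking it against the identity 2·sigmoid(2x) − 1 and Python's library version:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = 2 / (1 + e^(-2x)) - 1: a sigmoid rescaled from (0, 1) to (-1, 1)
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

# The identity tanh(x) = 2*sigmoid(2x) - 1 holds for any x:
x = 0.7
assert abs(tanh(x) - (2 * sigmoid(2 * x) - 1)) < 1e-12
assert abs(tanh(x) - math.tanh(x)) < 1e-12  # agrees with the library version

print(tanh(0.0))  # → 0.0, tanh is zero-centered, unlike sigmoid
```

Being zero-centered is the practical reason tanh is often preferred over sigmoid for hidden layers.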

Written by Mugunthan

Enthusiastic and Open-minded. Interested in Computer Vision and machine perception. Feel free to connect with me https://www.linkedin.com/in/mugunthan-ramesh/