#### EXPLAINING NEURAL NETWORKS

##### So, what is a neural network, and how does it work?

When you hear about neural networks, you usually think of neurons, the cells that compose your brain. Neurons are connected to one another, so let's assume four neurons connected in some pattern, say at random.

• Neurons either fire or do not fire; each is on or off, just like '1' or '0'. If one neuron fires, and that neuron is connected to other neurons, then through those connections it may cause its connected neurons to fire or not fire in turn.
• A neural network is essentially a connected layer of neurons, or several connected layers, each made up of multiple neurons.

Let's take an example: an input layer with four neurons, and one more layer that contains only one neuron. In a neural network the connections follow a particular pattern: the neuron of the second layer is connected to every neuron of the first layer. This is known as a fully connected network, in which each neuron of one layer is connected to each neuron of the other layer exactly once. If you add another neuron to the second layer, each of the input neurons also connects to that new neuron once, giving a total of eight connections. Each connection has its own weight, which comes into play when you use the network to make predictions or train it.

Now for some math. Start by designing what's known as the architecture of the neural network: there are inputs and outputs, the inputs are connected to the outputs, and each connection is known as a weight. Each input neuron has a value, in this case '1' or '0', though the values can also be decimals, say between 0 and 100. The output is then the sum of the values multiplied by the weights:
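The fully connected layout described above can be sketched in a few lines of Python. This is a minimal illustration, not a real library: the input values and weights are made-up numbers, and four inputs feeding two output neurons give the eight connections mentioned above.

```python
# Sketch of a fully connected layer: every input neuron is connected to
# every output neuron exactly once, so 4 inputs and 2 output neurons
# give 4 * 2 = 8 weighted connections.
inputs = [1, 0, 1, 1]  # on/off values ('1' or '0') of the 4 input neurons

# One weight list per output neuron (values chosen arbitrarily here).
weights = [
    [0.2, -0.5, 0.1, 0.4],  # connections into output neuron 1
    [0.7, 0.3, -0.2, 0.6],  # connections into output neuron 2
]

connection_count = sum(len(row) for row in weights)
print(connection_count)  # 8

# Each output neuron sums its inputs multiplied by its own weights.
outputs = [sum(v * w for v, w in zip(inputs, row)) for row in weights]
print(outputs)
```

Adding a third output neuron would simply add another row of four weights, i.e. four more connections.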

output = Σ (i = 1 to 4) Vi·Wi, which means V1W1 + V2W2 + V3W3 + V4W4. That is the value the output-layer neuron will hold. For a more accurate result, add a bias term:

output = Σ (i = 1 to 4) (Vi·Wi + bi), where bi is a bias value added alongside each weight:

V1W1 + V2W2 + V3W3 + V4W4 + b1 + b2 + b3 + b4

Now train the network using this formula, which gives you the weighted sum as the output.
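The weighted-sum-plus-bias formula above translates directly into code. In this sketch the values, weights, and biases are made-up numbers chosen only to show the computation:

```python
values = [1, 0, 1, 0]            # V1..V4: input neuron values
weights = [0.5, -0.3, 0.8, 0.1]  # W1..W4: one weight per connection
biases = [0.1, 0.1, 0.1, 0.1]    # b1..b4: one bias per connection

# output = V1*W1 + V2*W2 + V3*W3 + V4*W4 + b1 + b2 + b3 + b4
output = sum(v * w for v, w in zip(values, weights)) + sum(biases)
print(output)
```

Here the weighted part contributes 0.5 + 0.8 = 1.3 and the biases contribute 0.4, so the neuron's value is 1.7.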

To better understand how to train the system, take the example of the game Snake. You have all the different inputs and all the different outputs: you randomly decide on a recommended direction, or just take the state of the snake, and you train the network using that information. The network receives all of this information and starts adjusting its bias values and weights to produce the correct output.

In the case of a wrong output, the adjustment might be, say, to add 1 to a bias value or to multiply a weight by 2. This is the real reason neural networks typically take a massive amount of information to train: you pass all of this information in, and it keeps going through the network while the weights and biases are adjusted.
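The "keep adjusting the weights and biases until the output is correct" idea can be sketched with a perceptron-style update rule. This is a simplified stand-in for real training algorithms such as backpropagation, and the task (learning logical AND) is made up for illustration:

```python
# Minimal sketch of training by repeated adjustment: a single neuron
# learns the logical AND function. When a prediction is wrong, each
# weight and the bias are nudged toward the correct answer.
def step(x):
    return 1 if x > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # logical AND of the two inputs

w1, w2, bias = 0, 0, 0

for _ in range(10):  # several passes over the same small data set
    for (x1, x2), target in zip(samples, targets):
        prediction = step(x1 * w1 + x2 * w2 + bias)
        error = target - prediction  # -1, 0, or +1
        w1 += error * x1
        w2 += error * x2
        bias += error

predictions = [step(x1 * w1 + x2 * w2 + bias) for x1, x2 in samples]
print(predictions)  # [0, 0, 0, 1] — matches the targets after training
```

Even on this toy problem the neuron needs many passes over the data before the adjustments settle, which hints at why real networks need so much training data.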

• Now, on to activation functions. An activation function is essentially a non-linear function that adds a degree of complexity to your network, so that it can represent more than a simple linear function.
• An activation function also shrinks your data down so that it is not as large. For example, if you are working with very large values, in the hundreds of thousands or more, the activation function normalizes the data so it is easier to work with.

To get the output with an activation function, the formula becomes:

output = f(Σ Wi·Vi + bi)

where f is the activation function. This gives you the output of the neuron, and the same activation function is applied again each time you adjust the weights and biases.
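As a sketch of that formula, the weighted sum can be passed through a sigmoid, one common choice of activation function; it squashes any value into the range 0 to 1, which matches the "shrinking down large data" idea above. The numbers are made up:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

values = [1, 0, 1, 1]
weights = [2.0, -1.0, 0.5, 3.0]
bias = -1.5

# f(sum(Wi*Vi) + b): weighted sum first, then the activation function.
weighted_sum = sum(v * w for v, w in zip(values, weights)) + bias
output = sigmoid(weighted_sum)  # neuron output after activation
print(round(output, 4))
```

Here the raw weighted sum is 4.0, but the sigmoid squashes it to roughly 0.982, keeping the neuron's output in a predictable range.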

• Another activation function is the rectified linear unit (ReLU), and there are more.
• A simple threshold activation provides a condition: if the weighted sum comes out negative, the output is 0, and if it is positive, the output is 1. (ReLU, by contrast, outputs 0 for negative inputs and passes positive values through unchanged.)
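The threshold behaviour described above and ReLU can be compared side by side in a minimal sketch:

```python
def step(x):
    # Threshold activation: 0 for a negative input, 1 for a positive one.
    return 1 if x > 0 else 0

def relu(x):
    # Rectified linear unit: 0 for a negative input,
    # the value itself otherwise.
    return max(0, x)

print(step(-2.5), step(3.0))  # 0 1
print(relu(-2.5), relu(3.0))  # 0 3.0
```

The step function throws away the magnitude of the sum, while ReLU keeps it for positive values, which is one reason ReLU is a popular choice in practice.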

• There is a special case in which the input can be zero; to prevent a zero input from zeroing out the neuron, a bias value (e.g. +1) is added, so that the result is not forced to zero.

F(x) = b0 + Σ (i = 1 to n) (bi · wi)

where bi is the input and wi is the weight.
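In code, that last formula is the earlier weighted sum with the standalone bias term b0 in front. The numbers below are made up; note how the bias keeps the output non-zero even when every input is zero, as described above:

```python
def neuron_output(b0, inputs, weights):
    # F(x) = b0 + sum(b_i * w_i), with b_i the inputs and w_i the weights.
    return b0 + sum(b * w for b, w in zip(inputs, weights))

# All-zero inputs: only the bias b0 survives, so the output is not zero.
print(neuron_output(1.0, [0, 0, 0], [0.5, 0.5, 0.5]))  # 1.0

# Non-zero inputs: bias plus the weighted sum 1*3 + 2*4.
print(neuron_output(0.0, [1, 2], [3, 4]))  # 11.0
```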