How To Create A Neural Network In Python
To create a neural network, you need to decide what you want it to learn. Here, I'm going to choose a fairly simple goal: to implement a three-input XOR gate (an exclusive OR gate). The truth table below shows the function we want to implement, and I will use the information in this table to create a neural network with Python code only:

Truth Table for the Neural Network

X1  X2  X3  |  Y1
0   0   0   |  1
0   0   1   |  0
0   1   0   |  0
0   1   1   |  0
1   0   0   |  0
1   0   1   |  0
1   1   0   |  0
1   1   1   |  1
Before I get into building a neural network with Python, I suggest you first go through this article to understand what a neural network is and how it works. Now let's get started with the task of building a neural network with Python.
Neural Network with Python:
I'll only be using the Python library called NumPy, which provides a great set of functions to help us organize our neural network and also simplifies the calculations.
Now, let's start with the task of building a neural network with Python by importing NumPy:
import numpy as np
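
As a quick orientation, here is a minimal sketch (my own addition, not part of the network code) of the handful of NumPy operations that everything below relies on:

# a minimal sketch (not part of the network itself) of the NumPy operations used below
a = np.array([[0., 1.], [1., 0.]])   # build matrices from nested lists
w = np.random.randn(2, 3)            # random weights drawn from a normal distribution
z = np.dot(a, w)                     # matrix multiplication: (2x2) dot (2x3) -> (2x3)
s = 1 / (1 + np.exp(-z))             # element-wise sigmoid activation
print(z.shape, s.shape)              # (2, 3) (2, 3)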
Next, we define the eight possible combinations of our inputs X1-X3 and the corresponding output Y1 from the table above:
# X = input of our 3 input XOR gate
# set up the inputs of the neural network (right from the table)
X = np.array(([0, 0, 0], [0, 0, 1], [0, 1, 0],
              [0, 1, 1], [1, 0, 0], [1, 0, 1],
              [1, 1, 0], [1, 1, 1]), dtype=float)
# y = our output of our neural network
y = np.array(([1], [0], [0], [0], [0],
              [0], [0], [1]), dtype=float)
We must now choose an input value to predict and scale our inputs by their maximum values:
# what value we want to predict
xPredicted = np.array(([0, 0, 1]), dtype=float)

X = X/np.amax(X, axis=0)  # maximum of X input array
# maximum of xPredicted (our input data for the prediction)
xPredicted = xPredicted/np.amax(xPredicted, axis=0)
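
For this dataset the scaling is effectively a no-op, because every input column already has a maximum of 1; it only matters if you feed the network inputs on a larger scale. A quick check (my own addition) confirms it:

# sanity check (not in the original code): every column's maximum is already 1,
# so the division leaves the binary inputs unchanged
print(np.amax(X, axis=0))   # [1. 1. 1.]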
Next, open a file where we will save the sum squared loss for each epoch, so the results can be graphed later (for example in Excel):
# set up our Loss file for graphing
lossFile = open("SumSquaredLossList.csv", "w")
Next, build the Neural_Network class for our problem. The network we are building has three input neurons (X1, X2, X3), a hidden layer of four neurons, and a single output neuron (Y1):
class Neural_Network(object):
    def __init__(self):
        # parameters
        self.inputLayerSize = 3   # X1, X2, X3
        self.outputLayerSize = 1  # Y1
        self.hiddenLayerSize = 4  # size of the hidden layer
Still inside __init__, set all the weights in the network to random values to start:
        # build weights of each layer, set to random values
        # look at the interconnection diagram to make sense of this
        # 3x4 matrix for input to hidden
        self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
        # 4x1 matrix for hidden layer to output
        self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
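
Because the weights start at random values, every run of the script will converge slightly differently. If you want repeatable runs, you can seed NumPy's random generator before the network object is created later on (this is my own addition, not part of the original code):

# optional (my own addition, not in the original code): seed NumPy's random
# generator before the Neural_Network instance is created so the initial
# weights, and therefore the whole training run, are repeatable
np.random.seed(42)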
The function below implements the feed-forward path through our neural network:
    def feedForward(self, X):
        # feedForward propagation through our network
        # dot product of X (input) and first set of 3x4 weights
        self.z = np.dot(X, self.W1)
        # the activationSigmoid activation function - neural magic
        self.z2 = self.activationSigmoid(self.z)
        # dot product of hidden layer (z2) and second set of 4x1 weights
        self.z3 = np.dot(self.z2, self.W2)
        # final activation function - more neural magic
        o = self.activationSigmoid(self.z3)
        return o
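
It can help to see how the matrix shapes line up in this pass. Once the class is complete, a small trace like this (my own illustration) shows the 8x3 input flowing through the 3x4 and 4x1 weight matrices:

# (my own illustration) trace the shapes of the forward pass for our 8x3 input
nn = Neural_Network()
print(np.dot(X, nn.W1).shape)                        # (8, 4) hidden layer pre-activation
print(nn.activationSigmoid(np.dot(X, nn.W1)).shape)  # (8, 4) hidden layer output
print(nn.feedForward(X).shape)                       # (8, 1) network output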
And now we need to add the backwardPropagate method, which implements the trial-and-error learning our neural network uses: it propagates the output error backwards and adjusts the weights accordingly:
    def backwardPropagate(self, X, y, o):
        # backward propagate through the network
        # calculate the error in output
        self.o_error = y - o
        # apply derivative of activationSigmoid to error
        self.o_delta = self.o_error*self.activationSigmoidPrime(o)
        # z2 error: how much our hidden layer weights contributed to output error
        self.z2_error = self.o_delta.dot(self.W2.T)
        # applying derivative of activationSigmoid to z2 error
        self.z2_delta = self.z2_error*self.activationSigmoidPrime(self.z2)
        # adjusting first set (inputLayer --> hiddenLayer) weights
        self.W1 += X.T.dot(self.z2_delta)
        # adjusting second set (hiddenLayer --> outputLayer) weights
        self.W2 += self.z2.T.dot(self.o_delta)
To train the network, we call feedForward and then backwardPropagate each time we take a training step:
    def trainNetwork(self, X, y):
        # feed forward through the network
        o = self.feedForward(X)
        # and then back propagate the values (feedback)
        self.backwardPropagate(X, y, o)
The sigmoid activation function and the first derivative of the sigmoid activation function are as follows:
    def activationSigmoid(self, s):
        # activation function
        # simple activationSigmoid curve as in the book
        return 1/(1+np.exp(-s))

    def activationSigmoidPrime(self, s):
        # first derivative of activationSigmoid
        # calculus time!
        return s * (1 - s)
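
One subtle point: activationSigmoidPrime expects a value that has already been passed through the sigmoid, which is why the derivative simplifies to s * (1 - s). A quick numerical check (my own addition) illustrates that this matches the true derivative of the sigmoid:

# (my own check) the analytic derivative s*(1-s), with s = sigmoid(x),
# matches a numerical estimate of d/dx sigmoid(x)
x = 0.5
s = 1/(1 + np.exp(-x))          # sigmoid output
analytic = s * (1 - s)          # what activationSigmoidPrime computes
h = 1e-6
numeric = ((1/(1 + np.exp(-(x + h)))) - (1/(1 + np.exp(-(x - h))))) / (2*h)
print(analytic, numeric)        # both roughly 0.2350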
Then add helper methods that save the loss value for each epoch to the CSV file (for graphing in Excel) and save the network weights:
    def saveSumSquaredLossList(self, i, error):
        lossFile.write(str(i) + "," + str(error.tolist()) + '\n')

    def saveWeights(self):
        # save this in order to reproduce our cool network
        np.savetxt("weightsLayer1.txt", self.W1, fmt="%s")
        np.savetxt("weightsLayer2.txt", self.W2, fmt="%s")
Next, we add a method that predicts the output for xPredicted using the weights currently being trained, then create the network and set the number of training epochs:
    def predictOutput(self):
        print("Predicted XOR output data based on trained weights: ")
        print("Expected (X1-X3): \n" + str(xPredicted))
        print("Output (Y1): \n" + str(self.feedForward(xPredicted)))


myNeuralNetwork = Neural_Network()
trainingEpochs = 1000
#trainingEpochs = 100000
What follows is the main learning loop that runs through all the requested epochs. Edit the trainingEpochs variable above to vary the number of epochs used to train your network:
for i in range(trainingEpochs):
    print("Epoch # " + str(i) + "\n")
    print("Network Input : \n" + str(X))
    print("Expected Output of XOR Gate Neural Network: \n" + str(y))
    print("Actual Output from XOR Gate Neural Network: \n" +
          str(myNeuralNetwork.feedForward(X)))
    # mean sum squared loss
    Loss = np.mean(np.square(y - myNeuralNetwork.feedForward(X)))
    myNeuralNetwork.saveSumSquaredLossList(i, Loss)
    print("Sum Squared Loss: \n" + str(Loss))
    print("\n")
    myNeuralNetwork.trainNetwork(X, y)
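
Printing the full input and output arrays at every epoch makes the run slow and noisy. If you prefer, you could replace the loop above with this quieter variation (my own, not the original): it still logs the loss every epoch but only prints a summary every 100 epochs:

# (my own variation, not the original loop) log the loss every epoch but only
# print a short summary every 100 epochs
for i in range(trainingEpochs):
    Loss = np.mean(np.square(y - myNeuralNetwork.feedForward(X)))
    myNeuralNetwork.saveSumSquaredLossList(i, Loss)
    if i % 100 == 0:
        print("Epoch #" + str(i) + "  Sum Squared Loss: " + str(Loss))
    myNeuralNetwork.trainNetwork(X, y)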
Save your training results for reuse and predict the output of the requested value:
myNeuralNetwork.saveWeights()
myNeuralNetwork.predictOutput()
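
The weight files written by saveWeights can be loaded back later to reuse the trained network without retraining. Here is a minimal sketch (my own addition), assuming the two text files are still in the working directory:

# (my own sketch) reload the saved weights into a fresh network and reuse it
reloadedNetwork = Neural_Network()
reloadedNetwork.W1 = np.loadtxt("weightsLayer1.txt").reshape(3, 4)
reloadedNetwork.W2 = np.loadtxt("weightsLayer2.txt").reshape(4, 1)
print(reloadedNetwork.feedForward(xPredicted))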
Now, after running your Python file, you will see the program cycle through 1000 training epochs, print the results of each epoch, and then finally show the final input, output, and prediction:
Epoch # 999
Network Input :
[[0. 0. 0.]
 [0. 0. 1.]
 [0. 1. 0.]
 [0. 1. 1.]
 [1. 0. 0.]
 [1. 0. 1.]
 [1. 1. 0.]
 [1. 1. 1.]]
Expected Output of XOR Gate Neural Network:
[[1.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [1.]]
Actual Output from XOR Gate Neural Network:
[[0.93419893]
 [0.04425737]
 [0.01636304]
 [0.03906686]
 [0.04377351]
 [0.01744497]
 [0.0391143 ]
 [0.93197489]]
Sum Squared Loss:
0.0020575319565093496
Predicted XOR output data based on trained weights:
Expected (X1-X3):
[0. 0. 1.]
Output (Y1):
[0.04422615]
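
If you would rather plot the loss curve in Python instead of Excel, here is a minimal sketch (my own addition; it assumes you have matplotlib installed and that the training script has finished, or that lossFile has been closed, so the CSV is fully written):

# (my own sketch) plot the loss curve that training wrote to SumSquaredLossList.csv
import matplotlib.pyplot as plt

epochs, losses = np.loadtxt("SumSquaredLossList.csv", delimiter=",", unpack=True)
plt.plot(epochs, losses)
plt.xlabel("Epoch")
plt.ylabel("Sum Squared Loss")
plt.title("XOR network training loss")
plt.show()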
So this is how to build a neural network with Python code only. I hope you liked this article on building a neural network with Python. Feel free to ask your questions in the comments section below. You can also follow me on Medium to learn every topic of Machine Learning and Python.
Source: https://thecleverprogrammer.com/2020/09/07/neural-network-with-python-code/