## Linear Regression in TensorFlow

In linear regression you train your model to find the line that best fits your data. To start off, you will need to import the standard libraries: pandas, NumPy, Matplotlib, and lastly TensorFlow:
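A minimal set of imports for the steps below (this assumes TensorFlow 1.x; under TensorFlow 2 the same calls are available through `tf.compat.v1`):

```python
import numpy as np               # random weight/bias initialisation
import pandas as pd              # loading the dataset
import matplotlib.pyplot as plt  # scatter plots of the data and fitted line
import tensorflow as tf          # placeholders, variables, optimizer (TF 1.x API)
```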

Here you will use a dataset that maps the relationship between years of experience and salary. Take the first column:

x_train = dataset[:, 0]

Use it for the independent variable. Then take the second column and use it for your dependent variable:

y_train = dataset[:, 1]

plt.scatter(x_train, y_train)

plt.show()

The result is:

As the graph shows, people with four years of experience tend to make roughly $60,000.
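The dataset itself isn't shown in the tutorial; as an illustration, the column slicing above can be tried on a small stand-in array (the values here are assumed, not the tutorial's real data):

```python
import numpy as np

# Stand-in rows of (years of experience, salary); values assumed for illustration.
dataset = np.array([
    [1.0, 39000.0],
    [2.0, 46000.0],
    [4.0, 60000.0],
    [6.0, 83000.0],
])

x_train = dataset[:, 0]  # first column: years of experience
y_train = dataset[:, 1]  # second column: salary

print(x_train)  # [1. 2. 4. 6.]
```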

Next, define your hyperparameters:
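The tutorial doesn't list the exact values; typical choices for this kind of problem might be:

```python
epochs = 100          # number of passes over the training data (assumed value)
learning_rate = 0.01  # step size for gradient descent (assumed value)
```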

The number of epochs is the number of iterations, that is, the number of times you end up computing the gradient. The learning rate scales the magnitude of the gradient: it is the size of the step you take at every iteration. Next, define your TensorFlow placeholders:

X = tf.placeholder(tf.float32)

Y = tf.placeholder(tf.float32)

Next, start by defining a random weight and a random bias; you can use NumPy to do that:

W = tf.Variable(np.random.randn(), name='weight')

B = tf.Variable(np.random.randn(), name='bias')

In linear regression the model is simply the equation of a line, so all you need to do is multiply W by X and add the bias:

y_predicted = W * X + B

Now you need to define the cost function; use the mean squared error:

mean_square_error = tf.reduce_sum((Y - y_predicted) ** 2) / x_train.shape[0]
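The same cost can be checked in plain NumPy: summing the squared errors and dividing by the number of samples is exactly the mean of the squared errors (the toy numbers below are assumed):

```python
import numpy as np

y_true = np.array([40000.0, 50000.0, 60000.0])
y_pred = np.array([42000.0, 49000.0, 61000.0])

n = y_true.shape[0]
mse_by_hand = np.sum((y_true - y_pred) ** 2) / n   # the formula used above
mse_builtin = np.mean((y_true - y_pred) ** 2)      # NumPy's shorthand

print(mse_by_hand == mse_builtin)  # True
```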

• Next, define your optimizer. In this example you will use gradient descent. An optimizer essentially tries to minimize your cost function: you pass in the step size, which is the learning rate, and tell it to minimize your mean squared error.

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(mean_square_error)

• Now you need to initialize your global variables:

init = tf.global_variables_initializer()

• Now move ahead and use TensorFlow's Session API to train your model. Create a session, run the initializer, and loop over the data:

```
with tf.Session() as sesh:
    sesh.run(init)
    for epoch in range(epochs):
        for x, y in zip(x_train, y_train):
            sesh.run(optimizer, feed_dict={X: x, Y: y})
```

In the code above, for every epoch the optimizer runs once per training example, with the values passed in through a feed dictionary.

Next, print out the mean squared error, the weight, and the bias every 10 epochs, then run the script.
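Since that snippet isn't shown, here is a self-contained NumPy sketch of the same idea: per-sample gradient-descent updates, printing the mean squared error, weight, and bias every 10 epochs. The synthetic data and hyperparameters are assumed, and the hand-written gradients stand in for what `GradientDescentOptimizer` computes automatically:

```python
import numpy as np

np.random.seed(0)
# Synthetic stand-in for the experience/salary data, rescaled to small values:
# a perfect line y = 3x + 2 that the model should recover.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 3.0 * x_train + 2.0

w = np.random.randn()  # random initial weight, as in the TF code
b = np.random.randn()  # random initial bias
learning_rate = 0.1
epochs = 200

for epoch in range(epochs):
    for x, y in zip(x_train, y_train):
        y_pred = w * x + b
        # Gradients of the per-sample squared error (y_pred - y) ** 2.
        w -= learning_rate * 2 * (y_pred - y) * x
        b -= learning_rate * 2 * (y_pred - y)
    if epoch % 10 == 0:
        mse = np.mean((w * x_train + b - y_train) ** 2)
        print(f"epoch {epoch}: mse={mse:.6f}, w={w:.3f}, b={b:.3f}")

# w and b should end up close to the true slope 3.0 and intercept 2.0.
```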

When you run it, at the start the cost is large: you are summing all the errors and taking the average. As you train your model this value gets lower, because the line approximates your data better and ends up closer to the data points. You can also see in the output that the weight, which is the slope of your line, converges toward roughly 10,000, while the bias keeps increasing. If you wanted to plot the line, you could do:
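The plotting code isn't shown; a minimal Matplotlib sketch might look like this (the trained weight, bias, and data values below are assumed stand-ins):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Assumed stand-ins for the trained parameters and the training data.
w, b = 10000.0, 25000.0
x_train = np.array([1.0, 2.0, 4.0, 6.0])
y_train = np.array([39000.0, 46000.0, 60000.0, 83000.0])

plt.scatter(x_train, y_train, label="data")
plt.plot(x_train, w * x_train + b, color="red", label="fitted line")
plt.xlabel("Years of experience")
plt.ylabel("Salary")
plt.legend()
plt.savefig("regression_line.png")
```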

After adding the plotting code, run the script again:

Above you can see the line roughly approximating your data points. Given someone with six years of experience, you would predict that they make roughly \$80,000, which isn't far off the nearby data point.
