This post is part of a tutorial series:
- Getting through Deep Learning – CNNs (part 1)
- Getting through Deep Learning – TensorFlow intro (part 2)
- Getting through Deep Learning – TensorFlow intro (part 3)
Alright, let's move on to more interesting stuff: linear regression. Since the main focus is TensorFlow, and given the abundance of online resources on the subject, I'll just assume you are familiar with linear regression.
As previously mentioned, a linear regression has the following formula:

Y = b0 + b1 * X

where Y is the dependent variable, X is the independent variable, and b0 and b1 are the parameters we want to adjust.
Let us generate random data and feed it into a linear function. Then, instead of using the closed-form solution, we fit the regression with an iterative algorithm, gradient descent, progressively moving toward a minimal cost.
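For contrast, here is what the closed-form least-squares fit looks like; a minimal NumPy sketch for reference only, not part of the tutorial's TensorFlow pipeline (np.polyfit with degree 1 fits a straight line):

import numpy as np

x = np.random.rand(100).astype(np.float32)
y = 3 + 5 * x + np.random.normal(loc=0.0, scale=0.1, size=100)

# polyfit returns coefficients highest degree first: [b1, b0]
b1, b0 = np.polyfit(x, y, 1)
print(b0, b1)  # close to 3 and 5

The iterative version below reaches (approximately) the same answer, but generalizes to models where no closed-form solution exists.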
import tensorflow as tf
import numpy as np

# Start by creating random values:
x_data = np.random.rand(100).astype(np.float32)
_y_data = 3 + 5 * x_data
y_data = np.vectorize(lambda y: y + np.random.normal(loc=0.0, scale=0.1))(_y_data)

# ... and then initializing the variables b0 and b1 with an arbitrary guess,
# and then defining the linear function:
b0 = tf.Variable(0.5)
b1 = tf.Variable(1.0)
y = b0 + b1 * x_data

# Loss function: mean squared error between predictions and observed values
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
# Note: on older TensorFlow versions the function was called initialize_all_variables()

ses = tf.Session()
ses.run(init)

train_data = []
for step in range(100):
    evals = ses.run([train, b0, b1])[1:]
    train_data.append(evals)  # keep each step's weights, used for plotting below
    if step % 5 == 0:
        print("Step: {step}, evaluation: {evals}".format(step=step, evals=evals))

# Step: 0, evaluation: [4.9611549, 3.5183027]
# Step: 5, evaluation: [3.7335129, 3.6137826]
# Step: 10, evaluation: [3.5216508, 4.0207472]
# Step: 15, evaluation: [3.37235, 4.3058095]
# Step: 20, evaluation: [3.2676356, 4.5057435]
# Step: 25, evaluation: [3.1941922, 4.6459718]
# Step: 30, evaluation: [3.1426811, 4.7443233]
# Step: 35, evaluation: [3.1065528, 4.8133044]
# Step: 40, evaluation: [3.0812135, 4.8616853]
# Step: 45, evaluation: [3.0634415, 4.895618]
# Step: 50, evaluation: [3.0509768, 4.9194169]
# Step: 55, evaluation: [3.0422344, 4.9361095]
# Step: 60, evaluation: [3.0361025, 4.9478168]
# Step: 65, evaluation: [3.0318019, 4.956028]
# Step: 70, evaluation: [3.0287859, 4.9617872]
# Step: 75, evaluation: [3.02667, 4.965827]
# Step: 80, evaluation: [3.0251865, 4.9686594]
# Step: 85, evaluation: [3.0241456, 4.9706464]
# Step: 90, evaluation: [3.0234158, 4.9720407]
# Step: 95, evaluation: [3.0229037, 4.9730182]
We start by initializing the weights, b0 and b1, with arbitrary guesses, which naturally results in a poor fit. However, as the print statements show while the model trains, b0 approaches the target value of 3 and b1 approaches 5, the last printed step being [3.0229037, 4.9730182].
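To make explicit what GradientDescentOptimizer does on each ses.run([train, ...]) call, here is a minimal NumPy sketch of the same loop, with the gradients of the mean squared error written out by hand (same initial guesses and 0.5 learning rate as above):

import numpy as np

x = np.random.rand(100).astype(np.float32)
y = 3 + 5 * x + np.random.normal(0.0, 0.1, size=100)

b0, b1 = 0.5, 1.0   # same initial guesses as the TensorFlow version
lr = 0.5            # learning rate

for step in range(100):
    y_hat = b0 + b1 * x
    error = y_hat - y
    # Gradients of mean((y_hat - y)^2) with respect to b0 and b1:
    grad_b0 = 2 * error.mean()
    grad_b1 = 2 * (error * x).mean()
    b0 -= lr * grad_b0
    b1 -= lr * grad_b1

print(b0, b1)  # should end up close to 3 and 5

TensorFlow computes these same partial derivatives via automatic differentiation, so the two loops follow the same path through weight space, up to the randomness of the generated data.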
The next figure illustrates how the model's fitted line progressively improves as the weights change:
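A figure like this can be produced with matplotlib from the train_data list collected in the training loop above; a minimal sketch (drawing every 10th weight pair as an intermediate line is an arbitrary styling choice):

import matplotlib.pyplot as plt

# Scatter the noisy data, then draw the line for every 10th recorded weight pair
plt.scatter(x_data, y_data, s=10, label="data")
for b0_step, b1_step in train_data[::10]:
    plt.plot(x_data, b0_step + b1_step * x_data, alpha=0.4)

# Highlight the final fit
b0_final, b1_final = train_data[-1]
plt.plot(x_data, b0_final + b1_final * x_data, linewidth=2, label="final fit")
plt.legend()
plt.show()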
Sources:
- Big Data University course "Deep Learning with TensorFlow"
- Demystifying Linear Regression Analysis