Alright, let's move on to more interesting stuff: linear regression. Since the main focus here is TensorFlow, and given the abundance of online resources on the subject, I'll just assume you are familiar with linear regression.

As previously mentioned, a linear regression has the following formula:

Y = b0 + b1·X

Here Y is the dependent variable, X is the independent variable, and b0 and b1 are the parameters we want to adjust.

Let us generate random data and feed it into a linear function. Then, instead of using the closed-form solution, we use an iterative algorithm that progressively moves toward a minimal cost; in this case, we use gradient descent to fit the linear regression.

We start by initializing the weights – b0 and b1 – with random values, which naturally results in a poor fit. However, as the print statements show while the model trains, b0 approaches the target value of 3, and b1 approaches 5, with the last printed step: [3.0229037, 4.9730182]
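Since the original listing isn't reproduced here, the procedure can be sketched in plain NumPy rather than TensorFlow; the data generation (true b0 = 3, b1 = 5), the learning rate, and the step count are my assumptions, but the gradient-descent updates mirror what the TensorFlow version does under the hood:

```python
import numpy as np

# Synthetic data: y = 3 + 5x + noise (the target values from the post).
rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 + 5.0 * x + rng.normal(scale=0.1, size=200)

# Initialize the weights randomly, which gives a poor initial fit.
b0, b1 = rng.normal(size=2)

learning_rate = 0.1  # assumed hyperparameters, not from the original post
for step in range(500):
    y_hat = b0 + b1 * x
    error = y_hat - y
    # Gradients of the mean squared error cost with respect to b0 and b1.
    grad_b0 = 2.0 * error.mean()
    grad_b1 = 2.0 * (error * x).mean()
    b0 -= learning_rate * grad_b0
    b1 -= learning_rate * grad_b1
    if step % 100 == 0:
        print(step, [b0, b1])

print("final:", [b0, b1])  # close to the targets 3 and 5
```

In TensorFlow the gradients would come from automatic differentiation rather than the hand-derived expressions above, but the update rule is the same.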

The next figure illustrates how the model's fitted line progressively improves as the weights change:

I'm a Big Data/Machine Learning Engineer and co-founder of GoSmarten.
I blog about data-related topics, from collecting and transforming massive amounts of data to predictive systems and setting those systems up with DevOps approaches. Mostly I try to share lessons from the trenches that helped me and might help others too.
In case you need help kickstarting a data project, or with a current one, feel free to contact me at diogo [at] gosmarten.com
LinkedIn: https://de.linkedin.com/in/diogoaurelio