
Descending with Derivatives, Because Walking in Circles Is Too Easy

Neural Networks Made Easy: A Hands-On Introduction, Part 2

Damini Vadrevu
9 min read · 1 day ago

We left off at the activation function and entered the realm of non-linearity in our last article. I recommend reading that one first, because I take the order of events quite seriously. We're still going to cover the basics before putting it all together with neurons, layers, and hidden layers, because I take prerequisites quite seriously too. The math shouldn't be too hard, especially since all we're really measuring is disappointment, and life has given you plenty of practice with that.

Image by freepik

1. Loss Function

Once again, we are predicting house prices. You have the actual house price, and you have the price your model predicted. To judge how well or badly the model did, the logical move is to measure how far the predictions are from the true values. That distance is the loss: a number that quantifies your disappointment when the model gets it wrong.

  • The actual price of a house is $500,000, but the model predicts $450,000.
  • The loss is the difference between the actual value and the predicted value: $50,000 of disappointment (see the sketch after this list).
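If seeing it in code helps, here is a minimal sketch in plain Python. The extra house prices are invented for illustration, and the mean squared error at the end is a standard loss we haven't formally introduced yet, so treat it as a preview rather than the only way to measure disappointment.

```python
# A minimal sketch of measuring disappointment.
# The prices below are made up for illustration.
actual = [500_000, 320_000, 750_000]
predicted = [450_000, 340_000, 700_000]

# Absolute error per house: how far off each prediction is.
errors = [abs(a - p) for a, p in zip(actual, predicted)]
print(errors)  # [50000, 20000, 50000]

# Mean squared error: square each difference, then average.
# Squaring punishes big misses much more than small ones.
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
print(mse)  # 1800000000.0
```

Notice how squaring blows the numbers up: a single $50,000 miss contributes 2.5 billion to the sum. That's intentional, and it's one reason mean squared error is such a popular choice.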


Written by Damini Vadrevu

Humans are complex, and so is our data. I make data science easy to understand here. Welcome!
