Hands-On Neural Networks with Keras

Quantifying loss

Since the loss value quantifies the difference between our predicted and actual outputs, a high loss value tells us that our model's predictions are far from the actual output. Conversely, a low loss value indicates that our model is closing the gap between the predicted and actual output. Ideally, we want our loss to converge to zero, meaning that there is effectively no difference between what our model thinks it sees and what it is actually shown. We make our loss converge to zero by using another mathematical trick, one grounded in calculus. How, you ask?
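Before we get there, it helps to see what a loss value actually looks like. The following is a minimal sketch, using plain NumPy and made-up numbers, of one common loss function, the mean squared error, computed between a set of hypothetical predictions and their actual target values:

```python
import numpy as np

# Hypothetical actual outputs and model predictions (made-up values)
y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8, 0.4])

# Mean squared error: the average of the squared differences
# between the predicted and actual outputs
loss = np.mean((y_true - y_pred) ** 2)

print(loss)  # 0.1125 -- a perfect model would produce 0.0
```

Squaring the differences keeps every error positive and penalizes large mistakes more heavily than small ones, while taking the mean collapses everything into a single scalar that we can try to drive toward zero.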