Training error and generalization error
The mistakes that a model makes while predicting during its training phase are collectively referred to as its training error. The mistakes that the model makes when evaluated on held-out data, such as a validation set or a test set, are referred to as its generalization error.
If we were to map these two types of error onto bias and variance (and, by extension, underfitting and overfitting), the relationship would look something like the following (although it may not always be as linear as the diagrams depict):
If an ML model is underfitting (high bias), then its training error will be high. On the other hand, if the model is overfitting (high variance), then its training error will be low but its generalization error will be high.
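The relationship above can be made concrete with a small sketch. The following toy example (not from the book; the data and model choices are illustrative assumptions) fits polynomials of two different degrees to noisy samples of a sine curve and compares the error on the training points against the error on held-out points. The low-degree fit underfits, so its training error stays high; the high-degree fit overfits, driving its training error down while its error on the held-out set remains much larger:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a sine curve (an illustrative assumption)
x = rng.uniform(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 40)

# Split into a training set and a held-out set
x_train, y_train = x[:30], y[:30]
x_test, y_test = x[30:], y[30:]

def mse(coefs, xs, ys):
    """Mean squared error of a fitted polynomial on the given points."""
    return float(np.mean((np.polyval(coefs, xs) - ys) ** 2))

errors = {}
for degree in (1, 15):  # degree 1 underfits; degree 15 overfits
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = mse(coefs, x_train, y_train)      # training error
    test_err = mse(coefs, x_test, y_test)         # generalization error (estimate)
    errors[degree] = (train_err, test_err)
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
```

Running this shows the pattern described above: the degree-1 model has a high training error (high bias), while the degree-15 model has a near-zero training error but a noticeably larger held-out error (high variance).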
We will look at a standard ML workflow in the following section.