machine learning - Training Error: what's the point?
What is the overall point of training error when the goal of regression is making predictions?
You might say, "well, you see, training error can determine which model complexity is best to use."
And to that I'd say, "no, it can't. Low training error just means the model conforms to whatever data you trained it with, a.k.a. overfitting."
So what's the point of calculating training error if it's not a predictive measure of performance? Especially when you can just say "to hell with training error" and use validation error instead.
When would you ever use training error? Low training error can be indicative of overfitting, so what use is it?
Training error can be a bad metric of model performance, as you have correctly pointed out. However, there is no getting around the fact that you need to train a model to make meaningful predictions.
That is why you need training, validation, and test phases and data sets. Overfitting that happens on the training dataset can be alleviated to an extent by using a randomly sub-sampled validation dataset, because if you have overfitted, the model will not generalize (you should see training error decrease monotonically as model complexity increases, while validation error plateaus at some point, after which additional model complexity increases validation error). However, if you do not train the model at all, you have no model to validate!
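To make that plateau concrete, here is a minimal sketch, assuming scikit-learn and a synthetic sine-plus-noise dataset (my own choices, not anything from the original post): polynomial regressions of increasing degree, with training MSE compared against held-out validation MSE.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic data: a noisy sine curve (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

# Hold out a validation set the model never trains on
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

for degree in [1, 3, 5, 9, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  val MSE={val_mse:.3f}")
```

Training MSE keeps falling as the degree grows, while validation MSE bottoms out and then climbs back up, which is exactly the pattern described above.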
The model needs to be trained; there is no getting around that. However, training error is useless for evaluation. One needs to perform cross-validation in order to ensure the model is generalizable. The bottom line is that using anything the model has seen during the training phase to evaluate its performance is invalid. Training error is useful for model fitting, not evaluation. The correct way is cross-validation, regardless of what the OP claims in the discussion below.
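As a small illustration of that point, here is a sketch using scikit-learn's cross_val_score (the dataset and model choices are my own assumptions, not from the answer): each fold's score comes from data the model was not fit on.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Same assumed synthetic data as above
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

for degree in [1, 3, 5, 9]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    # cross_val_score refits the model on each training fold, so it is
    # never evaluated on data it was fit on
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    print(f"degree={degree}  5-fold CV MSE={-scores.mean():.3f}")
```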
You should look at the concept of the bias-variance tradeoff; it has a direct bearing on your question and should clarify your doubt.
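For a rough, hands-on view of that tradeoff (the sine target, noise level, and polynomial degrees here are all assumptions for the sketch), one can estimate bias squared and variance by refitting the same model class on many independently resampled training sets.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
f = lambda x: np.sin(x)                    # assumed true function
x_test = np.linspace(-3, 3, 50)[:, None]   # fixed evaluation grid

for degree in [1, 9]:
    preds = []
    for _ in range(200):                   # many training sets from the same process
        X = rng.uniform(-3, 3, size=(30, 1))
        y = f(X).ravel() + rng.normal(scale=0.3, size=30)
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X, y)
        preds.append(model.predict(x_test))
    preds = np.array(preds)
    # bias^2: squared gap between the average prediction and the truth;
    # variance: how much predictions scatter across training sets
    bias2 = np.mean((preds.mean(axis=0) - f(x_test).ravel()) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree={degree}  bias^2={bias2:.3f}  variance={variance:.3f}")
```

The low-degree model shows high bias and low variance; the high-degree model shows the reverse, which is why neither training error alone nor raw model complexity tells you which model will predict best.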