Friday, February 15, 2019

Learning curve (machine learning)

Justin Ormont: Created as stub w/ content from Learning_curve#In_machine_learning and http://bit.ly/2GHq2oC (BSD licensed)




[[File:Learning Curves (Naive Bayes).png|thumb|Learning curve showing training score and cross validation score]]


A '''learning curve''' shows the validation and training score of an estimator for varying numbers of training samples. It is a tool to find out how much a [[machine learning]] model benefits from adding more training data and whether the estimator suffers more from a variance error or a bias error. If both the validation score and the training score converge to a value that is too low as the size of the training set increases, the model will not benefit much from more training data.<ref name="scikit-learn_learning-curve">scikit-learn documentation: "Learning curve"</ref>
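A minimal sketch of how such a curve can be computed with scikit-learn's `learning_curve` utility. The data here is synthetic and purely illustrative; the idea is to inspect how the mean training and cross-validation scores evolve as the training set grows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

# Synthetic classification data (illustrative only)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Score the estimator at 5 increasing training-set sizes, with 5-fold CV
train_sizes, train_scores, valid_scores = learning_curve(
    GaussianNB(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)

print(train_sizes)                 # absolute number of samples at each step
print(train_scores.mean(axis=1))   # mean training score per size
print(valid_scores.mean(axis=1))   # mean cross-validation score per size
```

If the two mean-score curves converge to a similarly low value, gathering more data is unlikely to help; a persistent gap between them instead suggests a variance (overfitting) problem.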

In machine learning, learning curves are useful for many purposes, including comparing different algorithms,<ref></ref> choosing model parameters during design,<ref></ref> adjusting optimization to improve convergence, and determining the amount of data used for training.<ref></ref>
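As an illustration of the first use, comparing algorithms, one can compute a learning curve for each candidate model on the same data and cross-validation splits. The models and data below are arbitrary choices for the sketch, not a recommendation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic data shared by both candidate algorithms (illustrative only)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
sizes = np.linspace(0.1, 1.0, 5)

# One validation-score curve per algorithm, on identical CV splits
curves = {}
for name, model in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
    _, _, valid_scores = learning_curve(model, X, y, train_sizes=sizes, cv=5)
    curves[name] = valid_scores.mean(axis=1)
    print(name, curves[name].round(3))
```

Plotting the two curves against the training-set size shows not only which model scores higher, but also which one would benefit more from additional data.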

In the machine learning domain, there are two connotations of learning curves, differing in the x-axis of the curves: the experience of the model is graphed either as the number of training examples used for learning or as the number of iterations used in training the model.<ref></ref>
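For the second connotation, the curve can be recorded directly during optimization by logging the loss at each iteration. A minimal sketch with plain gradient descent on a synthetic least-squares problem (all data, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

# Synthetic linear-regression problem with a small amount of noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)          # initial parameters
lr = 0.1                 # step size (illustrative)
losses = []              # iteration-based learning curve

for _ in range(50):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad
    losses.append(float(np.mean((X @ w - y) ** 2)))

print(losses[0], losses[-1])   # loss falls toward the noise floor
```

Plotting `losses` against the iteration index gives the training curve; flattening of this curve is one signal used when adjusting optimization to improve convergence.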

==References==




from Wikipedia - New pages [en] http://bit.ly/2GJSma3
via IFTTT
