Supervised learning
In this form of ML, the algorithm is presented with a large number of training samples, each of which contains values for the input features that are used to predict an output feature. This output feature may take a continuous range of values or come from a discrete set of labels. Based on the type of output, supervised ML algorithms are divided into two categories (illustrated in the sketch after this list):
- Classification: Algorithms that predict a discrete label as the output feature, such as normal versus not normal, or one of a set of news categories
- Regression: Algorithms whose output feature takes real values, for example, the number of votes a political party might receive in an election, or the temperature at which a material is predicted to melt
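The distinction is easiest to see in code. The following is a minimal sketch, assuming scikit-learn is available; the toy feature values and labels are invented purely for illustration and carry no real meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Each row is one training sample; each column is one input feature.
X = np.array([[1.0, 20.0],
              [2.0, 35.0],
              [3.0, 50.0],
              [4.0, 65.0]])

# Classification: the output feature is a discrete label.
y_class = np.array(["normal", "not normal", "normal", "not normal"])
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5, 40.0]]))   # predicts one of the discrete labels

# Regression: the output feature is a real-valued quantity.
y_reg = np.array([10.5, 18.0, 25.5, 33.0])
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[2.5, 40.0]]))   # predicts a continuous value
```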
Most ML enthusiasts, when they begin studying machine learning, familiarize themselves with supervised learning first because of its intuitive simplicity. It includes some of the simplest algorithms, which can be understood without deep mathematical knowledge and which build on mathematics taught in the final years of school. Some of the best-known supervised learning algorithms are linear regression, logistic regression, support vector machines, and k-nearest neighbors.
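To give a feel for how little code one of these algorithms needs, here is a minimal sketch of a k-nearest neighbors classifier, assuming scikit-learn and using its bundled Iris dataset; the choice of k=5 and the 25% test split are arbitrary illustrative settings, not recommendations.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small labeled dataset and hold out a portion for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit the classifier on the training samples and score it on unseen data.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```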