Cost functions
To quickly recap: we have seen how a basic perceptron works and what its pitfalls are. We then saw how activation functions overcome those pitfalls, giving rise to the other neuron types in use today.
Now, we are going to look at how we can tell when a neuron is wrong. For any type of neuron to learn, it needs to know when it outputs the wrong value and by what margin. The most common way to measure how wrong a neural network is, is to use a cost function.
A cost function quantifies the difference between the output we get from a neuron and the output we need from that neuron. Two cost functions are in common use: mean squared error and cross entropy.
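To make the two cost functions concrete, here is a minimal sketch (not taken from the book) of how each one scores a prediction against a target, written in Python with NumPy for illustration; the function names and the example values are our own.

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average of the squared differences between targets and predictions.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy; eps guards against taking log(0).
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_pred) +
                    (1.0 - y_true) * np.log(1.0 - y_pred))

# Example: a neuron should output 1 but produces 0.8.
print(mean_squared_error([1.0], [0.8]))  # 0.04
print(cross_entropy([1.0], [0.8]))       # ~0.223
```

Both functions return zero when the prediction matches the target exactly and grow as the prediction drifts away; that value is what training will later try to minimize.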