Applied Deep Learning with Keras

Introduction

In this chapter, you will learn how to implement your first neural network using Keras. This chapter covers the basics of deep learning and will provide you with the foundation necessary to build highly complex neural network architectures. We start by extending the logistic regression model to a simple single-layer neural network and then proceed to more complicated neural networks with multiple hidden layers. In the process, you will learn about the basic concepts underlying neural networks, including forward propagation for making predictions, computing the loss, backpropagation for computing the derivatives of the loss with respect to the model parameters, and finally, gradient descent for learning the optimal parameters of the model. You will also learn about the various choices available when building and training a neural network in terms of activation functions, loss functions, and optimizers.
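As a preview, the following is a minimal sketch, not the book's exact code, of the kind of model this chapter builds: a single-layer Keras network that extends logistic regression with one hidden layer. The layer sizes, the placeholder random data, and the choice of SGD with binary cross-entropy are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Placeholder data: 100 samples with 10 features each, binary targets
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=(100, 1))

# A hidden layer with a nonlinear activation turns logistic regression
# into a simple neural network; a sigmoid output keeps the binary setup.
model = Sequential([
    Input(shape=(10,)),
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Loss and optimizer are among the choices discussed in this chapter:
# here, binary cross-entropy loss and a stochastic gradient descent optimizer.
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

# fit() runs forward propagation, computes the loss, backpropagates the
# gradients, and updates the parameters via gradient descent.
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
```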

Furthermore, you will learn how to evaluate your model and understand issues such as overfitting and underfitting, including how they can impact the performance of your model and how to detect them. You will learn about the drawbacks of evaluating a model on the same dataset used for training, and the alternative approach of holding back a part of the available dataset for evaluation purposes. Subsequently, you will learn how comparing the model's error rate on each of these two subsets of the dataset can be used to detect problems such as high bias and high variance in the model. Lastly, you will learn about a technique called early stopping to reduce overfitting, which is again based on comparing the model's error rate on the two subsets of the dataset.
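The sketch below illustrates these evaluation ideas under assumed settings that are not taken from the book: a held-back validation split of 20% and an EarlyStopping callback with a patience of 5 epochs, applied to placeholder data.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.callbacks import EarlyStopping

# Placeholder data: 200 samples with 10 features each, binary targets
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, size=(200, 1))

model = Sequential([
    Input(shape=(10,)),
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

# Stop training once the validation loss has not improved for 5 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# validation_split holds back the last 20% of the data for evaluation.
# Comparing the training and validation losses in `history` hints at
# overfitting (a growing gap, high variance) or underfitting (both losses
# remain high, high bias).
history = model.fit(X, y, epochs=100, batch_size=16,
                    validation_split=0.2,
                    callbacks=[early_stop], verbose=0)
print(history.history['loss'][-1], history.history['val_loss'][-1])
```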