
The generator network
To implement the generator network, we need to create a Keras model and add the neural network layers. The steps required to implement the generator network are as follows:
- Start by specifying the values for different hyperparameters:
z_size = 200
gen_filters = [512, 256, 128, 64, 1]
gen_kernel_sizes = [4, 4, 4, 4, 4]
gen_strides = [1, 2, 2, 2, 2]
gen_input_shape = (1, 1, 1, z_size)
gen_activations = ['relu', 'relu', 'relu', 'relu', 'sigmoid']
gen_convolutional_blocks = 5
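With these values, each transposed convolution block increases the spatial resolution of the feature volume, so the 1 x 1 x 1 latent input grows to a 64 x 64 x 64 output. A minimal sketch of that size calculation is shown next; it assumes 'valid' padding in the first block and 'same' padding in the remaining blocks, matching the code in the following steps:
# Sketch only: how the spatial size grows across the five blocks
size = 1
for kernel, stride in zip(gen_kernel_sizes, gen_strides):
    if stride == 1:
        size = (size - 1) * stride + kernel  # 'valid' transposed convolution
    else:
        size = size * stride                 # 'same' transposed convolution
    print(size)  # prints 4, 8, 16, 32, 64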
- Next, create an input layer for the network. The input to the generator network is a vector sampled from a probabilistic latent space:
input_layer = Input(shape=gen_input_shape)
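For reference, such a latent vector can be sampled with NumPy. The following is only an illustrative sketch; the batch size and distribution shown here are assumptions, not necessarily the values used for training:
import numpy as np

# Illustrative only: sample a batch of latent vectors matching gen_input_shape
batch_size = 8  # hypothetical batch size
z_sample = np.random.normal(0, 1, size=(batch_size, 1, 1, 1, z_size))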
- Then, add the first 3D transpose convolution (or 3D deconvolution) block, as shown in the following code:
# First 3D transpose convolution (or 3D deconvolution) block
a = Deconv3D(filters=gen_filters[0],
             kernel_size=gen_kernel_sizes[0],
             strides=gen_strides[0])(input_layer)
a = BatchNormalization()(a, training=True)
a = Activation(activation=gen_activations[0])(a)
- Next, add four more 3D transpose convolution (or 3D deconvolution) blocks, as shown in the following code:
# Next 4 3D transpose convolution (or 3D deconvolution) blocks
for i in range(gen_convolutional_blocks - 1):
    a = Deconv3D(filters=gen_filters[i + 1],
                 kernel_size=gen_kernel_sizes[i + 1],
                 strides=gen_strides[i + 1], padding='same')(a)
    a = BatchNormalization()(a, training=True)
    a = Activation(activation=gen_activations[i + 1])(a)
- Then, create a Keras model and specify the inputs and the outputs for the generator network:
model = Model(inputs=input_layer, outputs=a)
- Finally, wrap the entire code for the generator network, together with the required Keras imports, inside a function called build_generator():
from keras.layers import Input, Activation, BatchNormalization, Deconv3D
from keras.models import Model
# Note: in some Keras versions, Deconv3D must be imported from
# keras.layers.convolutional; it is an alias of Conv3DTranspose.


def build_generator():
    """
    Create a Generator Model with hyperparameter values defined as follows
    :return: Generator network
    """
    z_size = 200
    gen_filters = [512, 256, 128, 64, 1]
    gen_kernel_sizes = [4, 4, 4, 4, 4]
    gen_strides = [1, 2, 2, 2, 2]
    gen_input_shape = (1, 1, 1, z_size)
    gen_activations = ['relu', 'relu', 'relu', 'relu', 'sigmoid']
    gen_convolutional_blocks = 5

    input_layer = Input(shape=gen_input_shape)

    # First 3D transpose convolution (or 3D deconvolution) block
    a = Deconv3D(filters=gen_filters[0],
                 kernel_size=gen_kernel_sizes[0],
                 strides=gen_strides[0])(input_layer)
    a = BatchNormalization()(a, training=True)
    a = Activation(activation=gen_activations[0])(a)

    # Next 4 3D transpose convolution (or 3D deconvolution) blocks
    for i in range(gen_convolutional_blocks - 1):
        a = Deconv3D(filters=gen_filters[i + 1],
                     kernel_size=gen_kernel_sizes[i + 1],
                     strides=gen_strides[i + 1], padding='same')(a)
        a = BatchNormalization()(a, training=True)
        a = Activation(activation=gen_activations[i + 1])(a)

    gen_model = Model(inputs=input_layer, outputs=a)
    gen_model.summary()
    return gen_model
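Once build_generator() is defined, a quick sanity check along the following lines confirms that the network maps a batch of latent vectors to 64 x 64 x 64 voxel grids. This is only a sketch; the batch size and sampling distribution are illustrative:
import numpy as np

generator = build_generator()
z_sample = np.random.normal(0, 1, size=(8, 1, 1, 1, 200))  # illustrative latent batch
generated_volumes = generator.predict(z_sample)
print(generated_volumes.shape)  # expected: (8, 64, 64, 64, 1)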
We have successfully created a Keras model for the generator network. Next, we will create a Keras model for the discriminator network.