Intelligent Projects Using Python

Model checkpoints based on validation log loss 

It is always good practice to save the model whenever the validation metric chosen for evaluation improves. For this project, we will track the validation log loss and save the model each time that score improves across the epochs. This way, at the end of training we retain the model weights that produced the best validation score, rather than the final weights from when training stopped. Training continues until the maximum number of epochs is reached, or until the validation log loss has not decreased for 10 consecutive epochs. We will also reduce the learning rate when the validation log loss does not improve for 3 consecutive epochs. The following code block can be used to perform the learning rate reduction and checkpoint operations:

from keras.callbacks import (EarlyStopping, CSVLogger,
                             ModelCheckpoint, ReduceLROnPlateau)

# Halve the learning rate if val_loss has not improved for 3 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.50,
                              patience=3, min_lr=0.000001)

callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, mode='min', verbose=1),
    CSVLogger('keras-5fold-run-01-v1-epochs_ib.log', separator=',',
              append=False),
    reduce_lr,
    ModelCheckpoint(
        'kera1-5fold-run-01-v1-fold-' + str('%02d' % (k + 1)) + '-run-'
        + str('%02d' % (1 + 1)) + '.check',
        monitor='val_loss', mode='min',  # 'min' because a lower log loss is better
        save_best_only=True,
        verbose=1)
]

As you can see in the preceding code block, the learning rate is halved (factor=0.50) if the validation loss hasn't improved in 3 (patience=3) epochs. Similarly, training is stopped (by the EarlyStopping callback) if the validation loss hasn't decreased in 10 (patience=10) epochs. The model is saved whenever the validation log loss decreases, under the filename constructed in the following code snippet:
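As a quick illustration of the schedule described above, the following sketch traces the arithmetic that ReduceLROnPlateau performs in plain Python; the starting learning rate of 0.001 is an assumption for illustration, not a value from this project:

```python
def reduced_lr(initial_lr, n_plateaus, factor=0.50, min_lr=0.000001):
    # Mirror ReduceLROnPlateau's arithmetic: halve the learning rate at
    # each plateau (3 epochs with no val_loss improvement), never going
    # below min_lr
    lr = initial_lr
    for _ in range(n_plateaus):
        lr = max(lr * factor, min_lr)
    return lr

# Starting from 0.001, three plateaus:
# 0.001 -> 0.0005 -> 0.00025 -> 0.000125
print(reduced_lr(0.001, 3))
```

After enough plateaus the rate bottoms out at min_lr=0.000001 and stays there, which prevents the optimizer from being driven to a vanishingly small step size.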

'kera1-5fold-run-01-v1-fold-' + str('%02d' % (k + 1)) + '-run-' + str('%02d' % (1 + 1)) + '.check'

The validation log loss for each epoch of the training process is recorded by the CSVLogger callback in the keras-5fold-run-01-v1-epochs_ib.log file. Note that this log file is a record for later inspection; the checkpointing, learning rate reduction, and early stopping callbacks each monitor the validation log loss directly, through their monitor='val_loss' setting.

The model for each fold is saved by the ModelCheckpoint callback to its user-defined path, while during inference the models are loaded back into memory by using the keras.models.load_model function.
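As a sketch of the inference side, the per-fold checkpoint filenames produced by the ModelCheckpoint callback above can be reconstructed and passed to keras.models.load_model; the helper name and the five-model loading loop below are illustrative assumptions, not code from this project:

```python
def fold_checkpoint_path(k, run=2):
    """Reconstruct the per-fold checkpoint filename produced by the
    ModelCheckpoint callback above (k is the zero-based fold index)."""
    return 'kera1-5fold-run-01-v1-fold-%02d-run-%02d.check' % (k + 1, run)

# At inference time, each fold's best model can be restored, e.g.:
#   from keras.models import load_model
#   models = [load_model(fold_checkpoint_path(k)) for k in range(5)]
```

Loading all five fold models this way allows their predictions to be averaged, which is the usual way the models from a 5-fold run are combined at inference time.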