Training (fine-tuning)

In order to fine-tune the network, we will need to unfreeze some of those frozen layers. How many layers you unfreeze is up to you; you can unfreeze as much of the network as you like. In practice, though, we usually only see benefits from unfreezing the top-most layers. Here I'm unfreezing only the very last inception block, which starts at layer 249 in the network graph. The following code depicts this technique:

from keras.optimizers import SGD

def build_model_fine_tuning(model, learning_rate=0.0001, momentum=0.9):
    # Freeze everything below the last inception block
    for layer in model.layers[:249]:
        layer.trainable = False
    # Unfreeze the last inception block (layer 249 onward)
    for layer in model.layers[249:]:
        layer.trainable = True
    # Recompile so the trainability changes take effect; a small
    # learning rate keeps the pretrained weights from being destroyed
    model.compile(optimizer=SGD(lr=learning_rate, momentum=momentum),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    ...
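To make the effect concrete, here is a minimal usage sketch. It is not from the book: the binary classification head attached to InceptionV3 is a hypothetical stand-in, included only so build_model_fine_tuning has a model to operate on, and the final check simply confirms that only the top of the network is left trainable.

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Load the convolutional base with ImageNet weights and no classifier head
base = InceptionV3(weights='imagenet', include_top=False)

# Hypothetical binary classification head, for illustration only
x = GlobalAveragePooling2D()(base.output)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=output)

# Unfreeze the last inception block and recompile
build_model_fine_tuning(model)

# Sanity check: only layers from index 249 up should be trainable
trainable = sum(1 for layer in model.layers if layer.trainable)
print('{} of {} layers are trainable'.format(trainable, len(model.layers)))

Note that in Keras, changing a layer's trainable flag has no effect on an already compiled model, which is why the function ends by calling compile again.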
