Freezing layers: how to obtain the names of the layers to freeze?

I am trying to do continued training (fine-tuning) on an in-domain dataset and would like to freeze the embedding layers of the encoder and decoder in the pre-trained model. It can be done in the config file:

# this example is from the documentation
# (optional) List of layers to not optimize.
freeze_layers:
  - "encoder/layers/0"
  - "decoder/output_layer"

However, where can I find the right names of the embedding layers to specify in the config file (like "encoder/layers/0" and "decoder/output_layer" in the example above)?

Good question.

The name refers to the path of attributes to follow from the model object. For example, the name encoder/layers/0 refers to the layer model.encoder.layers[0]. So you may need to do some code exploration to infer the layer names.
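
To make that rule concrete, here is a minimal sketch of how such a path could be resolved: "/" separates attribute lookups, and numeric components index into a list. The resolve_layer helper and the Node class are hypothetical, not part of OpenNMT-tf:

# Hypothetical helper mirroring the naming rule described above:
# "/" separates attribute lookups, and numeric parts index into a list.
def resolve_layer(model, path):
    obj = model
    for part in path.split("/"):
        obj = obj[int(part)] if part.isdigit() else getattr(obj, part)
    return obj

# Toy object tree standing in for a real model: model.encoder.layers[0]
class Node:
    pass

model = Node()
model.encoder = Node()
model.encoder.layers = [Node(), Node()]

assert resolve_layer(model, "encoder/layers/0") is model.encoder.layers[0]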

In your case, you can freeze both embedding layers with:

freeze_layers:
  - "examples_inputter"

The examples_inputter attribute is shared by all models and wraps the input layers.
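
If you later want to freeze only one side, the same path rule should apply. For example, assuming the wrapper exposes the encoder-side inputter as a features_inputter attribute (worth verifying against the OpenNMT-tf source for your version), a config like this could target only the source embeddings:

freeze_layers:
  - "examples_inputter/features_inputter"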

It’s working: I didn’t see any indication in the training log that the layers were frozen, but after training for a bit I printed out the weights. The embedding layers remained the same, whereas the others had been updated.
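
For anyone wanting to reproduce that check, a sketch using the TensorFlow checkpoint reader could look like this (the checkpoint paths are placeholders; adapt them to your run):

import numpy as np
import tensorflow as tf

# Placeholder paths: one checkpoint saved before continued training, one after.
before = tf.train.load_checkpoint("run/ckpt-1000")
after = tf.train.load_checkpoint("run/ckpt-2000")

# Compare every variable present in both checkpoints.
for name in before.get_variable_to_shape_map():
    if name.startswith("_"):
        continue  # skip internal bookkeeping entries such as the object graph
    if name not in after.get_variable_to_shape_map():
        continue
    unchanged = np.allclose(before.get_tensor(name), after.get_tensor(name))
    print("frozen " if unchanged else "updated", name)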

Thanks, Guillaume!

Yes, we should definitely add some logs.
