I want to run my model on mobile devices.
As far as I know, OpenNMT-py models are not compatible with PyTorch Mobile.
If you are not afraid of compiling projects from source, you can look into converting your model to CTranslate2 and compiling the library for your mobile system.
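For reference, here is a minimal conversion sketch using the CTranslate2 Python converter (the paths are placeholders; check the CTranslate2 documentation for the options available in your version):

```python
import ctranslate2

# "model_step_100000.pt" and "ende_ctranslate2" are placeholder paths.
converter = ctranslate2.converters.OpenNMTPyConverter("model_step_100000.pt")
converter.convert("ende_ctranslate2", quantization="int8")  # int8 helps on mobile CPUs
```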
On the other hand, some OpenNMT-tf models support conversion to TensorFlow Lite, which is another way to go mobile.
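With a recent OpenNMT-tf version, the TensorFlow Lite export looks roughly like this (flag names have changed across versions, so check `onmt-main export --help` for your installation):

```bash
onmt-main --config my_config.yml --auto_config export \
    --output_dir export/tflite --format tflite
```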
Thank you so much! Are there any scripts to transform an OpenNMT-py model into an OpenNMT-tf model? Or should I do it manually?
There is no such script at the moment. If you take that direction, you should probably retrain the model.
Thanks! One more question: how can I print the model parameters of an OpenNMT-tf model? I tried model.get_weights() since the model is a subclass of tf.keras.layers.Layer, but the result is always an empty list.
Weights are created on first use. You can also call model.create_variables() to create them.
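A quick illustration (assuming `model` is an OpenNMT-tf model instance that has already been initialized from its configuration):

```python
model.create_variables()  # builds all variables without running a forward pass

# get_weights() now returns the actual arrays instead of an empty list.
for variable in model.trainable_variables:
    print(variable.name, variable.shape)
```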
Hi @guillaumekln, thank you for your response. I just tried to convert the OpenNMT-py model to an OpenNMT-tf model. I did this before with a PyTorch RNN model and it worked great. So I compared the weights between OpenNMT-py and OpenNMT-tf and found that their parameters can be exactly aligned, except for the position embedding. Since the position embedding is a sinusoidal function that is the same for any model, I assumed it was not a problem.
Then I tried to assign all the parameters to the TensorFlow model like this:
```python
import numpy as np
import torch

# `ckpt_path` (the OpenNMT-py checkpoint) and `model` (the built OpenNMT-tf
# model) are defined earlier.
data = torch.load(ckpt_path)
num_layers = 2

# OpenNMT-py parameter name templates, keyed by submodule.
torch_enc_names = {}
torch_dec_names = {}
torch_enc_names[".self_attn"] = [".linear_keys", ".linear_values", ".linear_query", ".final_linear"]
torch_enc_names[".layer_norm"] = [""]
torch_enc_names[".feed_forward"] = [".w_1", ".w_2", ".layer_norm"]
torch_dec_names[".self_attn"] = [".linear_keys", ".linear_values", ".linear_query", ".final_linear"]
torch_dec_names[".layer_norm_1"] = [""]
torch_dec_names[".context_attn"] = [".linear_keys", ".linear_values", ".linear_query", ".final_linear"]
torch_dec_names[".layer_norm_2"] = [""]
torch_dec_names[".feed_forward"] = [".w_1", ".w_2", ".layer_norm"]

torch_all_names = [
    "encoder.embeddings.make_embedding.emb_luts.0.weight",
    "decoder.embeddings.make_embedding.emb_luts.0.weight",
]
base_torch_enc = "encoder.transformer.{}{}{}"
base_torch_dec = "decoder.transformer_layers.{}{}{}"

# Build the full list of OpenNMT-py parameter names in the order that
# matches model.weights on the TensorFlow side.
for base in [base_torch_enc, base_torch_dec]:
    if base == base_torch_enc:
        torch_all_names.append("encoder.layer_norm.weight")
        torch_all_names.append("encoder.layer_norm.bias")
        torch_names = torch_enc_names
    else:
        torch_all_names.append("decoder.layer_norm.weight")
        torch_all_names.append("decoder.layer_norm.bias")
        torch_names = torch_dec_names
    for i in range(num_layers):
        for key in torch_names.keys():
            for v in torch_names[key]:
                torch_all_names.append(base.format(i, key, v) + ".weight")
                torch_all_names.append(base.format(i, key, v) + ".bias")

# The generator (output projection) is stored separately in the checkpoint.
torch_all_names.append("0.weight")
torch_all_names.append("0.bias")
data["model"]["0.weight"] = data["generator"]["0.weight"]
data["model"]["0.bias"] = data["generator"]["0.bias"]

print(model.weights[0][:10])

for i, name in enumerate(torch_all_names):
    tensor = data["model"][name].cpu()
    # OpenNMT-py Linear weights are stored as (out, in) while Keras Dense
    # kernels are (in, out), so 2-D feed-forward/generator weights are transposed.
    if ("feed_forward" in name or name == "0.weight") and tensor.dim() == 2:
        tensor = tensor.transpose(0, 1)
    value = tensor.numpy()
    model.weights[i].assign(value)
    # Sanity check: the difference should print as 0.0.
    print(np.sum(np.abs(model.weights[i].numpy() - value)))
```
I think this gives the TensorFlow model the correct parameters. But when I run inference, the model returns completely wrong results. Is there anything else I need to consider? Personally, I think this approach should work.
Thanks in advance!
The position encodings are actually slightly different between the two implementations. In OpenNMT-tf, the sine and cosine transformations are concatenated, while in OpenNMT-py they are interleaved.
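For illustration, here is a simplified NumPy sketch of the two layouts (not the exact library code, but it shows why the channels end up in a different order):

```python
import numpy as np

def positions_concat(max_len, depth):
    # OpenNMT-tf style: all sines first, then all cosines.
    pos = np.arange(max_len)[:, np.newaxis]
    dim = np.arange(depth // 2)[np.newaxis, :]
    angles = pos / np.power(10000.0, 2.0 * dim / depth)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def positions_interleave(max_len, depth):
    # OpenNMT-py style: sin and cos alternate channel by channel.
    pos = np.arange(max_len)[:, np.newaxis]
    dim = np.arange(depth // 2)[np.newaxis, :]
    angles = pos / np.power(10000.0, 2.0 * dim / depth)
    encoding = np.zeros((max_len, depth))
    encoding[:, 0::2] = np.sin(angles)
    encoding[:, 1::2] = np.cos(angles)
    return encoding

# Same values per position, different channel order:
print(np.allclose(np.sort(positions_concat(4, 8)), np.sort(positions_interleave(4, 8))))
```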
For anyone who may need this: if you want to deploy your OpenNMT-py model on a mobile device, you can transform the model to OpenNMT-tf and then convert it to a TFLite model. The parameters of the TF and Torch models are mostly the same; the position embedding needs to be changed from sinusoidal to absolute, as the sinusoidal implementation differs slightly between the two frameworks (interleaved vs. concatenated, as explained above).
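For the last step, here is a minimal sketch of turning an exported SavedModel into a .tflite file with the standard TensorFlow API (the export path is a placeholder):

```python
import tensorflow as tf

# "export/1" is a placeholder for your exported SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("export/1")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops for unsupported ones
]
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```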
Hi @guillaumekln,
I am a novice with CTranslate2 and OpenNMT models.
Could you please share some pointers on how to compile CTranslate2 for a mobile system, i.e. the steps to follow?
Also, any leads on how to convert an OpenNMT-py model to OpenNMT-tf would be helpful.
Hi @SefaZeng, could you please share the inputs and steps to follow to convert OpenNMT-py to OpenNMT-tf?