Error with lora_weights.py

Hi, I got the following error with the latest branch when I tried to merge LoRA weights.

File "/home/m.barroso/OpenNMT_nllb/OpenNMT-py/tools/lora_weights.py", line 45, in <module>
base_checkpoint = load_checkpoint(opt.base_model, map_location=torch.device("cpu"))
TypeError: load_checkpoint() got an unexpected keyword argument 'map_location'

When I checked the load_checkpoint function, I saw that this parameter does not exist:

[2023-06-06 07:55:51,968 INFO] Loading checkpoint from llama13B-vicuna-onmt.pt
[2023-06-06 07:56:04,625 INFO] Loading checkpoint from finetuned_llama13B/llama13B-vicuna-onmt_step_1500.pt
Traceback (most recent call last):
File "/home/m.barroso/OpenNMT_nllb/llms/OpenNMT-py/tools/lora_weights.py", line 49, in <module>
lora_checkpoint = load_checkpoint(
File "/home/m.barroso/OpenNMT_nllb/llms/OpenNMT-py/onmt/models/model_saver.py", line 36, in load_checkpoint
if "0.weight" in checkpoint["generator"]:
TypeError: argument of type 'NoneType' is not iterable

I fixed it by commenting out the map_location argument.
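
For reference, the change amounts to dropping the unsupported keyword argument in tools/lora_weights.py (just a sketch; the exact line may differ in your checkout):

# before: fails because load_checkpoint() does not accept map_location
# base_checkpoint = load_checkpoint(opt.base_model, map_location=torch.device("cpu"))
# after: only pass the checkpoint path
base_checkpoint = load_checkpoint(opt.base_model)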

Another problem that I got after this was in:

model.load_state_dict(lora_checkpoint, strict=False)

Inside load_state_dict() and, later on, load_checkpoint(), I had to change some if statements that use checkpoint["generator"]:

if  "0.weight" in checkpoint["generator"]:
...
if "0.bias" in checkpoint["generator"]:
...
name == "generator" and len(checkpoint["generator"].keys()) > 0

by adding a check on checkpoint["generator"] before each of these conditions (to verify first whether it is None or not). Both before the new branch and now, the LoRA checkpoint has a generator key in the .pt file, but it is empty, which fails when you try to iterate over it, test membership with in, or get its keys.
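
Concretely, the guards I added look roughly like this (a sketch; it assumes checkpoint["generator"] can be None or empty in the LoRA checkpoint):

# only look inside the generator state if it is actually present
if checkpoint["generator"] and "0.weight" in checkpoint["generator"]:
    ...
if checkpoint["generator"] and "0.bias" in checkpoint["generator"]:
    ...
# and in the condition used by load_state_dict():
name == "generator" and checkpoint["generator"] is not None and len(checkpoint["generator"].keys()) > 0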

After all this I could merge my LoRA weights and the model works fine.

Thanks for reporting this, do you mind opening an issue on GitHub? I'll fix it later today.
