torch.optim.Optimizer.load_state_dict
- Optimizer.load_state_dict(state_dict)
Load the optimizer state.
- Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
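A minimal round trip might look like the following sketch (the model and hyperparameters are placeholders, not part of the API):
>>> import torch
>>> model = torch.nn.Linear(10, 10)
>>> optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
>>> checkpoint = optim.state_dict()  # snapshot of param_groups and any accumulated state
>>> # ... later, after re-creating the model and optimizer the same way ...
>>> optim.load_state_dict(checkpoint)  # restore the snapshot into the optimizer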
Warning
Make sure this method is called after initializing torch.optim.lr_scheduler.LRScheduler, as calling it beforehand will overwrite the loaded learning rates.
Note
The names of the parameters (if they exist under the “param_names” key of each param group in state_dict()) will not affect the loading process. To use the parameters’ names for custom cases (such as when the parameters in the loaded state dict differ from those initialized in the optimizer), a custom register_load_state_dict_pre_hook should be implemented to adapt the loaded dict accordingly. If param_names exist in the param_groups of the loaded state dict, they will be saved and will override the current names, if present, in the optimizer state. If they do not exist in the loaded state dict, the optimizer param_names will remain unchanged.
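For instance, such a pre-hook could remap parameter names in a checkpoint before it is loaded. The sketch below continues the snippet above and is illustrative only: it assumes a checkpoint whose param_names follow an older naming scheme, and the old_to_new mapping is hypothetical.
>>> def rename_params_hook(optimizer, state_dict):
...     # hypothetical mapping from checkpoint names to current names
...     old_to_new = {"fc.weight": "linear.weight", "fc.bias": "linear.bias"}
...     for group in state_dict["param_groups"]:
...         if "param_names" in group:
...             group["param_names"] = [
...                 old_to_new.get(n, n) for n in group["param_names"]
...             ]
...     return state_dict  # the returned dict is the one that gets loaded
>>> optim.register_load_state_dict_pre_hook(rename_params_hook)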
Example
>>> model = torch.nn.Linear(10, 10)
>>> optim = torch.optim.SGD(model.parameters(), lr=3e-4)
>>> scheduler1 = torch.optim.lr_scheduler.LinearLR(
...     optim,
...     start_factor=0.1,
...     end_factor=1,
...     total_iters=20,
... )
>>> scheduler2 = torch.optim.lr_scheduler.CosineAnnealingLR(
...     optim,
...     T_max=80,
...     eta_min=3e-5,
... )
>>> lr = torch.optim.lr_scheduler.SequentialLR(
...     optim,
...     schedulers=[scheduler1, scheduler2],
...     milestones=[20],
... )
>>> lr.load_state_dict(torch.load("./save_seq.pt"))
>>> # now load the optimizer checkpoint after loading the LRScheduler
>>> optim.load_state_dict(torch.load("./save_optim.pt"))
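For reference, the two checkpoints loaded above could have been written earlier in the run like so (a sketch; the paths simply mirror the example):
>>> torch.save(optim.state_dict(), "./save_optim.pt")
>>> torch.save(lr.state_dict(), "./save_seq.pt")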