SequentialLR#
- class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1)[source]#
Contains a list of schedulers expected to be called sequentially during the optimization process.
Specifically, the schedulers are called according to the milestone points, which specify exactly which scheduler is active at a given epoch: each scheduler drives the learning rate until the next milestone is reached, at which point the next scheduler in the list takes over.
- Parameters
optimizer (Optimizer) – Wrapped optimizer.
schedulers (list) – List of chained schedulers, activated in order.
milestones (list) – List of integers reflecting the milestone points at which the active scheduler changes.
last_epoch (int) – The index of the last epoch. Default: -1.
Example
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.005    if epoch == 0
>>> # lr = 0.005    if epoch == 1
>>> # lr = 0.005    if epoch == 2
>>> # ...
>>> # lr = 0.05     if epoch == 20
>>> # lr = 0.045    if epoch == 21
>>> # lr = 0.0405   if epoch == 22
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=20)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(
...     optimizer,
...     schedulers=[scheduler1, scheduler2],
...     milestones=[20],
... )
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
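The snippet above assumes that optimizer, train, and validate already exist. A self-contained sketch of the same schedule is shown below; the throwaway parameter and optimizer are assumptions made only so the loop runs, not part of this API:

>>> import torch
>>> from torch.optim import SGD
>>> from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR
>>> param = torch.nn.Parameter(torch.zeros(1))
>>> optimizer = SGD([param], lr=0.05)
>>> scheduler = SequentialLR(
...     optimizer,
...     schedulers=[ConstantLR(optimizer, factor=0.1, total_iters=20),
...                 ExponentialLR(optimizer, gamma=0.9)],
...     milestones=[20],
... )
>>> for epoch in range(25):
>>>     # lr in effect for this epoch: ~0.005 for epochs 0-19, 0.05 at epoch 20, then x0.9 per epoch
>>>     print(epoch, scheduler.get_last_lr())
>>>     optimizer.step()  # a real loop would compute a loss and call backward() first
>>>     scheduler.step()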
- get_last_lr()[source]#
Get the most recent learning rates computed by this scheduler.
- Returns
A list of learning rates with entries for each of the optimizer's param_groups, with the same types as their group["lr"]s.
- Return type
list
Note
The returned Tensors are copies, and never alias the optimizer's group["lr"]s.
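get_last_lr() is the usual way to read the current learning rate for logging; with more than one parameter group, the returned list has one entry per group, in the same order as optimizer.param_groups. A minimal sketch, where the two-group optimizer and the schedule below are assumptions for illustration only:

>>> import torch
>>> from torch.optim import SGD
>>> from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR
>>> backbone, head = torch.nn.Linear(4, 4), torch.nn.Linear(4, 2)
>>> optimizer = SGD([
...     {"params": backbone.parameters(), "lr": 0.01},
...     {"params": head.parameters(), "lr": 0.1},
... ])
>>> scheduler = SequentialLR(
...     optimizer,
...     schedulers=[ConstantLR(optimizer, factor=0.5, total_iters=5),
...                 ExponentialLR(optimizer, gamma=0.9)],
...     milestones=[5],
... )
>>> scheduler.get_last_lr()  # one entry per param group, e.g. [0.005, 0.05] while ConstantLR is active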
- get_lr()[source]#
Compute the next learning rate for each of the optimizer's param_groups.
- Returns
A list of learning rates for each of the optimizer's param_groups, with the same types as their current group["lr"]s.
- Return type
list
Note
If you're trying to inspect the most recent learning rate, use get_last_lr() instead.
Note
The returned Tensors are copies, and never alias the optimizer's group["lr"]s.
- load_state_dict(state_dict)[source]#
Load the scheduler’s state.
- Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
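A typical use is resuming from a checkpoint: save the scheduler's state_dict() alongside the optimizer's, then feed both back through load_state_dict() after rebuilding the objects with the same arguments. A rough sketch; the checkpoint layout and file name are illustrative assumptions:

>>> import torch
>>> from torch.optim import SGD
>>> from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR
>>> param = torch.nn.Parameter(torch.zeros(1))
>>> optimizer = SGD([param], lr=0.05)
>>> scheduler = SequentialLR(
...     optimizer,
...     schedulers=[ConstantLR(optimizer, factor=0.1, total_iters=20),
...                 ExponentialLR(optimizer, gamma=0.9)],
...     milestones=[20],
... )
>>> for _ in range(22):  # pretend we trained past the milestone at epoch 20
>>>     optimizer.step()
>>>     scheduler.step()
>>> torch.save(
...     {"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()},
...     "checkpoint.pt",  # hypothetical file name
... )
>>> # On resume: rebuild the optimizer and scheduler exactly as above,
>>> # then restore both states so the schedule picks up where it left off.
>>> checkpoint = torch.load("checkpoint.pt")
>>> optimizer.load_state_dict(checkpoint["optimizer"])
>>> scheduler.load_state_dict(checkpoint["scheduler"])
>>> scheduler.get_last_lr()  # ~0.0405: still in the ExponentialLR phase, not back at 0.005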
- recursive_undo(sched=None)[source]#
Recursively undo any step performed by the initialization of the schedulers.