LambdaLR#
- class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)[source]#
Sets the initial learning rate.
The learning rate of each parameter group is set to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.
- Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
last_epoch (int) – The index of the last epoch. Default: -1.
Example
>>> # Assuming optimizer has two groups.
>>> num_epochs = 100
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(num_epochs):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
>>>
>>> # Alternatively, you can use a single lambda function for all groups.
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: epoch // 30)
>>> for epoch in range(num_epochs):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
- get_last_lr()[source]#
Get the most recent learning rates computed by this scheduler.
- Returns
A list of learning rates with entries for each of the optimizer’s param_groups, with the same types as their group["lr"]s.
- Return type
list[float]
Note
The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
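For example, a minimal sketch of reading the per-group rates (the two-group SGD setup and the decay factors below are illustrative placeholders, not part of the class itself):
>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> w1, w2 = torch.nn.Parameter(torch.zeros(3)), torch.nn.Parameter(torch.zeros(3))
>>> optimizer = torch.optim.SGD([{"params": [w1], "lr": 0.05}, {"params": [w2], "lr": 0.01}])
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda e: 1.0, lambda e: 0.5 ** e])
>>> for epoch in range(2):
>>>     optimizer.step()
>>>     scheduler.step()
>>>     # One entry per param_group, e.g. [0.05, 0.005] after the first step.
>>>     print(scheduler.get_last_lr())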
- get_lr()[source]#
Compute the next learning rate for each of the optimizer’s param_groups.
Scales the base_lrs by the outputs of the lr_lambdas at last_epoch.
- Returns
A list of learning rates for each of the optimizer’s param_groups with the same types as their current group["lr"]s.
- Return type
list[float]
Note
If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.
Note
The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
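As an illustrative sketch of this scaling (the SGD optimizer and the 0.95 decay factor below are arbitrary choices), the rate reported after each step() equals the initial rate times the lambda evaluated at last_epoch:
>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)
>>> for epoch in range(3):
>>>     optimizer.step()
>>>     scheduler.step()
>>>     # lr == base_lr * lr_lambda(last_epoch), i.e. 0.1 * 0.95 ** (epoch + 1)
>>>     assert abs(scheduler.get_last_lr()[0] - 0.1 * 0.95 ** (epoch + 1)) < 1e-12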
- load_state_dict(state_dict)[source]#
Load the scheduler’s state.
When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
- Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
- state_dict()[source]#
Return the state of the scheduler as a dict.
It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects, and not if they are functions or lambdas.
When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
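A minimal checkpointing sketch along those lines (the model, optimizer, and file name are placeholders):
>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)
>>> # Save the optimizer state together with the scheduler state.
>>> torch.save({"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()}, "checkpoint.pt")
>>> # Later, restore both before resuming training.
>>> checkpoint = torch.load("checkpoint.pt")
>>> optimizer.load_state_dict(checkpoint["optimizer"])
>>> scheduler.load_state_dict(checkpoint["scheduler"])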
- step(epoch=None)[source]#
Step the scheduler.
- Parameters
epoch (int, optional) –
Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.
Note
Call this method after calling the optimizer’s step().
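For instance, a minimal training-loop sketch showing this ordering (the model, data, and loss below are dummies used only for illustration):
>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> model = torch.nn.Linear(4, 1)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)
>>> for epoch in range(3):
>>>     inputs, targets = torch.randn(8, 4), torch.randn(8, 1)
>>>     optimizer.zero_grad()
>>>     loss = torch.nn.functional.mse_loss(model(inputs), targets)
>>>     loss.backward()
>>>     optimizer.step()     # update the parameters first ...
>>>     scheduler.step()     # ... then step the scheduler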