LambdaLR

class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)[source]

Sets the learning rate of each parameter group to the initial lr times a given function.

The multiplicative factor is recomputed from the epoch index at every step. When last_epoch=-1, the initial lr is taken to be the optimizer’s lr.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.

  • last_epoch (int) – The index of the last epoch. Default: -1.

Example

>>> # Assuming optimizer has two groups.
>>> num_epochs = 100
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95**epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(num_epochs):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
>>>
>>> # Alternatively, you can use a single lambda function for all groups.
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: epoch // 30)
>>> for epoch in range(num_epochs):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
[Figure: learning rate as a function of epoch under LambdaLR.]
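
To make the multiplicative behavior concrete, the following sketch builds a two-group optimizer and inspects the learning rates at epoch 0 (the parameters and the base lr of 0.05 are arbitrary choices for illustration):

>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> w1 = torch.nn.Parameter(torch.zeros(1))
>>> w2 = torch.nn.Parameter(torch.zeros(1))
>>> groups = [{"params": [w1], "lr": 0.05}, {"params": [w2], "lr": 0.05}]
>>> optimizer = torch.optim.SGD(groups)
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda e: e // 30, lambda e: 0.95**e])
>>> scheduler.get_last_lr()  # epoch 0: [0.05 * (0 // 30), 0.05 * 0.95**0]
[0.0, 0.05]
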
get_last_lr()[source]

Get the most recent learning rates computed by this scheduler.

Returns

A list of learning rates, one for each of the optimizer’s param_groups, with the same types as their group["lr"] values.

Return type

list[float | Tensor]

Note

The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
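
A minimal sketch of how get_last_lr() relates to the optimizer’s param_groups (the single parameter and base lr of 0.1 are arbitrary assumptions):

>>> import torch
>>> from torch.optim.lr_scheduler import LambdaLR
>>> optimizer = torch.optim.SGD([torch.nn.Parameter(torch.ones(1))], lr=0.1)
>>> scheduler = LambdaLR(optimizer, lr_lambda=lambda e: 0.5**e)
>>> scheduler.step()  # advance to epoch 1
>>> scheduler.get_last_lr()  # 0.1 * 0.5**1
[0.05]
>>> optimizer.param_groups[0]["lr"]
0.05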

get_lr()[source]

Compute the next learning rate for each of the optimizer’s param_groups.

Scales the base_lrs by the outputs of the lr_lambdas at last_epoch.

Returns

A list of learning rates, one for each of the optimizer’s param_groups, with the same types as their current group["lr"] values.

Return type

list[float | Tensor]

Note

If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.

Note

The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
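
For LambdaLR, get_lr() amounts to the following comprehension over the scheduler’s base_lrs, lr_lambdas, and last_epoch attributes (a conceptual sketch that mirrors, but is not, the library’s internal code; continuing the scheduler from the get_last_lr() sketch above):

>>> [base * fn(scheduler.last_epoch) for base, fn in zip(scheduler.base_lrs, scheduler.lr_lambdas)]
[0.05]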

load_state_dict(state_dict)[source]

Load the scheduler’s state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
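
A typical round trip saves and restores both objects together (a minimal sketch; "checkpoint.pt" is an arbitrary file name):

>>> state = {"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()}
>>> torch.save(state, "checkpoint.pt")
>>> ckpt = torch.load("checkpoint.pt")
>>> optimizer.load_state_dict(ckpt["optimizer"])
>>> scheduler.load_state_dict(ckpt["scheduler"])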

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Return type

dict[str, Any]
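
Because plain functions and lambdas are skipped, a schedule that must survive the state_dict round trip can be written as a callable object, whose attributes are then captured (StepDecay is a hypothetical helper, not part of torch; optimizer is assumed from the sketches above):

>>> class StepDecay:
>>>     def __init__(self, period):
>>>         self.period = period
>>>     def __call__(self, epoch):
>>>         return 0.5 ** (epoch // self.period)
>>>
>>> scheduler = LambdaLR(optimizer, lr_lambda=StepDecay(period=30))
>>> scheduler.state_dict()["lr_lambdas"]  # the instance's attributes are saved
[{'period': 30}]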

step(epoch=None)[source]

Step the scheduler.

Parameters

epoch (int, optional) –

Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.

Note

Call this method after calling the optimizer’s step().
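
A typical epoch therefore looks like this (a sketch; model, loss_fn, and train_loader are placeholders):

>>> for inputs, targets in train_loader:
>>>     optimizer.zero_grad()
>>>     loss = loss_fn(model(inputs), targets)
>>>     loss.backward()
>>>     optimizer.step()
>>> scheduler.step()  # once per epoch, after the optimizer updates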