ml.lr_schedulers.linear_no_decay

Defines a linear warmup scheduler without decay.

class ml.lr_schedulers.linear_no_decay.LinearNoDecayLRSchedulerConfig(name: str = '???', warmup_steps: int = 1000)[source]

Bases: BaseLRSchedulerConfig

warmup_steps: int = 1000
class ml.lr_schedulers.linear_no_decay.LinearNoDecayLRScheduler(config: BaseConfigT)[source]

Bases: BaseLRScheduler[LinearNoDecayLRSchedulerConfig]

get_lr_scale(state: State) float[source]

Given a trainer state, returns the learning rate scale for the current step.

Parameters:

state – The current trainer state

Returns:

The computed learning rate scale to apply
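The schedule implied by the class name is a linear warmup with no decay: the scale rises linearly from 0 to 1 over the first `warmup_steps` steps, then stays at 1. A minimal sketch of that computation as a standalone function (the function name and the bare `step` argument are illustrative; the real method reads the step from the trainer `State`):

```python
def linear_no_decay_lr_scale(step: int, warmup_steps: int = 1000) -> float:
    """Linear warmup without decay.

    Ramps the scale linearly from 0 to 1 over the first `warmup_steps`
    steps, then holds it at 1 for the rest of training.
    """
    # During warmup, step / warmup_steps < 1; afterwards, clamp to 1.
    return min(step / warmup_steps, 1.0)
```

Multiplying this scale by the optimizer's base learning rate yields the effective learning rate at each step.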