gluonts.mx.trainer package
- class gluonts.mx.trainer.Trainer(ctx: Optional[Context] = None, epochs: int = 100, num_batches_per_epoch: int = 50, learning_rate: float = 0.001, clip_gradient: float = 10.0, weight_decay: float = 1e-08, init: Union[str, Initializer] = 'xavier', hybridize: bool = True, callbacks: Optional[List[Callback]] = None, add_default_callbacks: bool = True)
Bases: object
A trainer specifies how a network is going to be trained.
A trainer is mainly defined by two sets of parameters. The first determines how many examples the network is trained on (epochs, num_batches_per_epoch), while the second controls how gradient updates are performed (learning_rate, clip_gradient, and weight_decay). A usage sketch follows the parameter list below.
- Parameters:
ctx – The MXNet context (CPU or GPU) to run the training on (default: None).
epochs – Number of epochs that the network will train (default: 100).
num_batches_per_epoch – Number of batches at each epoch (default: 50).
learning_rate – Initial learning rate (default: 1e-3).
clip_gradient – Maximum value of the gradient; gradients are clipped to this value when they exceed it (default: 10).
weight_decay – The weight decay (or L2 regularization) coefficient. Modifies the objective by adding a penalty for large weights (default: 1e-8).
init – Initializer of the weights of the network (default: "xavier").
hybridize – If set to True, the network is hybridized before training (default: True).
callbacks – A list of gluonts.mx.trainer.callback.Callback objects to control the training.
add_default_callbacks – Whether to add the default callbacks (default: True). If True, a ModelAveraging and a LearningRateReduction callback are used in addition to those passed in the callbacks argument. Only leave this set to True if you do not pass one of these default callbacks yourself, otherwise the callback will be duplicated. The default callbacks are equivalent to:
>>> callbacks = [
...     ModelAveraging(avg_strategy=SelectNBestMean(num_models=1)),
...     LearningRateReduction(
...         base_lr=1e-3,  # learning_rate
...         decay_factor=0.5,  # learning_rate_decay_factor
...         patience=10,  # patience
...         min_lr=5e-5,  # minimum_learning_rate
...         objective="min",
...     ),
... ]
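For concreteness, a minimal sketch of configuring a Trainer and handing it to an estimator. The estimator choice (DeepAREstimator), frequency, and prediction length are illustrative assumptions, not part of this class's API:

>>> from gluonts.mx.model.deepar import DeepAREstimator
>>> from gluonts.mx.trainer import Trainer
>>>
>>> # Train on fewer examples than the defaults (epochs=100,
>>> # num_batches_per_epoch=50) and with a larger initial learning rate.
>>> trainer = Trainer(
...     epochs=20,
...     num_batches_per_epoch=32,
...     learning_rate=1e-2,
...     hybridize=True,
... )
>>> estimator = DeepAREstimator(
...     freq="H",              # hourly data (illustrative)
...     prediction_length=24,  # forecast one day ahead (illustrative)
...     trainer=trainer,
... )
>>> # predictor = estimator.train(training_data)  # training_data: a GluonTS Dataset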
Submodules
- gluonts.mx.trainer.callback module
Callback
Callback.on_epoch_end()
Callback.on_network_initializing_end()
Callback.on_train_batch_end()
Callback.on_train_end()
Callback.on_train_epoch_end()
Callback.on_train_epoch_start()
Callback.on_train_start()
Callback.on_validation_batch_end()
Callback.on_validation_epoch_end()
Callback.on_validation_epoch_start()
CallbackList
CallbackList.callbacks
CallbackList.on_epoch_end()
CallbackList.on_network_initializing_end()
CallbackList.on_train_batch_end()
CallbackList.on_train_end()
CallbackList.on_train_epoch_end()
CallbackList.on_train_epoch_start()
CallbackList.on_train_start()
CallbackList.on_validation_batch_end()
CallbackList.on_validation_epoch_end()
CallbackList.on_validation_epoch_start()
TerminateOnNaN
TrainingHistory
TrainingTimeLimit
WarmStart
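As a sketch of how the hooks listed above compose, the custom callback below stops training early once the training loss falls below a threshold. LossThresholdStopper is a hypothetical example class, not part of GluonTS; it relies only on the epoch_no and epoch_loss arguments and absorbs the remaining hook parameters via **kwargs:

>>> from gluonts.mx.trainer.callback import Callback
>>>
>>> class LossThresholdStopper(Callback):
...     """Hypothetical callback: stop training once the loss is low enough."""
...
...     def __init__(self, threshold: float):
...         self.threshold = threshold
...
...     def on_train_epoch_end(self, epoch_no, epoch_loss, **kwargs) -> bool:
...         # Boolean-returning hooks signal whether training should continue;
...         # returning False ends the training loop early.
...         print(f"epoch {epoch_no}: training loss {epoch_loss:.4f}")
...         return epoch_loss >= self.threshold

Such a callback is passed via Trainer(callbacks=[LossThresholdStopper(threshold=0.05)]); with add_default_callbacks=True it runs alongside the default model-averaging and learning-rate callbacks.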
- gluonts.mx.trainer.learning_rate_scheduler module
- gluonts.mx.trainer.model_averaging module
- gluonts.mx.trainer.model_iteration_averaging module
Alpha_Suffix
IterationAveragingStrategy
IterationAveragingStrategy.apply()
IterationAveragingStrategy.average_counter
IterationAveragingStrategy.averaged_model
IterationAveragingStrategy.averaging_started
IterationAveragingStrategy.cached_model
IterationAveragingStrategy.load_averaged_model()
IterationAveragingStrategy.load_cached_model()
IterationAveragingStrategy.update_average()
IterationAveragingStrategy.update_average_trigger()
ModelIterationAveraging
NTA
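Finally, a hedged sketch of enabling iteration averaging through the trainer's callback mechanism. The NTA constructor argument shown (epochs) is an assumption about its signature, not verified against a specific GluonTS version:

>>> from gluonts.mx.trainer import Trainer
>>> from gluonts.mx.trainer.model_iteration_averaging import (
...     NTA,
...     ModelIterationAveraging,
... )
>>>
>>> # ModelIterationAveraging keeps a running average of the weights across
>>> # iterations (per its avg_strategy) and loads the averaged model back
>>> # into the network when training ends.
>>> trainer = Trainer(
...     epochs=50,
...     callbacks=[ModelIterationAveraging(avg_strategy=NTA(epochs=5))],
...     add_default_callbacks=False,  # avoid also adding the default ModelAveraging
... )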