gluonts.torch package#

class gluonts.torch.DLinearEstimator(prediction_length: int, context_length: Optional[int] = None, hidden_dimension: Optional[int] = None, lr: float = 0.001, weight_decay: float = 1e-08, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), kernel_size: int = 25, batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training the DLinear model from the paper https://arxiv.org/pdf/2205.13504.pdf, extended for probabilistic forecasting.

This class uses the model defined in DLinearModel, and wraps it into a DLinearLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • hidden_dimension – Size of the hidden representation.

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • kernel_size – Size of the kernel of the moving average used for series decomposition (default: 25).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
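
As a minimal, hedged sketch (the synthetic dataset and hyperparameter values below are illustrative assumptions, not defaults from this page), training this estimator end to end might look like:

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import DLinearEstimator

    # A tiny synthetic hourly dataset; values are illustrative only.
    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(400)]}],
        freq="H",
    )

    estimator = DLinearEstimator(
        prediction_length=24,
        context_length=240,                # default would be 10 * prediction_length
        trainer_kwargs={"max_epochs": 1},  # forwarded to pl.Trainer
    )
    predictor = estimator.train(data)      # returns a PyTorchPredictor

    forecast = next(iter(predictor.predict(data)))
    print(forecast.mean)                   # point summary of the sampled paths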

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.d_linear.lightning_module.DLinearLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.d_linear.lightning_module.DLinearLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
class gluonts.torch.DeepAREstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, num_layers: int = 2, hidden_size: int = 40, lr: float = 0.001, weight_decay: float = 1e-08, dropout_rate: float = 0.1, patience: int = 10, num_feat_dynamic_real: int = 0, num_feat_static_cat: int = 0, num_feat_static_real: int = 0, cardinality: Optional[List[int]] = None, embedding_dimension: Optional[List[int]] = None, distr_output: gluonts.torch.distributions.distribution_output.DistributionOutput = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), scaling: bool = True, default_scale: Optional[float] = None, lags_seq: Optional[List[int]] = None, time_features: Optional[List[Callable[[pandas.core.indexes.period.PeriodIndex], numpy.ndarray]]] = None, num_parallel_samples: int = 100, batch_size: int = 32, num_batches_per_epoch: int = 50, imputation_method: Optional[gluonts.transform.feature.MissingValueImputation] = None, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, nonnegative_pred_samples: bool = False)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

Estimator class to train a DeepAR model, as described in [SFG17].

This class uses the model defined in DeepARModel, and wraps it into a DeepARLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Note: the code of this model is unrelated to the implementation behind SageMaker’s DeepAR Forecasting Algorithm.

Parameters
  • freq – Frequency of the data to train on and predict.

  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of steps to unroll the RNN for before computing predictions (default: None, in which case context_length = prediction_length).

  • num_layers – Number of RNN layers (default: 2).

  • hidden_size – Number of RNN cells for each layer (default: 40).

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • dropout_rate – Dropout regularization parameter (default: 0.1).

  • patience – Patience parameter for learning rate scheduler.

  • num_feat_dynamic_real – Number of dynamic real features in the data (default: 0).

  • num_feat_static_real – Number of static real features in the data (default: 0).

  • num_feat_static_cat – Number of static categorical features in the data (default: 0).

  • cardinality – Number of values of each categorical feature. This must be set if num_feat_static_cat > 0 (default: None).

  • embedding_dimension – Dimension of the embeddings for categorical features (default: [min(50, (cat+1)//2) for cat in cardinality]).

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • scaling – Whether to automatically scale the target values (default: true).

  • default_scale – Default scale that is applied if the context length window is completely unobserved. If not set, the scale in this case will be the mean scale in the batch.

  • lags_seq – Indices of the lagged target values to use as inputs of the RNN (default: None, in which case these are automatically determined based on freq).

  • time_features – List of time features, from gluonts.time_feature, to use as inputs of the RNN in addition to the provided data (default: None, in which case these are automatically determined based on freq).

  • num_parallel_samples – Number of samples per time series that the resulting predictor should produce (default: 100).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.

  • nonnegative_pred_samples – Whether the final prediction samples should be non-negative. If yes, an activation function is applied to ensure non-negativity. Note that this is applied only to the final samples, not during training.
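
A hedged usage sketch (synthetic data and hyperparameters are illustrative); it shows passing trainer_kwargs and reading a quantile off the probabilistic forecast:

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import DeepAREstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = DeepAREstimator(
        freq="H",                          # lags_seq and time_features derive from this
        prediction_length=24,
        num_parallel_samples=100,          # sample paths per series at prediction time
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)

    forecast = next(iter(predictor.predict(data)))
    print(forecast.quantile(0.9))          # 90th percentile of the sampled paths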

create_lightning_module() gluonts.torch.model.deepar.lightning_module.DeepARLightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module: gluonts.torch.model.deepar.lightning_module.DeepARLightningModule) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.deepar.lightning_module.DeepARLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.deepar.lightning_module.DeepARLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

classmethod derive_auto_fields(train_iter)[source]#
lead_time: int#
prediction_length: int#
class gluonts.torch.DeepNPTSEstimator(freq: str, prediction_length: int, context_length: int, num_hidden_nodes: typing.Optional[typing.List[int]] = None, batch_norm: bool = False, use_feat_static_cat: bool = False, num_feat_static_real: int = 0, num_feat_dynamic_real: int = 0, cardinality: typing.Optional[typing.List[int]] = None, embedding_dimension: typing.Optional[typing.List[int]] = None, input_scaling: typing.Optional[typing.Union[typing.Callable, str]] = None, dropout_rate: float = 0.0, network_type: gluonts.torch.model.deep_npts._network.DeepNPTSNetwork = <class 'gluonts.torch.model.deep_npts._network.DeepNPTSNetworkDiscrete'>, epochs: int = 100, lr: float = 1e-05, batch_size: int = 32, num_batches_per_epoch: int = 100, cache_data: bool = False, loss_scaling: typing.Optional[typing.Union[typing.Callable, str]] = None)[source]#

Bases: gluonts.model.estimator.Estimator

Construct a DeepNPTS estimator. This is a tunable extension of NPTS where the sampling probabilities are learned from the data. Unlike NPTS, this is a global model.

Currently two variants of the model are implemented:

  • DeepNPTSNetworkDiscrete: the forecast distribution is a discrete distribution similar to NPTS, and forecasts are sampled from the observations in the context window.

  • DeepNPTSNetworkSmooth: the forecast distribution is a smoothed mixture distribution where the components of the mixture are Gaussians centered around the observations in the context window. The mixing probabilities and the widths of the Gaussians are learned. Here the forecast can contain values not observed in the context window.

Parameters
  • freq – Frequency of the data to train on and predict

  • prediction_length (int) – Length of the prediction horizon

  • context_length – Number of time steps prior to prediction time that the model takes as inputs

  • num_hidden_nodes – A list containing the number of nodes in each hidden layer

  • batch_norm – Flag to indicate if batch normalization should be applied at every layer

  • use_feat_static_cat – Whether to use the feat_static_cat field from the data (default: False)

  • num_feat_static_real – Number of static real features in the data set

  • num_feat_dynamic_real – Number of dynamic features in the data set. These features are added to the time series features that are automatically created based on the frequency

  • cardinality – Number of values of each categorical feature. This must be set if use_feat_static_cat == True (default: None)

  • embedding_dimension – Dimension of the embeddings for categorical features (default: [min(50, (cat+1)//2) for cat in cardinality])

  • input_scaling – The scaling to be applied to the target values. Available options: “min_max_scaling” and “standard_normal_scaling” (default: no scaling)

  • dropout_rate – Dropout regularization parameter (default: no dropout)

  • network_type – The network to be used: either the discrete version DeepNPTSNetworkDiscrete or the smoothed version DeepNPTSNetworkSmooth (default: DeepNPTSNetworkDiscrete)
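
Note that, unlike the Lightning-based estimators on this page, this class derives from the plain Estimator base class, so training length is set via the epochs constructor argument rather than trainer_kwargs. A hedged sketch with illustrative values:

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import DeepNPTSEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = DeepNPTSEstimator(
        freq="H",
        prediction_length=24,
        context_length=168,  # required here: no default is defined
        epochs=5,            # training length is set on the estimator itself
    )
    predictor = estimator.train(data)
    forecast = next(iter(predictor.predict(data)))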

get_predictor(net: torch.nn.modules.module.Module, device='cpu') gluonts.torch.model.predictor.PyTorchPredictor[source]#
input_transform() gluonts.transform._base.Transformation[source]#
instance_splitter(instance_sampler, is_train: bool = True) gluonts.transform.split.InstanceSplitter[source]#
lead_time: int#
prediction_length: int#
train(training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None, cache_data: bool = False) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Train the estimator on the given data.

Parameters
  • training_data – Dataset to train the model on.

  • validation_data – Dataset to validate the model on during training.

Returns

The predictor containing the trained model.

Return type

Predictor

train_model(training_data: gluonts.dataset.Dataset, cache_data: bool = False) gluonts.torch.model.deep_npts._network.DeepNPTSNetwork[source]#
training_data_loader(training_dataset, batch_size: int, num_batches_per_epoch: int) Iterable[Dict[str, Any]][source]#
class gluonts.torch.LagTSTEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, d_model: int = 32, nhead: int = 4, dim_feedforward: int = 128, lags_seq: Optional[List[int]] = None, dropout: float = 0.1, activation: str = 'relu', norm_first: bool = False, num_encoder_layers: int = 2, lr: float = 0.001, weight_decay: float = 1e-08, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training the LagTST model for forecasting.

This class uses the model defined in LagTSTModel, and wraps it into a LagTSTLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • freq – Frequency of the data to train on and predict.

  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • lags_seq – Indices of the lagged target values to use as inputs of the model (default: None, in which case these are automatically determined based on freq).

  • d_model – Dimension of the hidden representations in the Transformer encoder.

  • nhead – Number of attention heads in the Transformer encoder.

  • dim_feedforward – Size of the feed-forward layers in the Transformer encoder.

  • dropout – Dropout probability in the Transformer encoder.

  • activation – Activation function in the Transformer encoder.

  • norm_first – Whether to apply normalization before or after the attention.

  • num_encoder_layers – Number of layers in the Transformer encoder.

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • scaling – Scaling parameter can be “mean”, “std” or None.

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
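
A hedged sketch (the lag indices and scaling choice below are illustrative assumptions, not defaults):

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import LagTSTEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = LagTSTEstimator(
        freq="H",
        prediction_length=24,
        lags_seq=[1, 2, 3, 24],  # illustrative; by default derived from freq
        scaling="std",           # one of "mean", "std", or None
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)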

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.lag_tst.lightning_module.LagTSTLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.lag_tst.lightning_module.LagTSTLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
class gluonts.torch.PatchTSTEstimator(prediction_length: int, patch_len: int, context_length: Optional[int] = None, stride: int = 8, padding_patch: str = 'end', d_model: int = 32, nhead: int = 4, dim_feedforward: int = 128, dropout: float = 0.1, activation: str = 'relu', norm_first: bool = False, num_encoder_layers: int = 2, lr: float = 0.001, weight_decay: float = 1e-08, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training the PatchTST model for forecasting, as described in https://arxiv.org/abs/2211.14730, extended to be probabilistic.

This class uses the model defined in PatchTSTModel, and wraps it into a PatchTSTLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • patch_len – Length of the patch.

  • stride – Stride of the patch.

  • padding_patch – Padding of the patch.

  • d_model – Dimension of the hidden representations in the Transformer encoder.

  • nhead – Number of attention heads in the Transformer encoder which must divide d_model.

  • dim_feedforward – Size of the feed-forward layers in the Transformer encoder.

  • dropout – Dropout probability in the Transformer encoder.

  • activation – Activation function in the Transformer encoder.

  • norm_first – Whether to apply normalization before or after the attention.

  • num_encoder_layers – Number of layers in the Transformer encoder.

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • scaling – Scaling parameter can be “mean”, “std” or None.

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
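
A hedged sketch showing the patching-related arguments (all values are illustrative):

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import PatchTSTEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = PatchTSTEstimator(
        prediction_length=24,
        context_length=96,
        patch_len=16,  # required: length of each patch
        stride=8,      # step between consecutive patches
        d_model=32,    # must be divisible by nhead (here 32 / 4)
        nhead=4,
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)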

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.patch_tst.lightning_module.PatchTSTLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.patch_tst.lightning_module.PatchTSTLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
class gluonts.torch.PyTorchLightningEstimator(trainer_kwargs: Dict[str, Any], lead_time: int = 0)[source]#

Bases: gluonts.model.estimator.Estimator

An Estimator type with utilities for creating PyTorch-Lightning-based models.

To extend this class, one needs to implement the following methods: create_transformation, create_lightning_module, create_predictor, create_training_data_loader, and create_validation_data_loader.
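
A hedged skeleton of such an extension (the class name is hypothetical and the method bodies are elided placeholders, not a working model):

    from typing import Iterable

    from gluonts.dataset import Dataset
    from gluonts.torch.model.estimator import PyTorchLightningEstimator
    from gluonts.torch.model.predictor import PyTorchPredictor
    from gluonts.transform import Transformation


    class MyEstimator(PyTorchLightningEstimator):
        def create_transformation(self) -> Transformation:
            ...  # chain of gluonts.transform steps applied entry-wise to datasets

        def create_lightning_module(self):
            ...  # the pl.LightningModule that computes the training loss

        def create_training_data_loader(self, data: Dataset, module, **kwargs) -> Iterable:
            ...  # iterable over training batches fed to the module

        def create_validation_data_loader(self, data: Dataset, module, **kwargs) -> Iterable:
            ...  # iterable over validation batches

        def create_predictor(self, transformation: Transformation, module) -> PyTorchPredictor:
            ...  # wraps the trained module for inference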

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
train(training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None, shuffle_buffer_length: Optional[int] = None, cache_data: bool = False, ckpt_path: Optional[str] = None, **kwargs) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Train the estimator on the given data.

Parameters
  • training_data – Dataset to train the model on.

  • validation_data – Dataset to validate the model on during training.

Returns

The predictor containing the trained model.

Return type

Predictor
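
For instance (hedged: any concrete estimator from this page could stand in, and the train/validation split below is illustrative), training with a held-out validation set might look like:

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import SimpleFeedForwardEstimator

    target = [float(i % 24) for i in range(600)]
    train_data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": target[:-48]}], freq="H"
    )
    val_data = ListDataset([{"start": "2021-01-01 00:00", "target": target}], freq="H")

    estimator = SimpleFeedForwardEstimator(
        prediction_length=24, trainer_kwargs={"max_epochs": 2}
    )
    predictor = estimator.train(
        train_data,
        validation_data=val_data,  # validated each epoch by pl.Trainer
        cache_data=True,           # keep transformed data in memory across epochs
    )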

train_from(predictor: gluonts.model.predictor.Predictor, training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None, shuffle_buffer_length: Optional[int] = None, cache_data: bool = False, ckpt_path: Optional[str] = None) gluonts.torch.model.predictor.PyTorchPredictor[source]#
train_model(training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None, from_predictor: Optional[gluonts.torch.model.predictor.PyTorchPredictor] = None, shuffle_buffer_length: Optional[int] = None, cache_data: bool = False, ckpt_path: Optional[str] = None, **kwargs) gluonts.torch.model.estimator.TrainOutput[source]#
class gluonts.torch.PyTorchPredictor(input_names: List[str], prediction_net: torch.nn.modules.module.Module, batch_size: int, prediction_length: int, input_transform: gluonts.transform._base.Transformation, forecast_generator: gluonts.model.forecast_generator.ForecastGenerator = gluonts.model.forecast_generator.SampleForecastGenerator(), output_transform: Optional[Callable[[Dict[str, Any], numpy.ndarray], numpy.ndarray]] = None, lead_time: int = 0, device: Union[str, torch.device] = 'auto')[source]#

Bases: gluonts.model.predictor.RepresentablePredictor

classmethod deserialize(path: pathlib.Path, device: Optional[Union[torch.device, str]] = None) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Load a serialized predictor from the given path.

Parameters
  • path – Path to the serialized files predictor.

  • device – Optional device on which to load the predictor. If nothing is passed, the GPU will be used if available, and the CPU otherwise.

property network: torch.nn.modules.module.Module#
predict(dataset: gluonts.dataset.Dataset, num_samples: Optional[int] = None) Iterator[gluonts.model.forecast.Forecast][source]#

Compute forecasts for the time series in the provided dataset.

Parameters
  • dataset – The dataset containing the time series to predict.

  • num_samples – Number of sample paths to draw per time series.

Returns

Iterator over the forecasts, in the same order as the dataset iterable was provided.

Return type

Iterator[Forecast]

serialize(path: pathlib.Path) None[source]#
to(device: Union[str, torch.device]) gluonts.torch.model.predictor.PyTorchPredictor[source]#
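
A hedged round-trip sketch for saving and restoring a predictor (the training setup and temporary path are illustrative):

    import tempfile
    from pathlib import Path

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import DLinearEstimator
    from gluonts.torch.model.predictor import PyTorchPredictor

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(400)]}],
        freq="H",
    )
    predictor = DLinearEstimator(
        prediction_length=24, trainer_kwargs={"max_epochs": 1}
    ).train(data)

    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp)
        predictor.serialize(path)  # writes network weights and metadata under `path`
        restored = PyTorchPredictor.deserialize(path, device="cpu")
        forecast = next(iter(restored.predict(data)))
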
class gluonts.torch.SimpleFeedForwardEstimator(prediction_length: int, context_length: Optional[int] = None, hidden_dimensions: Optional[List[int]] = None, lr: float = 0.001, weight_decay: float = 1e-08, distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), batch_norm: bool = False, batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training a feed-forward model for forecasting.

This class uses the model defined in SimpleFeedForwardModel, and wraps it into a SimpleFeedForwardLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • hidden_dimensions – Size of hidden layers in the feed-forward network (default: [20, 20]).

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • batch_norm – Whether to apply batch normalization.

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
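
A hedged end-to-end sketch that also scores the model with gluonts.evaluation utilities (the hidden layer sizes and data are illustrative):

    from gluonts.dataset.common import ListDataset
    from gluonts.evaluation import Evaluator, make_evaluation_predictions
    from gluonts.torch import SimpleFeedForwardEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = SimpleFeedForwardEstimator(
        prediction_length=24,
        hidden_dimensions=[64, 64],  # illustrative; the default is [20, 20]
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)

    # Evaluate on the last prediction_length points of each series.
    forecast_it, ts_it = make_evaluation_predictions(data, predictor=predictor)
    agg_metrics, item_metrics = Evaluator()(ts_it, forecast_it)
    print(agg_metrics["mean_wQuantileLoss"])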

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.simple_feedforward.lightning_module.SimpleFeedForwardLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.simple_feedforward.lightning_module.SimpleFeedForwardLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
class gluonts.torch.TemporalFusionTransformerEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, quantiles: Optional[List[float]] = None, distr_output: Optional[gluonts.torch.distributions.output.Output] = None, num_heads: int = 4, hidden_dim: int = 32, variable_dim: int = 32, static_dims: Optional[List[int]] = None, dynamic_dims: Optional[List[int]] = None, past_dynamic_dims: Optional[List[int]] = None, static_cardinalities: Optional[List[int]] = None, dynamic_cardinalities: Optional[List[int]] = None, past_dynamic_cardinalities: Optional[List[int]] = None, time_features: Optional[List[Callable[[pandas.core.indexes.period.PeriodIndex], numpy.ndarray]]] = None, lr: float = 0.001, weight_decay: float = 1e-08, dropout_rate: float = 0.1, patience: int = 10, batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

Estimator class to train a Temporal Fusion Transformer (TFT) model, as described in [LAL+21].

TFT internally performs feature selection when making forecasts. For this reason, the dimensions of real-valued features can be grouped together if they correspond to the same variable (e.g., treat weather features as one feature and holiday indicators as another).

For example, if the dataset contains the key “feat_static_real” with shape [batch_size, 3], we can, e.g.:

  • set static_dims = [3] to treat all three dimensions as a single feature;

  • set static_dims = [1, 1, 1] to treat each dimension as a separate feature;

  • set static_dims = [2, 1] to treat the first two dimensions as a single feature and the third as another.

See gluonts.torch.model.tft.TemporalFusionTransformerModel.input_shapes for more details on how the model configuration corresponds to the expected input shapes.

Parameters
  • freq – Frequency of the data to train on and predict.

  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of previous time series values provided as input to the encoder. (default: None, in which case context_length = prediction_length).

  • quantiles – List of quantiles that the model will learn to predict. Defaults to [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9].

  • distr_output – Distribution output to use (default: QuantileOutput).

  • num_heads – Number of attention heads in self-attention layer in the decoder.

  • hidden_dim – Size of the LSTM & transformer hidden states.

  • variable_dim – Size of the feature embeddings.

  • static_dims – Sizes of the real-valued static features.

  • dynamic_dims – Sizes of the real-valued dynamic features that are known in the future.

  • past_dynamic_dims – Sizes of the real-valued dynamic features that are only known in the past.

  • static_cardinalities – Cardinalities of the categorical static features.

  • dynamic_cardinalities – Cardinalities of the categorical dynamic features that are known in the future.

  • past_dynamic_cardinalities – Cardinalities of the categorical dynamic features that are only known in the past.

  • time_features – List of time features, from gluonts.time_feature, to use as dynamic real features in addition to the provided data (default: None, in which case these are automatically determined based on freq).

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay (default: 1e-8).

  • dropout_rate – Dropout regularization parameter (default: 0.1).

  • patience – Patience parameter for learning rate scheduler.

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
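
A hedged sketch of the static_dims grouping described above (the feature values are illustrative):

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import TemporalFusionTransformerEstimator

    # One series carrying three static real features.
    data = ListDataset(
        [
            {
                "start": "2021-01-01 00:00",
                "target": [float(i % 24) for i in range(600)],
                "feat_static_real": [0.5, 1.5, 2.5],
            }
        ],
        freq="H",
    )

    estimator = TemporalFusionTransformerEstimator(
        freq="H",
        prediction_length=24,
        static_dims=[2, 1],  # dims 0-1 form one variable, dim 2 another
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)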

create_lightning_module() gluonts.torch.model.tft.lightning_module.TemporalFusionTransformerLightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module: gluonts.torch.model.tft.lightning_module.TemporalFusionTransformerLightningModule) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.tft.lightning_module.TemporalFusionTransformerLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.tft.lightning_module.TemporalFusionTransformerLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

input_names()[source]#
lead_time: int#
prediction_length: int#
class gluonts.torch.TiDEEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, feat_proj_hidden_dim: Optional[int] = None, encoder_hidden_dim: Optional[int] = None, decoder_hidden_dim: Optional[int] = None, temporal_hidden_dim: Optional[int] = None, distr_hidden_dim: Optional[int] = None, num_layers_encoder: Optional[int] = None, num_layers_decoder: Optional[int] = None, decoder_output_dim: Optional[int] = None, dropout_rate: Optional[float] = None, num_feat_dynamic_proj: Optional[int] = None, num_feat_dynamic_real: int = 0, num_feat_static_real: int = 0, num_feat_static_cat: int = 0, cardinality: Optional[List[int]] = None, embedding_dimension: Optional[List[int]] = None, layer_norm: bool = False, lr: float = 0.001, weight_decay: float = 1e-08, patience: int = 10, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training the TiDE model from the paper https://arxiv.org/abs/2304.08424, extended for probabilistic forecasting.

This class uses the model defined in TiDEModel, and wraps it into a TiDELightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • freq – Frequency of the data to train on and predict.

  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: prediction_length).

  • feat_proj_hidden_dim – Size of the feature projection layer (default: 4).

  • encoder_hidden_dim – Size of the dense encoder layer (default: 4).

  • decoder_hidden_dim – Size of the dense decoder layer (default: 4).

  • temporal_hidden_dim – Size of the temporal decoder layer (default: 4).

  • distr_hidden_dim – Size of the distribution projection layer (default: 4).

  • num_layers_encoder – Number of layers in dense encoder (default: 1).

  • num_layers_decoder – Number of layers in dense decoder (default: 1).

  • decoder_output_dim – Output size of dense decoder (default: 4).

  • dropout_rate – Dropout regularization parameter (default: 0.3).

  • num_feat_dynamic_proj – Output size of feature projection layer (default: 2).

  • num_feat_dynamic_real – Number of dynamic real features in the data (default: 0).

  • num_feat_static_real – Number of static real features in the data (default: 0).

  • num_feat_static_cat – Number of static categorical features in the data (default: 0).

  • cardinality – Number of values of each categorical feature. This must be set if num_feat_static_cat > 0 (default: None).

  • embedding_dimension – Dimension of the embeddings for categorical features (default: [16 for cat in cardinality]).

  • layer_norm – Enable layer normalization or not (default: False).

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • patience – Patience parameter for learning rate scheduler (default: 10).

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • scaling – Which scaling method to use to scale the target values (default: mean).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
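
A hedged sketch with illustrative settings:

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import TiDEEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(600)]}],
        freq="H",
    )

    estimator = TiDEEstimator(
        freq="H",
        prediction_length=24,
        layer_norm=True,  # illustrative; defaults to False
        scaling="mean",   # the default target scaling
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)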

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.tide.lightning_module.TiDELightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.tide.lightning_module.TiDELightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#
class gluonts.torch.WaveNetEstimator(freq: str, prediction_length: int, num_bins: int = 1024, num_residual_channels: int = 24, num_skip_channels: int = 32, dilation_depth: Optional[int] = None, num_stacks: int = 1, temperature: float = 1.0, num_feat_dynamic_real: int = 0, num_feat_static_cat: int = 0, num_feat_static_real: int = 0, cardinality: List[int] = [1], seasonality: Optional[int] = None, embedding_dimension: int = 5, use_log_scale_feature: bool = True, time_features: Optional[List[Callable[[pandas.core.indexes.period.PeriodIndex], numpy.ndarray]]] = None, lr: float = 0.001, weight_decay: float = 1e-08, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, batch_size: int = 32, num_batches_per_epoch: int = 50, num_parallel_samples: int = 100, negative_data: bool = False, trainer_kwargs: Optional[Dict[str, Any]] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator
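
A hedged usage sketch (judging from the constructor signature, the target is discretized into num_bins values; the data and settings below are illustrative assumptions):

    from gluonts.dataset.common import ListDataset
    from gluonts.torch import WaveNetEstimator

    data = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(1000)]}],
        freq="H",
    )

    estimator = WaveNetEstimator(
        freq="H",
        prediction_length=24,
        num_bins=1024,  # resolution of the discretized target (the default)
        trainer_kwargs={"max_epochs": 1},
    )
    predictor = estimator.train(data)
    forecast = next(iter(predictor.predict(data)))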

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#

Subpackages#