gluonts.model.simple_feedforward package

class gluonts.model.simple_feedforward.SimpleFeedForwardEstimator(freq: str, prediction_length: int, sampling: bool = True, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(batch_size=None, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_hidden_dimensions: Optional[List[int]] = None, context_length: Optional[int] = None, distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput = gluonts.mx.distribution.student_t.StudentTOutput(), imputation_method: Optional[gluonts.transform.feature.MissingValueImputation] = None, batch_normalization: bool = False, mean_scaling: bool = True, num_parallel_samples: int = 100, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, batch_size: int = 32)[source]


SimpleFeedForwardEstimator shows how to build a simple MLP model that predicts the next target time steps given the previous ones.

Since we want to define a Gluon model trainable by SGD, we inherit from the parent class GluonEstimator, which handles most of the logic for fitting a neural network.

We thus only have to define:

  1. How the data is transformed before being fed to our model:

    def create_transformation(self) -> Transformation
  2. How the training happens:

    def create_training_network(self) -> HybridBlock
  3. How predictions can be made for a batch, given a trained network:

    def create_predictor(
         transformation: Transformation,
         trained_net: HybridBlock,
    ) -> Predictor
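The division of labour above can be illustrated with a minimal, framework-free sketch of the network that `create_training_network` builds: an MLP mapping the last `context_length` observations to `prediction_length` outputs. All names and weights here are illustrative; the real implementation is an MXNet Gluon HybridBlock parameterizing a distribution, not point forecasts.

```python
import random

def init_layer(n_in, n_out, rng):
    # Small random weights; GluonTS defaults to Xavier initialization.
    return [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(x, layers):
    # Plain MLP forward pass: ReLU on hidden layers, linear output layer.
    for i, weights in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) for row in weights]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]
    return x

rng = random.Random(0)
context_length, prediction_length = 8, 2
hidden = [40, 40]  # num_hidden_dimensions default
dims = [context_length] + hidden + [prediction_length]
layers = [init_layer(dims[i], dims[i + 1], rng) for i in range(len(dims) - 1)]

forecast = forward([1.0] * context_length, layers)
print(len(forecast))  # one value per future time step → 2
```

In the actual estimator the final layer emits distribution parameters (by default for a Student's t distribution via `distr_output`), from which `num_parallel_samples` sample paths are drawn at inference time.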
Parameters

  • freq – Time granularity of the data

  • prediction_length – Length of the prediction horizon

  • trainer – Trainer object to be used (default: Trainer())

  • num_hidden_dimensions – Number of hidden nodes in each layer (default: [40, 40])

  • context_length – Number of time units that condition the predictions (default: None, in which case context_length = prediction_length)

  • distr_output – Distribution to fit (default: StudentTOutput())

  • batch_normalization – Whether to use batch normalization (default: False)

  • mean_scaling – Scale the network input by the data mean and the network output by its inverse (default: True)

  • num_parallel_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100)

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.

  • batch_size – The size of the batches to be used during training and prediction.
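As a rough illustration of the mean_scaling option (a sketch of the idea, not the exact GluonTS computation): the network input is divided by the mean absolute value of the context window, and values produced on the scaled level are multiplied back by that scale.

```python
def mean_scale(context, minimum_scale=1e-10):
    """Scale factor: mean absolute value of the context window,
    floored to avoid division by zero on an all-zero series."""
    return max(sum(abs(v) for v in context) / len(context), minimum_scale)

context = [10.0, 12.0, 8.0, 10.0]
scale = mean_scale(context)                  # 10.0
scaled_input = [v / scale for v in context]  # what the network sees
network_output = 1.05                        # e.g. a sample on the scaled level
forecast = network_output * scale            # back on the original scale
print(scale, forecast)  # 10.0 10.5
```

This keeps series of very different magnitudes on a comparable level for the network, which is why the option defaults to True.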

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.


Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.common.Dataset, **kwargs) → Iterable[Dict[str, Any]][source]
create_training_network() → mxnet.gluon.block.HybridBlock[source]

Create and return the network used for training (i.e., computing the loss).


Returns

The network that computes the loss given input data.

Return type

HybridBlock


create_transformation() → gluonts.transform._base.Transformation[source]

Create and return the transformation needed for training and inference.


Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation


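The "entry-wise" nature of this transformation can be sketched with a hypothetical stand-in (not the gluonts.transform API): each dataset entry is a dict, and the transformation maps entries to entries, for example imputing missing target values and recording which values were actually observed.

```python
import math

def transform_entry(entry, dummy_value=0.0):
    """Illustrative entry-wise transform: crude missing-value imputation
    plus an observed-values indicator feature. Hypothetical helper."""
    target = entry["target"]
    observed = [0.0 if math.isnan(v) else 1.0 for v in target]
    imputed = [dummy_value if math.isnan(v) else v for v in target]
    return {**entry, "target": imputed, "observed_values": observed}

entry = {"start": "2020-01-01", "target": [1.0, float("nan"), 3.0]}
out = transform_entry(entry)
print(out["target"], out["observed_values"])  # [1.0, 0.0, 3.0] [1.0, 0.0, 1.0]
```

In GluonTS the same role is played by composed Transformation objects (e.g. the imputation_method parameter above, plus instance splitting driven by train_sampler and validation_sampler), applied lazily to every dataset entry.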
create_validation_data_loader(data: gluonts.dataset.common.Dataset, **kwargs) → Iterable[Dict[str, Any]][source]
freq = None
lead_time = None
prediction_length = None