gluonts.model.deepstate package

class gluonts.model.deepstate.DeepStateEstimator(freq: str, prediction_length: int, cardinality: List[int], add_trend: bool = False, past_length: Optional[int] = None, num_periods_to_train: int = 4, trainer: gluonts.mx.trainer._base.Trainer = Trainer(batch_size=None, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=False, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_layers: int = 2, num_cells: int = 40, cell_type: str = 'lstm', num_parallel_samples: int = 100, dropout_rate: float = 0.1, use_feat_dynamic_real: bool = False, use_feat_static_cat: bool = True, embedding_dimension: Optional[List[int]] = None, issm: Optional[gluonts.model.deepstate.issm.ISSM] = None, scaling: bool = True, time_features: Optional[List[gluonts.time_feature._base.TimeFeature]] = None, noise_std_bounds: ParameterBounds = ParameterBounds(lower=1e-06, upper=1.0), prior_cov_bounds: ParameterBounds = ParameterBounds(lower=1e-06, upper=1.0), innovation_bounds: ParameterBounds = ParameterBounds(lower=1e-06, upper=0.01), batch_size: int = 32)[source]


Construct a DeepState estimator.

This implements the deep state space model described in [RSG+18].
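As a sketch for orientation, the underlying linear-Gaussian state space model in [RSG+18] has the following form (notation follows the paper, not GluonTS identifiers; the time-varying parameters are produced by the RNN from the input features):

```latex
l_t = F_t \, l_{t-1} + g_t \, \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, 1)
z_t = a_t^\top l_{t-1} + b_t + \sigma_t \, \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, 1)
```

Here l_t is the latent state (whose components correspond to level, trend, and seasonality; cf. add_trend and issm), g_t scales the innovations (cf. innovation_bounds), and sigma_t is the observation noise standard deviation (cf. noise_std_bounds).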

  • freq – Frequency of the data to train on and predict

  • prediction_length – Length of the prediction horizon

  • cardinality – Number of values taken by each categorical feature. This must be provided unless use_feat_static_cat is explicitly set to False (which is NOT recommended).

  • add_trend – Flag indicating whether to include a trend component in the state space model

  • past_length – Length of the training time series, i.e., the number of steps the RNN is unrolled for before computing predictions. Set this to (at most) the length of the shortest time series in the dataset. (default: None, in which case the training length is set such that at least num_periods_to_train periods are included in training. See num_periods_to_train)

  • num_periods_to_train – (Used only when past_length is not set) Number of periods to include in the training time series (default: 4). Here a period corresponds to the longest cycle one can expect given the granularity of the time series.

  • trainer – Trainer object to be used (default: Trainer())

  • num_layers – Number of RNN layers (default: 2)

  • num_cells – Number of RNN cells for each layer (default: 40)

  • cell_type – Type of recurrent cells to use (available: ‘lstm’ or ‘gru’; default: ‘lstm’)

  • num_parallel_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100).

  • dropout_rate – Dropout regularization parameter (default: 0.1)

  • use_feat_dynamic_real – Whether to use the feat_dynamic_real field from the data (default: False)

  • use_feat_static_cat – Whether to use the feat_static_cat field from the data (default: True)

  • embedding_dimension – Dimension of the embeddings for categorical features (default: [min(50, (cat+1)//2) for cat in cardinality])

  • scaling – Whether to automatically scale the target values (default: True)

  • time_features – Time features to use as inputs of the RNN (default: None, in which case these are automatically determined based on freq)

  • noise_std_bounds – Lower and upper bounds for the standard deviation of the observation noise

  • prior_cov_bounds – Lower and upper bounds for the diagonal of the prior covariance matrix

  • innovation_bounds – Lower and upper bounds for the standard deviation of the innovations

  • batch_size – The size of the batches to be used during training and prediction.
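The default for embedding_dimension above can be illustrated in plain Python. This is a sketch of the documented formula only, not GluonTS internals, and the example cardinalities are made up:

```python
# Documented default when embedding_dimension is None:
#   [min(50, (cat + 1) // 2) for cat in cardinality]
cardinality = [3, 10, 100]  # e.g., three categorical features (illustrative values)

embedding_dimension = [min(50, (cat + 1) // 2) for cat in cardinality]
print(embedding_dimension)  # [2, 5, 50] -- capped at 50 for high-cardinality features
```

Small categorical features thus get small embeddings, while the dimension is capped at 50 regardless of cardinality.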

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

gluonts.model.predictor.Predictor


create_training_data_loader(data: Iterable[Dict[str, Any]], **kwargs) → gluonts.dataset.loader.DataLoader[source]
create_training_network() → gluonts.model.deepstate._network.DeepStateTrainingNetwork[source]

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

gluonts.model.deepstate._network.DeepStateTrainingNetwork


create_transformation() → gluonts.transform._base.Transformation[source]

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

gluonts.transform._base.Transformation
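To illustrate what "applied entry-wise" means, here is a minimal Python sketch with hypothetical names (not GluonTS code): each dataset entry is a dict with a "target" array, and the transformation maps every entry independently of the others.

```python
# Hypothetical entry-wise transformation (names are illustrative, not GluonTS API).
def add_observed_values(entry):
    # Mark every target step as observed; real transformations would also add
    # time features, split training windows, etc.
    entry = dict(entry)  # do not mutate the source dataset
    entry["observed_values"] = [1.0] * len(entry["target"])
    return entry

dataset = [{"target": [1.0, 2.0, 3.0]}, {"target": [5.0]}]
transformed = [add_observed_values(e) for e in dataset]
print(transformed[0]["observed_values"])  # [1.0, 1.0, 1.0]
```

Because each entry is processed on its own, the same transformation can be reused unchanged at training and inference time.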


create_validation_data_loader(data: Iterable[Dict[str, Any]], **kwargs) → gluonts.dataset.loader.DataLoader[source]
freq = None
lead_time = None
prediction_length = None