gluonts.mx.model.tpp.deeptpp package#

class gluonts.mx.model.tpp.deeptpp.DeepTPPEstimator(prediction_interval_length: float, context_interval_length: float, num_marks: int, time_distr_output: TPPDistributionOutput = gluonts.mx.model.tpp.distribution.weibull.WeibullOutput(), embedding_dim: int = 5, trainer: Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=False, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), num_hidden_dimensions: int = 10, num_parallel_samples: int = 100, num_training_instances: int = 100, freq: str = 'H', batch_size: int = 32)[source]#

Bases: GluonEstimator

DeepTPP is a multivariate point process model based on an RNN.

After each event \((\tau_i, m_i)\), we feed the inter-arrival time \(\tau_i\) and the mark \(m_i\) into the RNN. The state \(h_i\) of the RNN represents the history embedding. We use \(h_i\) to parametrize the distribution over the next inter-arrival time \(p(\tau_{i+1} | h_i)\) and the distribution over the next mark \(p(m_{i+1} | h_i)\). The distribution over the marks is always categorical, but different choices are possible for the distribution over inter-arrival times; see gluonts.mx.model.tpp.distribution.
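Equivalently, conditioned on the history embedding \(h_i\), the next inter-arrival time and the next mark are modeled independently; the following display is just a restatement of the factorization described above:

\[
p(\tau_{i+1}, m_{i+1} \mid h_i) = p(\tau_{i+1} \mid h_i)\, p(m_{i+1} \mid h_i).
\]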

The model is a generalization of the approaches described in [DDT+16], [TWJ19] and [SBG20].

References

[DDT+16] Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.

[TWJ19] Ali Caner Türkmen, Yuyang Wang, and Tim Januschowski. Intermittent demand forecasting with deep renewal processes. arXiv preprint arXiv:1911.10416, 2019.

[SBG20] Oleksandr Shchur, Marin Biloš, and Stephan Günnemann. Intensity-free learning of temporal point processes. In International Conference on Learning Representations (ICLR), 2020.

Parameters:
  • prediction_interval_length – The length of the interval (in continuous time) over which the estimator will predict at prediction time.

  • context_interval_length – The length of the intervals (in continuous time) on which the estimator is trained.

  • num_marks – The number of marks (distinct processes), i.e., the cardinality of the mark set.

  • time_distr_output – TPPDistributionOutput for the distribution over the inter-arrival times. See gluonts.mx.model.tpp.distribution for possible choices.

  • embedding_dim – The dimension of vector embeddings for marks (used as input to the GRU).

  • trainer – A gluonts.mx.trainer.Trainer object which will be used to train the estimator. Note that Trainer(hybridize=False) must be set, as DeepTPPEstimator currently does not support hybridization (see the sketch after this parameter list).

  • num_hidden_dimensions – Number of hidden units in the GRU network.

  • num_parallel_samples – The number of sample paths returned by the learned Predictor.

  • num_training_instances – The number of training instances to be sampled from each entry in the data set provided during training.

  • freq – Similar to the freq of discrete-time models; specifies the time unit in which inter-arrival times are expressed.

  • batch_size – The size of the batches to be used for training and prediction.
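A minimal construction sketch (hyperparameter values are illustrative only, not recommendations); note hybridize=False in the Trainer, as required above:

    from gluonts.mx.model.tpp.deeptpp import DeepTPPEstimator
    from gluonts.mx.trainer import Trainer

    # Illustrative hyperparameters; hybridize=False is required by DeepTPPEstimator.
    estimator = DeepTPPEstimator(
        prediction_interval_length=24.0,  # predict one day ahead, since freq="H"
        context_interval_length=72.0,     # condition on three days of history
        num_marks=5,                      # cardinality of the mark set
        embedding_dim=5,
        num_hidden_dimensions=10,
        freq="H",
        trainer=Trainer(epochs=100, hybridize=False),
        batch_size=32,
    )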

create_predictor(transformation: Transformation, trained_network: DeepTPPTrainingNetwork) → Predictor[source]#

Create and return a predictor object.

Parameters:
  • transformation – Transformation to be applied to data before it goes into the model.

  • trained_network – The trained DeepTPPTrainingNetwork whose parameters will be used by the predictor.

Returns:

A predictor wrapping a HybridBlock used for inference.

Return type:

Predictor
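In the usual GluonTS workflow this method is invoked internally by train(), so a predictor is typically obtained and used as in the sketch below; train_data is a placeholder for a user-supplied dataset of marked event sequences:

    # train_data: a gluonts Dataset of marked event sequences (assumed given).
    predictor = estimator.train(train_data)  # calls create_predictor internally
    forecasts = list(predictor.predict(train_data))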

create_training_data_loader(data: Dataset, **kwargs) → Iterable[Dict[str, Any]][source]#

Create a data loader for training purposes.

Parameters:

data – Dataset from which to create the data loader.

Returns:

The data loader, i.e., an iterable over batches of data.

Return type:

DataLoader
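The exact batch layout is an implementation detail, but the training network's inputs (see hybrid_forward below) suggest "target" and "valid_length" entries. A hedged inspection sketch follows; depending on the GluonTS version, the loader may expect the dataset to have been pre-processed with create_transformation() first:

    # Assumption: the raw dataset is accepted here; some versions require
    # applying estimator.create_transformation() to train_data beforehand.
    loader = estimator.create_training_data_loader(train_data)
    batch = next(iter(loader))
    print(list(batch.keys()))  # assumed to include "target" and "valid_length"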

create_training_network() → HybridBlock[source]#

Create and return the network used for training (i.e., computing the loss).

Returns:

The network that computes the loss given input data.

Return type:

HybridBlock
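To make "computing the loss" concrete, the following sketch pushes one batch (from the previous sketch) through a freshly created training network; initialize() is ordinary MXNet Gluon parameter initialization:

    net = estimator.create_training_network()
    net.initialize()
    # The network maps (target, valid_length) to a per-item loss;
    # shapes follow the hybrid_forward documentation below.
    loss = net(batch["target"], batch["valid_length"])
    print(loss.shape)  # (batch_size,)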

create_transformation() → Transformation[source]#

Create and return the transformation needed for training and inference.

Returns:

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type:

Transformation

create_validation_data_loader(data: Dataset, **kwargs) → Iterable[Dict[str, Any]][source]#

Create a data loader for validation purposes.

Parameters:

data – Dataset from which to create the data loader.

Returns:

The data loader, i.e., an iterable over batches of data.

Return type:

DataLoader

class gluonts.mx.model.tpp.deeptpp.DeepTPPPredictionNetwork(prediction_interval_length: float, num_parallel_samples: int = 100, *args, **kwargs)[source]#

Bases: DeepTPPNetworkBase

hybrid_forward(F, past_target: Union[NDArray, Symbol], past_valid_length: Union[NDArray, Symbol]) → Tuple[Union[NDArray, Symbol], Union[NDArray, Symbol]][source]#

Draw forward samples from the model. At each step, we sample an inter-event time and feed it into the RNN to obtain the parameters of the distribution over the next inter-event time.

Parameters:
  • F – MXNet backend.

  • past_target – Tensor with past observations. Shape: (batch_size, context_length, target_dim). Has to comply with self.context_interval_length.

  • past_valid_length – The valid_length or number of valid entries in the past_target Tensor. Shape: (batch_size,)

Returns:

  • sampled_target (Tensor) – Predicted inter-event times and marks. Shape: (samples, batch_size, max_prediction_length, target_dim).

  • sampled_valid_length (Tensor) – The number of valid entries in the time axis of each sample. Shape: (samples, batch_size).
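Because each sample path contains a different number of events within the prediction interval, sampled_valid_length is needed to mask the padded tail of sampled_target. A NumPy masking sketch, assuming both outputs have been converted to NumPy arrays (e.g. via .asnumpy()):

    import numpy as np

    # sampled_target: (samples, batch_size, max_prediction_length, target_dim)
    # sampled_valid_length: (samples, batch_size)
    samples, batch_size, max_len, _ = sampled_target.shape
    valid = np.arange(max_len)[None, None, :] < sampled_valid_length[:, :, None]
    masked = sampled_target * valid[..., None]  # zero out padded entries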

class gluonts.mx.model.tpp.deeptpp.DeepTPPTrainingNetwork(num_marks: int, interval_length: float, time_distr_output: TPPDistributionOutput = gluonts.mx.model.tpp.distribution.weibull.WeibullOutput(), embedding_dim: int = 5, num_hidden_dimensions: int = 10, output_scale: Optional[Union[NDArray, Symbol]] = None, apply_log_to_rnn_inputs: bool = True, **kwargs)[source]#

Bases: DeepTPPNetworkBase

hybrid_forward(F, target: Union[NDArray, Symbol], valid_length: Union[NDArray, Symbol], **kwargs) → Union[NDArray, Symbol][source]#

Computes the negative log likelihood loss for the given sequences.

As the model is trained on context (resp. prediction) “intervals” of fixed continuous-time length, as opposed to fixed-length “sequences”, the number of data points available varies across observations. To account for this, data is made available to the training network as a “ragged” tensor, and the number of valid entries in each sequence is provided in a separate valid_length variable.
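For background: up to implementation details, this is the standard marked-TPP negative log-likelihood of \(N\) events observed on an interval \([0, T]\), including a survival term for the event-free stretch after the last event time \(t_N\) (cf. [SBG20]). It is stated here as the standard form, not as a line-by-line description of the code:

\[
\mathcal{L} = -\sum_{i=1}^{N} \big[ \log p(\tau_i \mid h_{i-1}) + \log p(m_i \mid h_{i-1}) \big] - \log S(T - t_N \mid h_N),
\]

where \(S(\cdot \mid h_N)\) is the survival function of the inter-arrival time distribution.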

Parameters:
  • F – MXNet backend.

  • target – Tensor with observations. Shape: (batch_size, past_max_sequence_length, target_dim).

  • valid_length – The number of valid entries in the target Tensor. Shape: (batch_size,)

Returns:

Loss tensor. Shape: (batch_size,).

Return type:

Tensor