gluonts.model.tpp.deeptpp package

class gluonts.model.tpp.deeptpp.DeepTPPEstimator(prediction_interval_length: float, context_interval_length: float, num_marks: int, time_distr_output: gluonts.model.tpp.distribution.base.TPPDistributionOutput = gluonts.model.tpp.distribution.weibull.WeibullOutput(), embedding_dim: int = 5, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(avg_strategy=gluonts.mx.trainer.model_averaging.SelectNBestMean(maximize=False, metric="score", num_models=1), batch_size=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=False, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, post_initialize_cb=None, weight_decay=1e-08), num_hidden_dimensions: int = 10, num_parallel_samples: int = 100, num_training_instances: int = 100, freq: str = 'H', batch_size: int = 32)[source]

Bases: gluonts.mx.model.estimator.GluonEstimator

DeepTPP is a multivariate point process model based on an RNN.

After each event \((\tau_i, m_i)\), we feed the inter-arrival time \(\tau_i\) and the mark \(m_i\) into the RNN. The state \(h_i\) of the RNN represents the history embedding. We use \(h_i\) to parametrize the distribution over the next inter-arrival time \(p(\tau_{i+1} | h_i)\) and the distribution over the next mark \(p(m_{i+1} | h_i)\). The distribution over the marks is always categorical, but different choices are possible for the distribution over inter-arrival times; see gluonts.model.tpp.distribution.
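
For intuition, this parametrization can be sketched in a few lines of MXNet Gluon. This is an illustrative sketch, not the actual DeepTPP network code: the layer choices, the two-parameter time distribution, and the tensor shapes below are assumptions made for the example.

import mxnet as mx
from mxnet.gluon import nn, rnn

num_marks, embedding_dim, num_hidden = 5, 5, 10

mark_embedding = nn.Embedding(num_marks, embedding_dim)  # embeds the mark m_i
gru_cell = rnn.GRUCell(num_hidden)                       # maintains the history embedding h_i
time_proj = nn.Dense(2)          # h_i -> parameters of p(tau_{i+1} | h_i), e.g. a two-parameter Weibull
mark_proj = nn.Dense(num_marks)  # h_i -> logits of the categorical p(m_{i+1} | h_i)

for block in (mark_embedding, gru_cell, time_proj, mark_proj):
    block.initialize()

batch_size = 4
state = gru_cell.begin_state(batch_size=batch_size)

# One recurrence step: feed the (log) inter-arrival time and the embedded mark into the GRU.
tau = mx.nd.random.uniform(low=0.1, high=1.0, shape=(batch_size, 1))
mark = mx.nd.array([0, 1, 2, 3])
step_input = mx.nd.concat(mx.nd.log(tau), mark_embedding(mark), dim=1)
h, state = gru_cell(step_input, state)

time_params = time_proj(h)   # parametrizes the next inter-arrival time distribution
mark_logits = mark_proj(h)   # parametrizes the next mark distribution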

The model is a generalization of the approaches described in [DDT+16], [TWJ19] and [SBG20].

References

[DDT+16] Du, N., Dai, H., Trivedi, R., Upadhyay, U., Gomez-Rodriguez, M., & Song, L. (2016). "Recurrent Marked Temporal Point Processes: Embedding Event History to Vector." In KDD 2016.

[TWJ19] Türkmen, A. C., Wang, Y., & Januschowski, T. (2019). "Intermittent Demand Forecasting with Deep Renewal Processes."

[SBG20] Shchur, O., Biloš, M., & Günnemann, S. (2020). "Intensity-Free Learning of Temporal Point Processes." In ICLR 2020.

Parameters
  • prediction_interval_length – The length of the interval (in continuous time) that the estimator will predict at prediction time.

  • context_interval_length – The length of intervals (in continuous time) that the estimator will be trained with.

  • num_marks – The number of marks (distinct processes), i.e., the cardinality of the mark set.

  • time_distr_output – TPPDistributionOutput for the distribution over the inter-arrival times. See gluonts.model.tpp.distribution for possible choices.

  • embedding_dim – The dimension of vector embeddings for marks (used as input to the GRU).

  • trainer – gluonts.mx.trainer.Trainer object which will be used to train the estimator. Note that Trainer(hybridize=False) must be used, as DeepTPPEstimator currently does not support hybridization.

  • num_hidden_dimensions – Number of hidden units in the GRU network.

  • num_parallel_samples – The number of sample paths returned by the learned Predictor.

  • num_training_instances – The number of training instances to be sampled from each entry in the data set provided during training.

  • freq – Similar to the freq of discrete-time models, specifies the time unit by which inter-arrival times are given.

  • batch_size – The size of the batches to be used during training and prediction.
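
A minimal usage sketch showing how these parameters fit together. The hyperparameter values are illustrative only, and train_dataset is assumed to be a GluonTS dataset already prepared in the continuous-time event-stream format expected by TPP models (inter-arrival times and integer marks over an observation interval).

from gluonts.model.tpp.deeptpp import DeepTPPEstimator
from gluonts.mx.trainer import Trainer

estimator = DeepTPPEstimator(
    prediction_interval_length=24.0,  # predict events over the next 24 time units
    context_interval_length=72.0,     # condition on the preceding 72 time units
    num_marks=5,                      # cardinality of the mark set
    freq="H",                         # inter-arrival times are measured in hours
    trainer=Trainer(epochs=10, hybridize=False),  # hybridization is not supported
)

# train_dataset is assumed to be prepared elsewhere in the TPP event-stream format.
predictor = estimator.train(training_data=train_dataset)
forecasts = list(predictor.predict(train_dataset))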

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: gluonts.model.tpp.deeptpp._network.DeepTPPTrainingNetwork) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_data_loader(data: Iterable[Dict[str, Any]], **kwargs) → gluonts.dataset.loader.DataLoader[source]

create_training_network() → mxnet.gluon.block.HybridBlock[source]

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() → gluonts.transform._base.Transformation[source]

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: Iterable[Dict[str, Any]], **kwargs) → gluonts.dataset.loader.DataLoader[source]
freq = None
lead_time = None
prediction_length = None

class gluonts.model.tpp.deeptpp.DeepTPPTrainingNetwork(num_marks: int, interval_length: float, time_distr_output: gluonts.model.tpp.distribution.base.TPPDistributionOutput = gluonts.model.tpp.distribution.weibull.WeibullOutput(), embedding_dim: int = 5, num_hidden_dimensions: int = 10, output_scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, apply_log_to_rnn_inputs: bool = True, **kwargs)[source]

Bases: gluonts.model.tpp.deeptpp._network.DeepTPPNetworkBase

hybrid_forward(F, target: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], valid_length: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], **kwargs) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]

Compute the negative log-likelihood loss for the given sequences.

Because the model is trained on intervals of fixed continuous-time length (context intervals for the past, prediction intervals for the future) rather than on fixed-length sequences, the number of events available varies across observations. To account for this, data is made available to the training network as a “ragged” tensor: the number of valid entries in each sequence is provided in a separate variable, xxx_valid_length.

Parameters
  • F – MXNet backend.

  • target – Tensor with observations. Shape: (batch_size, past_max_sequence_length, target_dim).

  • valid_length – The number of valid entries in the target Tensor. Shape: (batch_size,)

Returns

Loss tensor. Shape: (batch_size,).

Return type

Tensor
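
The ragged representation and valid_length masking described above can be illustrated with a small, self-contained sketch in plain MXNet (this is not DeepTPP internals, and the per-event loss values are dummy stand-ins).

import mxnet as mx

# Three event sequences of different lengths, zero-padded to a common length.
sequences = [[0.3, 1.2, 0.7], [0.5], [0.9, 0.1, 0.4, 2.0]]
max_len = max(len(s) for s in sequences)
padded = mx.nd.array([s + [0.0] * (max_len - len(s)) for s in sequences])  # (batch_size, max_len)
valid_length = mx.nd.array([len(s) for s in sequences])                    # (batch_size,)

# Mask out padded positions so they do not contribute to the loss, then sum over time.
per_event_loss = padded ** 2  # dummy stand-in for a per-event NLL term
mask = mx.nd.arange(max_len).expand_dims(0) < valid_length.expand_dims(1)  # (batch_size, max_len)
loss = (per_event_loss * mask).sum(axis=1)                                 # (batch_size,)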

class gluonts.model.tpp.deeptpp.DeepTPPPredictionNetwork(prediction_interval_length: float, num_parallel_samples: int = 100, *args, **kwargs)[source]

Bases: gluonts.model.tpp.deeptpp._network.DeepTPPNetworkBase

hybrid_forward(F, past_target: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], past_valid_length: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Tuple[Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]][source]

Draw forward samples from the model. At each step, we sample an inter-event time and feed it back into the RNN to obtain the parameters of the distribution over the next inter-event time.

Parameters
  • F – MXNet backend.

  • past_target – Tensor with past observations. Shape: (batch_size, context_length, target_dim). Has to comply with self.context_interval_length.

  • past_valid_length – The number of valid entries in the past_target Tensor. Shape: (batch_size,)

Returns

  • sampled_target (Tensor) – Predicted inter-event times and marks. Shape: (samples, batch_size, max_prediction_length, target_dim).

  • sampled_valid_length (Tensor) – The number of valid entries in the time axis of each sample. Shape: (samples, batch_size).
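
The sampling procedure can be sketched conceptually in plain Python. The stand-in sampling functions below are assumptions that replace the learned, state-dependent distributions, so the example is runnable on its own but is not the actual hybrid_forward logic.

import numpy as np

rng = np.random.default_rng(0)
prediction_interval_length = 24.0
num_marks = 5

def sample_next_time(state):      # stand-in for sampling from p(tau_{i+1} | h_i)
    return float(rng.weibull(1.5))

def sample_next_mark(state):      # stand-in for sampling from p(m_{i+1} | h_i)
    return int(rng.integers(num_marks))

def update_state(state, tau, mark):  # stand-in for the RNN update
    return state

times, marks, elapsed, state = [], [], 0.0, None
while True:
    tau = sample_next_time(state)
    if elapsed + tau > prediction_interval_length:
        break  # stop once the sampled event would fall outside the prediction interval
    mark = sample_next_mark(state)
    elapsed += tau
    times.append(tau)
    marks.append(mark)
    state = update_state(state, tau, mark)

# len(times) plays the role of sampled_valid_length for this sample path; shorter
# paths are padded to max_prediction_length when stacked into the output tensor.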