gluonts.torch.model.deepar.module module

class gluonts.torch.model.deepar.module.DeepARModel(freq: str, context_length: int, prediction_length: int, num_feat_dynamic_real: int, num_feat_static_real: int, num_feat_static_cat: int, cardinality: List[int], embedding_dimension: Optional[List[int]] = None, num_layers: int = 2, hidden_size: int = 40, dropout_rate: float = 0.1, distr_output: gluonts.torch.modules.distribution_output.DistributionOutput = gluonts.torch.modules.distribution_output.StudentTOutput(), lags_seq: Optional[List[int]] = None, scaling: bool = True, num_parallel_samples: int = 100)[source]

Bases: torch.nn.modules.module.Module
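
Example – a minimal construction sketch. The hyperparameter values are illustrative only, and the small explicit lags_seq is chosen so that the later examples need only short input windows:

    from gluonts.torch.model.deepar.module import DeepARModel

    model = DeepARModel(
        freq="1H",                # frequency string of the target series
        context_length=24,        # conditioning window fed to the RNN
        prediction_length=12,     # forecast horizon
        num_feat_dynamic_real=1,  # width of past_time_feat / future_time_feat
        num_feat_static_real=1,
        num_feat_static_cat=1,
        cardinality=[5],          # one entry per static categorical feature
        embedding_dimension=[3],  # matching embedding size per feature
        lags_seq=[1, 2, 3],       # explicit lags keep the example inputs short
    )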

forward(feat_static_cat: torch.Tensor, feat_static_real: torch.Tensor, past_time_feat: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, future_time_feat: torch.Tensor, num_parallel_samples: Optional[int] = None) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this function, since the instance call takes care of running the registered hooks while calling forward directly silently ignores them.
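
Example – a sampling sketch reusing the model constructed above. The shapes rest on the assumption that the past tensors must cover context_length plus the largest lag (here 24 + 3 = 27); the output shape in the final comment is likewise an assumption:

    import torch

    batch = 4
    past_length = 24 + 3  # context_length + max(lags_seq), assumed lookback

    samples = model(  # call the instance, not .forward, so hooks run
        feat_static_cat=torch.zeros(batch, 1, dtype=torch.long),
        feat_static_real=torch.zeros(batch, 1),
        past_time_feat=torch.zeros(batch, past_length, 1),
        past_target=torch.ones(batch, past_length),
        past_observed_values=torch.ones(batch, past_length),
        future_time_feat=torch.zeros(batch, 12, 1),
    )
    print(samples.shape)  # expected: (batch, num_parallel_samples, prediction_length)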

output_distribution(params, scale=None, trailing_n=None) → torch.distributions.distribution.Distribution[source]

Construct the output distribution from the given projected parameters, optionally rescaled by scale. If trailing_n is given, only the parameters for the last trailing_n time steps are used.
unroll_lagged_rnn(feat_static_cat: torch.Tensor, feat_static_real: torch.Tensor, past_time_feat: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, future_time_feat: Optional[torch.Tensor] = None, future_target: Optional[torch.Tensor] = None) → Tuple[Tuple[torch.Tensor, ...], torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Unroll the RNN over the lagged target and the associated features, and return the projected distribution parameters for each unrolled time step, together with the scale of the target and the final RNN state.
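
Example – a training-loss sketch combining unroll_lagged_rnn and output_distribution, reusing the model and shapes from the sampling sketch above. It follows the return annotation (distribution parameters, scale, RNN state); slicing the trailing prediction_length steps via trailing_n is one plausible usage, not necessarily what the library's training loop does:

    import torch

    future_target = torch.ones(batch, 12)

    params, scale, state = model.unroll_lagged_rnn(
        feat_static_cat=torch.zeros(batch, 1, dtype=torch.long),
        feat_static_real=torch.zeros(batch, 1),
        past_time_feat=torch.zeros(batch, past_length, 1),
        past_target=torch.ones(batch, past_length),
        past_observed_values=torch.ones(batch, past_length),
        future_time_feat=torch.zeros(batch, 12, 1),
        future_target=future_target,
    )

    # keep only the parameters for the last prediction_length steps
    distr = model.output_distribution(params, scale=scale, trailing_n=12)
    loss = -distr.log_prob(future_target).mean()  # negative log-likelihood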
class gluonts.torch.model.deepar.module.LaggedLSTM(input_size: int, features_size: int, num_layers: int = 2, hidden_size: int = 40, dropout_rate: float = 0.1, lags_seq: Optional[List[int]] = None)[source]

Bases: torch.nn.modules.module.Module

forward(prior_input: torch.Tensor, input: torch.Tensor, features: Optional[torch.Tensor] = None, state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this function, since the instance call takes care of running the registered hooks while calling forward directly silently ignores them.
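
Example – a calling sketch for LaggedLSTM. The 3-D (N, T, input_size) input shapes are an assumption inferred from get_lagged_subsequences below, which documents sequences as (N, T, C); since the return value of forward is not annotated above, it is left unpacked here:

    import torch
    from gluonts.torch.model.deepar.module import LaggedLSTM

    rnn = LaggedLSTM(input_size=1, features_size=2, lags_seq=[1, 2, 3])

    prior = torch.randn(4, 3, 1)     # earlier steps; must cover max(lags_seq)
    current = torch.randn(4, 10, 1)  # the window the LSTM is unrolled over
    feats = torch.randn(4, 10, 2)    # per-step covariates, width = features_size

    result = rnn(prior, current, features=feats)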

get_lagged_subsequences(sequence: torch.Tensor, subsequences_length: int) → torch.Tensor[source]

Returns lagged subsequences of a given sequence.

Parameters
  • sequence (Tensor) – the sequence from which lagged subsequences should be extracted. Shape: (N, T, C).

  • subsequences_length (int) – length of the subsequences to be extracted.

Returns

lagged – a tensor of shape (N, S, C, I), where S = subsequences_length and I = len(self.lags_seq), containing the lagged subsequences. Specifically, lagged[i, j, :, k] = sequence[i, -self.lags_seq[k] - S + j, :].

Return type

Tensor
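
Example – a standalone re-implementation sketch of the indexing formula above, with the lag indices passed in explicitly (in the module they come from lags_seq):

    from typing import List

    import torch

    def lagged_subsequences(
        sequence: torch.Tensor, S: int, indices: List[int]
    ) -> torch.Tensor:
        # Extract windows so that
        # lagged[i, j, :, k] = sequence[i, -indices[k] - S + j, :].
        lagged = []
        for lag in indices:
            begin = -lag - S
            end = -lag if lag > 0 else None  # lag 0 ends at the last step
            lagged.append(sequence[:, begin:end, ...])
        return torch.stack(lagged, dim=-1)  # (N, S, C, I)

    x = torch.arange(10.0).reshape(1, 10, 1)   # N=1, T=10, C=1
    out = lagged_subsequences(x, S=3, indices=[1, 2])
    print(out.shape)        # torch.Size([1, 3, 1, 2])
    print(out[0, :, 0, 0])  # lag 1: tensor([6., 7., 8.])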
