gluonts.torch.model.wavenet package#
- class gluonts.torch.model.wavenet.WaveNet(pred_length: int, bin_values: List[float], num_residual_channels: int, num_skip_channels: int, dilation_depth: int, num_stacks: int, num_feat_dynamic_real: int = 1, num_feat_static_real: int = 1, cardinality: List[int] = [1], embedding_dimension: int = 5, num_parallel_samples: int = 100, temperature: float = 1.0, use_log_scale_feature: bool = True)[source]#
Bases:
torch.nn.modules.module.Module
The WaveNet model.
- Parameters
pred_length – Prediction length.
bin_values – List of bin values.
num_residual_channels – Number of residual channels.
num_skip_channels – Number of skip channels.
dilation_depth – The depth of the dilated convolution.
num_stacks – The number of dilation stacks.
num_feat_dynamic_real – The number of dynamic real features, by default 1
num_feat_static_real – The number of static real features, by default 1
cardinality – The cardinalities of static categorical features, by default [1]
embedding_dimension – The dimension of the embeddings for categorical features, by default 5
num_parallel_samples – The number of parallel samples to generate during inference. This parameter is only used in inference mode, by default 100
temperature – Temperature used for sampling from the output softmax distribution, by default 1.0
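A minimal instantiation sketch, assuming only the constructor signature shown above; the bin values, channel counts, and depth below are illustrative placeholders rather than recommended settings:

```python
import torch

from gluonts.torch.model.wavenet import WaveNet

# Sketch only: 1024 evenly spaced bins and small channel counts, chosen for illustration.
model = WaveNet(
    pred_length=24,
    bin_values=torch.linspace(-10.0, 10.0, steps=1024).tolist(),
    num_residual_channels=24,
    num_skip_channels=32,
    dilation_depth=5,
    num_stacks=1,
)
```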
- base_net(inputs: torch.Tensor, queues: Optional[List[torch.Tensor]] = None) Tuple[torch.Tensor, List[torch.Tensor]] [source]#
Forward pass through the WaveNet.
- Parameters
inputs – A tensor of inputs. Shape: (batch_size, num_residual_channels, sequence_length)
queues – Convolutional queues containing past computations. This speeds up predictions and must be provided during prediction mode. See [Paine et al., 2016] for details, by default None
- References
[Paine et al., 2016] Paine et al. "Fast Wavenet generation algorithm." arXiv preprint arXiv:1611.09482 (2016).
- Returns
A tensor containing the unnormalized outputs of the network of shape (batch_size, pred_length, num_bins), and a list containing the convolutional queues for each layer. The queue corresponding to layer `l` has shape (batch_size, num_residual_channels, 2^l).
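As an illustration of the documented shapes, `base_net` can be exercised directly in training mode (no queues) with a random tensor; this continues the instantiation sketch above and assumes nothing beyond the signature shown here:

```python
# Random joint embedding of shape (batch_size, num_residual_channels, sequence_length).
inputs = torch.randn(8, 24, 64)
outputs, queues = model.base_net(inputs, queues=None)
```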
- forward(feat_static_cat: torch.Tensor, feat_static_real: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, past_time_feat: torch.Tensor, future_time_feat: torch.Tensor, scale: torch.Tensor, prediction_length: Optional[int] = None, num_parallel_samples: Optional[int] = None, temperature: Optional[float] = None) torch.Tensor [source]#
Generate predictions from the WaveNet model.
- Parameters
feat_static_cat – Static categorical features: (batch_size, num_cat_features)
feat_static_real – Static real-valued features: (batch_size, num_feat_static_real)
past_target – Past target: (batch_size, receptive_field)
past_observed_values – Observed value indicator for the past target: (batch_size, receptive_field)
past_time_feat – Past time features: (batch_size, num_time_features, receptive_field)
future_time_feat – Future time features: (batch_size, num_time_features, pred_length)
scale – Scale of the time series: (batch_size, 1)
prediction_length – Time length of the samples to generate. If not provided, use self.prediction_length.
num_parallel_samples – Number of samples to generate. If not provided, use self.num_parallel_samples.
temperature – Temperature to use in generating samples. If not provided, use self.temperature.
- Return type
Predictions with shape (batch_size, num_parallel_samples, pred_length)
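A sketch of calling `forward` for inference, continuing the instantiation sketch above. It assumes the default feature configuration (one static categorical feature with cardinality 1, one static real feature, one dynamic real feature) and uses 32 as a placeholder for the model's receptive field, which is determined by `dilation_depth` and `num_stacks`:

```python
batch_size = 8
receptive_field = 32  # placeholder: must match the model's actual receptive field

samples = model(
    feat_static_cat=torch.zeros(batch_size, 1, dtype=torch.long),
    feat_static_real=torch.ones(batch_size, 1),
    past_target=torch.rand(batch_size, receptive_field),
    past_observed_values=torch.ones(batch_size, receptive_field),
    past_time_feat=torch.randn(batch_size, 1, receptive_field),
    future_time_feat=torch.randn(batch_size, 1, 24),
    scale=torch.ones(batch_size, 1),
)
# samples: (batch_size, num_parallel_samples, pred_length)
```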
- get_full_features(feat_static_cat: torch.Tensor, feat_static_real: torch.Tensor, past_observed_values: torch.Tensor, past_time_feat: torch.Tensor, future_time_feat: torch.Tensor, future_observed_values: Optional[torch.Tensor], scale: torch.Tensor) torch.Tensor [source]#
Prepares the inputs for the network by repeating the static features and concatenating them with the time features and observed value indicator.
- Parameters
feat_static_cat – Static categorical features: (batch_size, num_cat_features)
feat_static_real – Static real-valued features: (batch_size, num_feat_static_real)
past_observed_values – Observed value indicator for the past target: (batch_size, receptive_field)
past_time_feat – Past time features: (batch_size, num_time_features, receptive_field)
future_time_feat – Future time features: (batch_size, num_time_features, pred_length)
future_observed_values – Observed value indicator for the future target: (batch_size, pred_length). This will be set to all ones if not provided (e.g., during inference)
scale – Scale of the time series: (batch_size, 1)
- Returns
A tensor containing all the features ready to be passed through the network. Shape: (batch_size, num_features, receptive_field + pred_length)
- loss(feat_static_cat: torch.Tensor, feat_static_real: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, past_time_feat: torch.Tensor, future_time_feat: torch.Tensor, future_target: torch.Tensor, future_observed_values: torch.Tensor, scale: torch.Tensor) torch.Tensor [source]#
Computes the training loss for the WaveNet model.
- Parameters
feat_static_cat – Static categorical features: (batch_size, num_cat_features)
feat_static_real – Static real-valued features: (batch_size, num_feat_static_real)
past_target – Past target: (batch_size, receptive_field)
past_observed_values – Observed value indicator for the past target: (batch_size, receptive_field)
past_time_feat – Past time features: (batch_size, num_time_features, receptive_field)
future_time_feat – Future time features: (batch_size, num_time_features, pred_length)
future_target – Target on which the loss is computed: (batch_size, pred_length)
future_observed_values – Observed value indicator for the future target: (batch_size, pred_length). This will be set to all ones if not provided (e.g., during inference)
scale – Scale of the time series: (batch_size, 1)
- Return type
Loss tensor with shape (batch_size, pred_length)
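A corresponding training-step sketch, reusing the dummy tensors and placeholder receptive field from the inference example above:

```python
loss = model.loss(
    feat_static_cat=torch.zeros(batch_size, 1, dtype=torch.long),
    feat_static_real=torch.ones(batch_size, 1),
    past_target=torch.rand(batch_size, receptive_field),
    past_observed_values=torch.ones(batch_size, receptive_field),
    past_time_feat=torch.randn(batch_size, 1, receptive_field),
    future_time_feat=torch.randn(batch_size, 1, 24),
    future_target=torch.rand(batch_size, 24),
    future_observed_values=torch.ones(batch_size, 24),
    scale=torch.ones(batch_size, 1),
)
loss.mean().backward()  # loss has shape (batch_size, pred_length)
```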
- target_feature_embedding(target: torch.Tensor, features: torch.Tensor) torch.Tensor [source]#
Provides a joint embedding for the target and features.
- Parameters
target – Full target of shape (batch_size, sequence_length)
features – Full features of shape (batch_size, num_features, sequence_length)
- Returns
A tensor containing a joint embedding of target and features. Shape: (batch_size, n_residue, sequence_length)
- training: bool#
- class gluonts.torch.model.wavenet.WaveNetEstimator(freq: str, prediction_length: int, num_bins: int = 1024, num_residual_channels: int = 24, num_skip_channels: int = 32, dilation_depth: Optional[int] = None, num_stacks: int = 1, temperature: float = 1.0, num_feat_dynamic_real: int = 0, num_feat_static_cat: int = 0, num_feat_static_real: int = 0, cardinality: List[int] = [1], seasonality: Optional[int] = None, embedding_dimension: int = 5, use_log_scale_feature: bool = True, time_features: Optional[List[Callable[[pandas.core.indexes.period.PeriodIndex], numpy.ndarray]]] = None, lr: float = 0.001, weight_decay: float = 1e-08, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, batch_size: int = 32, num_batches_per_epoch: int = 50, num_parallel_samples: int = 100, negative_data: bool = False, trainer_kwargs: Optional[Dict[str, Any]] = None)[source]#
Bases:
gluonts.torch.model.estimator.PyTorchLightningEstimator
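The estimator follows the usual PyTorchLightningEstimator workflow. Below is a minimal end-to-end sketch on a toy dataset; the dataset contents and trainer_kwargs are illustrative choices, not defaults of this class:

```python
import numpy as np

from gluonts.dataset.common import ListDataset
from gluonts.torch.model.wavenet import WaveNetEstimator

# Toy hourly series; length and values are arbitrary.
dataset = ListDataset(
    [{"start": "2021-01-01 00:00", "target": np.random.rand(500)}],
    freq="H",
)

estimator = WaveNetEstimator(
    freq="H",
    prediction_length=24,
    trainer_kwargs={"max_epochs": 1},
)
predictor = estimator.train(dataset)
forecasts = list(predictor.predict(dataset))
```

The resulting predictor produces sample-based forecasts, with num_parallel_samples sample paths per series by default.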
- create_lightning_module() lightning.pytorch.core.module.LightningModule [source]#
Create and return the network used for training (i.e., computing the loss).
- Returns
The network that computes the loss given input data.
- Return type
pl.LightningModule
- create_predictor(transformation: gluonts.transform._base.Transformation, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule) gluonts.torch.model.predictor.PyTorchPredictor [source]#
Create and return a predictor object.
- Parameters
transformation – Transformation to be applied to data before it goes into the model.
module – A trained pl.LightningModule object.
- Returns
A predictor wrapping a nn.Module used for inference.
- Return type
PyTorchPredictor
- create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable [source]#
Create a data loader for training purposes.
- Parameters
data – Dataset from which to create the data loader.
module – The pl.LightningModule object that will receive the batches from the data loader.
- Returns
The data loader, i.e. an iterable over batches of data.
- Return type
Iterable
- create_transformation() gluonts.transform._base.Transformation [source]#
Create and return the transformation needed for training and inference.
- Returns
The transformation that will be applied entry-wise to datasets, at training and inference time.
- Return type
Transformation
- create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.wavenet.lightning_module.WaveNetLightningModule, **kwargs) Iterable [source]#
Create a data loader for validation purposes.
- Parameters
data – Dataset from which to create the data loader.
module – The pl.LightningModule object that will receive the batches from the data loader.
- Returns
The data loader, i.e. an iterable over batches of data.
- Return type
Iterable
- lead_time: int#
- prediction_length: int#
- class gluonts.torch.model.wavenet.WaveNetLightningModule(model_kwargs: dict, lr: float = 0.001, weight_decay: float = 1e-08)[source]#
Bases:
lightning.pytorch.core.module.LightningModule
LightningModule wrapper over WaveNet.
- Parameters
model_kwargs – Keyword arguments to pass to WaveNet.
lr – Learning rate, by default 1e-3
weight_decay – Weight decay, by default 1e-8
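A construction sketch, assuming model_kwargs are forwarded directly to the WaveNet constructor as described above; the values mirror the instantiation example at the top of this page:

```python
import torch

from gluonts.torch.model.wavenet import WaveNetLightningModule

module = WaveNetLightningModule(
    model_kwargs={
        "pred_length": 24,
        "bin_values": torch.linspace(-10.0, 10.0, steps=1024).tolist(),
        "num_residual_channels": 24,
        "num_skip_channels": 32,
        "dilation_depth": 5,
        "num_stacks": 1,
    },
    lr=1e-3,
    weight_decay=1e-8,
)
```

In normal use this module is created by WaveNetEstimator.create_lightning_module() rather than constructed by hand.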