gluonts.torch.util module
- gluonts.torch.util.copy_parameters(net_source: Module, net_dest: Module, strict: bool = True) → None [source]
Copies parameters from one network to another.
- Parameters:
net_source – Input network.
net_dest – Output network.
strict – Whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
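A minimal usage sketch; the nn.Linear modules below are only illustrative, and any pair of networks with matching parameter names works the same way:

```python
import torch.nn as nn

from gluonts.torch.util import copy_parameters

# Two networks with the same architecture; only their (random) weights differ.
net_source = nn.Linear(10, 5)
net_dest = nn.Linear(10, 5)

# Copy every parameter of net_source into net_dest.
copy_parameters(net_source, net_dest)

# Both networks now hold identical weights.
assert all(
    (p_src == p_dst).all()
    for p_src, p_dst in zip(net_source.parameters(), net_dest.parameters())
)
```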
- gluonts.torch.util.lagged_sequence_values(indices: List[int], prior_sequence: Tensor, sequence: Tensor, dim: int) → Tensor [source]
Constructs an array of lagged values from a given sequence.
- Parameters:
indices – Indices of the lagged observations. For example, [0] indicates that, at any time t, the output will have only the observation from time t itself; instead, [0, 24] indicates that the output will have observations from times t and t-24.
prior_sequence – Tensor containing the input sequence prior to the time range for which the output is required.
sequence – Tensor containing the input sequence in the time range where the output is required.
dim – Time dimension.
- Returns:
A tensor of shape (*sequence.shape, len(indices)).
- Return type:
Tensor
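A short illustrative sketch, using hourly data and lags [0, 24]; the values and shapes below are not part of the API:

```python
import torch

from gluonts.torch.util import lagged_sequence_values

# Hourly series: 24 "prior" observations followed by 6 target steps.
prior_sequence = torch.arange(24.0).unsqueeze(0)  # shape (1, 24)
sequence = torch.arange(24.0, 30.0).unsqueeze(0)  # shape (1, 6)

# For every step of `sequence`, gather the value at lag 0 (the step itself)
# and at lag 24 (one day earlier, taken from `prior_sequence`).
lags = lagged_sequence_values([0, 24], prior_sequence, sequence, dim=1)

print(lags.shape)  # (1, 6, 2), i.e. (*sequence.shape, len(indices))
```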
- gluonts.torch.util.repeat_along_dim(a: Tensor, dim: int, repeats: int) → Tensor [source]
Repeat a tensor along a given dimension, using torch.repeat internally.
- Parameters:
a – Original tensor to repeat.
dim – Dimension to repeat data over.
repeats – How many times to repeat the input tensor.
- Returns:
A tensor with the same size as the input one, except dimension dim, which is multiplied by repeats.
- Return type:
torch.Tensor
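A minimal sketch with illustrative shapes:

```python
import torch

from gluonts.torch.util import repeat_along_dim

a = torch.randn(2, 3, 4)

# Repeat the data 5 times along dimension 1: (2, 3, 4) -> (2, 15, 4).
b = repeat_along_dim(a, dim=1, repeats=5)

print(b.shape)  # torch.Size([2, 15, 4])
```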
- gluonts.torch.util.resolve_device(device: Union[str, device]) → Union[str, device] [source]
Resolves a torch device to the most appropriate one.
The "auto" device is resolved to "cuda" if CUDA is available, and to "cpu" otherwise. Any other device is returned unchanged.
- gluonts.torch.util.slice_along_dim(a: Tensor, dim: int, slice_: slice) → Tensor [source]
Slice a tensor along a given dimension.
- Parameters:
a – Original tensor to slice.
dim – Dimension to slice over.
slice_ – Slice to take.
- Returns:
A tensor with the same size as the input one, except dimension dim, which has length equal to the slice length.
- Return type:
torch.Tensor
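An illustrative sketch; with these arguments the call amounts to a[:, :, -2:]:

```python
import torch

from gluonts.torch.util import slice_along_dim

a = torch.arange(24).reshape(2, 3, 4)

# Keep the last two entries along dimension 2: (2, 3, 4) -> (2, 3, 2).
b = slice_along_dim(a, dim=2, slice_=slice(-2, None))

print(b.shape)  # torch.Size([2, 3, 2])
```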
- gluonts.torch.util.take_last(a: Tensor, dim: int, num: int) → Tensor [source]
Take last elements from a given tensor along a given dimension.
- Parameters:
a – Original tensor to slice.
dim – Dimension to slice over.
num – Number of trailing elements to retain (non-negative).
- Returns:
A tensor with the same size as the input one, except dimension dim, which has length equal to num.
- Return type:
torch.Tensor
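For example, keeping the two trailing elements of an illustrative tensor:

```python
import torch

from gluonts.torch.util import take_last

a = torch.arange(24).reshape(2, 3, 4)

# Keep the 2 trailing elements along dimension 1: (2, 3, 4) -> (2, 2, 4).
b = take_last(a, dim=1, num=2)

print(b.shape)  # torch.Size([2, 2, 4])
```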
- gluonts.torch.util.unsqueeze_expand(a: Tensor, dim: int, size: int) → Tensor [source]
Unsqueeze a dimension and expand over it in one go.
- Parameters:
a – Original tensor to unsqueeze.
dim – Dimension to unsqueeze.
size – Size for the new dimension.
- Returns:
A tensor with an added dimension dim of size size.
- Return type:
torch.Tensor
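A small illustrative sketch:

```python
import torch

from gluonts.torch.util import unsqueeze_expand

a = torch.randn(2, 4)

# Insert a new dimension at position 1 and expand it to size 3:
# (2, 4) -> (2, 3, 4).
b = unsqueeze_expand(a, dim=1, size=3)

print(b.shape)  # torch.Size([2, 3, 4])
```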
- gluonts.torch.util.weighted_average(x: Tensor, weights: Optional[Tensor] = None, dim=None) → Tensor [source]
Computes the weighted average of a given tensor across a given dim, masking values associated with a weight of zero, so that instead of nan * 0 = nan you get 0 * 0 = 0.
- Parameters:
x – Input tensor, of which the average must be computed.
weights – Weights tensor, of the same shape as x.
dim – The dimension along which to average x.
- Returns:
The tensor with values averaged along the specified dim.
- Return type:
Tensor
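A minimal sketch of the masking behaviour; it assumes the weighted average is normalized by the sum of the weights:

```python
import torch

from gluonts.torch.util import weighted_average

x = torch.tensor([[1.0, 2.0, float("nan")]])
weights = torch.tensor([[1.0, 1.0, 0.0]])

# The NaN entry carries weight 0, so it is masked out instead of turning
# the whole average into NaN: (1*1 + 2*1) / (1 + 1) = 1.5.
avg = weighted_average(x, weights=weights, dim=1)

print(avg)  # expected: tensor([1.5000])
```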