gluonts.torch.distributions package#

class gluonts.torch.distributions.AffineTransformed(base_distribution: torch.distributions.distribution.Distribution, loc=None, scale=None)[source]#

Bases: torch.distributions.transformed_distribution.TransformedDistribution

Represents the distribution of an affinely transformed random variable.

This is the distribution of Y = scale * X + loc, where X is a random variable distributed according to base_distribution.

Parameters
  • base_distribution – Original distribution

  • loc – Translation parameter of the affine transformation.

  • scale – Scaling parameter of the affine transformation.
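
Example (a minimal sketch, assuming a standard Normal base distribution; the numbers are illustrative):

```python
import torch
from torch.distributions import Normal
from gluonts.torch.distributions import AffineTransformed

# Y = scale * X + loc, with X ~ Normal(0, 1)
base = Normal(torch.zeros(3), torch.ones(3))
dist = AffineTransformed(base, loc=torch.tensor(10.0), scale=torch.tensor(2.0))

print(dist.mean)            # tensor([10., 10., 10.])
print(dist.variance)        # tensor([4., 4., 4.])
print(dist.sample().shape)  # torch.Size([3])
```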

property mean#

Returns the mean of the distribution.

property stddev#

Returns the standard deviation of the distribution.

property variance#

Returns the variance of the distribution.

class gluonts.torch.distributions.BetaOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'concentration0': 1, 'concentration1': 1}#
distr_cls#

alias of torch.distributions.beta.Beta

classmethod domain_map(concentration1: torch.Tensor, concentration0: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

property value_in_support: float#

A float value that is valid for computing the loss of the corresponding output.

By default 0.0.

class gluonts.torch.distributions.BinnedUniforms(bins_lower_bound: float, bins_upper_bound: float, logits: torch.Tensor, numb_bins: int = 100, validate_args: Optional[bool] = None)[source]#

Bases: torch.distributions.distribution.Distribution

Binned uniforms distribution.

Parameters
  • bins_lower_bound (float) – The lower bound of the bin edges

  • bins_upper_bound (float) – The upper bound of the bin edges

  • numb_bins (int) – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.

  • logits (tensor) – The logits defining the probability of each bin. These are softmaxed. The tensor is of shape (*batch_shape, numb_bins)

  • validate_args (bool) –
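
Example (a minimal construction sketch; the trailing numb_bins axis of logits is an assumption consistent with the bins_prob description below):

```python
import torch
from gluonts.torch.distributions import BinnedUniforms

batch, num_bins = 4, 10

# logits define (after a softmax) the probability of each bin
dist = BinnedUniforms(
    bins_lower_bound=0.0,
    bins_upper_bound=5.0,
    logits=torch.randn(batch, num_bins),
    numb_bins=num_bins,
)

x = torch.rand(batch) * 5.0
print(dist.log_prob(x).shape)                # torch.Size([4])
print(dist.sample().shape)                   # torch.Size([4])
print(dist.icdf(torch.full((batch,), 0.5)))  # per-element median
```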

arg_constraints = {'logits': Real()}#
property bins_prob#

Returns the probability of the observed point falling in each of the bins. bins_prob.shape: (*batch_shape, event_shape), where event_shape is numb_bins.

cdf(x)[source]#

Cumulative density tensor for a tensor of data points x.

‘x’ is expected to be of shape (*batch_shape)

entropy()[source]#

We do not have an implementation of the entropy yet.

enumerate_support(expand=True)[source]#

Not applicable: this is a real-valued distribution, so its support cannot be enumerated.

expand(batch_shape, _instance=None)[source]#

Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__, when an instance is first created.

Parameters
  • batch_shape (torch.Size) – the desired expanded size.

  • _instance – new instance provided by subclasses that need to override .expand.

Returns

New distribution instance with batch dimensions expanded to batch_shape.

get_one_hot_bin_indicator(x, in_float=False)[source]#

‘x’ must have shape (*batch_shape), which can be for example (), (32,), or (32, 168).

has_rsample = False#
icdf(quantiles)[source]#

Inverse CDF for a tensor of quantile levels. ‘quantiles’ is of shape (*batch_shape) with values in (0.0, 1.0).

This is the function to be called from the outside.

log_binned_p(x)[source]#

Log probability for a tensor of datapoints x.

‘x’ is to have shape (*batch_shape)

property log_bins_prob#
log_prob(x)[source]#

Log probability for a tensor of datapoints x.

‘x’ is to have shape (*batch_shape)

property mean#

Returns the mean of the distribution.

mean.shape : (*batch_shape,)

property median#

Returns the median of the distribution.

median.shape : (*batch_shape,)

property mode#

Returns the mode of the distribution.

mode.shape : (*batch_shape,)

pdf(x)[source]#

Probability for a tensor of data points x.

‘x’ is to have shape (*batch_shape)

rsample(sample_shape=torch.Size([]))[source]#

We do not have an implementation for the reparameterization trick yet.

sample(sample_shape=torch.Size([]))[source]#

Returns samples from the distribution.

Returns

samples of shape (*sample_shape, *batch_shape)

support = Real()#
variance()[source]#

Returns the variance of the distribution.

class gluonts.torch.distributions.BinnedUniformsOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

distr_cls#

alias of gluonts.torch.distributions.binned_uniforms.BinnedUniforms

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.binned_uniforms.BinnedUniforms[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(logits: torch.Tensor) torch.Tensor[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.DiscreteDistribution(values: torch.Tensor, probs: torch.Tensor, validate_args: Optional[bool] = None)[source]#

Bases: torch.distributions.distribution.Distribution

Implements a discrete distribution where the underlying random variable takes a value from the finite set values with the corresponding probabilities.

Note: values can have duplicates, in which case the probability mass of the duplicates is added up.

A natural loss function, especially given that a new observation does not have to be from the finite set values, is the ranked probability score (RPS). For this reason, and to be consistent with the terminology of other models, log_prob is implemented as the negative RPS.
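
Example (a minimal sketch of constructing the distribution and evaluating the negative-RPS log_prob; values and probabilities are illustrative):

```python
import torch
from gluonts.torch.distributions import DiscreteDistribution

# two batch elements, each supported on three values
values = torch.tensor([[1.0, 2.0, 3.0], [0.0, 5.0, 10.0]])
probs = torch.tensor([[0.2, 0.5, 0.3], [0.1, 0.1, 0.8]])
dist = DiscreteDistribution(values=values, probs=probs)

print(dist.mean())          # expected value per batch element
print(dist.sample().shape)  # torch.Size([2])

obs = torch.tensor([2.0, 7.0])
print(dist.log_prob(obs))   # negative RPS of obs, shape (2,)
```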

static adjust_probs(values_sorted, probs_sorted)[source]#

Puts probability mass of all duplicate values into one position (last index of the duplicate).

Assumption: values_sorted is sorted!

Parameters
  • values_sorted

  • probs_sorted

Returns

log_prob(obs: torch.Tensor)[source]#

Returns the log of the probability density/mass function evaluated at value.

Parameters

value (Tensor) –

mean()[source]#

Returns the mean of the distribution.

quantile_losses(obs: torch.Tensor, quantiles: torch.Tensor, levels: torch.Tensor)[source]#
rps(obs: torch.Tensor, check_for_duplicates: bool = True)[source]#

Implements the ranked probability score, which is the sum of the quantile losses over all possible quantiles.

Here, the number of quantiles is finite and is equal to the number of unique values in (each batch element of) obs.

Parameters
  • obs

  • check_for_duplicates

sample(sample_shape=torch.Size([]))[source]#

Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.

class gluonts.torch.distributions.DistributionOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.output.Output

Class to construct a distribution given the output of a network.

distr_cls: type#
distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

domain_map(*args: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_dim: int#

Number of event dimensions, i.e., length of the event_shape tuple, of the distributions that this object constructs.

property forecast_generator: gluonts.model.forecast_generator.ForecastGenerator#
loss(target: torch.Tensor, distr_args: Tuple[torch.Tensor, ...], loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.Tensor[source]#

Compute loss for target data given network output.

Parameters
  • target – Values of the target time series for which loss is to be computed.

  • distr_args – Arguments that can be used to construct the output distribution.

  • loc – Location parameter of the distribution, optional.

  • scale – Scale parameter of the distribution, optional.

Returns

Values of the loss, with the same shape as target.

Return type

loss_values
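
Example (a minimal end-to-end sketch using a concrete subclass, NormalOutput, to map raw network features to a distribution and a loss; feature and batch sizes are illustrative):

```python
import torch
from gluonts.torch.distributions import NormalOutput

distr_output = NormalOutput()

# projection layer from raw features to the distribution's parameter domain
in_features = 32
args_proj = distr_output.get_args_proj(in_features)

net_out = torch.randn(16, 24, in_features)    # (batch, time, features)
distr_args = args_proj(net_out)               # tuple of parameter tensors (loc, scale)

distr = distr_output.distribution(distr_args)
target = torch.randn(16, 24)
loss = distr_output.loss(target, distr_args)  # negative log-likelihood, shape (16, 24)
```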

class gluonts.torch.distributions.GammaOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'concentration': 1, 'rate': 1}#
distr_cls#

alias of torch.distributions.gamma.Gamma

classmethod domain_map(concentration: torch.Tensor, rate: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

property value_in_support: float#

A float value that is valid for computing the loss of the corresponding output.

By default 0.0.

class gluonts.torch.distributions.GeneralizedPareto(xi, beta, validate_args=None)[source]#

Bases: torch.distributions.distribution.Distribution

Generalized Pareto distribution.

Parameters
  • xi – Tensor containing the xi (heaviness) shape parameters. The tensor is of shape (*batch_shape, 1)

  • beta – Tensor containing the beta scale parameters. The tensor is of shape (*batch_shape, 1)
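
Example (a minimal construction sketch; shapes follow the parameter descriptions above, and the numbers are illustrative):

```python
import torch
from gluonts.torch.distributions import GeneralizedPareto

batch = 4
xi = torch.full((batch, 1), 0.3)    # heaviness (shape) parameter
beta = torch.full((batch, 1), 1.5)  # scale parameter
dist = GeneralizedPareto(xi, beta)

x = torch.rand(batch) + 0.1         # support is the positive reals
print(dist.log_prob(x).shape)       # torch.Size([4])
print(dist.cdf(x).shape)            # torch.Size([4])
print(dist.mean.shape)              # torch.Size([4])
```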

arg_constraints = {'beta': GreaterThan(lower_bound=0.0), 'xi': GreaterThan(lower_bound=0.0)}#
cdf(x)[source]#

cdf values for a tensor x of shape (*batch_shape)

has_rsample = False#
icdf(value)[source]#

Inverse CDF values for a tensor of quantile levels of shape (*batch_shape)

log_prob(x)[source]#

Log probability for a tensor x of shape (*batch_shape)

property mean#

Returns the mean of the distribution, of shape (*batch_shape,)

property stddev#

Returns the standard deviation of the distribution.

support = GreaterThan(lower_bound=0.0)#
property variance#

Returns the variance of the distribution, of shape (*batch_shape,)

class gluonts.torch.distributions.GeneralizedParetoOutput[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

distr_cls#

alias of gluonts.torch.distributions.generalized_pareto.GeneralizedPareto

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.generalized_pareto.GeneralizedPareto[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(xi: torch.Tensor, beta: torch.Tensor) Tuple[torch.Tensor, torch.Tensor][source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.ISQF(spline_knots: torch.Tensor, spline_heights: torch.Tensor, beta_l: torch.Tensor, beta_r: torch.Tensor, qk_y: torch.Tensor, qk_x: torch.Tensor, tol: float = 0.0001, validate_args: bool = False)[source]#

Bases: torch.distributions.distribution.Distribution

Distribution class for the Incremental (Spline) Quantile Function in the paper Learning Quantile Functions without Quantile Crossing for Distribution-free Time Series Forecasting by Park, Robinson, Aubet, Kan, Gasthaus, Wang.

Parameters
  • spline_knots – Tensor parametrizing the x-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)

  • spline_heights – Tensor parametrizing the y-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)

  • qk_x – Tensor containing the increasing x-positions of the quantile knots. Shape: (*batch_shape, num_qk)

  • qk_y – Tensor containing the increasing y-positions of the quantile knots. Shape: (*batch_shape, num_qk)

  • beta_l – Tensor containing the non-negative learnable parameter of the left tail. Shape: (*batch_shape,)

  • beta_r – Tensor containing the non-negative learnable parameter of the right tail. Shape: (*batch_shape,)

property batch_shape: torch.Size#

Returns the shape over which parameters are batched.

cdf(z: torch.Tensor) torch.Tensor[source]#

Computes the quantile level alpha_tilde such that q(alpha_tilde) = z.

Parameters

z – Tensor of shape = (*batch_shape,)

Returns

Tensor of shape = (*batch_shape,)

Return type

alpha_tilde

cdf_spline(z: torch.Tensor) torch.Tensor[source]#

For observations z and splines defined on [qk_x[k], qk_x[k+1]], computes the quantile level alpha_tilde such that:

alpha_tilde = q^{-1}(z) if z is in between qk_x[k] and qk_x[k+1]
alpha_tilde = qk_x[k] if z < qk_x[k]
alpha_tilde = qk_x[k+1] if z > qk_x[k+1]

Parameters

z – Observation, shape = (*batch_shape,)

Returns

Corresponding quantile level, shape = (*batch_shape, num_qk-1)

Return type

alpha_tilde

cdf_tail(z: torch.Tensor, left_tail: bool = True) torch.Tensor[source]#

Computes the quantile level alpha_tilde such that:

alpha_tilde = q^{-1}(z) if z is in the tail region
alpha_tilde = qk_x_l or qk_x_r if z is in the non-tail region

Parameters
  • z – Observation, shape = (*batch_shape,)

  • left_tail – If True, compute alpha_tilde for the left tail; otherwise, compute alpha_tilde for the right tail

Returns

Corresponding quantile level, shape = (*batch_shape,)

Return type

alpha_tilde

crps(z: torch.Tensor) torch.Tensor[source]#

Compute CRPS in analytical form.

Parameters

z – Observation to evaluate. Shape = (*batch_shape,)

Returns

Tensor containing the CRPS, of the same shape as z

Return type

Tensor

crps_spline(z: torch.Tensor) torch.Tensor[source]#

Compute CRPS in analytical form for the spline.

Parameters

z – Observation to evaluate. Shape = (*batch_shape,)

Returns

Tensor containing the CRPS, of the same shape as z

Return type

Tensor

crps_tail(z: torch.Tensor, left_tail: bool = True) torch.Tensor[source]#

Compute CRPS in analytical form for the left/right tails.

Parameters
  • z – Observation to evaluate. Shape = (*batch_shape,)

  • left_tail – If True, compute CRPS for the left tail; otherwise, compute CRPS for the right tail

Returns

Tensor containing the CRPS, of the same shape as z

Return type

Tensor

loss(z: torch.Tensor) torch.Tensor[source]#
static parameterize_qk(quantile_knots: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]#

Function to parameterize the x or y positions of the num_qk quantile knots.

Parameters

quantile_knots – x or y positions of the quantile knots, shape: (*batch_shape, num_qk)

Returns

  • qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)

  • qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)

  • qk_l – x or y positions of the left-most quantile knot (qk), shape: (*batch_shape)

  • qk_r – x or y positions of the right-most quantile knot (qk), shape: (*batch_shape)

static parameterize_spline(spline_knots: torch.Tensor, qk: torch.Tensor, qk_plus: torch.Tensor, tol: float = 0.0001) Tuple[torch.Tensor, torch.Tensor][source]#

Function to parameterize the x or y positions of the spline knots.

Parameters
  • spline_knots – variable that parameterizes the spline knot positions

  • qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)

  • qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)

  • num_pieces – number of spline knot pieces

  • tol – tolerance hyperparameter for numerical stability

Returns

  • sk – x or y positions of the spline knots (sk), shape: (*batch_shape, num_qk-1, num_pieces)

  • delta_sk – difference of x or y positions of the spline knots (sk), shape: (*batch_shape, num_qk-1, num_pieces)

static parameterize_tail(beta: torch.Tensor, qk_x: torch.Tensor, qk_y: torch.Tensor) Tuple[torch.Tensor, torch.Tensor][source]#

Function to parameterize the tail parameters.

Note that the exponential tails are given by:

q(alpha) = a_l * log(alpha) + b_l for the left tail
q(alpha) = a_r * log(1 - alpha) + b_r for the right tail

where a_l = 1/beta_l, b_l = -a_l * log(qk_x_l) + q(qk_x_l), and a_r = 1/beta_r, b_r = a_r * log(1 - qk_x_r) + q(qk_x_r).

Parameters
  • beta – parameterizes the left or right tail, shape: (*batch_shape,)

  • qk_x – left- or right-most x-positions of the quantile knots, shape: (*batch_shape,)

  • qk_y – left- or right-most y-positions of the quantile knots, shape: (*batch_shape,)

Returns

  • tail_a – a_l or a_r as described above

  • tail_b – b_l or b_r as described above

quantile(alpha: torch.Tensor) torch.Tensor[source]#
quantile_internal(alpha: torch.Tensor, dim: Optional[int] = None) torch.Tensor[source]#

Evaluates the quantile function at the quantile levels alpha.

Parameters
  • alpha – Tensor of shape = (*batch_shape,) if dim=None, or containing an additional axis at the specified position otherwise

  • dim – Index of the axis containing the different quantile levels which are to be computed. Read the description below for detailed information

Returns

Quantiles tensor, of the same shape as alpha

Return type

Tensor

quantile_spline(alpha: torch.Tensor, dim: Optional[int] = None) torch.Tensor[source]#
quantile_tail(alpha: torch.Tensor, dim: Optional[int] = None, left_tail: bool = True) torch.Tensor[source]#
rsample(sample_shape: torch.Size = torch.Size([])) torch.Tensor[source]#

Function used to draw random samples.

Parameters

sample_shape – Shape of the samples to draw

Returns

Tensor of shape (*sample_shape, *batch_shape)

Return type

Tensor

class gluonts.torch.distributions.ISQFOutput(num_pieces: int, qk_x: List[float], tol: float = 0.0001)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

DistributionOutput class for the Incremental (Spline) Quantile Function.

Parameters
  • num_pieces – number of spline pieces for each spline. ISQF reduces to IQF when num_pieces = 1

  • qk_x – List containing the x-positions of quantile knots

  • tol – tolerance for numerical safeguarding
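
Example (a minimal sketch of wiring ISQFOutput to raw network features; the knot positions and tensor sizes are illustrative, and the loss is computed here via the distribution's analytic CRPS):

```python
import torch
from gluonts.torch.distributions import ISQFOutput

# 3 spline pieces per segment, quantile knots at illustrative x-positions
distr_output = ISQFOutput(num_pieces=3, qk_x=[0.1, 0.5, 0.9])

args_proj = distr_output.get_args_proj(32)
net_out = torch.randn(8, 24, 32)           # (batch, time, features)
distr_args = args_proj(net_out)

distr = distr_output.distribution(distr_args)
target = torch.randn(8, 24)
crps = distr.crps(target)                  # analytic CRPS, same shape as target
```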

distr_cls#

alias of gluonts.torch.distributions.isqf.ISQF

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.isqf.ISQF[source]#

Function outputting the distribution class.

Parameters
  • distr_args – distribution arguments

  • loc – shift to the data mean

  • scale – scale to the data

classmethod domain_map(spline_knots: torch.Tensor, spline_heights: torch.Tensor, beta_l: torch.Tensor, beta_r: torch.Tensor, quantile_knots: torch.Tensor, tol: float = 0.0001) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]#

Domain map function. The inputs of this function are specified by self.args_dim.

spline_knots, spline_heights: parameterizing the x-/ y-positions of the spline knots, shape = (*batch_shape, (num_qk-1)*num_pieces)

beta_l, beta_r: parameterizing the left/right tail, shape = (*batch_shape, 1)

quantile_knots: parameterizing the y-positions of the quantile knots, shape = (*batch_shape, num_qk)

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

reshape_spline_args(distr_args, qk_x: List[float])[source]#

Auxiliary function reshaping knots and heights to (*batch_shape, num_qk-1, num_pieces), and qk_x to (*batch_shape, num_qk).

class gluonts.torch.distributions.ImplicitQuantileNetwork(outputs: torch.Tensor, taus: torch.Tensor, validate_args=None)[source]#

Bases: torch.distributions.distribution.Distribution

Distribution class for the Implicit Quantile Network, from which we can sample or calculate the quantile loss.

Parameters
  • outputs – Outputs from the Implicit Quantile Network.

  • taus – Tensor of random numbers drawn from the Beta or Uniform distribution for the corresponding outputs.
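
Example (a minimal sketch constructing the distribution directly from network outputs and their corresponding quantile levels; in practice both tensors come from the Implicit Quantile Network itself, and the sizes are illustrative):

```python
import torch
from gluonts.torch.distributions import ImplicitQuantileNetwork

outputs = torch.randn(8)   # quantile predictions q(tau), one per batch element
taus = torch.rand(8)       # quantile levels used to produce those outputs
dist = ImplicitQuantileNetwork(outputs=outputs, taus=taus)

value = torch.randn(8)
print(dist.quantile_loss(value).shape)  # torch.Size([8])
print(dist.sample().shape)              # torch.Size([8])
```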

arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}#
quantile_loss(value: torch.Tensor) torch.Tensor[source]#
sample(sample_shape=torch.Size([])) torch.Tensor[source]#

Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.

class gluonts.torch.distributions.ImplicitQuantileNetworkOutput(output_domain: Optional[str] = None, concentration1: float = 1.0, concentration0: float = 1.0, cos_embedding_dim: int = 64)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

DistributionOutput class for the IQN from the paper Probabilistic Time Series Forecasting with Implicit Quantile Networks (https://arxiv.org/abs/2107.03743) by Gouttes et al. 2021.

Parameters
  • output_domain – Optional domain mapping of the output. Can be “positive”, “unit” or None.

  • concentration1 – Alpha parameter of the Beta distribution when sampling the taus during training.

  • concentration0 – Beta parameter of the Beta distribution when sampling the taus during training.

  • cos_embedding_dim – The embedding dimension for the taus embedding layer of IQN. Default is 64.

args_dim: Dict[str, int] = {'quantile_function': 1}#
distr_cls#

alias of gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetwork

distribution(distr_args, loc=0, scale=None) gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetwork[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(*args)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape#

Shape of each individual event compatible with the output object.

get_args_proj(in_features: int) torch.nn.modules.module.Module[source]#
in_features: int#
loss(target: torch.Tensor, distr_args: Tuple[torch.Tensor, ...], loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.Tensor[source]#

Compute loss for target data given network output.

Parameters
  • target – Values of the target time series for which loss is to be computed.

  • distr_args – Arguments that can be used to construct the output distribution.

  • loc – Location parameter of the distribution, optional.

  • scale – Scale parameter of the distribution, optional.

Returns

Values of the loss, with the same shape as target.

Return type

loss_values

class gluonts.torch.distributions.LaplaceOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'loc': 1, 'scale': 1}#
distr_cls#

alias of torch.distributions.laplace.Laplace

classmethod domain_map(loc: torch.Tensor, scale: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.NegativeBinomialOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'logits': 1, 'total_count': 1}#
distr_cls#

alias of gluonts.torch.distributions.negative_binomial.NegativeBinomial

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(total_count: torch.Tensor, logits: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.NormalOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'loc': 1, 'scale': 1}#
distr_cls#

alias of torch.distributions.normal.Normal

classmethod domain_map(loc: torch.Tensor, scale: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.Output[source]#

Bases: object

Converts raw neural network output into a forecast and computes loss.

args_dim: Dict[str, int]#
domain_map(*args: torch.Tensor) Tuple[torch.Tensor, ...][source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property dtype#
property event_shape: Tuple#

Shape of each individual event compatible with the output object.

property forecast_generator: gluonts.model.forecast_generator.ForecastGenerator#
get_args_proj(in_features: int) torch.nn.modules.module.Module[source]#
in_features: int#
loss(target: torch.Tensor, distr_args: Tuple[torch.Tensor, ...], loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.Tensor[source]#

Compute loss for target data given network output.

Parameters
  • target – Values of the target time series for which loss is to be computed.

  • distr_args – Arguments that can be used to construct the output distribution.

  • loc – Location parameter of the distribution, optional.

  • scale – Scale parameter of the distribution, optional.

Returns

Values of the loss, with the same shape as target.

Return type

loss_values

property value_in_support: float#

A float value that is valid for computing the loss of the corresponding output.

By default 0.0.

class gluonts.torch.distributions.PiecewiseLinear(gamma: torch.Tensor, slopes: torch.Tensor, knot_spacings: torch.Tensor, validate_args=False)[source]#

Bases: torch.distributions.distribution.Distribution

property batch_shape: torch.Size#

Returns the shape over which parameters are batched.

cdf(z: torch.Tensor) torch.Tensor[source]#

Returns the cumulative density/mass function evaluated at value.

Parameters

value (Tensor) –

crps(z: torch.Tensor) torch.Tensor[source]#
loss(z: torch.Tensor) torch.Tensor[source]#
static parametrize_knots(knot_spacings: torch.Tensor) torch.Tensor[source]#
static parametrize_slopes(slopes: torch.Tensor) torch.Tensor[source]#
quantile(u: torch.Tensor) torch.Tensor[source]#
quantile_internal(u: torch.Tensor, dim: Optional[int] = None) torch.Tensor[source]#
rsample(sample_shape: torch.Size = torch.Size([])) torch.Tensor[source]#

Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.

class gluonts.torch.distributions.PiecewiseLinearOutput(num_pieces: int)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

distr_cls#

alias of gluonts.torch.distributions.piecewise_linear.PiecewiseLinear

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.piecewise_linear.PiecewiseLinear[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(gamma: torch.Tensor, slopes: torch.Tensor, knot_spacings: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor][source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.PoissonOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'rate': 1}#
distr_cls#

alias of torch.distributions.poisson.Poisson

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(rate: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.QuantileOutput(quantiles: List[float])[source]#

Bases: gluonts.torch.distributions.output.Output

args_dim: Dict[str, int]#
domain_map(*args: torch.Tensor) Tuple[torch.Tensor, ...][source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

property forecast_generator: gluonts.model.forecast_generator.ForecastGenerator#
in_features: int#
loss(target: torch.Tensor, distr_args: Tuple[torch.Tensor, ...], loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.Tensor[source]#

Compute loss for target data given network output.

Parameters
  • target – Values of the target time series for which loss is to be computed.

  • distr_args – Arguments that can be used to construct the output distribution.

  • loc – Location parameter of the distribution, optional.

  • scale – Scale parameter of the distribution, optional.

Returns

Values of the loss, with the same shape as target.

Return type

loss_values

property quantiles: List[float]#
class gluonts.torch.distributions.SplicedBinnedPareto(bins_lower_bound: float, bins_upper_bound: float, logits: torch.Tensor, upper_gp_xi: torch.Tensor, upper_gp_beta: torch.Tensor, lower_gp_xi: torch.Tensor, lower_gp_beta: torch.Tensor, numb_bins: int = 100, tail_percentile_gen_pareto: float = 0.05, validate_args=None)[source]#

Bases: gluonts.torch.distributions.binned_uniforms.BinnedUniforms

Spliced Binned-Pareto univariate distribution.

Parameters
  • bins_lower_bound (float) – The lower bound of the bin edges

  • bins_upper_bound (float) – The upper bound of the bin edges

  • numb_bins (int) – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.

  • tail_percentile_gen_pareto (float) – The percentile of the distribution that is in each tail. Default value is 0.05. NB: This symmetric percentile can still represent asymmetric upper and lower tails.
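
Example (a minimal construction sketch; the trailing dimensions of the logits and of the Pareto-tail parameters are assumptions consistent with BinnedUniforms and GeneralizedPareto above, and all values are illustrative):

```python
import torch
from gluonts.torch.distributions import SplicedBinnedPareto

batch, num_bins = 4, 50
dist = SplicedBinnedPareto(
    bins_lower_bound=-5.0,
    bins_upper_bound=5.0,
    logits=torch.randn(batch, num_bins),
    upper_gp_xi=torch.rand(batch, 1),    # heaviness of the upper tail
    upper_gp_beta=torch.rand(batch, 1),  # scale of the upper tail
    lower_gp_xi=torch.rand(batch, 1),
    lower_gp_beta=torch.rand(batch, 1),
    numb_bins=num_bins,
    tail_percentile_gen_pareto=0.05,
)

x = torch.randn(batch)
print(dist.log_prob(x).shape)  # torch.Size([4])
print(dist.cdf(x).shape)       # torch.Size([4])
```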

arg_constraints = {'logits': Real(), 'lower_gp_beta': GreaterThan(lower_bound=0.0), 'lower_gp_xi': GreaterThan(lower_bound=0.0), 'upper_gp_beta': GreaterThan(lower_bound=0.0), 'upper_gp_xi': GreaterThan(lower_bound=0.0)}#
cdf(x: torch.Tensor)[source]#

Cumulative density tensor for a tensor of data points x.

‘x’ is expected to be of shape (*batch_shape)

has_rsample = False#
log_prob(x: torch.Tensor, for_training=True)[source]#
Parameters
  • x (Tensor) – a tensor of size (batch_size, 1)

  • for_training (bool) – whether to return the log-probability or the loss (which is an adjusted log-probability)

pdf(x)[source]#

Probability for a tensor of data points x.

‘x’ is to have shape (*batch_shape)

support = Real()#
class gluonts.torch.distributions.SplicedBinnedParetoOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int, tail_percentile_gen_pareto: float)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

distr_cls#

alias of gluonts.torch.distributions.spliced_binned_pareto.SplicedBinnedPareto

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.binned_uniforms.BinnedUniforms[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(logits: torch.Tensor, upper_gp_xi: torch.Tensor, upper_gp_beta: torch.Tensor, lower_gp_xi: torch.Tensor, lower_gp_beta: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.StudentTOutput(beta: float = 0.0)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

args_dim: Dict[str, int] = {'df': 1, 'loc': 1, 'scale': 1}#
distr_cls#

alias of gluonts.torch.distributions.studentT.StudentT

classmethod domain_map(df: torch.Tensor, loc: torch.Tensor, scale: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.

class gluonts.torch.distributions.TruncatedNormal(loc: torch.Tensor, scale: torch.Tensor, min: Union[torch.Tensor, float] = - 1.0, max: Union[torch.Tensor, float] = 1.0, upscale: Union[torch.Tensor, float] = 5.0, tanh_loc: bool = False)[source]#

Bases: torch.distributions.distribution.Distribution

Implements a Truncated Normal distribution with location scaling.

Location scaling prevents the location from being “too far” from 0, which would ultimately lead to numerically unstable samples and poor gradient computation (e.g. gradient explosion). In practice, the location is computed according to

\[loc = \tanh(loc / upscale) \cdot upscale\]

This behaviour can be disabled by switching off the tanh_loc parameter (see below).

Parameters
  • loc – normal distribution location parameter

  • scale – normal distribution sigma parameter (square root of the variance)

  • min – minimum value of the distribution. Default = -1.0

  • max – maximum value of the distribution. Default = 1.0

  • upscale – scaling factor. Default = 5.0

  • tanh_loc – if True, the above formula is used for the location scaling, otherwise the raw value is kept. Default is False
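
Example (a minimal construction sketch; values are illustrative):

```python
import torch
from gluonts.torch.distributions import TruncatedNormal

loc = torch.zeros(5)
scale = 0.5 * torch.ones(5)
dist = TruncatedNormal(loc=loc, scale=scale, min=-1.0, max=1.0)

samples = dist.rsample()             # reparameterized samples in [-1, 1]
print(samples.shape)                 # torch.Size([5])
print(dist.log_prob(samples).shape)  # torch.Size([5])
print(dist.mean.shape)               # torch.Size([5])
```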

References

Notes

This implementation is strongly based on:
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=1e-06)}#
cdf(value)[source]#

Returns the cumulative density/mass function evaluated at value.

Parameters

value (Tensor) –

cdf_truncated_standard_normal(value)[source]#
property entropy#

Returns entropy of distribution, batched over batch_shape.

Returns

Tensor of shape batch_shape.

eps = 1e-06#
has_rsample = True#
icdf(value)[source]#

Returns the inverse cumulative density/mass function evaluated at value.

Parameters

value (Tensor) –

icdf_truncated_standard_normal(value)[source]#
log_prob(value)[source]#

Returns the log of the probability density/mass function evaluated at value.

Parameters

value (Tensor) –

log_prob_truncated_standard_normal(value)[source]#
property mean#

Returns the mean of the distribution.

rsample(sample_shape=None)[source]#

Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.

property support#

Returns a Constraint object representing this distribution’s support.

property variance#

Returns the variance of the distribution.

class gluonts.torch.distributions.TruncatedNormalOutput(min: float = - 1.0, max: float = 1.0, upscale: float = 5.0, tanh_loc: bool = False)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

distr_cls#

alias of gluonts.torch.distributions.truncated_normal.TruncatedNormal

distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

classmethod domain_map(loc: torch.Tensor, scale: torch.Tensor)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape: Tuple#

Shape of each individual event compatible with the output object.