gluonts.torch.distributions package#
- class gluonts.torch.distributions.AffineTransformed(base_distribution: Distribution, loc=None, scale=None)[source]#
Bases:
TransformedDistribution
Represents the distribution of an affinely transformed random variable.
This is the distribution of Y = scale * X + loc, where X is a random variable distributed according to base_distribution.
- Parameters:
base_distribution – Original distribution
loc – Translation parameter of the affine transformation.
scale – Scaling parameter of the affine transformation.
- property mean#
Returns the mean of the distribution.
- property stddev#
Returns the standard deviation of the distribution.
- property variance#
Returns the variance of the distribution.
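For illustration, a minimal sketch of wrapping a base distribution (a torch Normal, chosen here only as an example) in AffineTransformed; the affine parameters shift and rescale the mean and standard deviation accordingly.

```python
import torch
from torch.distributions import Normal
from gluonts.torch.distributions import AffineTransformed

# Base distribution X ~ Normal(0, 1), batched over 3 entries
base = Normal(loc=torch.zeros(3), scale=torch.ones(3))

# Y = scale * X + loc with loc=10, scale=2
transformed = AffineTransformed(base, loc=10.0, scale=2.0)

print(transformed.mean)    # approximately [10., 10., 10.]
print(transformed.stddev)  # approximately [2., 2., 2.]
samples = transformed.sample((100,))  # shape (100, 3)
```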
- class gluonts.torch.distributions.BetaOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'concentration0': 1, 'concentration1': 1}#
- distr_cls#
alias of
Beta
- classmethod domain_map(concentration1: Tensor, concentration0: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- property value_in_support: float#
A float value that is valid for computing the loss of the corresponding output.
By default 0.0.
- class gluonts.torch.distributions.BinnedUniforms(bins_lower_bound: float, bins_upper_bound: float, logits: Tensor, numb_bins: int = 100, validate_args: Optional[bool] = None)[source]#
Bases:
Distribution
Binned uniforms distribution.
- Parameters:
bins_lower_bound (float) – The lower bound of the bin edges
bins_upper_bound (float) – The upper bound of the bin edges
numb_bins (int) – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.
logits (tensor) – The logits defining the probability of each bin. These are softmaxed. The tensor is of shape (*batch_shape,)
validate_args (bool) –
- arg_constraints = {'logits': Real()}#
- property bins_prob#
Returns the probability of the observed point falling in each of the bins. bins_prob.shape: (*batch_shape, event_shape), where event_shape is numb_bins.
- cdf(x)[source]#
Cumulative density tensor for a tensor of data points x.
‘x’ is expected to be of shape (*batch_shape)
- expand(batch_shape, _instance=None)[source]#
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__.py, when an instance is first created.
- Parameters:
batch_shape (torch.Size) – the desired expanded size.
_instance – new instance provided by subclasses that need to override .expand.
- Returns:
New distribution instance with batch dimensions expanded to batch_shape.
- get_one_hot_bin_indicator(x, in_float=False)[source]#
‘x’ is expected to have shape (*batch_shape), which can be, for example, (), (32,) or (32, 168).
- has_rsample = False#
- icdf(quantiles)[source]#
Inverse CDF for a tensor of quantile levels. ‘quantiles’ is of shape (*batch_shape) with values in (0.0, 1.0).
This is the function to be called from the outside.
- log_binned_p(x)[source]#
Log probability for a tensor of datapoints x.
‘x’ is to have shape (*batch_shape)
- property log_bins_prob#
- log_prob(x)[source]#
Log probability for a tensor of datapoints x.
‘x’ is to have shape (*batch_shape)
- rsample(sample_shape=torch.Size([]))[source]#
We do not have an implementation for the reparameterization trick yet.
- support = Real()#
- class gluonts.torch.distributions.BinnedUniformsOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int)[source]#
Bases:
DistributionOutput
- distr_cls#
alias of
BinnedUniforms
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) BinnedUniforms [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(logits: Tensor) Tensor [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
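For orientation, a hedged sketch of using BinnedUniformsOutput like any other distribution output: project network features onto the bin logits, build the distribution, and query it. The get_args_proj projection helper comes from the Output base API; all sizes and bin settings below are placeholders for illustration.

```python
import torch
from gluonts.torch.distributions import BinnedUniformsOutput

# Placeholder bin range and count.
distr_output = BinnedUniformsOutput(
    bins_lower_bound=-5.0,
    bins_upper_bound=5.0,
    num_bins=50,
)

args_proj = distr_output.get_args_proj(in_features=32)   # projects features to bin logits
distr_args = args_proj(torch.randn(8, 32))               # hypothetical features, batch of 8
distr = distr_output.distribution(distr_args)
median = distr.icdf(torch.full((8,), 0.5))               # inverse CDF at quantile level 0.5
```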
- class gluonts.torch.distributions.DiscreteDistribution(values: Tensor, probs: Tensor, validate_args: Optional[bool] = None)[source]#
Bases:
Distribution
Implements a discrete distribution where the underlying random variable takes a value from the finite set values with the corresponding probabilities.
Note: values can have duplicates, in which case the probability mass of the duplicates is added up.
A natural loss function, especially given that a new observation does not have to come from the finite set values, is the ranked probability score (RPS). For this reason, and to be consistent with the terminology of other models, log_prob is implemented as the negative RPS.
- static adjust_probs(values_sorted, probs_sorted)[source]#
Puts probability mass of all duplicate values into one position (last index of the duplicate).
Assumption: values_sorted is sorted!
- Parameters:
values_sorted –
probs_sorted –
- Returns:
- log_prob(obs: Tensor)[source]#
Returns the log of the probability density/mass function evaluated at value.
- Parameters:
value (Tensor) –
- rps(obs: Tensor, check_for_duplicates: bool = True)[source]#
Implements the ranked probability score, which is the sum of the quantile losses for all possible quantiles.
Here, the number of quantiles is finite and is equal to the number of unique values in (each batch element of) obs.
- Parameters:
obs –
check_for_duplicates –
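As a minimal illustration of the duplicate-merging behaviour described above; the shapes and values below are assumptions for the sketch, not taken from the library docs.

```python
import torch
from gluonts.torch.distributions import DiscreteDistribution

# Sorted support with a duplicated value (2.0); probabilities sum to 1 per batch element.
values_sorted = torch.tensor([[0.0, 1.0, 2.0, 2.0]])
probs_sorted = torch.tensor([[0.1, 0.4, 0.3, 0.2]])

# adjust_probs moves the mass of duplicates onto the last occurrence:
# here the expected result is [[0.1, 0.4, 0.0, 0.5]] (an assumption based on the docstring).
adjusted = DiscreteDistribution.adjust_probs(values_sorted, probs_sorted)

dist = DiscreteDistribution(values=values_sorted, probs=probs_sorted)
```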
- class gluonts.torch.distributions.DistributionOutput(beta: float = 0.0)[source]#
Bases:
Output
Class to construct a distribution given the output of a network.
- distr_cls: type#
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- domain_map(*args: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_dim: int#
Number of event dimensions, i.e., length of the event_shape tuple, of the distributions that this object constructs.
- property forecast_generator: ForecastGenerator#
- loss(target: Tensor, distr_args: Tuple[Tensor, ...], loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Tensor [source]#
Compute loss for target data given network output.
- Parameters:
target – Values of the target time series for which loss is to be computed.
distr_args – Arguments that can be used to construct the output distribution.
loc – Location parameter of the distribution, optional.
scale – Scale parameter of the distribution, optional.
- Returns:
Values of the loss, has same shape as target.
- Return type:
loss_values
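To make the construction pattern concrete, here is a hedged sketch of the typical training-side flow with one of the concrete subclasses (StudentTOutput); the get_args_proj projection helper comes from the Output base API, and all tensor sizes are placeholders.

```python
import torch
from gluonts.torch.distributions import StudentTOutput

# Network features for a batch of 32 series and 24 time steps, 40 hidden units
# (all sizes here are arbitrary placeholders).
features = torch.randn(32, 24, 40)
target = torch.randn(32, 24)

distr_output = StudentTOutput()
args_proj = distr_output.get_args_proj(in_features=40)  # linear heads + domain_map
distr_args = args_proj(features)                        # tuple (df, loc, scale), each (32, 24)
distr = distr_output.distribution(distr_args)           # a torch StudentT distribution

nll = distr_output.loss(target, distr_args)             # same shape as target
loss = nll.mean()
```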
- class gluonts.torch.distributions.GammaOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'concentration': 1, 'rate': 1}#
- distr_cls#
alias of
Gamma
- classmethod domain_map(concentration: Tensor, rate: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- property value_in_support: float#
A float value that is valid for computing the loss of the corresponding output.
By default 0.0.
- class gluonts.torch.distributions.GeneralizedPareto(xi, beta, validate_args=None)[source]#
Bases:
Distribution
Generalized Pareto distribution.
- Parameters:
xi – Tensor containing the positive shape (tail index) parameter of the distribution.
beta – Tensor containing the positive scale parameter of the distribution.
- arg_constraints = {'beta': GreaterThan(lower_bound=0.0), 'xi': GreaterThan(lower_bound=0.0)}#
- has_rsample = False#
- property stddev#
Returns the standard deviation of the distribution.
- support = GreaterThan(lower_bound=0.0)#
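A minimal sketch of instantiating the distribution directly; it assumes the class exposes the standard torch.distributions log_prob interface, and the parameter values are placeholders.

```python
import torch
from gluonts.torch.distributions import GeneralizedPareto

# xi is the (positive) shape / tail-index parameter, beta the (positive) scale.
dist = GeneralizedPareto(xi=torch.tensor([0.2]), beta=torch.tensor([1.0]))

x = torch.tensor([0.5])   # support is the positive half-line
logp = dist.log_prob(x)   # standard torch Distribution interface (assumed here)
```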
- class gluonts.torch.distributions.GeneralizedParetoOutput[source]#
Bases:
DistributionOutput
- distr_cls#
alias of
GeneralizedPareto
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) GeneralizedPareto [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(xi: Tensor, beta: Tensor) Tuple[Tensor, Tensor] [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.ISQF(spline_knots: Tensor, spline_heights: Tensor, beta_l: Tensor, beta_r: Tensor, qk_y: Tensor, qk_x: Tensor, tol: float = 0.0001, validate_args: bool = False)[source]#
Bases:
Distribution
Distribution class for the Incremental (Spline) Quantile Function in the paper Learning Quantile Functions without Quantile Crossing for Distribution-free Time Series Forecasting by Park, Robinson, Aubet, Kan, Gasthaus, Wang.
- Parameters:
spline_knots – Tensor parametrizing the x-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)
spline_heights – Tensor parametrizing the y-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)
qk_x – Tensor containing the increasing x-positions of the quantile knots. Shape: (*batch_shape, num_qk)
qk_y – Tensor containing the increasing y-positions of the quantile knots. Shape: (*batch_shape, num_qk)
beta_l – Tensor containing the non-negative learnable parameter of the left tail. Shape: (*batch_shape,)
beta_r – Tensor containing the non-negative learnable parameter of the right tail. Shape: (*batch_shape,)
- property batch_shape: Size#
Returns the shape over which parameters are batched.
- cdf(z: Tensor) Tensor [source]#
Computes the quantile level alpha_tilde such that q(alpha_tilde) = z.
- Parameters:
z – Tensor of shape (*batch_shape,)
- Returns:
Tensor of shape = (*batch_shape,)
- Return type:
alpha_tilde
- cdf_spline(z: Tensor) Tensor [source]#
For observations z and splines defined in [qk_x[k], qk_x[k+1]], computes the quantile level alpha_tilde such that:
alpha_tilde = q^{-1}(z) if z is in between qk_x[k] and qk_x[k+1]
alpha_tilde = qk_x[k] if z < qk_x[k]
alpha_tilde = qk_x[k+1] if z > qk_x[k+1]
- Parameters:
z – Observation, shape = (*batch_shape,)
- Returns:
Corresponding quantile level, shape = (*batch_shape, num_qk-1)
- Return type:
alpha_tilde
- cdf_tail(z: Tensor, left_tail: bool = True) Tensor [source]#
Computes the quantile level alpha_tilde such that:
alpha_tilde = q^{-1}(z) if z is in the tail region
alpha_tilde = qk_x_l or qk_x_r if z is in the non-tail region
- Parameters:
z – Observation, shape = (*batch_shape,)
left_tail – If True, compute alpha_tilde for the left tail; otherwise, compute alpha_tilde for the right tail
- Returns:
Corresponding quantile level, shape = (*batch_shape,)
- Return type:
alpha_tilde
- crps(z: Tensor) Tensor [source]#
Compute the CRPS in analytical form.
- Parameters:
z – Observation to evaluate. Shape = (*batch_shape,)
- Returns:
Tensor containing the CRPS, of the same shape as z
- Return type:
Tensor
- crps_spline(z: Tensor) Tensor [source]#
Compute the CRPS in analytical form for the spline.
- Parameters:
z – Observation to evaluate. Shape = (*batch_shape,)
- Returns:
Tensor containing the CRPS, of the same shape as z
- Return type:
Tensor
- crps_tail(z: Tensor, left_tail: bool = True) Tensor [source]#
Compute the CRPS in analytical form for the left/right tails.
- Parameters:
z – Observation to evaluate. Shape = (*batch_shape,)
left_tail – If True, compute the CRPS for the left tail; otherwise, compute the CRPS for the right tail
- Returns:
Tensor containing the CRPS, of the same shape as z
- Return type:
Tensor
- static parameterize_qk(quantile_knots: Tensor) Tuple[Tensor, Tensor, Tensor, Tensor] [source]#
Function to parameterize the x or y positions of the num_qk quantile knots.
- Parameters:
quantile_knots – x or y positions of the quantile knots, shape: (*batch_shape, num_qk)
- Returns:
qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)
qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)
qk_l – x or y positions of the left-most quantile knot (qk), shape: (*batch_shape)
qk_r – x or y positions of the right-most quantile knot (qk), shape: (*batch_shape)
- static parameterize_spline(spline_knots: Tensor, qk: Tensor, qk_plus: Tensor, tol: float = 0.0001) Tuple[Tensor, Tensor] [source]#
Function to parameterize the x or y positions of the spline knots.
- Parameters:
spline_knots – variable that parameterizes the spline knot positions
qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)
qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)
num_pieces – number of spline knot pieces
tol – tolerance hyperparameter for numerical stability
- Returns:
- static parameterize_tail(beta: Tensor, qk_x: Tensor, qk_y: Tensor) Tuple[Tensor, Tensor] [source]#
Function to parameterize the tail parameters. Note that the exponential tails are given by
q(alpha) = a_l * log(alpha) + b_l for the left tail,
q(alpha) = a_r * log(1 - alpha) + b_r for the right tail,
where a_l = 1/beta_l, b_l = -a_l * log(qk_x_l) + q(qk_x_l), a_r = 1/beta_r, b_r = a_r * log(1 - qk_x_r) + q(qk_x_r).
- Parameters:
beta – parameterizes the left or right tail, shape: (*batch_shape,)
qk_x – left- or right-most x-positions of the quantile knots, shape: (*batch_shape,)
qk_y – left- or right-most y-positions of the quantile knots, shape: (*batch_shape,)
- Returns:
tail_a – a_l or a_r as described above
tail_b – b_l or b_r as described above
- quantile_internal(alpha: Tensor, dim: Optional[int] = None) Tensor [source]#
Evaluates the quantile function at the quantile levels alpha.
- Parameters:
alpha – Tensor of shape = (*batch_shape,) if dim=None, or containing an additional axis at the specified position otherwise
dim – Index of the axis containing the different quantile levels which are to be computed. Read the description below for detailed information
- Returns:
Quantiles tensor, of the same shape as alpha
- Return type:
Tensor
- class gluonts.torch.distributions.ISQFOutput(num_pieces: int, qk_x: List[float], tol: float = 0.0001)[source]#
Bases:
DistributionOutput
DistributionOutput class for the Incremental (Spline) Quantile Function.
- Parameters:
num_pieces – number of spline pieces for each spline. ISQF reduces to IQF when num_pieces = 1.
qk_x – List containing the x-positions of quantile knots
tol – tolerance for numerical safeguarding
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) ISQF [source]#
Function outputting the distribution class.
- Parameters:
distr_args – distribution arguments
loc – shift to the data mean
scale – scale to the data
- classmethod domain_map(spline_knots: Tensor, spline_heights: Tensor, beta_l: Tensor, beta_r: Tensor, quantile_knots: Tensor, tol: float = 0.0001) Tuple[Tensor, Tensor, Tensor, Tensor, Tensor] [source]#
Domain map function. The inputs of this function are specified by self.args_dim.
spline_knots, spline_heights: parameterizing the x-/ y-positions of the spline knots, shape = (*batch_shape, (num_qk-1)*num_pieces)
beta_l, beta_r: parameterizing the left/right tail, shape = (*batch_shape, 1)
quantile_knots: parameterizing the y-positions of the quantile knots, shape = (*batch_shape, num_qk)
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
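A short construction sketch; the knot positions below are placeholders, chosen only to illustrate the arguments.

```python
from gluonts.torch.distributions import ISQFOutput

# Placeholder quantile-knot x-positions (probability levels).
# With num_pieces == 1, ISQF reduces to IQF.
distr_output = ISQFOutput(num_pieces=4, qk_x=[0.01, 0.1, 0.5, 0.9, 0.99])
```

Such an output can typically be passed wherever an estimator accepts a distr_output argument, in place of the default parametric output.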
- class gluonts.torch.distributions.ImplicitQuantileNetwork(outputs: Tensor, taus: Tensor, validate_args=None)[source]#
Bases:
Distribution
Distribution class for the Implicit Quantile Network, from which we can sample or calculate the quantile loss.
- Parameters:
outputs – Outputs from the Implicit Quantile Network.
taus – Tensor of random numbers drawn from the Beta or Uniform distribution for the corresponding outputs.
- arg_constraints: Dict[str, Constraint] = {}#
- class gluonts.torch.distributions.ImplicitQuantileNetworkOutput(output_domain: Optional[str] = None, concentration1: float = 1.0, concentration0: float = 1.0, cos_embedding_dim: int = 64)[source]#
Bases:
DistributionOutput
DistributionOutput class for the IQN from the paper
Probabilistic Time Series Forecasting with Implicit Quantile Networks
(https://arxiv.org/abs/2107.03743) by Gouttes et al., 2021.
- Parameters:
output_domain – Optional domain mapping of the output. Can be “positive”, “unit” or None.
concentration1 – Alpha parameter of the Beta distribution when sampling the taus during training.
concentration0 – Beta parameter of the Beta distribution when sampling the taus during training.
cos_embedding_dim – The embedding dimension for the taus embedding layer of IQN. Default is 64.
- args_dim: Dict[str, int] = {'quantile_function': 1}#
- distr_cls#
alias of
ImplicitQuantileNetwork
- distribution(distr_args, loc=0, scale=None) ImplicitQuantileNetwork [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(*args)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape#
Shape of each individual event compatible with the output object.
- loss(target: Tensor, distr_args: Tuple[Tensor, ...], loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Tensor [source]#
Compute loss for target data given network output.
- Parameters:
target – Values of the target time series for which loss is to be computed.
distr_args – Arguments that can be used to construct the output distribution.
loc – Location parameter of the distribution, optional.
scale – Scale parameter of the distribution, optional.
- Returns:
Values of the loss, has same shape as target.
- Return type:
loss_values
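As a usage sketch, this output can replace the default parametric head of a torch estimator that accepts a distr_output argument; DeepAREstimator is used here only as an example, and the frequency, horizon and training data are placeholders.

```python
from gluonts.torch.distributions import ImplicitQuantileNetworkOutput
from gluonts.torch.model.deepar import DeepAREstimator

# Example frequency / horizon; adjust to the dataset at hand.
estimator = DeepAREstimator(
    freq="H",
    prediction_length=24,
    distr_output=ImplicitQuantileNetworkOutput(cos_embedding_dim=64),
)

# predictor = estimator.train(training_data)  # `training_data`: a GluonTS dataset (assumed)
```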
- class gluonts.torch.distributions.LaplaceOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'loc': 1, 'scale': 1}#
- distr_cls#
alias of
Laplace
- classmethod domain_map(loc: Tensor, scale: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.NegativeBinomialOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'logits': 1, 'total_count': 1}#
- distr_cls#
alias of
NegativeBinomial
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(total_count: Tensor, logits: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.NormalOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'loc': 1, 'scale': 1}#
- distr_cls#
alias of
Normal
- classmethod domain_map(loc: Tensor, scale: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.Output[source]#
Bases:
object
Converts raw neural network output into a forecast and computes loss.
- args_dim: Dict[str, int]#
- domain_map(*args: Tensor) Tuple[Tensor, ...] [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property dtype#
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- property forecast_generator: ForecastGenerator#
- in_features: int#
- loss(target: Tensor, distr_args: Tuple[Tensor, ...], loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Tensor [source]#
Compute loss for target data given network output.
- Parameters:
target – Values of the target time series for which loss is to be computed.
distr_args – Arguments that can be used to construct the output distribution.
loc – Location parameter of the distribution, optional.
scale – Scale parameter of the distribution, optional.
- Returns:
Values of the loss, has same shape as target.
- Return type:
loss_values
- property value_in_support: float#
A float value that is valid for computing the loss of the corresponding output.
By default 0.0.
- class gluonts.torch.distributions.PiecewiseLinear(gamma: Tensor, slopes: Tensor, knot_spacings: Tensor, validate_args=False)[source]#
Bases:
Distribution
- property batch_shape: Size#
Returns the shape over which parameters are batched.
- class gluonts.torch.distributions.PiecewiseLinearOutput(num_pieces: int)[source]#
Bases:
DistributionOutput
- distr_cls#
alias of
PiecewiseLinear
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) PiecewiseLinear [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(gamma: Tensor, slopes: Tensor, knot_spacings: Tensor) Tuple[Tensor, Tensor, Tensor] [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.PoissonOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'rate': 1}#
- distr_cls#
alias of
Poisson
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(rate: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.QuantileOutput(quantiles: List[float])[source]#
Bases:
Output
- domain_map(*args: Tensor) Tuple[Tensor, ...] [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- property forecast_generator: ForecastGenerator#
- loss(target: Tensor, distr_args: Tuple[Tensor, ...], loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Tensor [source]#
Compute loss for target data given network output.
- Parameters:
target – Values of the target time series for which loss is to be computed.
distr_args – Arguments that can be used to construct the output distribution.
loc – Location parameter of the distribution, optional.
scale – Scale parameter of the distribution, optional.
- Returns:
Values of the loss, has same shape as target.
- Return type:
loss_values
- property quantiles: List[float]#
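A hedged sketch of computing the quantile (pinball) loss from projected network features; the get_args_proj projection helper comes from the Output base API, and all tensor sizes are placeholders.

```python
import torch
from gluonts.torch.distributions import QuantileOutput

output = QuantileOutput(quantiles=[0.1, 0.5, 0.9])
args_proj = output.get_args_proj(in_features=40)  # linear projection onto the quantile levels

features = torch.randn(32, 24, 40)                # placeholder network features
target = torch.randn(32, 24)

distr_args = args_proj(features)                  # predicted quantiles per time step
loss = output.loss(target, distr_args)            # same shape as target, per the docs above
```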
- class gluonts.torch.distributions.SplicedBinnedPareto(bins_lower_bound: float, bins_upper_bound: float, logits: Tensor, upper_gp_xi: Tensor, upper_gp_beta: Tensor, lower_gp_xi: Tensor, lower_gp_beta: Tensor, numb_bins: int = 100, tail_percentile_gen_pareto: float = 0.05, validate_args=None)[source]#
Bases:
BinnedUniforms
Spliced Binned-Pareto univariate distribution.
- Parameters:
bins_lower_bound – The lower bound of the bin edges
bins_upper_bound – The upper bound of the bin edges
numb_bins – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.
tail_percentile_gen_pareto – The percentile of the distribution that is in each tail. Default value is 0.05. NB: This symmetric percentile can still represent asymmetric upper and lower tails.
- arg_constraints = {'logits': Real(), 'lower_gp_beta': GreaterThan(lower_bound=0.0), 'lower_gp_xi': GreaterThan(lower_bound=0.0), 'upper_gp_beta': GreaterThan(lower_bound=0.0), 'upper_gp_xi': GreaterThan(lower_bound=0.0)}#
- cdf(x: Tensor)[source]#
Cumulative density tensor for a tensor of data points x.
‘x’ is expected to be of shape (*batch_shape)
- has_rsample = False#
- log_prob(x: Tensor, for_training=True)[source]#
- Parameters:
x – a tensor of size (‘batch_size’, 1)
for_training – boolean to indicate whether to return the log-probability or the loss (which is an adjusted log-probability)
- support = Real()#
- class gluonts.torch.distributions.SplicedBinnedParetoOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int, tail_percentile_gen_pareto: float)[source]#
Bases:
DistributionOutput
- distr_cls#
alias of
SplicedBinnedPareto
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) BinnedUniforms [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(logits: Tensor, upper_gp_xi: Tensor, upper_gp_beta: Tensor, lower_gp_xi: Tensor, lower_gp_beta: Tensor) Tuple[Tensor, Tensor, Tensor, Tensor, Tensor] [source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
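A construction sketch with placeholder bin settings; in practice the bin range should cover the (possibly scaled) target values.

```python
from gluonts.torch.distributions import SplicedBinnedParetoOutput

distr_output = SplicedBinnedParetoOutput(
    bins_lower_bound=-5.0,
    bins_upper_bound=5.0,
    num_bins=100,
    tail_percentile_gen_pareto=0.05,  # 5% of mass in each generalized Pareto tail
)
```

Like the other DistributionOutput subclasses above, it can typically be used wherever a distr_output is expected.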
- class gluonts.torch.distributions.StudentTOutput(beta: float = 0.0)[source]#
Bases:
DistributionOutput
- args_dim: Dict[str, int] = {'df': 1, 'loc': 1, 'scale': 1}#
- classmethod domain_map(df: Tensor, loc: Tensor, scale: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
- class gluonts.torch.distributions.TruncatedNormal(loc: Tensor, scale: Tensor, min: Union[Tensor, float] = -1.0, max: Union[Tensor, float] = 1.0, upscale: Union[Tensor, float] = 5.0, tanh_loc: bool = False)[source]#
Bases:
Distribution
Implements a Truncated Normal distribution with location scaling.
Location scaling prevents the location from being “too far” from 0, which ultimately leads to numerically unstable samples and poor gradient computation (e.g. gradient explosion). In practice, the location is computed according to
\[loc = \tanh(loc / upscale) \cdot upscale\]
This behaviour can be disabled by switching off the tanh_loc parameter (see below).
- Parameters:
loc – normal distribution location parameter
scale – normal distribution sigma parameter (squared root of variance)
min – minimum value of the distribution. Default = -1.0
max – maximum value of the distribution. Default = 1.0
upscale – scaling factor. Default = 5.0
tanh_loc – if True, the above formula is used for the location scaling, otherwise the raw value is kept. Default is False
References
Notes
- This implementation is strongly based on:
- arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=1e-06)}#
- cdf(value)[source]#
Returns the cumulative density/mass function evaluated at value.
- Parameters:
value (Tensor) –
- property entropy#
Returns entropy of distribution, batched over batch_shape.
- Returns:
Tensor of shape batch_shape.
- eps = 1e-06#
- has_rsample = True#
- icdf(value)[source]#
Returns the inverse cumulative density/mass function evaluated at value.
- Parameters:
value (Tensor) –
- log_prob(value)[source]#
Returns the log of the probability density/mass function evaluated at value.
- Parameters:
value (Tensor) –
- property mean#
Returns the mean of the distribution.
- rsample(sample_shape=None)[source]#
Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.
- property support#
Returns a
Constraint
object representing this distribution’s support.
- property variance#
Returns the variance of the distribution.
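A minimal sampling sketch with placeholder parameters; it follows the constructor signature above and assumes the standard rsample/log_prob behaviour documented for this class.

```python
import torch
from gluonts.torch.distributions import TruncatedNormal

loc = torch.tensor([0.3, -4.0])
scale = torch.tensor([0.5, 1.0])

# tanh_loc=True applies loc = tanh(loc / upscale) * upscale, keeping the location
# close to 0 for numerical stability.
dist = TruncatedNormal(loc=loc, scale=scale, min=-1.0, max=1.0, tanh_loc=True)

samples = dist.rsample((1000,))   # reparameterized samples, shape (1000, 2)
assert samples.min() >= -1.0 and samples.max() <= 1.0   # all samples lie in [min, max]
log_p = dist.log_prob(samples)
```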
- class gluonts.torch.distributions.TruncatedNormalOutput(min: float = -1.0, max: float = 1.0, upscale: float = 5.0, tanh_loc: bool = False)[source]#
Bases:
DistributionOutput
- distr_cls#
alias of
TruncatedNormal
- distribution(distr_args, loc: Optional[Tensor] = None, scale: Optional[Tensor] = None) Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters:
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(loc: Tensor, scale: Tensor)[source]#
Converts arguments to the right shape and domain.
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event compatible with the output object.
Submodules#
- gluonts.torch.distributions.affine_transformed module
- gluonts.torch.distributions.binned_uniforms module
BinnedUniforms
BinnedUniforms.arg_constraints
BinnedUniforms.bins_prob
BinnedUniforms.cdf()
BinnedUniforms.entropy()
BinnedUniforms.enumerate_support()
BinnedUniforms.expand()
BinnedUniforms.get_one_hot_bin_indicator()
BinnedUniforms.has_rsample
BinnedUniforms.icdf()
BinnedUniforms.log_binned_p()
BinnedUniforms.log_bins_prob
BinnedUniforms.log_prob()
BinnedUniforms.mean
BinnedUniforms.median
BinnedUniforms.mode
BinnedUniforms.pdf()
BinnedUniforms.rsample()
BinnedUniforms.sample()
BinnedUniforms.support
BinnedUniforms.variance()
BinnedUniformsOutput
- gluonts.torch.distributions.discrete_distribution module
- gluonts.torch.distributions.distribution_output module
- gluonts.torch.distributions.generalized_pareto module
- gluonts.torch.distributions.implicit_quantile_network module
ImplicitQuantileModule
ImplicitQuantileNetwork
ImplicitQuantileNetworkOutput
ImplicitQuantileNetworkOutput.args_dim
ImplicitQuantileNetworkOutput.distr_cls
ImplicitQuantileNetworkOutput.distribution()
ImplicitQuantileNetworkOutput.domain_map()
ImplicitQuantileNetworkOutput.event_shape
ImplicitQuantileNetworkOutput.get_args_proj()
ImplicitQuantileNetworkOutput.in_features
ImplicitQuantileNetworkOutput.loss()
QuantileLayer
- gluonts.torch.distributions.isqf module
- gluonts.torch.distributions.negative_binomial module
- gluonts.torch.distributions.output module
- gluonts.torch.distributions.piecewise_linear module
- gluonts.torch.distributions.quantile_output module
- gluonts.torch.distributions.spliced_binned_pareto module
- gluonts.torch.distributions.studentT module
- gluonts.torch.distributions.truncated_normal module
TruncatedNormal
TruncatedNormal.arg_constraints
TruncatedNormal.cdf()
TruncatedNormal.cdf_truncated_standard_normal()
TruncatedNormal.entropy
TruncatedNormal.eps
TruncatedNormal.has_rsample
TruncatedNormal.icdf()
TruncatedNormal.icdf_truncated_standard_normal()
TruncatedNormal.log_prob()
TruncatedNormal.log_prob_truncated_standard_normal()
TruncatedNormal.mean
TruncatedNormal.rsample()
TruncatedNormal.support
TruncatedNormal.variance
TruncatedNormalOutput