gluonts.torch.distributions package#
- class gluonts.torch.distributions.AffineTransformed(base_distribution: torch.distributions.distribution.Distribution, loc=None, scale=None)[source]#
Bases:
torch.distributions.transformed_distribution.TransformedDistribution
Represents the distribution of an affinely transformed random variable.
This is the distribution of Y = scale * X + loc, where X is a random variable distributed according to base_distribution.
- Parameters
base_distribution – Original distribution
loc – Translation parameter of the affine transformation.
scale – Scaling parameter of the affine transformation.
- property mean#
Returns the mean of the distribution.
- property stddev#
Returns the standard deviation of the distribution.
- property variance#
Returns the variance of the distribution.
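Example (an illustrative sketch, not part of the generated reference): constructing an AffineTransformed distribution from a standard Normal base; the loc and scale values below are made up.

```python
import torch
from torch.distributions import Normal
from gluonts.torch.distributions import AffineTransformed

# base distribution X ~ Normal(0, 1) with batch shape (3,)
base = Normal(loc=torch.zeros(3), scale=torch.ones(3))

# Y = 2 * X + 10
transformed = AffineTransformed(base, loc=torch.tensor(10.0), scale=torch.tensor(2.0))

print(transformed.mean)    # loc + scale * base mean -> tensor of 10s
print(transformed.stddev)  # scale * base stddev     -> tensor of 2s
```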
- class gluonts.torch.distributions.BetaOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'concentration0': 1, 'concentration1': 1}#
- distr_cls#
alias of
torch.distributions.beta.Beta
- classmethod domain_map(concentration1: torch.Tensor, concentration0: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- property value_in_support: float#
A float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.
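Example (a hedged sketch of the usual DistributionOutput round trip): unconstrained network outputs are mapped to valid concentration parameters via domain_map and then turned into a torch.distributions.Beta. The batch size and raw values are illustrative.

```python
import torch
from gluonts.torch.distributions import BetaOutput

output = BetaOutput()

# pretend these come from a network head; trailing axis of size 1 per args_dim
raw_concentration1 = torch.randn(8, 1)
raw_concentration0 = torch.randn(8, 1)

distr_args = output.domain_map(raw_concentration1, raw_concentration0)
beta = output.distribution(distr_args)  # a torch.distributions.beta.Beta

print(beta.batch_shape)    # expected: torch.Size([8])
print(output.event_shape)  # expected: () for a univariate output
```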
- class gluonts.torch.distributions.BinnedUniforms(bins_lower_bound: float, bins_upper_bound: float, logits: torch._VariableFunctionsClass.tensor, numb_bins: int = 100, validate_args: Optional[bool] = None)[source]#
Bases:
torch.distributions.distribution.Distribution
Binned uniforms distribution.
- Parameters
bins_lower_bound (float) – The lower bound of the bin edges
bins_upper_bound (float) – The upper bound of the bin edges
numb_bins (int) – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.
logits (tensor) – The logits defining the probability of each bin. These are softmaxed. The tensor is of shape (*batch_shape, numb_bins).
validate_args (bool) – Sets whether validation of the arguments is enabled or disabled.
- arg_constraints = {'logits': Real()}#
- property bins_prob#
Returns the probability of the observed point falling in each of the bins. bins_prob.shape: (*batch_shape, event_shape), where event_shape is numb_bins.
- cdf(x)[source]#
Cumulative distribution function evaluated at a tensor of data points x. x is expected to be of shape (*batch_shape).
- expand(batch_shape, _instance=None)[source]#
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution's parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__.py, when an instance is first created.
- Parameters
batch_shape (torch.Size) – the desired expanded size.
_instance – new instance provided by subclasses that need to override .expand.
- Returns
New distribution instance with batch dimensions expanded to batch_shape.
- get_one_hot_bin_indicator(x, in_float=False)[source]#
x is expected to have shape (*batch_shape), which can for example be (), (32,) or (32, 168).
- has_rsample = False#
- icdf(quantiles)[source]#
Inverse CDF for a tensor of quantile levels. quantiles is of shape (*batch_shape), with values in (0.0, 1.0).
This is the function to be called from the outside.
- log_binned_p(x)[source]#
Log probability for a tensor of data points x. x is expected to have shape (*batch_shape).
- property log_bins_prob#
- log_prob(x)[source]#
Log probability for a tensor of data points x. x is expected to have shape (*batch_shape).
- rsample(sample_shape=torch.Size([]))[source]#
We do not have an implementation for the reparameterization trick yet.
- support = Real()#
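Example (a hedged sketch of constructing BinnedUniforms directly): the bin bounds, the batch size, and the assumption that logits carry one value per bin along the trailing axis are illustrative.

```python
import torch
from gluonts.torch.distributions import BinnedUniforms

numb_bins = 10
logits = torch.randn(4, numb_bins)  # unnormalized; softmaxed internally into bin probabilities

distr = BinnedUniforms(
    bins_lower_bound=0.0,
    bins_upper_bound=10.0,
    logits=logits,
    numb_bins=numb_bins,
)

x = torch.rand(4) * 10.0        # observations inside the support
print(distr.log_prob(x).shape)  # expected: torch.Size([4])
print(distr.bins_prob.shape)    # expected: torch.Size([4, 10])
```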
- class gluonts.torch.distributions.BinnedUniformsOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int)[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- distr_cls#
alias of
gluonts.torch.distributions.binned_uniforms.BinnedUniforms
- distribution(distr_args, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.binned_uniforms.BinnedUniforms [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(logits: torch.Tensor) torch.Tensor [source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.DistributionOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.Output
Class to construct a distribution given the output of a network.
- distr_cls: type#
- distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- domain_map(*args: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_dim: int#
Number of event dimensions, i.e., length of the event_shape tuple, of the distributions that this object constructs.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- property value_in_support: float#
A float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.
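As a sketch of how the concrete subclasses below are put together, a hypothetical LaplaceOutput (not part of gluonts) could look roughly like this, assuming the same args_dim, domain_map, and distr_cls conventions; gluonts' own outputs typically also add a small epsilon for numerical stability.

```python
from typing import Dict, Tuple

import torch
import torch.nn.functional as F
from torch.distributions import Laplace

from gluonts.torch.distributions import DistributionOutput


class LaplaceOutput(DistributionOutput):
    # one network output per distribution argument
    args_dim: Dict[str, int] = {"loc": 1, "scale": 1}
    distr_cls: type = Laplace

    @classmethod
    def domain_map(cls, loc: torch.Tensor, scale: torch.Tensor):
        # map the unconstrained scale to the positive domain and drop the
        # trailing axis of size 1 so the resulting event_shape is ()
        return loc.squeeze(-1), F.softplus(scale).squeeze(-1)

    @property
    def event_shape(self) -> Tuple:
        return ()
```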
- class gluonts.torch.distributions.GammaOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'concentration': 1, 'rate': 1}#
- distr_cls#
alias of
torch.distributions.gamma.Gamma
- classmethod domain_map(concentration: torch.Tensor, rate: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- property value_in_support: float#
A float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.
- class gluonts.torch.distributions.GeneralizedPareto(xi, beta, validate_args=None)[source]#
Bases:
torch.distributions.distribution.Distribution
Generalised Pareto distribution.
- Parameters
xi – Shape parameter of the distribution (constrained to be positive).
beta – Scale parameter of the distribution (constrained to be positive).
- arg_constraints = {'beta': GreaterThan(lower_bound=0.0), 'xi': GreaterThan(lower_bound=0.0)}#
- has_rsample = False#
- property stddev#
Returns the standard deviation of the distribution.
- support = GreaterThan(lower_bound=0.0)#
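Example (a minimal hedged sketch of constructing the distribution directly): the parameter values are illustrative, with xi read as the shape parameter and beta as the scale parameter.

```python
import torch
from gluonts.torch.distributions import GeneralizedPareto

xi = torch.full((8,), 0.2)  # shape parameter, must be positive
beta = torch.ones(8)        # scale parameter, must be positive

gp = GeneralizedPareto(xi, beta)
print(gp.stddev.shape)      # expected: torch.Size([8]); finite since xi < 0.5
```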
- class gluonts.torch.distributions.GeneralizedParetoOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- distr_cls#
alias of
gluonts.torch.distributions.generalized_pareto.GeneralizedPareto
- distribution(distr_args, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.generalized_pareto.GeneralizedPareto [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(xi: torch.Tensor, beta: torch.Tensor) Tuple[torch.Tensor, torch.Tensor] [source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.ISQF(spline_knots: torch.Tensor, spline_heights: torch.Tensor, beta_l: torch.Tensor, beta_r: torch.Tensor, qk_y: torch.Tensor, qk_x: torch.Tensor, tol: float = 0.0001, validate_args: bool = False)[source]#
Bases:
torch.distributions.distribution.Distribution
Distribution class for the Incremental (Spline) Quantile Function in the paper Learning Quantile Functions without Quantile Crossing for Distribution-free Time Series Forecasting by Park, Robinson, Aubet, Kan, Gasthaus, Wang.
- Parameters
spline_knots – Tensor parametrizing the x-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)
spline_heights – Tensor parametrizing the y-positions of the spline knots. Shape: (*batch_shape, (num_qk-1), num_pieces)
qk_x – Tensor containing the increasing x-positions of the quantile knots. Shape: (*batch_shape, num_qk)
qk_y – Tensor containing the increasing y-positions of the quantile knots. Shape: (*batch_shape, num_qk)
beta_l – Tensor containing the non-negative learnable parameter of the left tail. Shape: (*batch_shape,)
beta_r – Tensor containing the non-negative learnable parameter of the right tail. Shape: (*batch_shape,)
- property batch_shape: torch.Size([])#
Returns the shape over which parameters are batched.
- cdf(z: torch.Tensor) torch.Tensor [source]#
Computes the quantile level alpha_tilde such that q(alpha_tilde) = z.
- Parameters
z – Tensor of shape = (*batch_shape,)
- Returns
Tensor of shape = (*batch_shape,)
- Return type
alpha_tilde
- cdf_spline(z: torch.Tensor) torch.Tensor [source]#
For observations z and splines defined in [qk_x[k], qk_x[k+1]], computes the quantile level alpha_tilde such that alpha_tilde = q^{-1}(z) if z is between qk_x[k] and qk_x[k+1], alpha_tilde = qk_x[k] if z < qk_x[k], and alpha_tilde = qk_x[k+1] if z > qk_x[k+1].
- Parameters
z – Observation, shape = (*batch_shape,)
- Returns
Corresponding quantile level, shape = (*batch_shape, num_qk-1)
- Return type
alpha_tilde
- cdf_tail(z: torch.Tensor, left_tail: bool = True) torch.Tensor [source]#
Computes the quantile level alpha_tilde such that alpha_tilde = q^{-1}(z) if z is in the tail region, and alpha_tilde = qk_x_l or qk_x_r if z is in the non-tail region.
- Parameters
z – Observation, shape = (*batch_shape,)
left_tail – If True, compute alpha_tilde for the left tail; otherwise, compute alpha_tilde for the right tail
- Returns
Corresponding quantile level, shape = (*batch_shape,)
- Return type
alpha_tilde
- crps(z: torch.Tensor) torch.Tensor [source]#
Compute the CRPS in analytical form.
- Parameters
z – Observation to evaluate. Shape = (*batch_shape,)
- Returns
Tensor containing the CRPS, of the same shape as z
- Return type
Tensor
- crps_spline(z: torch.Tensor) torch.Tensor [source]#
Compute the CRPS in analytical form for the spline.
- Parameters
z – Observation to evaluate. Shape = (*batch_shape,)
- Returns
Tensor containing the CRPS, of the same shape as z
- Return type
Tensor
- crps_tail(z: torch.Tensor, left_tail: bool = True) torch.Tensor [source]#
Compute the CRPS in analytical form for the left/right tails.
- Parameters
z – Observation to evaluate. Shape = (*batch_shape,)
left_tail – If True, compute the CRPS for the left tail; otherwise, compute the CRPS for the right tail
- Returns
Tensor containing the CRPS, of the same shape as z
- Return type
Tensor
- static parameterize_qk(quantile_knots: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor] [source]#
Function to parameterize the x or y positions of the num_qk quantile knots.
- Parameters
quantile_knots – x or y positions of the quantile knots, shape: (*batch_shape, num_qk)
- Returns
qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)
qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)
qk_l – x or y positions of the left-most quantile knot (qk), shape: (*batch_shape)
qk_r – x or y positions of the right-most quantile knot (qk), shape: (*batch_shape)
- static parameterize_spline(spline_knots: torch.Tensor, qk: torch.Tensor, qk_plus: torch.Tensor, tol: float = 0.0001) Tuple[torch.Tensor, torch.Tensor] [source]#
Function to parameterize the x or y positions of the spline knots.
- Parameters
spline_knots – variable that parameterizes the spline knot positions
qk – x or y positions of the quantile knots (qk), with index=1, …, num_qk-1, shape: (*batch_shape, num_qk-1)
qk_plus – x or y positions of the quantile knots (qk), with index=2, …, num_qk, shape: (*batch_shape, num_qk-1)
num_pieces – number of spline knot pieces
tol – tolerance hyperparameter for numerical stability
- Returns
- static parameterize_tail(beta: torch.Tensor, qk_x: torch.Tensor, qk_y: torch.Tensor) Tuple[torch.Tensor, torch.Tensor] [source]#
Function to parameterize the tail parameters. Note that the exponential tails are given by q(alpha) = a_l log(alpha) + b_l for the left tail and q(alpha) = a_r log(1-alpha) + b_r for the right tail, where a_l = 1/beta_l, b_l = -a_l*log(qk_x_l) + q(qk_x_l), a_r = 1/beta_r, b_r = a_r*log(1-qk_x_r) + q(qk_x_r).
- Parameters
beta – parameterizes the left or right tail, shape: (*batch_shape,)
qk_x – left- or right-most x-positions of the quantile knots, shape: (*batch_shape,)
qk_y – left- or right-most y-positions of the quantile knots, shape: (*batch_shape,)
- Returns
tail_a – a_l or a_r as described above
tail_b – b_l or b_r as described above
- quantile_internal(alpha: torch.Tensor, dim: Optional[int] = None) torch.Tensor [source]#
Evaluates the quantile function at the quantile levels input_alpha.
- Parameters
alpha – Tensor of shape = (*batch_shape,) if dim=None, or containing an additional axis at the specified position otherwise
dim – Index of the axis containing the different quantile levels which are to be computed
- Returns
Quantiles tensor, of the same shape as alpha
- Return type
Tensor
- class gluonts.torch.distributions.ISQFOutput(num_pieces: int, qk_x: List[float], tol: float = 0.0001)[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
DistributionOutput class for the Incremental (Spline) Quantile Function.
- Parameters
num_pieces – number of spline pieces for each spline. ISQF reduces to IQF when num_pieces = 1
qk_x – List containing the x-positions of quantile knots
tol – tolerance for numerical safeguarding
- distr_cls#
- distribution(distr_args, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.isqf.ISQF [source]#
Constructs the associated distribution, given the distribution arguments and, optionally, loc and scale tensors.
- Parameters
distr_args – distribution arguments
loc – shift applied to the data mean
scale – scale applied to the data
- classmethod domain_map(spline_knots: torch.Tensor, spline_heights: torch.Tensor, beta_l: torch.Tensor, beta_r: torch.Tensor, quantile_knots: torch.Tensor, tol: float = 0.0001) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor] [source]#
Domain map function. The inputs of this function are specified by self.args_dim.
spline_knots, spline_heights: parameterizing the x-/ y-positions of the spline knots, shape = (*batch_shape, (num_qk-1)*num_pieces)
beta_l, beta_r: parameterizing the left/right tail, shape = (*batch_shape, 1)
quantile_knots: parameterizing the y-positions of the quantile knots, shape = (*batch_shape, num_qk)
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.MQF2Distribution(picnn: torch.nn.modules.module.Module, hidden_state: torch.Tensor, prediction_length: int, is_energy_score: bool = True, es_num_samples: int = 50, beta: float = 1.0, threshold_input: float = 100.0, validate_args: bool = False)[source]#
Bases:
torch.distributions.distribution.Distribution
Distribution class for the model MQF2 proposed in the paper
Multivariate Quantile Function Forecaster
by Kan, Aubet, Januschowski, Park, Benidis, Ruthotto, Gasthaus.
- Parameters
picnn – A SequentialNet instance of a partially input convex neural network (picnn)
hidden_state – Hidden state obtained by unrolling the RNN encoder; shape = (batch_size, context_length, hidden_size) in training and (batch_size, hidden_size) in inference
prediction_length – Length of the prediction horizon
is_energy_score – If True, use the energy score as the objective function; otherwise, use maximum likelihood as the objective function (normalizing flows)
es_num_samples – Number of samples drawn to approximate the energy score
beta – Hyperparameter of the energy score (power of the two terms)
threshold_input – Clamping threshold of the (scaled) input when maximum likelihood is used as the objective function; this is used to make the forecaster more robust to outliers in training samples
validate_args – Sets whether validation is enabled or disabled. For more details, refer to the descriptions in torch.distributions.distribution.Distribution
- property batch_shape: torch.Size#
Returns the shape over which parameters are batched.
- energy_score(z: torch.Tensor) torch.Tensor [source]#
Computes the (approximated) energy score sum_i ES(g, z_i), where ES(g, z_i) = -1/(2*es_num_samples^2) * sum_{w,w'} ||w - w'||_2^beta + 1/es_num_samples * sum_{w''} ||w'' - z_i||_2^beta, the w's are samples drawn from the quantile function g(., h_i) (gradient of the picnn), h_i is the hidden state associated with z_i, and es_num_samples is the number of samples drawn for each of w, w', w'' in the energy score approximation.
- Parameters
z – A batch of time series with shape (batch_size, context_length + prediction_length - 1)
- Returns
Tensor of shape (batch_size * context_length,)
- Return type
loss
- property event_dim: int#
- property event_shape: Tuple#
Returns the shape of a single sample (without batching).
- log_prob(z: torch.Tensor) torch.Tensor [source]#
Computes the log likelihood log(g(z)) + logdet(dg(z)/dz), where g is the gradient of the picnn.
- Parameters
z – A batch of time series with shape (batch_size, context_length + prediction_length - 1)
- Returns
Tensor of shape (batch_size * context_length,)
- Return type
loss
- quantile(alpha: torch.Tensor, hidden_state: Optional[torch.Tensor] = None) torch.Tensor [source]#
Generates the predicted paths associated with the quantile levels alpha.
- Parameters
alpha – quantile levels, shape = (batch_shape, prediction_length)
hidden_state – hidden_state, shape = (batch_shape, hidden_size)
- Returns
predicted paths of shape = (batch_shape, prediction_length)
- Return type
results
- rsample(sample_shape: torch.Size = torch.Size([])) torch.Tensor [source]#
Generates the sample paths.
- Parameters
sample_shape – Shape of the samples
- Returns
Tensor of shape (batch_size, *sample_shape, prediction_length)
- Return type
sample_paths
- stack_sliding_view(z: torch.Tensor) torch.Tensor [source]#
Auxiliary function for loss computation.
Unfolds the observations by sliding a window of size prediction_length over the observations z. Then, reshapes the observations into a 2-dimensional tensor for further computation.
- Parameters
z – A batch of time series with shape (batch_size, context_length + prediction_length - 1)
- Returns
Unfolded time series with shape (batch_size * context_length, prediction_length)
- Return type
Tensor
- class gluonts.torch.distributions.MQF2DistributionOutput(prediction_length: int, is_energy_score: bool = True, threshold_input: float = 100.0, es_num_samples: int = 50, beta: float = 1.0)[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- distr_cls#
- distribution(picnn: torch.nn.modules.module.Module, hidden_state: torch.Tensor, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.mqf2.MQF2Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(hidden_state: torch.Tensor) Tuple [source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.NegativeBinomialOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'logits': 1, 'total_count': 1}#
- distr_cls#
alias of
torch.distributions.negative_binomial.NegativeBinomial
- distribution(distr_args, loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) torch.distributions.distribution.Distribution [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(total_count: torch.Tensor, logits: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
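Example (a hedged sketch for count data): raw network outputs are mapped into a valid total_count and logits pair and then into a NegativeBinomial; the shapes are illustrative.

```python
import torch
from gluonts.torch.distributions import NegativeBinomialOutput

output = NegativeBinomialOutput()

raw_total_count = torch.randn(16, 1)
raw_logits = torch.randn(16, 1)

distr_args = output.domain_map(raw_total_count, raw_logits)
nb = output.distribution(distr_args)  # a torch.distributions.NegativeBinomial

print(nb.batch_shape)                 # expected: torch.Size([16])
```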
- class gluonts.torch.distributions.NormalOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'loc': 1, 'scale': 1}#
- distr_cls#
alias of
torch.distributions.normal.Normal
- classmethod domain_map(loc: torch.Tensor, scale: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
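Example (a hedged sketch showing the optional scale argument, which wraps the base Normal in an AffineTransformed distribution, see above): shapes and values are illustrative.

```python
import torch
from gluonts.torch.distributions import NormalOutput

output = NormalOutput()

raw_loc = torch.randn(32, 1)
raw_scale = torch.randn(32, 1)

distr_args = output.domain_map(raw_loc, raw_scale)

plain = output.distribution(distr_args)                                  # torch.distributions.Normal
rescaled = output.distribution(distr_args, scale=2.0 * torch.ones(32))  # AffineTransformed

print(type(plain).__name__, type(rescaled).__name__)
```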
- class gluonts.torch.distributions.PiecewiseLinear(gamma: torch.Tensor, slopes: torch.Tensor, knot_spacings: torch.Tensor, validate_args=False)[source]#
Bases:
torch.distributions.distribution.Distribution
- property batch_shape: torch.Size([])#
Returns the shape over which parameters are batched.
- class gluonts.torch.distributions.PiecewiseLinearOutput(num_pieces: int)[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- distr_cls#
alias of
gluonts.torch.distributions.piecewise_linear.PiecewiseLinear
- distribution(distr_args, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.piecewise_linear.PiecewiseLinear [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(gamma: torch.Tensor, slopes: torch.Tensor, knot_spacings: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor] [source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
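Example (a hedged sketch of mapping raw network outputs to a PiecewiseLinear quantile function): the batch size and the per-argument shapes (one value for gamma, num_pieces values each for slopes and knot_spacings) are assumptions based on the args_dim convention.

```python
import torch
from gluonts.torch.distributions import PiecewiseLinearOutput

num_pieces = 4
output = PiecewiseLinearOutput(num_pieces=num_pieces)

gamma = torch.randn(8, 1)
slopes = torch.randn(8, num_pieces)
knot_spacings = torch.randn(8, num_pieces)

distr_args = output.domain_map(gamma, slopes, knot_spacings)
pl = output.distribution(distr_args)

print(pl.batch_shape)  # expected: torch.Size([8])
```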
- class gluonts.torch.distributions.PoissonOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'rate': 1}#
- distr_cls#
alias of
torch.distributions.poisson.Poisson
- classmethod domain_map(rate: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.SplicedBinnedPareto(bins_lower_bound: float, bins_upper_bound: float, logits: torch._VariableFunctionsClass.tensor, upper_gp_xi: torch._VariableFunctionsClass.tensor, upper_gp_beta: torch._VariableFunctionsClass.tensor, lower_gp_xi: torch._VariableFunctionsClass.tensor, lower_gp_beta: torch._VariableFunctionsClass.tensor, numb_bins: int = 100, tail_percentile_gen_pareto: float = 0.05, validate_args=None)[source]#
Bases:
gluonts.torch.distributions.binned_uniforms.BinnedUniforms
Spliced Binned-Pareto univariate distribution.
- Parameters
bins_lower_bound – The lower bound of the bin edges
bins_upper_bound – The upper bound of the bin edges
numb_bins – The number of equidistant bins to allocate between bins_lower_bound and bins_upper_bound. Default value is 100.
tail_percentile_gen_pareto – The percentile of the distribution that is in each tail. Default value is 0.05. NB: This symmetric percentile can still represent asymmetric upper and lower tails.
- arg_constraints = {'logits': Real(), 'lower_gp_beta': GreaterThan(lower_bound=0.0), 'lower_gp_xi': GreaterThan(lower_bound=0.0), 'upper_gp_beta': GreaterThan(lower_bound=0.0), 'upper_gp_xi': GreaterThan(lower_bound=0.0)}#
- cdf(x: torch._VariableFunctionsClass.tensor)[source]#
Cumulative distribution function evaluated at a tensor of data points x. x is expected to be of shape (*batch_shape).
- has_rsample = False#
- log_prob(x: torch._VariableFunctionsClass.tensor, for_training=True)[source]#
- Parameters
x – a tensor of size (batch_size, 1)
for_training – boolean indicating whether to return the log-probability or the loss (which is an adjusted log-probability)
- support = Real()#
- class gluonts.torch.distributions.SplicedBinnedParetoOutput(bins_lower_bound: float, bins_upper_bound: float, num_bins: int, tail_percentile_gen_pareto: float)[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- distr_cls#
alias of
gluonts.torch.distributions.spliced_binned_pareto.SplicedBinnedPareto
- distribution(distr_args, loc: Optional[torch.Tensor] = 0, scale: Optional[torch.Tensor] = None) gluonts.torch.distributions.binned_uniforms.BinnedUniforms [source]#
Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.
- Parameters
distr_args – Constructor arguments for the underlying Distribution type.
loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
- classmethod domain_map(logits: torch.Tensor, upper_gp_xi: torch.Tensor, upper_gp_beta: torch.Tensor, lower_gp_xi: torch.Tensor, lower_gp_beta: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor] [source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
- class gluonts.torch.distributions.StudentTOutput[source]#
Bases:
gluonts.torch.distributions.distribution_output.DistributionOutput
- args_dim: Dict[str, int] = {'df': 1, 'loc': 1, 'scale': 1}#
- distr_cls#
alias of
torch.distributions.studentT.StudentT
- classmethod domain_map(df: torch.Tensor, loc: torch.Tensor, scale: torch.Tensor)[source]#
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.
- property event_shape: Tuple#
Shape of each individual event contemplated by the distributions that this object constructs.
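Example (a hedged usage sketch for StudentTOutput): raw network outputs are mapped to valid (df, loc, scale) arguments and the resulting distribution is sampled; the shapes and number of samples are illustrative.

```python
import torch
from gluonts.torch.distributions import StudentTOutput

output = StudentTOutput()

raw_df, raw_loc, raw_scale = (torch.randn(64, 1) for _ in range(3))
distr_args = output.domain_map(raw_df, raw_loc, raw_scale)

student_t = output.distribution(distr_args)  # a torch.distributions.StudentT
samples = student_t.sample((100,))           # 100 draws per batch element

print(samples.shape)                         # expected: torch.Size([100, 64])
```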
Submodules#
- gluonts.torch.distributions.affine_transformed module
- gluonts.torch.distributions.binned_uniforms module
- gluonts.torch.distributions.discrete_distribution module
- gluonts.torch.distributions.distribution_output module
- gluonts.torch.distributions.generalized_pareto module
- gluonts.torch.distributions.isqf module
- gluonts.torch.distributions.mqf2 module
- gluonts.torch.distributions.piecewise_linear module
- gluonts.torch.distributions.spliced_binned_pareto module