gluonts.mx.util module#

class gluonts.mx.util.HybridContext(net: HybridBlock, hybridize: bool, data_batch: Optional[List[NDArray]] = None, **kwargs)[source]#

Bases: object

A context manager that ensures an MXNet network operates in hybridized or non-hybridized mode, as specified, within the enclosing context.

Parameters:
  • net – The network whose hybrid mode has to be modified within the enclosing context.

  • hybridize – A boolean flag indicating whether the hybrid mode should be set or not.

  • data_batch – An optional batch of data; if provided, a single forward pass is run with it after the hybrid mode has been set, so that the computation graph is built.

  • kwargs – A dictionary of optional arguments to pass to the hybridize() call of the enclosed HybridBlock network.
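
A minimal usage sketch (the network, layer sizes, and the static_alloc keyword are illustrative; static_alloc is simply forwarded to hybridize()):

import mxnet as mx
from mxnet.gluon import nn
from gluonts.mx.util import HybridContext

net = nn.HybridSequential()
net.add(nn.Dense(8), nn.Dense(1))
net.initialize()

batch = [mx.nd.ones(shape=(4, 16))]

# Inside the context the network runs hybridized; the original mode is
# expected to be restored when the context exits.
with HybridContext(net=net, hybridize=True, data_batch=batch, static_alloc=True):
    out = net(batch[0])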

gluonts.mx.util.assert_shape(x: Union[NDArray, Symbol], expected_shape: Tuple[int, ...])[source]#

Assert that x has the expected shape. The check is only performed in mx.nd mode.

Parameters:
  • x – Input Tensor

  • expected_shape – Expected shape
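
For example (a minimal sketch):

import mxnet as mx
from gluonts.mx.util import assert_shape

x = mx.nd.zeros(shape=(2, 3))
assert_shape(x, (2, 3))  # passes silently; a mismatch is expected to raise an AssertionError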

gluonts.mx.util.copy_parameters(net_source: Block, net_dest: Block, ignore_extra: bool = False, allow_missing: bool = False) None[source]#

Copies parameters from one network to another.

Parameters:
  • net_source – Input network.

  • net_dest – Output network.

  • ignore_extra – Whether to ignore parameters from the source that are not present in the target.

  • allow_missing – Whether to allow additional parameters in the target not present in the source.
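
A minimal sketch using two identically shaped dense networks (the layer sizes are illustrative):

import mxnet as mx
from mxnet.gluon import nn
from gluonts.mx.util import copy_parameters

def make_net():
    net = nn.HybridSequential()
    net.add(nn.Dense(4, in_units=8))
    return net

source, dest = make_net(), make_net()
source.initialize()
dest.initialize()

copy_parameters(source, dest)

# After copying, both networks produce identical outputs for the same input.
x = mx.nd.ones(shape=(2, 8))
assert (source(x) == dest(x)).asnumpy().all()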

gluonts.mx.util.cumsum(F, x: Union[NDArray, Symbol], exclusive: bool = False, reverse: bool = False) Union[NDArray, Symbol][source]#

Compute the cumulative sum along the last axis by multiplying with a lower-triangular matrix of ones:

\[\begin{split}\operatorname{cumsum}(x) = \begin{cases} \operatorname{ltr\_ones} \times x & \text{for cumulative sum}\\ x \times \operatorname{ltr\_ones} & \text{for cumulative sum in the reverse order} \end{cases}\end{split}\]

An exclusive flag is also supported, which starts the cumulative sum at zero. For example, if \(x = [a, b, c]\), we have

\[\begin{split}\operatorname{cumsum}(x) = \begin{cases} [a, a + b, a + b + c] & \text{if }\mathit{reverse = False, exclusive = False}\\ [0, a, a + b] & \text{if }\mathit{reverse = False, exclusive = True}\\ [a + b + c, b + c, c] & \text{if }\mathit{reverse = True, exclusive = False}\\ [b + c, c, 0] & \text{if }\mathit{reverse = True, exclusive = True}\\ \end{cases}\end{split}\]

Parameters:
  • F – The function space to use.

  • x – A tensor with shape \((..., n)\).

  • exclusive – If True, the cumulative sum starts with zero.

  • reverse – If True, the cumulative sum is performed in the opposite direction.

Returns:

A tensor of the same shape as x, with cumulative sums computed along the last axis.

Return type:

Tensor
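
The four variants, evaluated in mx.nd mode (the expected outputs follow the definition above):

import mxnet as mx
from gluonts.mx.util import cumsum

x = mx.nd.array([[1.0, 2.0, 3.0]])

cumsum(mx.nd, x)                                # [[1, 3, 6]]
cumsum(mx.nd, x, exclusive=True)                # [[0, 1, 3]]
cumsum(mx.nd, x, reverse=True)                  # [[6, 5, 3]]
cumsum(mx.nd, x, exclusive=True, reverse=True)  # [[5, 3, 0]]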

gluonts.mx.util.export_repr_block(rb: HybridBlock, model_dir: Path, model_name: str, epoch: int = 0) None[source]#

Serializes a representable Gluon block.

Parameters:
  • rb – The block to export.

  • model_dir – The path where the model will be saved.

  • model_name – The name identifying the model.

  • epoch – The epoch number, which together with the model_name identifies the model parameters.
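
A round-trip sketch together with import_repr_block (documented below), assuming gluonts.mx.block.scaler.MeanScaler qualifies as a representable block; the temporary directory and model name are illustrative:

import tempfile
from pathlib import Path

from gluonts.mx.block.scaler import MeanScaler
from gluonts.mx.util import export_repr_block, import_repr_block

scaler = MeanScaler()
scaler.initialize()

model_dir = Path(tempfile.mkdtemp())
export_repr_block(scaler, model_dir, "scaler", epoch=0)

restored = import_repr_block(model_dir, "scaler", epoch=0)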

gluonts.mx.util.export_symb_block(hb: HybridBlock, model_dir: Path, model_name: str, epoch: int = 0) None[source]#

Serializes a hybridized Gluon HybridBlock.

Parameters:
  • hb – The block to export.

  • model_dir – The path where the model will be saved.

  • model_name – The name identifying the model.

  • epoch – The epoch number, which together with the model_name identifies the model parameters.
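
A round-trip sketch together with import_symb_block (documented below); the layer sizes, model name, and temporary directory are illustrative:

import tempfile
from pathlib import Path

import mxnet as mx
from mxnet.gluon import nn
from gluonts.mx.util import export_symb_block, import_symb_block

net = nn.HybridSequential()
net.add(nn.Dense(4))
net.initialize()
net.hybridize()
net(mx.nd.ones(shape=(2, 8)))  # one forward pass so the symbolic graph is cached

model_dir = Path(tempfile.mkdtemp())
export_symb_block(net, model_dir, "my-model", epoch=0)

restored = import_symb_block(1, model_dir, "my-model", epoch=0)  # the block has a single input
out = restored(mx.nd.ones(shape=(2, 8)))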

gluonts.mx.util.get_hybrid_forward_input_names(hybrid_block_type: Type[HybridBlock])[source]#
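
A sketch of the intended use (the block and its argument names are illustrative; the function is expected to return the hybrid_forward input names, excluding self and F):

from mxnet.gluon import nn
from gluonts.mx.util import get_hybrid_forward_input_names

class MyBlock(nn.HybridBlock):
    def hybrid_forward(self, F, past_target, past_observed_values):
        return past_target * past_observed_values

names = get_hybrid_forward_input_names(MyBlock)
# expected: ["past_target", "past_observed_values"]
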
gluonts.mx.util.hybrid_block_to_symbol_block(hb: HybridBlock, data_batch: List[NDArray]) SymbolBlock[source]#

Converts a Gluon HybridBlock to a SymbolBlock. Following the Gluon API, this is achieved by calling hybridize() on the passed HybridBlock, running a single forward pass (using the provided data batch), and then combining export() and import() calls on the block.

Note that MXNet has problems with this method.

Parameters:
  • hb – The Gluon HybridBlock to convert.

  • data_batch – Data to use for the forward pass after the hybridize() call.

Returns:

The resulting Gluon block backed by an MXNet symbol graph.

Return type:

mx.gluon.SymbolBlock
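
A minimal sketch (the network and batch shapes are illustrative):

import mxnet as mx
from mxnet.gluon import nn
from gluonts.mx.util import hybrid_block_to_symbol_block

net = nn.HybridSequential()
net.add(nn.Dense(4))
net.initialize()

batch = [mx.nd.ones(shape=(2, 8))]
sb = hybrid_block_to_symbol_block(net, batch)
out = sb(batch[0])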

gluonts.mx.util.import_repr_block(model_dir: Path, model_name: str, epoch: int = 0) HybridBlock[source]#

Deserializes a representable Gluon block.

Parameters:
  • model_dir – The path where the model is saved.

  • model_name – The name identifying the model.

  • epoch – The epoch number, which together with the model_name identifies the model parameters.

Returns:

The deserialized block.

Return type:

mx.gluon.HybridBlock

gluonts.mx.util.import_symb_block(num_inputs: int, model_dir: Path, model_name: str, epoch: int = 0) SymbolBlock[source]#

Deserializes a hybridized Gluon HybridBlock as a SymbolBlock.

Parameters:
  • num_inputs – The number of inputs of the serialized block.

  • model_dir – The path where the model is saved.

  • model_name – The name identifying the model.

  • epoch – The epoch number, which together with the model_name identifies the model parameters.

Returns:

The deserialized block.

Return type:

mx.gluon.SymbolBlock

gluonts.mx.util.make_nd_diag(F, x: Union[NDArray, Symbol], d: int) Union[NDArray, Symbol][source]#

Make a diagonal tensor, given the diagonal.

Parameters:
  • F – The function space to use.

  • x – Diagonal to use, shape \((..., d)\).

  • d – The size of the last dimension of x.

Returns:

A tensor y of shape \((..., d, d)\) such that \(y[..., i, i] = x[..., i]\).

Return type:

Tensor
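
For example, in mx.nd mode:

import mxnet as mx
from gluonts.mx.util import make_nd_diag

x = mx.nd.array([[1.0, 2.0, 3.0]])  # shape (1, 3)
y = make_nd_diag(mx.nd, x, d=3)     # shape (1, 3, 3)
# y[0] is the diagonal matrix [[1, 0, 0], [0, 2, 0], [0, 0, 3]]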

gluonts.mx.util.mx_switch(F, *args, **kwargs) Union[NDArray, Symbol][source]#

A switch statement for MXNet.

mx_switch((A, x), (B, y), z)

corresponds to

if A -> x elif B -> y else -> z

Parameters:
  • F – The function space to use.

  • args – A sequence of (condition, value) tensor pairs, followed by a final default value, as in the example above.

  • kwargs – Keyword arguments.

Returns:

A tensor with the respective switch entries.

Return type:

Tensor
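
A minimal element-wise sketch in mx.nd mode (the condition tensors are assumed to be 0/1 indicators):

import mxnet as mx
from gluonts.mx.util import mx_switch

A = mx.nd.array([1.0, 0.0, 0.0])
B = mx.nd.array([0.0, 1.0, 0.0])
x = mx.nd.array([10.0, 10.0, 10.0])
y = mx.nd.array([20.0, 20.0, 20.0])
z = mx.nd.array([30.0, 30.0, 30.0])

# if A -> x elif B -> y else -> z, element-wise: expected [10, 20, 30]
result = mx_switch(mx.nd, (A, x), (B, y), z)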

gluonts.mx.util.weighted_average(F, x: Union[NDArray, Symbol], weights: Optional[Union[NDArray, Symbol]] = None, axis: Optional[int] = None, include_zeros_in_denominator=False) Union[NDArray, Symbol][source]#

Computes the weighted average of a given tensor across a given axis, masking values associated with a weight of zero: instead of propagating nan * 0 = nan, such entries contribute 0 * 0 = 0.

Parameters:
  • F – The function space to use.

  • x – Input tensor, of which the average must be computed.

  • weights – Weights tensor, of the same shape as x.

  • axis – The axis along which to average x.

  • include_zeros_in_denominator – Include zeros in the denominator. Can be useful for sparse time series because the loss can be dominated by few observed examples.

Returns:

The tensor with values averaged along the specified axis.

Return type:

Tensor
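
For example, masking a nan entry through a zero weight (mx.nd mode):

import mxnet as mx
from gluonts.mx.util import weighted_average

x = mx.nd.array([[1.0, float("nan"), 3.0]])
weights = mx.nd.array([[1.0, 0.0, 1.0]])

# The nan has weight zero, so it is masked rather than propagated:
# expected result (1 + 3) / 2 = 2.
avg = weighted_average(mx.nd, x, weights=weights, axis=-1)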