gluonts.model.evaluation module#

class gluonts.model.evaluation.BatchForecast(forecasts: List[gluonts.model.forecast.Forecast], allow_nan: bool = False)[source]#

Bases: object

Wrapper around Forecast objects that adds a batch dimension to the arrays returned by __getitem__, for compatibility with gluonts.ev.

allow_nan: bool = False#
forecasts: List[gluonts.model.forecast.Forecast]#
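
A minimal sketch of how this wrapper can be used (assuming forecasts is a list of Forecast objects, e.g. collected from a predictor; that name is a placeholder). Indexing mirrors Forecast.__getitem__ but returns arrays with a leading batch axis:

    from gluonts.model.evaluation import BatchForecast

    # forecasts: List[Forecast], e.g. list(predictor.predict(test_data.input))
    batch = BatchForecast(forecasts=forecasts, allow_nan=False)

    mean_batch = batch["mean"]    # stacked mean forecasts, leading axis = batch
    median_batch = batch["0.5"]   # quantile levels are addressed by their string label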
gluonts.model.evaluation.evaluate_forecasts(forecasts: Iterable[gluonts.model.forecast.Forecast], *, test_data: gluonts.dataset.split.TestData, metrics, axis: Optional[Union[int, tuple]] = None, batch_size: int = 100, mask_invalid_label: bool = True, allow_nan_forecast: bool = False, seasonality: Optional[int] = None) pandas.core.frame.DataFrame[source]#

Evaluate forecasts by comparing them with test_data, according to metrics.

Note

This feature is experimental and may be subject to change.

The optional axis argument controls aggregation of the metrics:

- None (default) aggregates across all dimensions
- 0 aggregates across the dataset
- 1 aggregates across the first data dimension (time, in the univariate setting)
- 2 aggregates across the second data dimension (time, in the multivariate setting)

Return results as a Pandas DataFrame.
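
A hedged usage sketch (dataset and predictor are placeholders, not part of this module): generate a test split, produce forecasts once, then score them. Forecasts must be generated from, and iterated in the same order as, test_data:

    from gluonts.dataset.split import split
    from gluonts.ev.metrics import MASE, MSE
    from gluonts.model.evaluation import evaluate_forecasts

    # Hold out the last 3 windows of 12 steps each for testing.
    _, test_template = split(dataset, offset=-36)
    test_data = test_template.generate_instances(prediction_length=12, windows=3)

    forecasts = list(predictor.predict(test_data.input))

    # axis=1 aggregates over time, yielding one row per test instance;
    # axis=None (the default) would return a single aggregate row.
    metrics_df = evaluate_forecasts(
        forecasts,
        test_data=test_data,
        metrics=[MSE(), MASE()],
        axis=1,
    )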

gluonts.model.evaluation.evaluate_forecasts_raw(forecasts: Iterable[gluonts.model.forecast.Forecast], *, test_data: gluonts.dataset.split.TestData, metrics, axis: Optional[Union[int, tuple]] = None, batch_size: int = 100, mask_invalid_label: bool = True, allow_nan_forecast: bool = False, seasonality: Optional[int] = None) dict[source]#

Evaluate forecasts by comparing them with test_data, according to metrics.

Note

This feature is experimental and may be subject to change.

The optional axis argument controls aggregation of the metrics:

- None (default) aggregates across all dimensions
- 0 aggregates across the dataset
- 1 aggregates across the first data dimension (time, in the univariate setting)
- 2 aggregates across the second data dimension (time, in the multivariate setting)

Return results as a dictionary.
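
Same inputs as evaluate_forecasts, but the raw variant returns a dictionary mapping metric names to arrays instead of assembling a DataFrame. A sketch, reusing the forecasts and test_data placeholders from the example above:

    from gluonts.ev.metrics import MSE
    from gluonts.model.evaluation import evaluate_forecasts_raw

    raw = evaluate_forecasts_raw(
        forecasts,
        test_data=test_data,
        metrics=[MSE()],
        axis=None,
    )
    # raw maps metric names to arrays of aggregated values
    for name, values in raw.items():
        print(name, values)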

gluonts.model.evaluation.evaluate_model(model: gluonts.model.predictor.Predictor, *, test_data: gluonts.dataset.split.TestData, metrics, axis: Optional[Union[int, tuple]] = None, batch_size: int = 100, mask_invalid_label: bool = True, allow_nan_forecast: bool = False, seasonality: Optional[int] = None) pandas.core.frame.DataFrame[source]#

Evaluate model when applied to test_data, according to metrics.

Note

This feature is experimental and may be subject to change.

The optional axis argument controls aggregation of the metrics:

- None (default) aggregates across all dimensions
- 0 aggregates across the dataset
- 1 aggregates across the first data dimension (time, in the univariate setting)
- 2 aggregates across the second data dimension (time, in the multivariate setting)

Return results as a Pandas DataFrame.
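
A hedged convenience-path sketch: evaluate_model applies the predictor to the test inputs internally and then scores the resulting forecasts, so no explicit predict call is needed. Reusing the predictor and test_data placeholders from the earlier sketch:

    from gluonts.ev.metrics import MASE, MSE
    from gluonts.model.evaluation import evaluate_model

    # One aggregate row (axis=None); pass axis=1 for per-instance results.
    summary_df = evaluate_model(
        predictor,
        test_data=test_data,
        metrics=[MSE(), MASE()],
        axis=None,
    )
    print(summary_df)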