
Quick Start Tutorial

The GluonTS toolkit contains components and tools for building time series models using MXNet. The models currently included are forecasting models, but the components also support other time series use cases, such as classification or anomaly detection.

The toolkit is not intended as a forecasting solution for businesses or end users; rather, it targets scientists and engineers who want to tweak algorithms or build and experiment with their own models.

GluonTS contains:

  • Components for building new models (likelihoods, feature processing pipelines, calendar features etc.)

  • Data loading and processing

  • A number of pre-built models

  • Plotting and evaluation facilities

  • Artificial and real datasets (only external datasets with a suitably permissive license)

In [1]:
%matplotlib inline
import mxnet as mx
from mxnet import gluon
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json

Datasets

Provided datasets

GluonTS comes with a number of publicly available datasets.

In [2]:
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.dataset.util import to_pandas
In [3]:
print(f"Available datasets: {list(dataset_recipes.keys())}")
Available datasets: ['constant', 'exchange_rate', 'solar-energy', 'electricity', 'traffic', 'exchange_rate_nips', 'electricity_nips', 'traffic_nips', 'solar_nips', 'wiki-rolling_nips', 'taxi_30min', 'm3_monthly', 'm3_quarterly', 'm3_yearly', 'm3_other', 'm4_hourly', 'm4_daily', 'm4_weekly', 'm4_monthly', 'm4_quarterly', 'm4_yearly', 'm5']

To download one of the built-in datasets, simply call get_dataset with one of the above names. GluonTS can reuse the saved dataset so that it does not need to be downloaded again: simply set regenerate=False (see the sketch after the next cell).

In [4]:
dataset = get_dataset("m4_hourly", regenerate=True)
saving time-series into /home/runner/.mxnet/gluon-ts/datasets/m4_hourly/train/data.json
saving time-series into /home/runner/.mxnet/gluon-ts/datasets/m4_hourly/test/data.json
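Once the dataset has been fetched, subsequent calls can reuse the files saved on disk instead of downloading and regenerating them; a minimal sketch:

# reuse the locally cached copy of the dataset instead of regenerating it
dataset = get_dataset("m4_hourly", regenerate=False)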

In general, the datasets provided by GluonTS are objects that consist of three main members:

  • dataset.train is an iterable collection of data entries used for training. Each entry corresponds to one time series

  • dataset.test is an iterable collection of data entries used for inference. The test dataset is an extended version of the train dataset that contains a window at the end of each time series that was not seen during training. This window has a length equal to the recommended prediction length.

  • dataset.metadata contains metadata of the dataset such as the frequency of the time series, a recommended prediction horizon, associated features, etc.

In [5]:
entry = next(iter(dataset.train))
train_series = to_pandas(entry)
train_series.plot()
plt.grid(which="both")
plt.legend(["train series"], loc="upper left")
plt.show()
[figure: plot of the first training series]
In [6]:
entry = next(iter(dataset.test))
test_series = to_pandas(entry)
test_series.plot()
plt.axvline(train_series.index[-1], color='r') # end of train dataset
plt.grid(which="both")
plt.legend(["test series", "end of train series"], loc="upper left")
plt.show()
[figure: plot of the first test series, with the end of the training range marked in red]
In [7]:
print(f"Length of forecasting window in test dataset: {len(test_series) - len(train_series)}")
print(f"Recommended prediction horizon: {dataset.metadata.prediction_length}")
print(f"Frequency of the time series: {dataset.metadata.freq}")
Length of forecasting window in test dataset: 48
Recommended prediction horizon: 48
Frequency of the time series: H

Custom datasets

At this point, it is important to emphasize that GluonTS does not require this specific format for a custom dataset that a user may have. The only requirements for a custom dataset are that it is iterable and that each entry has a “target” and a “start” field. To make this clearer, assume the common case where a dataset is in the form of a numpy.array and the start of each time series is a pandas.Timestamp (possibly different for each time series):

In [8]:
N = 10  # number of time series
T = 100  # number of timesteps
prediction_length = 24
freq = "1H"
custom_dataset = np.random.normal(size=(N, T))
start = pd.Timestamp("01-01-2019", freq=freq)  # can be different for each time series

Now, you can split your dataset and bring it into a GluonTS-appropriate format with just two lines of code:

In [9]:
from gluonts.dataset.common import ListDataset
In [10]:
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset(
    [{'target': x, 'start': start} for x in custom_dataset[:, :-prediction_length]],
    freq=freq
)
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset(
    [{'target': x, 'start': start} for x in custom_dataset],
    freq=freq
)
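To sanity-check the conversion, each entry of a ListDataset can be inspected just like the entries of the built-in datasets; a minimal sketch reusing the to_pandas helper imported earlier:

# each entry is a dictionary with (at least) a "target" array and a "start" timestamp
train_entry = next(iter(train_ds))
print(train_entry["start"], train_entry["target"].shape)
to_pandas(train_entry).plot()
plt.show()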

Training an existing model (Estimator)

GluonTS comes with a number of pre-built models. All the user needs to do is configure some hyperparameters. The existing models focus on (but are not limited to) probabilistic forecasting. Probabilistic forecasts are predictions in the form of a probability distribution, rather than simply a single point estimate.

We will begin with GluonTS’s pre-built feedforward neural network estimator, a simple but powerful forecasting model. We will use this model to demonstrate the process of training a model, producing forecasts, and evaluating the results.

GluonTS’s built-in feedforward neural network (SimpleFeedForwardEstimator) accepts an input window of length context_length and predicts the distribution of the subsequent prediction_length values. In GluonTS parlance, the feedforward neural network model is an example of an Estimator. In GluonTS, Estimator objects represent a forecasting model as well as details such as its coefficients, weights, etc.

In general, each estimator (pre-built or custom) is configured by a number of hyperparameters that can be either common (but not binding) among all estimators (e.g., the prediction_length) or specific to the particular estimator (e.g., the number of layers for a neural network or the stride in a CNN).

Finally, each estimator is configured by a Trainer, which defines how the model will be trained, i.e., the number of epochs, the learning rate, etc.

In [11]:
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.mx.trainer import Trainer
In [12]:
estimator = SimpleFeedForwardEstimator(
    num_hidden_dimensions=[10],
    prediction_length=dataset.metadata.prediction_length,
    context_length=100,
    freq=dataset.metadata.freq,
    trainer=Trainer(
        ctx="cpu",
        epochs=5,
        learning_rate=1e-3,
        num_batches_per_epoch=100
    )
)

After specifying our estimator with all the necessary hyperparameters we can train it using our training dataset dataset.train by invoking the train method of the estimator. The training algorithm returns a fitted model (or a Predictor in GluonTS parlance) that can be used to construct forecasts.

In [13]:
predictor = estimator.train(dataset.train)
  0%|          | 0/100 [00:00<?, ?it/s]
learning rate from ``lr_scheduler`` has been overwritten by ``learning_rate`` in optimizer.
100%|██████████| 100/100 [00:01<00:00, 70.19it/s, epoch=1/5, avg_epoch_loss=5.49]
100%|██████████| 100/100 [00:01<00:00, 72.40it/s, epoch=2/5, avg_epoch_loss=4.95]
100%|██████████| 100/100 [00:01<00:00, 72.17it/s, epoch=3/5, avg_epoch_loss=4.69]
100%|██████████| 100/100 [00:01<00:00, 72.14it/s, epoch=4/5, avg_epoch_loss=4.86]
100%|██████████| 100/100 [00:01<00:00, 72.59it/s, epoch=5/5, avg_epoch_loss=4.62]
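Besides the evaluation helper introduced below, the returned Predictor can also be used directly: its predict method takes a dataset and yields one Forecast object per time series. A minimal sketch (the num_samples argument of the MXNet-based predictors controls how many sample paths are drawn):

# draw 100 sample paths per series directly from the predictor
forecasts_direct = list(predictor.predict(dataset.test, num_samples=100))
print(len(forecasts_direct))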

Visualize and evaluate forecasts

With a predictor in hand, we can now predict the last window of dataset.test and evaluate our model’s performance.

GluonTS comes with the make_evaluation_predictions function that automates the process of prediction and model evaluation. Roughly, this function performs the following steps:

  • Removes the final window of length prediction_length of the dataset.test that we want to predict

  • The predictor uses the remaining data to predict (in the form of sample paths) the “future” window that was just removed

  • The function outputs the forecast sample paths and the dataset.test time series (as Python generator objects)

In [14]:
from gluonts.evaluation import make_evaluation_predictions
In [15]:
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test,  # test dataset
    predictor=predictor,  # predictor
    num_samples=100,  # number of sample paths we want for evaluation
)

First, we can convert these generators to lists to ease the subsequent computations.

In [16]:
forecasts = list(forecast_it)
tss = list(ts_it)

We can examine the first element of these lists (that corresponds to the first time series of the dataset). Let’s start with the list containing the time series, i.e., tss. We expect the first entry of tss to contain the (target of the) first time series of dataset.test.

In [17]:
# first entry of the time series list
ts_entry = tss[0]
In [18]:
# first 5 values of the time series (convert from pandas to numpy)
np.array(ts_entry[:5]).reshape(-1,)
Out[18]:
array([605., 586., 586., 559., 511.], dtype=float32)
In [19]:
# first entry of dataset.test
dataset_test_entry = next(iter(dataset.test))
In [20]:
# first 5 values
dataset_test_entry['target'][:5]
Out[20]:
array([605., 586., 586., 559., 511.], dtype=float32)

The entries in the forecast list are a bit more complex. They are objects that contain all the sample paths in the form of a numpy.ndarray with dimension (num_samples, prediction_length), the start date of the forecast, the frequency of the time series, etc. We can access all of this information by simply invoking the corresponding attribute of the forecast object.

In [21]:
# first entry of the forecast list
forecast_entry = forecasts[0]
In [22]:
print(f"Number of sample paths: {forecast_entry.num_samples}")
print(f"Dimension of samples: {forecast_entry.samples.shape}")
print(f"Start date of the forecast window: {forecast_entry.start_date}")
print(f"Frequency of the time series: {forecast_entry.freq}")
Number of sample paths: 100
Dimension of samples: (100, 48)
Start date of the forecast window: 1750-01-30 04:00:00
Frequency of the time series: H

We can also do calculations to summarize the sample paths, such as computing the mean or a quantile for each of the 48 time steps in the forecast window.

In [23]:
print(f"Mean of the future window:\n {forecast_entry.mean}")
print(f"0.5-quantile (median) of the future window:\n {forecast_entry.quantile(0.5)}")
Mean of the future window:
 [606.8115  589.3873  494.18314 436.5127  499.6012  470.38684 399.4912
 460.64023 564.07556 584.7424  644.73615 745.57404 750.6387  794.6106
 838.2049  852.7559  897.35846 848.0243  868.1782  828.3992  777.5739
 821.75885 768.6433  673.1716  612.78033 640.09344 585.15027 484.61755
 509.335   495.47546 542.6661  502.29294 553.59406 584.1324  552.9599
 680.29236 764.5014  780.71655 842.6411  843.1161  958.43494 925.69025
 854.36255 830.8371  802.2266  784.9484  749.2241  733.1738 ]
0.5-quantile (median) of the future window:
 [621.84985 586.406   510.64313 443.80988 493.05148 477.746   412.33945
 472.35458 573.39435 555.0672  642.9127  724.6893  766.50165 806.75854
 835.7875  867.4524  886.662   840.9374  860.94794 830.40314 775.3318
 818.33325 756.2193  655.7057  604.9758  639.1774  589.50305 483.66135
 507.08575 507.97586 519.9267  495.96588 551.8072  582.08405 553.2077
 666.8418  746.3346  778.23553 840.05396 852.7274  920.59534 940.28314
 861.40753 837.7757  778.49805 786.2116  758.8611  732.9514 ]

Forecast objects have a plot method that summarizes the forecast paths using the mean, prediction intervals, etc. The prediction intervals are shaded in different colors as a “fan chart”.

In [24]:
def plot_prob_forecasts(ts_entry, forecast_entry):
    plot_length = 150
    prediction_intervals = (50.0, 90.0)
    legend = ["observations", "median prediction"] + [f"{k}% prediction interval" for k in prediction_intervals][::-1]

    fig, ax = plt.subplots(1, 1, figsize=(10, 7))
    ts_entry[-plot_length:].plot(ax=ax)  # plot the time series
    forecast_entry.plot(prediction_intervals=prediction_intervals, color='g')
    plt.grid(which="both")
    plt.legend(legend, loc="upper left")
    plt.show()
In [25]:
plot_prob_forecasts(ts_entry, forecast_entry)
[figure: observations with median prediction and shaded 50%/90% prediction intervals]

We can also evaluate the quality of our forecasts numerically. In GluonTS, the Evaluator class can compute aggregate performance metrics, as well as metrics per time series (which can be useful for analyzing performance across heterogeneous time series).

In [26]:
from gluonts.evaluation import Evaluator
In [27]:
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(dataset.test))
Running evaluation: 100%|██████████| 414/414 [00:00<00:00, 12699.79it/s]

Aggregate metrics aggregate both across time-steps and across time series.

In [28]:
print(json.dumps(agg_metrics, indent=4))
{
    "MSE": 8560246.223221488,
    "abs_error": 9372088.149812698,
    "abs_target_sum": 145558863.59960938,
    "abs_target_mean": 7324.822041043147,
    "seasonal_error": 336.9046924038302,
    "MASE": 3.626442574628695,
    "MAPE": 0.24127758354932408,
    "sMAPE": 0.1866482937537529,
    "OWA": NaN,
    "MSIS": 62.41743758281069,
    "QuantileLoss[0.1]": 4988547.480651474,
    "Coverage[0.1]": 0.10653180354267307,
    "QuantileLoss[0.5]": 9372088.226715088,
    "Coverage[0.5]": 0.5003019323671501,
    "QuantileLoss[0.9]": 6756746.77090988,
    "Coverage[0.9]": 0.8812902576489526,
    "RMSE": 2925.78984604525,
    "NRMSE": 0.3994349391222317,
    "ND": 0.06438692854591542,
    "wQuantileLoss[0.1]": 0.03427168471425784,
    "wQuantileLoss[0.5]": 0.06438692907424044,
    "wQuantileLoss[0.9]": 0.04641934268939987,
    "mean_absolute_QuantileLoss": 7039127.492758814,
    "mean_wQuantileLoss": 0.04835931882596605,
    "MAE_Coverage": 0.008514492753623532
}

Individual metrics are aggregated only across time-steps.

In [29]:
item_metrics.head()
Out[29]:
item_id MSE abs_error abs_target_sum abs_target_mean seasonal_error MASE MAPE sMAPE OWA MSIS QuantileLoss[0.1] Coverage[0.1] QuantileLoss[0.5] Coverage[0.5] QuantileLoss[0.9] Coverage[0.9]
0 0.0 2542.591960 1914.587891 31644.0 659.250000 42.371302 0.941374 0.063370 0.061596 NaN 12.958938 943.796924 0.0000 1914.587952 0.708333 1377.381409 1.000000
1 1.0 167211.812500 17996.416016 124149.0 2586.437500 165.107988 2.270789 0.155405 0.142119 NaN 13.317452 4929.397363 0.3125 17996.416626 0.937500 8300.685913 1.000000
2 2.0 36132.429688 6960.657227 65030.0 1354.791667 78.889053 1.838198 0.096699 0.103249 NaN 13.084701 3453.102136 0.0000 6960.657227 0.250000 2360.578320 0.812500
3 3.0 174276.281250 17374.326172 235783.0 4912.145833 258.982249 1.397645 0.074149 0.074534 NaN 14.660027 9054.986426 0.0000 17374.325928 0.437500 7393.536279 0.979167
4 4.0 92048.520833 11008.824219 131088.0 2731.000000 200.494083 1.143927 0.082753 0.078752 NaN 12.777649 4138.273743 0.0000 11008.824707 0.791667 7001.301855 1.000000
In [30]:
item_metrics.plot(x='MSIS', y='MASE', kind='scatter')
plt.grid(which="both")
plt.show()
[figure: scatter plot of MASE vs MSIS across individual time series]
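Since item_metrics is an ordinary pandas DataFrame, standard pandas operations can be used to drill into per-series performance, for example listing the series with the largest MASE; a minimal sketch:

# show the five series where the model performs worst according to MASE
print(item_metrics.sort_values("MASE", ascending=False).head())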