
Quick Start Tutorial

GluonTS contains:

  • A number of pre-built models

  • Components for building new models (likelihoods, feature processing pipelines, calendar features etc.)

  • Data loading and processing

  • Plotting and evaluation facilities

  • Artificial and real datasets (only external datasets with blessed license)

[1]:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json

Datasets

Provided datasets

GluonTS comes with a number of publicly available datasets.

[2]:
from gluonts.dataset.repository import get_dataset, dataset_names
from gluonts.dataset.util import to_pandas
[3]:
print(f"Available datasets: {dataset_names}")
Available datasets: ['constant', 'exchange_rate', 'solar-energy', 'electricity', 'traffic', 'exchange_rate_nips', 'electricity_nips', 'traffic_nips', 'solar_nips', 'wiki2000_nips', 'wiki-rolling_nips', 'taxi_30min', 'kaggle_web_traffic_with_missing', 'kaggle_web_traffic_without_missing', 'kaggle_web_traffic_weekly', 'm1_yearly', 'm1_quarterly', 'm1_monthly', 'nn5_daily_with_missing', 'nn5_daily_without_missing', 'nn5_weekly', 'tourism_monthly', 'tourism_quarterly', 'tourism_yearly', 'cif_2016', 'london_smart_meters_without_missing', 'wind_farms_without_missing', 'car_parts_without_missing', 'dominick', 'fred_md', 'pedestrian_counts', 'hospital', 'covid_deaths', 'kdd_cup_2018_without_missing', 'weather', 'm3_monthly', 'm3_quarterly', 'm3_yearly', 'm3_other', 'm4_hourly', 'm4_daily', 'm4_weekly', 'm4_monthly', 'm4_quarterly', 'm4_yearly', 'm5', 'uber_tlc_daily', 'uber_tlc_hourly', 'airpassengers', 'australian_electricity_demand', 'electricity_hourly', 'electricity_weekly', 'rideshare_without_missing', 'saugeenday', 'solar_10_minutes', 'solar_weekly', 'sunspot_without_missing', 'temperature_rain_without_missing', 'vehicle_trips_without_missing', 'ercot', 'ett_small_15min', 'ett_small_1h']

To download one of the built-in datasets, simply call get_dataset with one of the above names. GluonTS can re-use the saved dataset so that it does not need to be downloaded again the next time around.

[4]:
dataset = get_dataset("m4_hourly")

In general, the datasets provided by GluonTS are objects that consist of three main members:

  • dataset.train is an iterable collection of data entries used for training. Each entry corresponds to one time series.

  • dataset.test is an iterable collection of data entries used for inference. The test dataset is an extended version of the train dataset that contains a window at the end of each time series that was not seen during training. This window has length equal to the recommended prediction length.

  • dataset.metadata contains metadata of the dataset such as the frequency of the time series, a recommended prediction horizon, associated features, etc.

[5]:
entry = next(iter(dataset.train))
train_series = to_pandas(entry)
train_series.plot()
plt.grid(which="both")
plt.legend(["train series"], loc="upper left")
plt.show()
[image: plot of the train series]
[6]:
entry = next(iter(dataset.test))
test_series = to_pandas(entry)
test_series.plot()
plt.axvline(train_series.index[-1], color="r")  # end of train dataset
plt.grid(which="both")
plt.legend(["test series", "end of train series"], loc="upper left")
plt.show()
[image: plot of the test series with the end of the train series marked]
[7]:
print(
    f"Length of forecasting window in test dataset: {len(test_series) - len(train_series)}"
)
print(f"Recommended prediction horizon: {dataset.metadata.prediction_length}")
print(f"Frequency of the time series: {dataset.metadata.freq}")
Length of forecasting window in test dataset: 48
Recommended prediction horizon: 48
Frequency of the time series: H

Custom datasets

At this point, it is important to emphasize that GluonTS does not require any specific format for a custom dataset. The only requirements are that the dataset is iterable and that each entry has a “target” and a “start” field. To make this more concrete, assume the common case where a dataset is given as a numpy.array and the start index of each time series as a pandas.Period (possibly different for each time series):

[8]:
N = 10  # number of time series
T = 100  # number of timesteps
prediction_length = 24
freq = "1h"
custom_dataset = np.random.normal(size=(N, T))
start = pd.Period("01-01-2019", freq=freq)  # can be different for each time series

Now, you can split your dataset and bring it into a GluonTS-appropriate format with just two lines of code:

[9]:
from gluonts.dataset.common import ListDataset
[10]:
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset(
    [{"target": x, "start": start} for x in custom_dataset[:, :-prediction_length]],
    freq=freq,
)
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset(
    [{"target": x, "start": start} for x in custom_dataset], freq=freq
)
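Since the only requirements are iterability and the “target” and “start” fields, a plain list of dicts is itself a valid GluonTS dataset. A minimal sketch of that idea, using only numpy and pandas (ListDataset roughly adds validation and type conversion on top of this):

```python
import numpy as np
import pandas as pd

N, T, prediction_length = 10, 100, 24
data = np.random.normal(size=(N, T))
start = pd.Period("2019-01-01", freq="h")

# A plain list of dicts with "target" and "start" fields already satisfies
# the dataset requirements: iterable, with the two mandatory fields.
train_ds = [{"target": x[:-prediction_length], "start": start} for x in data]
test_ds = [{"target": x, "start": start} for x in data]

print(len(train_ds), len(train_ds[0]["target"]), len(test_ds[0]["target"]))  # 10 76 100
```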

Training an existing model (Estimator)

GluonTS comes with a number of pre-built models. All the user needs to do is configure some hyperparameters. The existing models focus on (but are not limited to) probabilistic forecasting. Probabilistic forecasts are predictions in the form of a probability distribution, rather than simply a single point estimate.

We will begin with GluonTS’s pre-built feedforward neural network estimator, a simple but powerful forecasting model. We will use this model to demonstrate the process of training a model, producing forecasts, and evaluating the results.

GluonTS’s built-in feedforward neural network (SimpleFeedForwardEstimator) accepts an input window of length context_length and predicts the distribution of the subsequent prediction_length values. In GluonTS parlance, the feedforward neural network model is an example of an Estimator. In GluonTS, Estimator objects represent a forecasting model together with details such as its coefficients, weights, etc.

In general, each estimator (pre-built or custom) is configured by a number of hyperparameters that can be either common (but not binding) among all estimators (e.g., the prediction_length) or specific for the particular estimator (e.g., number of layers for a neural network or the stride in a CNN).

Finally, each estimator is configured with a Trainer, which defines how the model will be trained, i.e., the number of epochs, the learning rate, etc.

[11]:
from gluonts.mx import SimpleFeedForwardEstimator, Trainer
[12]:
estimator = SimpleFeedForwardEstimator(
    num_hidden_dimensions=[10],
    prediction_length=dataset.metadata.prediction_length,
    context_length=100,
    trainer=Trainer(ctx="cpu", epochs=5, learning_rate=1e-3, num_batches_per_epoch=100),
)

After specifying our estimator with all the necessary hyperparameters we can train it using our training dataset dataset.train by invoking the train method of the estimator. The training algorithm returns a fitted model (or a Predictor in GluonTS parlance) that can be used to construct forecasts.

[13]:
predictor = estimator.train(dataset.train)
100%|██████████| 100/100 [00:00<00:00, 142.91it/s, epoch=1/5, avg_epoch_loss=5.45]
100%|██████████| 100/100 [00:00<00:00, 153.62it/s, epoch=2/5, avg_epoch_loss=4.9]
100%|██████████| 100/100 [00:00<00:00, 152.45it/s, epoch=3/5, avg_epoch_loss=4.81]
100%|██████████| 100/100 [00:00<00:00, 153.26it/s, epoch=4/5, avg_epoch_loss=4.77]
100%|██████████| 100/100 [00:00<00:00, 151.47it/s, epoch=5/5, avg_epoch_loss=4.69]

Visualize and evaluate forecasts

With a predictor in hand, we can now predict the last window of each time series in dataset.test and evaluate our model’s performance.

GluonTS comes with the make_evaluation_predictions function that automates the process of prediction and model evaluation. Roughly, this function performs the following steps:

  • Removes the final window of length prediction_length from each time series in dataset.test

  • Uses the predictor and the remaining data to forecast (in the form of sample paths) the “future” window that was just removed

  • Outputs the forecast sample paths and dataset.test (as Python generator objects)
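The split performed in the first step can be sketched with plain numpy (a toy stand-in for one test series; the held-out window is the ground truth the forecasts are compared against):

```python
import numpy as np

# Toy sketch of the evaluation split: the last `prediction_length` values
# of each test series are held out, and the predictor sees only the
# truncated input when producing its forecast.
prediction_length = 48
series = np.arange(100.0)  # stand-in for one test series

model_input = series[:-prediction_length]   # what the predictor sees
ground_truth = series[-prediction_length:]  # the window being forecast

print(len(model_input), len(ground_truth))  # 52 48
```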

[14]:
from gluonts.evaluation import make_evaluation_predictions
[15]:
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test,  # test dataset
    predictor=predictor,  # predictor
    num_samples=100,  # number of sample paths we want for evaluation
)

First, we can convert these generators to lists to ease the subsequent computations.

[16]:
forecasts = list(forecast_it)
tss = list(ts_it)

We can examine the first element of these lists (that corresponds to the first time series of the dataset). Let’s start with the list containing the time series, i.e., tss. We expect the first entry of tss to contain the (target of the) first time series of dataset.test.

[17]:
# first entry of the time series list
ts_entry = tss[0]
[18]:
# first 5 values of the time series (convert from pandas to numpy)
np.array(ts_entry[:5]).reshape(
    -1,
)
[18]:
array([605., 586., 586., 559., 511.], dtype=float32)
[19]:
# first entry of dataset.test
dataset_test_entry = next(iter(dataset.test))
[20]:
# first 5 values
dataset_test_entry["target"][:5]
[20]:
array([605., 586., 586., 559., 511.], dtype=float32)

The entries in the forecast list are a bit more complex. They are objects that contain all the sample paths in the form of numpy.ndarray with dimension (num_samples, prediction_length), the start date of the forecast, the frequency of the time series, etc. We can access all this information by simply invoking the corresponding attribute of the forecast object.

[21]:
# first entry of the forecast list
forecast_entry = forecasts[0]
[22]:
print(f"Number of sample paths: {forecast_entry.num_samples}")
print(f"Dimension of samples: {forecast_entry.samples.shape}")
print(f"Start date of the forecast window: {forecast_entry.start_date}")
print(f"Frequency of the time series: {forecast_entry.freq}")
Number of sample paths: 100
Dimension of samples: (100, 48)
Start date of the forecast window: 1750-01-30 04:00
Frequency of the time series: <Hour>

We can also do calculations to summarize the sample paths, such as computing the mean or a quantile for each of the 48 time steps in the forecast window.

[23]:
print(f"Mean of the future window:\n {forecast_entry.mean}")
print(f"0.5-quantile (median) of the future window:\n {forecast_entry.quantile(0.5)}")
Mean of the future window:
 [642.49335 587.1387  551.5237  500.00793 508.56937 489.56073 462.14212
 487.10562 430.8347  519.6605  628.47125 650.31976 752.6242  707.6186
 905.82227 867.1521  868.29956 867.2777  811.4833  855.5248  801.67584
 786.49365 771.1625  704.5736  666.3257  500.2709  517.8458  523.9433
 451.45483 531.1853  461.47986 435.76102 545.03406 593.20276 585.79865
 697.37103 755.1676  805.0681  790.86584 851.2734  888.9961  919.801
 888.0658  867.09845 860.6079  745.6156  746.2435  757.29626]
0.5-quantile (median) of the future window:
 [654.8568  585.59094 563.4789  502.53918 502.15118 496.10092 473.2056
 489.68436 439.63287 495.1474  624.15424 641.0081  773.3442  722.4895
 906.0859  882.5232  862.319   869.80316 808.69086 853.84955 810.73346
 782.3234  784.5374  686.84204 666.91736 496.426   523.3877  524.8952
 458.2476  517.104   402.83182 429.71075 540.9492  592.5491  584.9961
 693.2116  742.706   823.4136  775.7897  866.1551  872.7233  940.14453
 894.7063  846.22107 837.499   740.84015 756.04877 766.01587]
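These summaries are simple reductions over the sample paths. A sketch of what the mean and quantile computations amount to, assuming the samples are stored as an array of shape (num_samples, prediction_length):

```python
import numpy as np

# Synthetic sample paths standing in for forecast_entry.samples:
# 100 sample paths, each of length 48.
rng = np.random.default_rng(0)
samples = rng.normal(loc=500.0, scale=50.0, size=(100, 48))

mean = samples.mean(axis=0)                 # per-time-step mean
median = np.quantile(samples, 0.5, axis=0)  # per-time-step median

print(mean.shape, median.shape)  # (48,) (48,)
```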

Forecast objects have a plot method that can summarize the forecast paths as the mean, prediction intervals, etc. The prediction intervals are shaded in different colors as a “fan chart”.

[24]:
plt.plot(ts_entry[-150:].to_timestamp())
forecast_entry.plot(show_label=True)
plt.legend()
[24]:
<matplotlib.legend.Legend at 0x7fd47ba19e10>
[image: fan chart of the forecast over the last 150 observations]

We can also evaluate the quality of our forecasts numerically. In GluonTS, the Evaluator class can compute aggregate performance metrics, as well as metrics per time series (which can be useful for analyzing performance across heterogeneous time series).

[25]:
from gluonts.evaluation import Evaluator
[26]:
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(tss, forecasts)
Running evaluation: 414it [00:00, 8895.48it/s]

The aggregate metrics, agg_metrics, aggregate both across time-steps and across time series.

[27]:
print(json.dumps(agg_metrics, indent=4))
{
    "MSE": 8455357.892385118,
    "abs_error": 9231370.021139145,
    "abs_target_sum": 145558863.59960938,
    "abs_target_mean": 7324.822041043146,
    "seasonal_error": 336.9046924038305,
    "MASE": 3.5026089308855233,
    "MAPE": 0.24924205244187383,
    "sMAPE": 0.1881649199627256,
    "MSIS": 69.16097180705307,
    "num_masked_target_values": 0.0,
    "QuantileLoss[0.1]": 5639190.091149425,
    "Coverage[0.1]": 0.09495772946859904,
    "QuantileLoss[0.5]": 9231369.971742153,
    "Coverage[0.5]": 0.5012077294685989,
    "QuantileLoss[0.9]": 7276457.857292078,
    "Coverage[0.9]": 0.8870772946859904,
    "RMSE": 2907.809810215434,
    "NRMSE": 0.3969802670866425,
    "ND": 0.06342018474760831,
    "wQuantileLoss[0.1]": 0.03874164686158321,
    "wQuantileLoss[0.5]": 0.06342018440824737,
    "wQuantileLoss[0.9]": 0.04998979572489329,
    "mean_absolute_QuantileLoss": 7382339.306727886,
    "mean_wQuantileLoss": 0.05071720899824129,
    "MAE_Coverage": 0.39569243156199674,
    "OWA": NaN
}
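Among these metrics, the coverage values have a particularly direct interpretation. A hedged sketch of the idea behind Coverage[q]: the fraction of observed target values that fall below the predicted q-quantile, which should be close to q itself for a well-calibrated forecaster (compare Coverage[0.9] ≈ 0.887 above):

```python
import numpy as np

# Toy illustration: targets and forecast samples drawn from the same
# distribution, so the empirical 0.9-quantile forecast should cover
# roughly 90% of the observed values.
rng = np.random.default_rng(0)
target = rng.normal(size=10_000)
q = 0.9
predicted_quantile = np.quantile(rng.normal(size=(200, 10_000)), q, axis=0)

coverage = np.mean(target < predicted_quantile)
print(round(float(coverage), 2))  # close to 0.9
```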

Individual metrics are aggregated only across time-steps.

[28]:
item_metrics.head()
[28]:
item_id forecast_start MSE abs_error abs_target_sum abs_target_mean seasonal_error MASE MAPE sMAPE num_masked_target_values ND MSIS QuantileLoss[0.1] Coverage[0.1] QuantileLoss[0.5] Coverage[0.5] QuantileLoss[0.9] Coverage[0.9]
0 0 1750-01-30 04:00 2336.692383 1962.246094 31644.0 659.250000 42.371302 0.964807 0.063380 0.062373 0.0 0.062010 14.819007 1170.136145 0.000000 1962.246155 0.708333 1493.836365 1.000000
1 1 1750-01-30 04:00 145045.468750 16553.175781 124149.0 2586.437500 165.107988 2.088680 0.137231 0.126536 0.0 0.133333 15.621626 3974.995752 0.166667 16553.175781 0.958333 8911.103174 1.000000
2 2 1750-01-30 04:00 31806.023438 6709.075195 65030.0 1354.791667 78.889053 1.771759 0.094445 0.101139 0.0 0.103169 14.410688 3732.559784 0.000000 6709.075134 0.187500 2055.006396 0.833333
3 3 1750-01-30 04:00 184726.562500 17076.015625 235783.0 4912.145833 258.982249 1.373648 0.072994 0.074155 0.0 0.072423 15.985233 10882.055322 0.020833 17076.014893 0.437500 8262.187354 0.979167
4 4 1750-01-30 04:00 103116.041667 11362.331055 131088.0 2731.000000 200.494083 1.180659 0.085083 0.081247 0.0 0.086677 14.716721 5570.188269 0.020833 11362.330811 0.708333 7842.103223 1.000000
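The MASE column scales each series’ error by a seasonal naive baseline. A hedged sketch of that idea, with hypothetical toy data and seasonality m = 24 for hourly series (a value below 1 means the forecast beats seasonal naive on average):

```python
import numpy as np

# Toy hourly-like training series: a daily (m = 24) seasonal pattern plus noise.
m = 24
rng = np.random.default_rng(0)
train = np.sin(np.arange(200) * 2 * np.pi / m) + 0.1 * rng.normal(size=200)

actual = np.sin(np.arange(48) * 2 * np.pi / m)
forecast = actual + 0.05  # a slightly biased but otherwise perfect forecast

# In-sample error of the seasonal naive forecast (predict the value m steps back).
seasonal_error = np.mean(np.abs(train[m:] - train[:-m]))
mase = np.mean(np.abs(actual - forecast)) / seasonal_error
print(mase)
```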
[29]:
item_metrics.plot(x="MSIS", y="MASE", kind="scatter")
plt.grid(which="both")
plt.show()
[image: scatter plot of MASE versus MSIS per time series]