fev (Forecast EValuation library) is a lightweight package that makes it easy to benchmark time series forecasting models.
- Extensible: Easy to define your own forecasting tasks and benchmarks.
- Reproducible: Ensures that the results obtained by different users are comparable.
- Easy to use: Compatible with most popular forecasting libraries.
- Minimal dependencies: Just a thin wrapper on top of 🤗 datasets.
Existing forecasting benchmarks usually fall into one of two categories:
- Standalone datasets without any supporting infrastructure. These provide no guarantees that the results obtained by different users are comparable. For example, changing the start date or duration of the forecast horizon totally changes the meaning of the scores.
- Bespoke end-to-end systems that combine models, datasets and forecasting tasks. Such packages usually come with lots of dependencies and assumptions, which makes extending or integrating these libraries into existing systems difficult.
fev aims for the middle ground - it provides the core benchmarking functionality without introducing unnecessary constraints or bloated dependencies. The library supports point & probabilistic forecasting, different types of covariates, as well as all popular forecasting metrics.
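For example, a probabilistic task can be configured by choosing the quantile levels and evaluation metric. A minimal sketch, assuming that `quantile_levels` and `eval_metric` are accepted as `Task` arguments (they appear as fields of the evaluation summary in the quickstart below) and that `WQL` is the name of the weighted quantile loss metric:

```python
import fev

# Sketch of a probabilistic task configuration; the metric name "WQL" is an assumption.
task = fev.Task(
    dataset_path="autogluon/chronos_datasets",
    dataset_config="m4_hourly",
    horizon=24,
    quantile_levels=[0.1, 0.5, 0.9],
    eval_metric="WQL",
)
```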
- 2025-09-16: The new version `0.6.0` contains major new functionality, updated documentation, as well as some breaking changes to the `Task` API. Please check the release notes for more details.
```bash
pip install fev
```
Create a task from a dataset stored on the Hugging Face Hub:
```python
import fev

task = fev.Task(
    dataset_path="autogluon/chronos_datasets",
    dataset_config="m4_hourly",
    horizon=24,
)
```
Iterate over the rolling evaluation windows:
```python
for window in task.iter_windows():
    past_data, future_data = window.get_input_data()
```
`past_data` contains the past data before the forecast horizon (item ID, past timestamps, target, all covariates). `future_data` contains the future data that is known at prediction time (item ID, future timestamps, and known covariates).
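For instance, each element of `past_data` is a record keyed by column name, so a single series can be inspected like this (a sketch, assuming the default column names shown in the evaluation summary below):

```python
# Peek at the first series in the current window; each record maps column
# names (id, timestamp, target, covariates) to values.
ts = next(iter(past_data))
print(ts[task.id_column], len(ts[task.target_column]))
```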
Make predictions:
```python
def naive_forecast(y: list, horizon: int) -> dict[str, list[float]]:
    # Make predictions for a single time series
    return {"predictions": [y[-1] for _ in range(horizon)]}


predictions_per_window = []
for window in task.iter_windows():
    past_data, future_data = window.get_input_data()
    predictions = [
        naive_forecast(ts[task.target_column], task.horizon) for ts in past_data
    ]
    predictions_per_window.append(predictions)
```
Get an evaluation summary:
```python
task.evaluation_summary(predictions_per_window, model_name="naive")
# {'model_name': 'naive',
# 'dataset_path': 'autogluon/chronos_datasets',
# 'dataset_config': 'm4_hourly',
# 'horizon': 24,
# 'num_windows': 1,
# 'initial_cutoff': -24,
# 'window_step_size': 24,
# 'min_context_length': 1,
# 'max_context_length': None,
# 'seasonality': 1,
# 'eval_metric': 'MASE',
# 'extra_metrics': [],
# 'quantile_levels': None,
# 'id_column': 'id',
# 'timestamp_column': 'timestamp',
# 'target_column': 'target',
# 'generate_univariate_targets_from': None,
# 'past_dynamic_columns': [],
# 'excluded_columns': [],
# 'task_name': 'm4_hourly',
# 'test_error': 3.815112047601983,
# 'training_time_s': None,
# 'inference_time_s': None,
# 'dataset_fingerprint': '19e36bb78b718d8d',
# 'trained_on_this_dataset': False,
# 'fev_version': '0.6.0',
# 'MASE': 3.815112047601983}
```
The evaluation summary contains all information necessary to uniquely identify the forecasting task.
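Since the summary is a plain dictionary, it can, for instance, be collected into a `pandas` DataFrame and saved for later aggregation. A minimal sketch (the file name is arbitrary):

```python
import pandas as pd

summary = task.evaluation_summary(predictions_per_window, model_name="naive")
# One row per (model, task) pair; append more rows as you evaluate other models and tasks.
pd.DataFrame([summary]).to_csv("results.csv", index=False)
```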
Multiple evaluation summaries produced by different models on different tasks can be aggregated into a single table.
```python
# Dataframes, dicts, JSON or CSV files supported
summaries = "https://raw.githubusercontent.com/autogluon/fev/refs/heads/main/benchmarks/example/results/results.csv"
fev.leaderboard(summaries)
# | model_name | skill_score | win_rate | ... |
# |:---------------|--------------:|-----------:| ... |
# | auto_theta | 0.126 | 0.667 | ... |
# | auto_arima | 0.113 | 0.667 | ... |
# | auto_ets | 0.049 | 0.444 | ... |
# | seasonal_naive | 0             |      0.222 | ... |
```
- Tutorials
    - Quickstart: Define a task and evaluate a model.
    - Datasets: Use `fev` with your own datasets.
    - Tasks & benchmarks: Advanced features for defining tasks and benchmarks.
    - Adapters: Easily convert data into formats expected by popular time series libraries like AutoGluon, Nixtlaverse, GluonTS, Darts and more.
    - Models: Evaluate your models and submit results to the leaderboard.
- API reference
Examples of model implementations compatible with fev are available in examples/.
We host leaderboards obtained using fev under https://huggingface.co/spaces/autogluon/fev-leaderboard. A mirror of the leaderboard is available at https://huggingface.co/spaces/autogluon/fev-leaderboard-mirror.
This leaderboard includes results for the benchmark from fev-bench: A Realistic Benchmark for Time Series Forecasting.
Previous results for Chronos Benchmark II are available in benchmarks/chronos_zeroshot/.
Repositories with datasets in a format compatible with fev:
If you find this package useful for your research, please consider citing the associated paper(s):
```bibtex
@article{shchur2025fev,
  title={{fev-bench}: A Realistic Benchmark for Time Series Forecasting},
  author={Shchur, Oleksandr and Ansari, Abdul Fatir and Turkmen, Caner and Stella, Lorenzo and Erickson, Nick and Guerron, Pablo and Bohlke-Schneider, Michael and Wang, Yuyang},
  year={2025},
  eprint={2509.26468},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```