Code used to generate the results in "Multi-Window Autoformer for Dynamic Systems Modelling (Autoformer ++)" for the 8th Edition of the Workshop on Nonlinear System Identification Benchmarks.
The architecture is derived from the Autoformer (Wu, 2021) but implemented in Keras with the JAX backend (it should also compile with the TensorFlow and PyTorch backends, although this has not been tested). It improves on the original by allowing multiple periodicity windows across the encoder/decoder stacks, as well as by passing future control signals to the decoder.
The slides used during the presentation are also available in the repo.
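To give a rough picture of the multi-window idea, the sketch below shows a series-decomposition block that averages moving-average trends computed with several window sizes, written against the Keras 3 `ops` API with the JAX backend. It is a minimal, hypothetical illustration, not the layer used in this repository; the class name and window sizes are placeholders.

```python
# Illustrative sketch only: NOT the decomposition block used in this repository.
# Class name and window sizes are hypothetical placeholders.
import os
os.environ["KERAS_BACKEND"] = "jax"  # select the JAX backend before importing Keras

import keras
from keras import ops


class MultiWindowDecomposition(keras.layers.Layer):
    """Split a (batch, time, channels) series into seasonal and trend parts
    using the average of several moving-average windows."""

    def __init__(self, window_sizes=(5, 13, 25), **kwargs):
        super().__init__(**kwargs)
        self.window_sizes = window_sizes

    def call(self, x):
        trends = []
        for w in self.window_sizes:
            # Pad with edge values so each moving average keeps the original length.
            left = (w - 1) // 2
            right = w - 1 - left
            padded = ops.pad(x, [[0, 0], [left, right], [0, 0]], mode="edge")
            trends.append(ops.average_pool(padded, pool_size=w, strides=1, padding="valid"))
        trend = ops.mean(ops.stack(trends, axis=0), axis=0)
        seasonal = x - trend
        return seasonal, trend


# Quick shape check on random data.
x = keras.random.normal((2, 128, 1))
seasonal, trend = MultiWindowDecomposition()(x)
print(seasonal.shape, trend.shape)  # (2, 128, 1) (2, 128, 1)
```

The original Autoformer extracts the trend with a single moving average; averaging trends from several window sizes is one simple way to account for more than one dominant periodicity, which is the intuition behind the multi-window blocks described above.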
To reproduce the results:

- Set up the Python environment: `keras_jax.yml` provides a Conda environment descriptor that can be used to generate the virtual environment automatically with `conda env create --file keras_jax.yml`.
- Download the medium-sized dataset (Vlaar, 2017) and place `Benchmark_EEG_medium.mat` in the `data` directory (a loading sketch follows this list).
- Run the hyperparameter optimization with `nohup python cortical_hyperopt.py -s "study_workshop" > cortical_hyperopt_output.txt &`; running it in the background and redirecting the output lets you monitor progress without locking the terminal.
- Process the best model with `nohup python process_best.py -s "study_workshop" > best_output_timed.txt &`.
- Generate the baselines with:
  - `nohup python autoformer_baseline.py -s "study_workshop" > autoformer_baseline_timed.txt &`
  - `nohup python informer_baseline.py -s "study_workshop" > informer_baseline_timed.txt &`
  - `nohup python lstm_baseline.py -s "study_workshop" > lstm_baseline_timed.txt &`
- Generate the plots and numerical results with `plot_results.py` (no arguments).
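If you want to inspect the benchmark file before running the scripts, a minimal sketch along the following lines should work. It assumes `scipy` is available in the environment; since the variable names stored in the `.mat` file are not documented here, the snippet only lists them.

```python
# Minimal sketch for inspecting the benchmark file (assumes scipy is installed).
# The keys inside the .mat file are not documented here, so we only list them.
from scipy.io import loadmat

data = loadmat("data/Benchmark_EEG_medium.mat")
print([k for k in data.keys() if not k.startswith("__")])  # MATLAB variables in the file
```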
Additional hyperparameter spaces can be searched by creating a JSON file following the example in `study_workshop.json` and placing it in the `study_parameters` directory.
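For example, a new space could be written out roughly as follows. The keys shown are purely hypothetical placeholders; the actual schema is whatever `study_workshop.json` in the repository uses.

```python
# Hypothetical example only: the real schema is defined by study_workshop.json
# in the repository, and the keys below are illustrative placeholders.
import json

example_space = {
    "learning_rate": {"low": 1e-4, "high": 1e-2, "log": True},
    "num_encoder_layers": {"low": 1, "high": 4},
    "window_sizes": [[5, 13], [5, 13, 25]],
}

with open("study_parameters/my_new_study.json", "w") as f:
    json.dump(example_space, f, indent=2)
```

The scripts would then presumably be pointed at the new space with `-s "my_new_study"`, matching the file name without the extension, in the same way that `-s "study_workshop"` matches `study_workshop.json`.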
If you find this repo useful, please cite our paper:
@conference{Vanegas2024Autoformer++,
title = {Multi-Window Autoformer for Dynamic Systems Modelling},
author = {Sergio Vanegas and Lasse Lensu and Fredy Ruiz},
year = 2024,
month = {April},
booktitle = {Book of Abstracts - Workshop on Nonlinear System Identification Benchmarks},
publisher = {Dalle Molle Institute for Artificial Intelligence},
address = {Lugano, Switzerland},
pages = {25},
editor = {Dario Piga and Marco Forgione and Maarten Schoukens},
organization = {LUT University}
}