Time series forecasting / prediction #168
- ATFNet
- EarthPT
- TinyTimeMixer (TTM)
  TTM-1 currently supports 2 modes.
  HF: https://huggingface.co/ibm/TTM
- Neural 🧠 Forecast
  Web: https://nixtlaverse.nixtla.io/neuralforecast/
- Awesome AI for Time Series (AI4TS) Papers, Tutorials, and Surveys
  Repository: https://github.com/qingsongedu/awesome-AI-for-time-series-papers
- PatchTST (ICLR 2023)
  Paper: A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
- tsai: state-of-the-art deep learning library for time series and sequences
  Repository: https://github.com/timeseriesAI/tsai
- skforecast
  Homepage: https://skforecast.org/
About
Earlier [1] [2], we shared a few notes about time series anomaly detection and forecasting/prediction. Beyond traditional statistics-based forecasting methods like Holt-Winters or ARIMA, and libraries like Prophet and friends, other kinds of prediction methods are emerging, based on machine learning models and deep learning, like TimeGPT or Chronos, which allow for zero-shot inference.
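To ground the classical side of the comparison: Holt-Winters extends simple exponential smoothing with trend and seasonality terms. A minimal plain-Python sketch of the underlying smoothing recursion — function name and toy data are our own illustrations, not taken from any of the libraries mentioned above:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each smoothed level blends the newest
    observation (weight alpha, 0 < alpha <= 1) with the previous level.
    The final level doubles as the one-step-ahead forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Toy usage: a noisy series hovering around 10
history = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]
forecast = exponential_smoothing(history, alpha=0.3)
print(round(forecast, 2))
```

Holt-Winters adds two more recursions of the same shape for trend and seasonal components; production code would reach for a library implementation instead.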
TimeGPT-1
Note
Azul Garza, Max Mergenthaler-Canseco; Nixtla; San Francisco, CA, USA; 5 Oct 2023
In this paper, we introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training. We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-shot inference excels in performance, efficiency, and simplicity.
Our study provides compelling evidence that insights from other domains of artificial intelligence can be effectively applied to time series analysis. We conclude that large-scale time series models offer an exciting opportunity to democratize access to precise predictions and reduce uncertainty by leveraging the capabilities of contemporary advancements in deep learning.
-- https://arxiv.org/pdf/2310.03589.pdf
Chronos: Learning the Language of Time Series
Chronos is a family of pretrained time series forecasting models based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
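The scaling-and-quantization step described above can be sketched roughly as follows. Everything here is an illustrative assumption — bin count, value range, and function names are made up for the example and do not reflect the actual Chronos vocabulary or API:

```python
def tokenize(series, num_bins=10, low=-5.0, high=5.0):
    """Chronos-style tokenization sketch: mean-scale the series, then map
    each scaled value into one of num_bins uniform bins (the token ids).
    Bin edges and count here are illustrative, not the real vocabulary."""
    scale = sum(abs(v) for v in series) / len(series) or 1.0
    width = (high - low) / num_bins
    tokens = []
    for v in series:
        clamped = min(max(v / scale, low), high - 1e-9)
        tokens.append(int((clamped - low) / width))
    return tokens, scale

def detokenize(tokens, scale, num_bins=10, low=-5.0, high=5.0):
    """Approximate inverse: take each bin's centre and undo the scaling."""
    width = (high - low) / num_bins
    return [(low + (t + 0.5) * width) * scale for t in tokens]

tokens, scale = tokenize([1.0, 2.0, 3.0, 2.0])
approx = detokenize(tokens, scale)
```

A language model trained on such token sequences can then produce probabilistic forecasts by sampling several future token trajectories and detokenizing each one.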
Note
Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Yuyang Wang
Amazon Web Services, UC San Diego, University of Freiburg, Amazon Supply Chain Optimization Technologies; 12 Mar 2024
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models. Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains existing transformer-based language model architectures on these tokenized time series via the cross-entropy loss. We pretrained Chronos models based on the T5 family (ranging from 20M to 710M parameters) on a large collection of publicly available datasets, complemented by a synthetic dataset that we generated via Gaussian processes to improve generalization.
In a comprehensive benchmark consisting of 42 datasets, and comprising both classical local models and deep learning methods, we show that Chronos models: (a) significantly outperform other methods on datasets that were part of the training corpus; and (b) have comparable and occasionally superior zero-shot performance on new datasets, relative to methods that were trained specifically on them. Our results demonstrate that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks, positioning pretrained models as a viable tool to greatly simplify forecasting pipelines.
-- https://arxiv.org/pdf/2403.07815.pdf
Footnotes
1. https://kotori.readthedocs.io/en/latest/development/research/timeseries-analysis.html
2. https://kotori.readthedocs.io/en/latest/development/backlog.html