PyTorch Dual-Attention LSTM-Autoencoder For Multivariate Time Series

This repository contains an autoencoder for multivariate time series forecasting. It features the two attention mechanisms described in A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction and was inspired by Seanny123's repository.

[Figure: Autoencoder architecture]
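
The dual-stage mechanism is easiest to see in code. Below is a minimal PyTorch sketch of the idea from the paper, not the repository's exact implementation: class names, argument names, and the exact scoring functions are illustrative assumptions.

# A minimal sketch of the dual-stage attention idea (DA-RNN), not the
# repository's exact code; names and scoring functions are assumptions.
import torch
import torch.nn as nn

class InputAttentionEncoder(nn.Module):
    # Stage 1: at each time step, re-weight the input features ("driving
    # series") with attention conditioned on the encoder LSTM state.
    def __init__(self, n_features, hidden_size, seq_len):
        super().__init__()
        self.n_features, self.hidden_size, self.seq_len = n_features, hidden_size, seq_len
        self.lstm = nn.LSTMCell(n_features, hidden_size)
        self.attn = nn.Linear(2 * hidden_size + seq_len, 1)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        b = x.size(0)
        h = x.new_zeros(b, self.hidden_size)
        c = x.new_zeros(b, self.hidden_size)
        states = []
        for t in range(self.seq_len):
            # one score per feature, from (h, c) and that feature's full series
            hc = torch.cat([h, c], dim=1).unsqueeze(1).expand(-1, self.n_features, -1)
            scores = self.attn(torch.cat([hc, x.permute(0, 2, 1)], dim=2)).squeeze(2)
            alpha = torch.softmax(scores, dim=1)           # (batch, n_features)
            h, c = self.lstm(alpha * x[:, t, :], (h, c))   # feed re-weighted inputs
            states.append(h)
        return torch.stack(states, dim=1)  # (batch, seq_len, hidden_size)

class TemporalAttentionDecoder(nn.Module):
    # Stage 2: at each decoding step, attend over all encoder hidden states.
    def __init__(self, enc_hidden, dec_hidden):
        super().__init__()
        self.attn = nn.Linear(2 * dec_hidden + enc_hidden, 1)
        self.lstm = nn.LSTMCell(enc_hidden, dec_hidden)
        self.out = nn.Linear(dec_hidden + enc_hidden, 1)

    def forward(self, enc):  # enc: (batch, seq_len, enc_hidden)
        b, seq_len, _ = enc.shape
        d = enc.new_zeros(b, self.lstm.hidden_size)
        s = enc.new_zeros(b, self.lstm.hidden_size)
        context = enc.new_zeros(b, enc.size(2))
        for _ in range(seq_len):
            ds = torch.cat([d, s], dim=1).unsqueeze(1).expand(-1, seq_len, -1)
            beta = torch.softmax(self.attn(torch.cat([ds, enc], dim=2)).squeeze(2), dim=1)
            context = (beta.unsqueeze(2) * enc).sum(dim=1)  # (batch, enc_hidden)
            d, s = self.lstm(context, (d, s))
        return self.out(torch.cat([d, context], dim=1))     # one-step forecast

# Example: 8 windows of 24 steps with 7 features -> one forecast per window
encoder = InputAttentionEncoder(n_features=7, hidden_size=64, seq_len=24)
decoder = TemporalAttentionDecoder(enc_hidden=64, dec_hidden=64)
pred = decoder(encoder(torch.randn(8, 24, 7)))  # shape: (8, 1)

Stage 1 learns which input series matter at each time step; stage 2 learns which time steps matter when decoding, which is what lets the model cope with longer windows than a plain LSTM encoder-decoder.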

Download and dependencies

To clone the repository, run:

git clone https://github.com/JulesBelveze/time-series-autoencoder.git
Use uv

Then install uv:

# install uv
curl -LsSf https://astral.sh/uv/install.sh | sh  # linux/mac
# or
brew install uv  # mac with homebrew

Set up the environment and install the dependencies:

cd time-series-autoencoder
uv venv
uv pip sync pyproject.toml
Install directly from requirements.txt

pip install -r requirements.txt

Usage

The project uses Hydra as a configuration parser. You can change parameters directly within your .yaml file, or override/set them using command-line flags (for a complete guide, please refer to the Hydra docs).

python3 main.py -cp=[PATH_TO_CONFIG_FOLDER] -cn=[CONFIG_NAME]
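
For example, assuming a config folder config/ containing a forecast.yaml (both names are hypothetical), individual parameters can be overridden straight from the command line using Hydra's key=value syntax:

python3 main.py -cp=config -cn=forecast batch_size=64 seq_len=32

Here batch_size and seq_len stand in for whatever keys your .yaml actually defines.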

Optional arguments:

  -h, --help            show this help message and exit
  --batch-size BATCH_SIZE
                        batch size
  --output-size OUTPUT_SIZE
                        size of the output; defaults to 1 for forecasting
  --label-col LABEL_COL
                        name of the target column
  --input-att INPUT_ATT
                        whether or not to activate the input attention mechanism
  --temporal-att TEMPORAL_ATT
                        whether or not to activate the temporal attention mechanism
  --seq-len SEQ_LEN     window length to use for forecasting
  --hidden-size-encoder HIDDEN_SIZE_ENCODER
                        size of the encoder's hidden states
  --hidden-size-decoder HIDDEN_SIZE_DECODER
                        size of the decoder's hidden states
  --reg-factor1 REG_FACTOR1
                        contribution factor of the L1 regularization if using a sparse autoencoder
  --reg-factor2 REG_FACTOR2
                        contribution factor of the L2 regularization if using a sparse autoencoder
  --reg1 REG1           activate/deactivate L1 regularization
  --reg2 REG2           activate/deactivate L2 regularization
  --denoising DENOISING
                        whether or not to use a denoising autoencoder
  --do-train DO_TRAIN   whether or not to train the model
  --do-eval DO_EVAL     whether or not to evaluate the model
  --data-path DATA_PATH
                        path to the data file
  --output-dir OUTPUT_DIR
                        name of the folder for output files
  --ckpt CKPT           checkpoint path for evaluation
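
As an illustration, a training run for a denoising autoencoder could combine these arguments as follows (paths and values are placeholders):

python3 main.py \
  --do-train True \
  --denoising True \
  --seq-len 32 \
  --batch-size 64 \
  --data-path data/train.csv \
  --output-dir outputs/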

Features

  • handles multivariate time series
  • attention mechanisms
  • denoising autoencoder
  • sparse autoencoder (a sketch of how the denoising and sparsity options enter the training loss follows below)
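
Both the denoising and the sparse variants boil down to small changes in the training objective. Below is a minimal sketch assuming a model with hypothetical encode/decode methods; the repository's actual classes, noise scheme, and regularization details may differ.

# Sketch of denoising + sparse objectives (hypothetical encode/decode API).
import torch

def autoencoder_loss(model, x, noise_std=0.0, l1_factor=0.0, l2_factor=0.0):
    # Denoising: corrupt the input, but reconstruct the clean target.
    x_in = x + noise_std * torch.randn_like(x) if noise_std > 0 else x
    latent = model.encode(x_in)   # hypothetical method
    recon = model.decode(latent)  # hypothetical method
    loss = torch.mean((recon - x) ** 2)
    # Sparse: penalize the latent code, weighted analogously to the
    # --reg-factor1 / --reg-factor2 arguments listed above.
    if l1_factor > 0:
        loss = loss + l1_factor * latent.abs().mean()
    if l2_factor > 0:
        loss = loss + l2_factor * latent.pow(2).mean()
    return loss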

Examples

Under the examples folder you can find scripts to train the model for both use cases:

  • reconstruction: the dataset can be found here
  • forecasting: the dataset can be found here
