</p>

This repository contains an autoencoder for multivariate time series forecasting.
It features two attention mechanisms described
in *[A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction](https://arxiv.org/abs/1704.02971)*
and was inspired by [Seanny123's repository](https://github.com/Seanny123/da-rnn).

## Download and dependencies

To clone the repository please run:

```
git clone https://github.com/JulesBelveze/time-series-autoencoder.git
```

To install all the required dependencies please run:

```
python3 -m venv .venv/tsa
source .venv/tsa/bin/activate
poetry install
```

## Usage

The project uses [Hydra](https://hydra.cc/docs/intro/) as a configuration parser. You can simply change the parameters
directly within your `.yaml` file, or you can override/set parameters using flags (for a complete guide, please refer to
the docs).

```
python3 main.py -cp=[PATH_TO_FOLDER_CONFIG] -cn=[CONFIG_NAME]
```
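
For instance, individual parameters can be overridden straight from the command line using Hydra's `key=value` syntax. A minimal sketch, assuming the config defines parameters named `batch_size` and `lr` (illustrative names; check your `.yaml` file for the real ones):

```
python3 main.py batch_size=64 lr=0.001
```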
Optional arguments:

```
-h, --help            show this help message and exit
--batch-size BATCH_SIZE
...
--output-dir OUTPUT_DIR
                      name of folder to output files
--ckpt CKPT           checkpoint path for evaluation
```
## Features

* handles multivariate time series
* attention mechanisms
* denoising autoencoder (see the sketch after this list)
* sparse autoencoder
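
As a rough illustration of the denoising (and sparse) options, here is a minimal, hypothetical PyTorch sketch; the `corrupt` helper, tensor shapes, and noise level are illustrative and not the repository's actual code:

```python
import torch

def corrupt(x: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    # Denoising setup: perturb the inputs with Gaussian noise so the
    # autoencoder has to learn to recover the clean series.
    return x + noise_std * torch.randn_like(x)

clean = torch.randn(32, 10, 5)  # (batch, seq_len, n_features), illustrative shape
noisy = corrupt(clean)          # what the encoder sees during training
# The reconstruction loss compares the model's output on `noisy` against `clean`;
# the sparse variant instead adds a penalty (e.g. an L1 term) on hidden activations.
```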
## Examples

Under the `examples` folder you can find scripts to train the model for both use cases (a data-loading sketch follows the list):

* reconstruction: the dataset can be found [here](https://gist.github.com/JulesBelveze/99ecdbea62f81ce647b131e7badbb24a)
* forecasting: the dataset can be found [here](https://gist.github.com/JulesBelveze/e9997b9b0b68101029b461baf698bd72)
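
To pull one of these datasets straight into Python, a minimal sketch follows; it assumes each gist contains a single CSV file served at the gist's `/raw` URL (an assumption about the gist layout, so verify the URL and columns before relying on it):

```python
import pandas as pd

# Assumption: the gist holds a single CSV file, so appending /raw to the
# gist URL redirects to the raw file contents.
url = "https://gist.github.com/JulesBelveze/99ecdbea62f81ce647b131e7badbb24a/raw"
df = pd.read_csv(url)
print(df.head())
```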