Commit b25c02c: update README
1 parent 8e0f4df commit b25c02c

File tree: 1 file changed (README.md), +29 / -21 lines changed
</p>

This repository contains an autoencoder for multivariate time series forecasting.
It features two attention mechanisms described
in *[A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction](https://arxiv.org/abs/1704.02971)*
and was inspired by [Seanny123's repository](https://github.com/Seanny123/da-rnn).

![Autoencoder architecture](autoenc_architecture.png)
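As a rough illustration of the first of those two attention mechanisms, here is a minimal NumPy sketch of input attention over the driving series. It is not the repository's implementation: the shapes are arbitrary and the matrices `U` and `W` are random placeholders rather than learned parameters.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, n, m = 10, 4, 8  # time steps, driving series, encoder hidden size

X = rng.normal(size=(T, n))  # one window of the multivariate series
h = rng.normal(size=m)       # previous encoder hidden state (placeholder)

# Input attention: score each of the n driving series against the
# hidden state, then reweight the current input by the softmax scores.
U = rng.normal(size=(n, T))  # placeholder projection of each full series
W = rng.normal(size=(n, m))  # placeholder projection of the hidden state
scores = np.einsum("kt,tk->k", U, X) + W @ h  # one score per driving series
alpha = softmax(scores)                       # attention weights, sum to 1
x_tilde = alpha * X[0]                        # reweighted input at step t=0
```

The weights `alpha` decide how much each exogenous series contributes at the current step; the temporal attention in the decoder plays the analogous role across time steps.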

## Download and dependencies

To clone the repository please run:

```
git clone https://github.com/JulesBelveze/time-series-autoencoder.git
```

To install all the required dependencies please run:

```
python3 -m venv .venv/tsa
source .venv/tsa/bin/activate
poetry install
```

## Usage

The project uses [Hydra](https://hydra.cc/docs/intro/) as a configuration parser. You can change the parameters
directly within your `.yaml` file, or you can override/set them with command-line flags (for a complete guide
please refer to the docs).

```
python3 main.py -cp=[PATH_TO_FOLDER_CONFIG] -cn=[CONFIG_NAME]
```
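With Hydra, overriding a value from the command line is a matter of appending `key=value` pairs to the invocation. The config folder, file name, and parameter names below are illustrative placeholders, not taken from the repository:

```
# conf/config.yaml (hypothetical)
batch_size: 16
lr: 0.001

# command-line override of those values
python3 main.py -cp=conf -cn=config batch_size=64 lr=0.01
```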

Optional arguments:

```
-h, --help            show this help message and exit
--batch-size BATCH_SIZE
...
                      name of folder to output files
--ckpt CKPT           checkpoint path for evaluation
```

## Features

* handles multivariate time series
* attention mechanisms
* denoising autoencoder
* sparse autoencoder

## Examples

Under the `examples` folder you can find scripts to train the model in both cases:

* reconstruction: the dataset can be found [here](https://gist.github.com/JulesBelveze/99ecdbea62f81ce647b131e7badbb24a)
* forecasting: the dataset can be found [here](https://gist.github.com/JulesBelveze/e9997b9b0b68101029b461baf698bd72)
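The denoising option listed under Features corrupts the input before reconstruction so the model cannot simply copy it through. A minimal sketch of that corruption step, with an illustrative noise level and window shape that are not the repository's defaults:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(32, 10, 4))  # batch of 32 windows, 10 steps, 4 series

# Denoising autoencoder idea: feed a corrupted copy of x to the encoder,
# but keep the clean x as the reconstruction target.
noise_level = 0.1  # illustrative; not a value taken from the repository
x_noisy = x + noise_level * rng.standard_normal(x.shape)

# With a perfect decoder the residual error would be the injected noise:
mse_vs_clean = np.mean((x_noisy - x) ** 2)
```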