Commit 1e13e96

fix bugs
1 parent 538cc01 commit 1e13e96

2 files changed: +2 −2 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -69,4 +69,4 @@ So the learning curves will look like:
 ## 6. Future work
 As mentioned, the aim of this repository is to provide a baseline for the text classification task. It's important to note that the problem of text classification goes beyond a two-stacked-LSTM architecture in which texts are preprocessed with a token-based methodology. Recent works have shown impressive results by implementing transformer-based architectures (e.g. <a href="https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/">BERT</a>). Nevertheless, following this thread, the proposed model can be improved by replacing the token-based methodology with a word-embedding-based model (e.g. <a href="https://radimrehurek.com/gensim/models/word2vec.html">word2vec-gensim</a>). Likewise, bidirectional LSTMs can be applied in order to capture more context (in both forward and backward directions).
 
-<i>The question remains open: how to learn semantics? what is semantics? would DL-based are capable to learn semantics?</i>
+<i>The question remains open: how to learn semantics? what is semantics? would DL-based models be capable to learn semantics?</i>
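The future-work paragraph above suggests swapping the token-index input for pretrained word embeddings (e.g. word2vec via gensim) and using a bidirectional LSTM. Below is a minimal sketch of what that could look like in PyTorch; the class name, constructor arguments, and the assumption that pretrained vectors are already available as a weight matrix are illustrative, not part of this repository.

# Illustrative sketch only (not repository code): bidirectional LSTM over
# pretrained word2vec embeddings, as suggested in the future-work paragraph.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, embedding_weights, hidden_dim=128, lstm_layers=2):
        super().__init__()
        vocab_size, embedding_dim = embedding_weights.shape
        # Initialise from pretrained vectors, e.g. gensim's w2v_model.wv.vectors.
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_weights, dtype=torch.float), padding_idx=0)
        # bidirectional=True reads each sequence forwards and backwards.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=lstm_layers,
                            batch_first=True, bidirectional=True)
        # Hidden states from both directions are concatenated, hence 2 * hidden_dim.
        self.fc = nn.Linear(2 * hidden_dim, 1)

    def forward(self, x):
        out = self.embedding(x)          # (batch, seq_len, embedding_dim)
        out, _ = self.lstm(out)          # (batch, seq_len, 2 * hidden_dim)
        return torch.sigmoid(self.fc(out[:, -1]))  # last step -> binary score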

src/model.py

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ def __init__(self, args):
         self.batch_size = args.batch_size
         self.hidden_dim = args.hidden_dim
         self.LSTM_layers = args.lstm_layers
-        self.input_size = args.max_words # in case of embeddings, it would be 300
+        self.input_size = args.max_words # embedding dimension
 
         self.dropout = nn.Dropout(0.5)
         self.embedding = nn.Embedding(self.input_size, self.hidden_dim, padding_idx=0)
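For context on the updated comment: in nn.Embedding the first argument (self.input_size, i.e. args.max_words) is the number of distinct token indices the layer can embed (the vocabulary size), while the second (hidden_dim here) is the length of each embedding vector. A small, self-contained sketch with purely illustrative values:

import torch
import torch.nn as nn

max_words = 1000   # vocabulary size: token ids must lie in [0, max_words)
hidden_dim = 128   # length of each learned embedding vector

# Mirrors the constructor above: nn.Embedding(num_embeddings, embedding_dim).
embedding = nn.Embedding(max_words, hidden_dim, padding_idx=0)

tokens = torch.tensor([[1, 42, 7, 0, 0]])   # one padded sequence of token ids
print(embedding(tokens).shape)              # torch.Size([1, 5, 128])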
