author | nunzip <np.scarh@gmail.com> | 2019-03-11 17:16:47 +0000 |
---|---|---|
committer | nunzip <np.scarh@gmail.com> | 2019-03-11 17:16:47 +0000 |
commit | ef97121a773ff7d5c47a8d6d68280c2bdd1e11c4 (patch) | |
tree | c71947c235b4501616271d81cbdd24936a4a7f0d | |
parent | 8413e2f43543b36f5239e7c8477f9bbaed010022 (diff) | |
download | e4-gan-ef97121a773ff7d5c47a8d6d68280c2bdd1e11c4.tar.gz e4-gan-ef97121a773ff7d5c47a8d6d68280c2bdd1e11c4.tar.bz2 e4-gan-ef97121a773ff7d5c47a8d6d68280c2bdd1e11c4.zip |
Fix grammar mistakes
-rw-r--r-- | report/paper.md | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/report/paper.md b/report/paper.md
index fbc4eb3..6ec9f57 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -205,7 +205,7 @@ an increase in accuracy by around 0.3%. In absence of original data the testing
 ## Adapted Training Strategy
 
 For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
-yelds major challanges, since the amount of samples aailable for training is relatively small.
+yelds major challanges, since the amount of samples available for training is relatively small.
 Training for 100 epochs, similarly to the previous section, is clearly not enough. The MNIST test
 set accuracy reached in this case is only 62%, while training for 300 epochs we can reach up to 88%.
 The learning curve in figure \ref{fig:few_real} suggests
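The paragraph touched by this commit describes training a classifier on a 550-sample subset of MNIST (55 samples per class). As a point of reference only, the snippet below is a minimal sketch of how such a balanced subset might be drawn; it assumes a Keras-style MNIST loader and NumPy, and names such as `per_class` and `x_small` are illustrative, not taken from the e4-gan repository's code.

```python
# Hypothetical sketch: draw a balanced 550-sample training subset of MNIST
# (55 images per class). Not taken from the e4-gan repository itself.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

rng = np.random.default_rng(0)   # fixed seed so the subset is reproducible
per_class = 55                   # 55 samples per digit -> 550 samples total

# For each digit 0-9, pick 55 distinct training indices, then shuffle them.
idx = np.concatenate([
    rng.choice(np.flatnonzero(y_train == d), size=per_class, replace=False)
    for d in range(10)
])
rng.shuffle(idx)

x_small, y_small = x_train[idx], y_train[idx]
print(x_small.shape, y_small.shape)   # (550, 28, 28) (550,)
```

Fixing the random seed keeps the reduced training set identical across runs, which matters when comparing accuracy after 100 versus 300 epochs as the quoted paragraph does.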