author     nunzip <np.scarh@gmail.com>  2019-03-11 17:16:47 +0000
committer  nunzip <np.scarh@gmail.com>  2019-03-11 17:16:47 +0000
commit     ef97121a773ff7d5c47a8d6d68280c2bdd1e11c4 (patch)
tree       c71947c235b4501616271d81cbdd24936a4a7f0d /report
parent     8413e2f43543b36f5239e7c8477f9bbaed010022 (diff)
Fix grammar mistakes
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/report/paper.md b/report/paper.md
index fbc4eb3..6ec9f57 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -205,7 +205,7 @@ an increase in accuracy by around 0.3%. In absence of original data the testing
## Adapted Training Strategy
For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
-yelds major challanges, since the amount of samples aailable for training is relatively small.
+yelds major challanges, since the amount of samples available for training is relatively small.
Training for 100 epochs, similarly to the previous section, is clearly not enough. The MNIST test set accuracy reached in this case
is only 62%, while training for 300 epochs we can reach up to 88%. The learning curve in figure \ref{fig:few_real} suggests
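The context above describes training the classifier on only 550 MNIST samples (55 per class) and extending training from 100 to 300 epochs. As a rough illustration only, not taken from this repository, the following minimal sketch shows one way such a balanced subset could be built; the use of tensorflow.keras and all variable names are assumptions, since the project's actual data-loading code does not appear in this diff.

# Hypothetical sketch: build a 550-sample MNIST subset (55 per class).
# Assumes TensorFlow/Keras is available; not the project's actual code.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

samples_per_class = 55
# Take the first 55 indices of each digit class 0-9.
idx = np.concatenate(
    [np.where(y_train == c)[0][:samples_per_class] for c in range(10)]
)
x_small, y_small = x_train[idx], y_train[idx]
print(x_small.shape)  # (550, 28, 28)

With so few examples per class, a classifier would typically need many more epochs (or stronger regularisation and augmentation) to converge, which is consistent with the 62% vs. 88% accuracies quoted in the paragraph above.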