author | Vasil Zlatanov <vz215@eews506a-047.ee.ic.ac.uk> | 2019-03-11 17:34:13 +0000
---|---|---
committer | nunzip <np.scarh@gmail.com> | 2019-03-11 17:45:21 +0000
commit | 205e6d4d024090f12251b61371f0290487c2798e (patch) |
tree | 69340d570a5a855132ce47f84a3b1e2b20528cf5 |
parent | 5a3c268b381ca63908e95c201f8049b22828856e (diff) |
A few spelling fixes
-rw-r--r-- | report/paper.md | 11 |
1 file changed, 4 insertions, 7 deletions
```diff
diff --git a/report/paper.md b/report/paper.md
index e6894bb..522eaed 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -199,18 +199,15 @@ As observed in figure \ref{fig:mix1} we performed two experiments for performanc
 \end{center}
 \end{figure}
 
-Both experiments show that an optimal amount of data to boost testing accuracy on the original MNIST dataset is around 30% generated data as in both cases we observe
-an increase in accuracy by around 0.3%. In absence of original data the testing accuracy drops significantly to around 20% for both cases.
+Both experiments show that an optimal amount of data to boost testing accuracy on the original MNIST dataset is around 30% generated data as in both cases we observe an increase in accuracy by around 0.3%. In absence of original data the testing accuracy drops significantly to around 20% for both cases.
 
 ## Adapted Training Strategy
 
-For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
-yelds major challanges, since the amount of samples available for training is relatively small.
+For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier yields major challenges, since the amount of samples available for training is relatively small.
 
 Training for 100 epochs, similarly to the previous section, is clearly not enough. The MNIST test set accuracy reached in this case
 is only 62%, while training for 300 epochs we can reach up to 88%. The learning curve in figure \ref{fig:few_real} suggests
-we cannot achieve much better whith this very small amount of data, since the validation accuracy flattens, while the training accuracy
-almost reaches 100%.
+we cannot achieve much better with this very small amount of data, since the validation accuracy plateaus, while the training accuracy almost reaches 100%.
 
 \begin{figure}
 \begin{center}
@@ -221,7 +218,7 @@ almost reaches 100%.
 \end{figure}
 
 We conduct one experiment, feeding the test set to a LeNet trained exclusively on data generated from our CGAN. It is noticeable that training
-for the first 5 epochs gives good results (figure \ref{fig:fake_only}) when compared to the learning curve obtained while training the network ith only the few real samples. This
+for the first 5 epochs gives good results (figure \ref{fig:fake_only}) when compared to the learning curve obtained when training the network with only the few real samples. This
 indicates that we can use the generated data to train the first steps of the network (initial weights) and apply the real sample for 300 epochs to obtain a finer tuning.
 As observed in figure \ref{fig:few_init} the first steps of retraining will show oscillation, since the fine tuning will try and adapt to the newly fed data. The
 maximum accuracy reached before the validation curve plateaus is 88.6%, indicating that this strategy proved to be somewhat successfull at improving testing accuracy.
```
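The first hunk describes blending CGAN-generated digits into the real MNIST training set, with roughly 30% generated data giving the best test accuracy (about +0.3%) and purely generated data collapsing to around 20%. A minimal sketch of such a mixing step, assuming NumPy arrays; `x_fake` and `y_fake` are hypothetical stand-ins for CGAN output, not code from this repository:

```python
import numpy as np

def mix_datasets(x_real, y_real, x_fake, y_fake, fake_fraction=0.3, seed=0):
    # Keep the overall training-set size fixed and replace `fake_fraction`
    # of it with generated samples; fake_fraction=1.0 corresponds to the
    # "no original data" case where test accuracy dropped to ~20%.
    n_total = len(x_real)
    n_fake = int(round(n_total * fake_fraction))
    x = np.concatenate([x_real[: n_total - n_fake], x_fake[:n_fake]])
    y = np.concatenate([y_real[: n_total - n_fake], y_fake[:n_fake]])
    perm = np.random.default_rng(seed).permutation(n_total)
    return x[perm], y[perm]
```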
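The second part of the diff outlines the adapted strategy: pretrain LeNet briefly on generated digits to obtain initial weights, then fine-tune on the 550 real samples (55 per class) for 300 epochs, reaching about 88.6%. A self-contained sketch under the assumption of a Keras-style setup; `build_lenet` and the commented-out CGAN sampling step are hypothetical, not the repository's actual code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lenet(num_classes=10):
    # LeNet-5-style CNN for 28x28 grayscale digits (assumed architecture).
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(6, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def sample_few_real(x, y, per_class=55):
    # 55 samples per digit class -> 550 real training samples in total.
    idx = np.concatenate([np.where(y == c)[0][:per_class] for c in range(10)])
    return x[idx], y[idx]

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0
x_few, y_few = sample_few_real(x_train, y_train)

model = build_lenet()

# Step 1 (hypothetical CGAN sampling): pretrain for ~5 epochs on generated
# digits so the network starts from reasonable initial weights.
# x_fake, y_fake = sample_from_cgan(generator, n_samples=55000)
# model.fit(x_fake, y_fake, epochs=5, validation_data=(x_test, y_test))

# Step 2: fine-tune on the 550 real samples for 300 epochs; some
# oscillation is expected early on as the network adapts to the new data.
model.fit(x_few, y_few, epochs=300, batch_size=64,
          validation_data=(x_test, y_test))
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```

The design choice here mirrors the diff's reasoning: the generated data is only trusted for the first few epochs, as an initialization, while the scarce real samples drive the long fine-tuning phase.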