From 205e6d4d024090f12251b61371f0290487c2798e Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Mon, 11 Mar 2019 17:34:13 +0000
Subject: A few spelling fixes

---
 report/paper.md | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index e6894bb..522eaed 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -199,18 +199,15 @@ As observed in figure \ref{fig:mix1} we performed two experiments for performanc
 \end{center}
 \end{figure}

-Both experiments show that an optimal amount of data to boost testing accuracy on the original MNIST dataset is around 30% generated data as in both cases we observe
-an increase in accuracy by around 0.3%. In absence of original data the testing accuracy drops significantly to around 20% for both cases.
+Both experiments show that an optimal amount of data to boost testing accuracy on the original MNIST dataset is around 30% generated data as in both cases we observe an increase in accuracy by around 0.3%. In absence of original data the testing accuracy drops significantly to around 20% for both cases.

 ## Adapted Training Strategy

-For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
-yelds major challanges, since the amount of samples available for training is relatively small.
+For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier yields major challenges, since the amount of samples available for training is relatively small.

 Training for 100 epochs, similarly to the previous section, is clearly not enough. The MNIST test set accuracy reached in this case is only 62%, while training
 for 300 epochs we can reach up to 88%. The learning curve in figure \ref{fig:few_real} suggests
-we cannot achieve much better whith this very small amount of data, since the validation accuracy flattens, while the training accuracy
-almost reaches 100%.
+we cannot achieve much better with this very small amount of data, since the validation accuracy plateaus, while the training accuracy almost reaches 100%.

 \begin{figure}
 \begin{center}
@@ -221,7 +218,7 @@ almost reaches 100%.
 \end{figure}

 We conduct one experiment, feeding the test set to a LeNet trained exclusively on data generated from our CGAN. It is noticeable that training
-for the first 5 epochs gives good results (figure \ref{fig:fake_only}) when compared to the learning curve obtained while training the network ith only the few real samples. This
+for the first 5 epochs gives good results (figure \ref{fig:fake_only}) when compared to the learning curve obtained when training the network with only the few real samples. This
 indicates that we can use the generated data to train the first steps of the network (initial weights) and apply the real sample for 300 epochs to obtain a finer tuning. As observed
 in figure \ref{fig:few_init} the first steps of retraining will show oscillation, since the fine tuning will try and adapt to the newly fed data. The maximum accuracy reached before
 the validation curve plateaus is 88.6%, indicating that this strategy proved to be somewhat successfull at improving testing accuracy.
--
cgit v1.2.3-54-g00ecf
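
For reference, the adapted training strategy described in the patched paragraphs (a few epochs on CGAN-generated digits to obtain the initial weights, followed by 300 epochs of fine-tuning on the 550 real samples, with the MNIST test set as the validation curve) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's code: it assumes tf.keras, a generic LeNet-style architecture, and a placeholder `generate_digits` helper standing in for the trained conditional generator; none of these names appear in the paper.

```python
# Illustrative sketch only -- assumes tf.keras, a generic LeNet-style CNN and a
# placeholder generator; the paper's actual implementation may differ.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# 550 real samples: 55 per class, as in the adapted training strategy.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0
idx = np.concatenate([np.where(y_train == c)[0][:55] for c in range(10)])
x_real, y_real = x_train[idx], y_train[idx]


def generate_digits(n_per_class):
    # Hypothetical stand-in for the trained CGAN generator: in the paper these
    # images are sampled from the conditional generator; random noise is used
    # here only so the sketch runs end to end.
    y = np.repeat(np.arange(10), n_per_class)
    x = np.random.rand(len(y), 28, 28, 1).astype("float32")
    return x, y


def build_lenet(num_classes=10):
    # LeNet-style CNN; the exact layer sizes used in the paper may differ.
    return models.Sequential([
        layers.Conv2D(6, 5, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])


model = build_lenet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Step 1: a few epochs on generated digits to obtain the initial weights.
x_gen, y_gen = generate_digits(n_per_class=1000)
model.fit(x_gen, y_gen, epochs=5, batch_size=128)

# Step 2: fine-tune on the 550 real samples for 300 epochs, monitoring the
# MNIST test set as the validation curve.
model.fit(x_real, y_real, epochs=300, batch_size=32,
          validation_data=(x_test, y_test))
```

The short pretraining pass on generated data only provides a starting point for the weights; the long fine-tuning pass on the 550 real images is the stage the text reports as reaching 88.6% before the validation curve plateaus.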