From 47e6ea316baeba86c6df12634ffbeab2a1da8b73 Mon Sep 17 00:00:00 2001
From: nunzip
Date: Sun, 10 Mar 2019 13:23:02 +0000
Subject: Finish part 4

---
 report/paper.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index 81be991..d058051 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -171,8 +171,15 @@ $$ \textrm{IS}(x) = \exp(\mathcal{E}_x \left( \textrm{KL} ( p(y\|x) \|\| p(y) )
 
 ## Results
 
-Retrain with different portions and test BOTH fake and real queries. Please **vary** the portions
-of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each.
+In this section we analyze the effect of retraining the classification network on a mix of real and generated data, highlighting the benefit of
+injecting generated samples into the original training set to boost testing accuracy.
+
+As shown in figure \ref{fig:mix1}, we performed two experiments for performance evaluation:
+
+\begin{itemize}
+\item Keeping the total number of training samples fixed (55,000) while varying the ratio of real to generated data.
+\item Keeping the whole MNIST training set and adding generated samples from the CGAN on top of it.
+\end{itemize}
 
 \begin{figure}
 \begin{center}
@@ -183,6 +190,9 @@ of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each
 \end{center}
 \end{figure}
 
+Both experiments show that the optimal fraction of generated data for boosting testing accuracy on the original MNIST dataset is around 30%: in both cases we observe
+an accuracy increase of around 0.3%. In the absence of real data, testing accuracy drops sharply, to around 20% in both cases.
+
 ## Adapted Training Strategy
 
 For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
-- 
cgit v1.2.3-54-g00ecf
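The two mixing schemes described in the added paragraphs could be sketched as below. This is a minimal illustration, not code from the repository: the `mix_datasets` helper and the placeholder arrays standing in for the MNIST images and CGAN samples are assumptions.

```python
import numpy as np

def mix_datasets(real_x, real_y, gen_x, gen_y, gen_fraction, total=None, seed=0):
    """Combine real and generated samples into one training set.

    total=None: keep every real sample and append generated ones until they
    make up `gen_fraction` of the mix (second experiment). Otherwise cap the
    combined set at `total` samples (first experiment, total=55,000).
    """
    rng = np.random.default_rng(seed)
    if total is None:
        n_real = len(real_x)
        n_gen = round(n_real * gen_fraction / (1.0 - gen_fraction))
    else:
        n_gen = round(total * gen_fraction)
        n_real = total - n_gen
    real_idx = rng.choice(len(real_x), size=n_real, replace=False)
    gen_idx = rng.choice(len(gen_x), size=n_gen, replace=False)
    x = np.concatenate([real_x[real_idx], gen_x[gen_idx]])
    y = np.concatenate([real_y[real_idx], gen_y[gen_idx]])
    perm = rng.permutation(len(x))  # shuffle so batches mix both sources
    return x[perm], y[perm]

# Placeholder arrays standing in for MNIST images and CGAN samples
# (labels 0 = real, 1 = generated, so counts are easy to check).
real_x = np.zeros((55_000, 28, 28)); real_y = np.zeros(55_000, dtype=int)
gen_x = np.ones((60_000, 28, 28)); gen_y = np.ones(60_000, dtype=int)

# First experiment: fixed 55,000 samples, 30% of them generated.
x_fix, y_fix = mix_datasets(real_x, real_y, gen_x, gen_y, 0.3, total=55_000)

# Second experiment: full real set, generated samples added on top.
x_aug, y_aug = mix_datasets(real_x, real_y, gen_x, gen_y, 0.3)
```

In this sketch the first call returns exactly 55,000 samples of which 16,500 are generated, while the second keeps all 55,000 real samples and appends enough generated ones to reach a 30% share.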