author    nunzip <np.scarh@gmail.com>    2019-03-10 13:23:02 +0000
committer nunzip <np.scarh@gmail.com>    2019-03-10 13:23:02 +0000
commit    47e6ea316baeba86c6df12634ffbeab2a1da8b73
tree      f564168a2c6e733f827aa38540a08ae6f396ea23
parent    434679320585d08733246dc83eb7844d9b386d90
Finish part 4
 report/paper.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 81be991..d058051 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -171,8 +171,15 @@ $$ \textrm{IS}(x) = \exp(\mathcal{E}_x \left( \textrm{KL} ( p(y|x) || p(y) )
 
 ## Results
 
-Retrain with different portions and test BOTH fake and real queries. Please **vary** the portions
-of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each.
+In this section we analyze the effect of retraining the classification network on a mix of real
+and generated data, highlighting the benefits of injecting generated samples into the original training set to boost test accuracy.
+
+As shown in figure \ref{fig:mix1}, we performed two experiments for performance evaluation:
+
+\begin{itemize}
+\item Keeping the number of training samples fixed (55,000 in total) while varying the ratio of real to generated data.
+\item Keeping the whole MNIST training set and adding generated samples from CGAN on top of it.
+\end{itemize}
 
 \begin{figure}
 \begin{center}
@@ -183,6 +190,9 @@ of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each
 \end{center}
 \end{figure}
 
+Both experiments show that the optimal share of generated data for boosting test accuracy on the original MNIST dataset is around 30%: in both cases we observe an accuracy increase of roughly 0.3%.
+In the absence of real data, test accuracy drops sharply, to around 20% in both cases.
+
 ## Adapted Training Strategy
 
 For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier
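The retraining experiments described in the added text boil down to building a mixed training set before fitting the classifier. Below is a minimal sketch of that data-mixing step, assuming a Keras-style MNIST loader; `sample_cgan` is a hypothetical stand-in for this repository's CGAN sampling interface, not its actual API.

```python
import numpy as np
from tensorflow.keras.datasets import mnist

def mixed_training_set(sample_cgan, ratio_generated, total=55000, seed=0):
    """Build a fixed-size training set with a given fraction of CGAN samples.

    `sample_cgan(n)` is assumed to return `n` generated images together with
    the digit labels they were conditioned on; the actual interface in this
    repository may differ.
    """
    rng = np.random.default_rng(seed)
    (x_real, y_real), _ = mnist.load_data()

    n_gen = int(total * ratio_generated)   # e.g. 30% of 55,000 = 16,500
    n_real = total - n_gen                 # remaining 38,500 real samples

    # Subsample the real MNIST training images without replacement.
    idx = rng.choice(len(x_real), size=n_real, replace=False)

    # Append the conditionally generated digits (labels come from the
    # condition vector fed to the CGAN, so no re-labelling is needed).
    x_gen, y_gen = sample_cgan(n_gen)
    x_mix = np.concatenate([x_real[idx], x_gen])
    y_mix = np.concatenate([y_real[idx], y_gen])

    # Shuffle so real and generated samples are interleaved during training.
    perm = rng.permutation(total)
    return x_mix[perm], y_mix[perm]

# Experiment 1: fixed 55,000-sample budget, sweeping the generated fraction:
#   x_train, y_train = mixed_training_set(sample_cgan, ratio_generated=0.3)
# Experiment 2 would instead concatenate the full 60,000-image MNIST set
# with extra generated samples rather than subsampling the real data.
```

Keeping the total sample count fixed in the first experiment isolates the effect of the real-to-generated ratio from the effect of simply having more training data, which the second experiment measures instead.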