Diffstat (limited to 'report')
 report/paper.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 81be991..d058051 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -171,8 +171,15 @@ $$ \textrm{IS}(x) = \exp(\mathcal{E}_x \left( \textrm{KL} ( p(y\|x) \|\| p(y) )
## Results
-Retrain with different portions and test BOTH fake and real queries. Please **vary** the portions
-of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each.
+In this section we analyze the effect of retraining the classification network on a mix of real and generated data, highlighting the benefits of
+injecting generated samples into the original training set to boost testing accuracy.
+
+As shown in figure \ref{fig:mix1}, we performed two experiments to evaluate performance:
+
+\begin{itemize}
+\item Keeping the total number of training samples fixed at 55,000 while varying the ratio of real to generated data (see the sketch below).
+\item Keeping the whole MNIST training set and augmenting it with generated samples from the CGAN.
+\end{itemize}
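+
+A minimal sketch of how such a mixed training set could be assembled is shown below. The array names (`x_real`, `y_real`, `x_gen`, `y_gen`) and the `mix_training_set` helper are illustrative assumptions, not the exact code used for the experiments:
+
+```python
+import numpy as np
+
+def mix_training_set(x_real, y_real, x_gen, y_gen,
+                     gen_fraction=0.3, total=55000, seed=0):
+    """Illustrative helper: build a fixed-size training set in which
+    `gen_fraction` of the samples come from the CGAN and the rest from MNIST."""
+    rng = np.random.default_rng(seed)
+    n_gen = int(total * gen_fraction)
+    n_real = total - n_gen
+    real_idx = rng.choice(len(x_real), n_real, replace=False)
+    gen_idx = rng.choice(len(x_gen), n_gen, replace=False)
+    x_mix = np.concatenate([x_real[real_idx], x_gen[gen_idx]])
+    y_mix = np.concatenate([y_real[real_idx], y_gen[gen_idx]])
+    perm = rng.permutation(total)  # shuffle so each batch mixes real and generated samples
+    return x_mix[perm], y_mix[perm]
+```
+
+For the second experiment the real images are not subsampled: the generated samples are simply concatenated to the full MNIST training set before shuffling.
+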
\begin{figure}
\begin{center}
@@ -183,6 +190,9 @@ of the real training and synthetic images, e.g. 10%, 20%, 50%, and 100%, of each
\end{center}
\end{figure}
+Both experiments show that the optimal proportion of generated data for boosting testing accuracy on the original MNIST dataset is around 30%, as in both cases we observe
+an accuracy increase of around 0.3%. In the absence of real data, the testing accuracy drops significantly, to around 20% in both cases.
+
## Adapted Training Strategy
For this section we will use 550 samples from MNIST (55 samples per class). Training the classifier