path: root/report
author    Vasil Zlatanov <v@skozl.com>    2019-03-15 22:19:36 +0000
committer Vasil Zlatanov <v@skozl.com>    2019-03-15 22:19:36 +0000
commit    6aeecafd57b3d070dae23d699f8451a35d1a1ef3 (patch)
tree      1d93709aa2da6c7b42ad2b3033f2f8f441bda296 /report
parent    d93a0336889147cbfe2f83720fa950cec61ac94b (diff)
Improved till page 5
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  21
1 file changed, 10 insertions, 11 deletions
diff --git a/report/paper.md b/report/paper.md
index fd8512f..0ed78df 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -179,7 +179,7 @@ tend to collapse to very small regions.
\includegraphics[width=8em]{fig/cdcloss1.png}
\includegraphics[width=8em]{fig/cdcloss2.png}
\includegraphics[width=8em]{fig/cdcloss3.png}
-\caption{cDCGAN G-D loss; Left G/D=1; Middle G/D=3; Right G/D=6}
+\caption{cDCGAN G-D loss; Left $G/D=1$; Middle $G/D=3$; Right $G/D=6$}
\label{fig:cdcloss}
\end{center}
\end{figure}
@@ -245,13 +245,13 @@ Virtual Batch Normalization is a further optimisation technique proposed by Tim
### Dropout
-Despite the difficulties in judging differences between G-D losses and image quality, dropout rate seems to have a noticeable effect on accuracy and Inception Score, with a variation of 3.6% between our best and worst dropout cases. Ultimately, judging from the measurements, it is preferable to use a low dropout rate (0.1 seems to be the one that achieves the best results).
+Dropout appears to have a noticeable effect on accuracy and Inception Score, with a variation of 3.6% between our best and worst dropout cases. The measurements indicate that a low dropout rate is preferable, with a rate of 0.1 achieving the best results.
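
As a rough illustration of where the dropout rate enters the architecture, the sketch below shows a Keras-style discriminator with `Dropout(0.1)` after each convolutional block. The layer sizes and placement are illustrative assumptions, not the exact network evaluated in the report.

```python
from tensorflow.keras import layers, models

def build_discriminator(dropout_rate=0.1):
    """Illustrative DCGAN-style discriminator; dropout_rate=0.1 matches
    the best-performing setting reported above."""
    return models.Sequential([
        layers.Conv2D(64, 3, strides=2, padding='same', input_shape=(28, 28, 1)),
        layers.LeakyReLU(0.2),
        layers.Dropout(dropout_rate),           # dropout after the first conv block
        layers.Conv2D(128, 3, strides=2, padding='same'),
        layers.LeakyReLU(0.2),
        layers.Dropout(dropout_rate),           # dropout after the second conv block
        layers.Flatten(),
        layers.Dense(1, activation='sigmoid'),  # real/fake probability
    ])
```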
### G-D Balancing on cDCGAN
-Despite achieving lower losses oscillation, using G/D=3 to incentivize generator training did not improve the performance of cDCGAN as it is observed from
-the Inception Score and testing accuracy. We obtain in fact 5% less test accuracy, meaning that using this technique in our architecture produces on
-average lower quality images when compared to our standard cDCGAN.
+Despite achieving lower loss oscillation, using $G/D=3$ to incentivize generator training did not improve the performance of cDCGAN as measured by
+the Inception Score and testing accuracy. We obtain 5% lower test accuracy, meaning that using this technique in our architecture produces
+lower quality images on average when compared to our standard cDCGAN.
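
A minimal sketch of the G/D balancing schedule discussed here, assuming `discriminator_step` and `generator_step` are callables that each perform one optimisation update and return a loss (the actual update code is not part of this diff):

```python
def train_balanced(discriminator_step, generator_step, steps, g_per_d=3):
    """Run g_per_d generator updates for every discriminator update
    (G/D=3 in the notation used above)."""
    for step in range(steps):
        d_loss = discriminator_step()                            # one discriminator update
        g_losses = [generator_step() for _ in range(g_per_d)]    # g_per_d generator updates
        if step % 100 == 0:
            print(f"step {step}: d_loss={d_loss:.3f}, "
                  f"g_loss={sum(g_losses) / len(g_losses):.3f}")
```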
# Re-training the handwritten digit classifier
@@ -262,10 +262,10 @@ average lower quality images when compared to our standard cDCGAN.
In this section we analyze the effect of retraining the classification network using a mix of real and generated data, highlighting the benefits of
injecting generated samples in the original training set to boost testing accuracy.
-As observed in figure \ref{fig:mix1} we performed two experiments for performance evaluation:
+As shown in figure \ref{fig:mix1} we performed two experiments for performance evaluation:
-* Keeping the same number of training samples while just changing the ratio of real to generated data (55,000 samples in total).
-* Keeping the whole training set from MNIST and adding generated samples from cDCGAN.
+* Using the same number of training samples while only changing the ratio of real to generated data (55,000 samples in total).
+* Using the whole training set from MNIST and adding generated samples from cDCGAN.
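
A minimal sketch of the first experiment, assuming `x_real, y_real` hold the MNIST training set and `x_gen, y_gen` hold cDCGAN samples (all names are illustrative); the total size is held at 55,000 while the real-to-generated ratio varies:

```python
import numpy as np

def mix_training_set(x_real, y_real, x_gen, y_gen, real_fraction, total=55000):
    """Build a fixed-size training set with the given fraction of real samples."""
    n_real = int(total * real_fraction)
    n_gen = total - n_real
    idx_r = np.random.choice(len(x_real), n_real, replace=False)
    idx_g = np.random.choice(len(x_gen), n_gen, replace=False)
    x = np.concatenate([x_real[idx_r], x_gen[idx_g]])
    y = np.concatenate([y_real[idx_r], y_gen[idx_g]])
    perm = np.random.permutation(len(x))   # shuffle real and generated samples together
    return x[perm], y[perm]
```

The second experiment simply concatenates the full MNIST training set with the generated samples instead of subsampling.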
\begin{figure}
\begin{center}
@@ -308,8 +308,7 @@ boosted to 92%, making this technique the most successful attempt of improvement
\end{figure}
Examples of misclassification are displayed in figure \ref{fig:retrain_fail}. It is visible from a cross comparison between these results and the precision-recall
-curve displayed in figure \ref{fig:pr-retrain} that the network we trained performs really well for most of the digits, but the low confidence on digit $8$ lowers
-the overall performance.
+curve displayed in figure \ref{fig:pr-retrain} that the network performs well for most of the digits, but is brought down by the relatively low precision for digit $8$, which lowers the micro-averaged precision.
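
The per-digit and micro-averaged precision referred to here can be computed as in the sketch below, assuming `y_true` and `y_pred` are the test labels and the retrained classifier's predictions (the variable names are illustrative):

```python
from sklearn.metrics import precision_score

def report_precision(y_true, y_pred):
    per_class = precision_score(y_true, y_pred, average=None)    # one value per digit 0-9
    micro = precision_score(y_true, y_pred, average='micro')     # micro-averaged precision
    print("precision for digit 8:", per_class[8])
    print("micro-averaged precision:", micro)
    return per_class, micro
```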
\begin{figure}
\begin{center}
@@ -513,7 +512,7 @@ $$ L_{\textrm{total}} = \alpha L_{\textrm{LeNet}} + \beta L_{\textrm{generator}}
\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/clustcollapse.png}
-\caption{cDCGAN G/D=6 PCA Embeddings through LeNet (10000 samples per class)}
+\caption{cDCGAN $G/D=6$ PCA Embeddings through LeNet (10000 samples per class)}
\label{fig:clustcollapse}
\end{center}
\end{figure}