author    nunzip <np.scarh@gmail.com>    2019-03-13 16:42:14 +0000
committer nunzip <np.scarh@gmail.com>    2019-03-13 16:42:14 +0000
commit    4a55c6ae11fc358c6b48749264e9922b3e00698c (patch)
tree      b1ea9bc56c2131f488455089d98fab2bc5a9014b
parent    21fb715f2758f9d61acdf949c3e726a6875f90ba (diff)
Add details
-rw-r--r--  report/paper.md | 15
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 1686bc0..5587da4 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -102,7 +102,7 @@ We evaluate permutations of the architecture involving:
* Deep Convolutional GAN - DCGAN + conditional label input
* One-Sided Label Smoothing (LS)
* Various Dropout (DO) rates - 0.1, 0.3 and 0.5
-* Virtual Batch Normalisation (VBN)- Normalisation based on one batch [@improved]
+* Virtual Batch Normalisation (VBN) - Batch Normalisation (BN) based on statistics from a reference batch [@improved]
\begin{figure}
\begin{center}
@@ -117,7 +117,7 @@ We evaluate permutations of the architecture involving:
When comparing the three depths of architecture we notice significant differences in how the G-D losses balance. In
a shallow architecture the generator loss oscillates strongly (figure \ref{fig:cshort}), as the generator is overpowered by the discriminator. Despite this we do not
experience any vanishing-gradient issues, and hence no mode collapse occurs.
-Similarly, with a deep architecture the discriminator still overpowers the generator, and an equilibrium between the two losses is not acheived. The image quality in both cases is not really high: we can see that even after 20,000 batches the some pictures appear to be slightly blurry \ref{fig:clong}.
+Similarly, with a deep architecture the discriminator still overpowers the generator, and an equilibrium between the two losses is not achieved. The image quality in both cases is not particularly high: even after 20,000 batches some pictures still appear slightly blurry (figure \ref{fig:clong}).
The best compromise is reached with 3 Dense-LeakyReLU-BN blocks, as shown in figure \ref{fig:cmed}. Here the G-D losses are well balanced
and drop below 1, meaning the GAN is approaching the theoretical Nash Equilibrium of 0.5.
The image quality is better than in the two examples reported earlier, confirming that this medium-depth architecture is the best compromise.
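For concreteness, the following is a minimal sketch of such a medium-depth conditional generator (three Dense-LeakyReLU-BN blocks), assuming a Keras-style implementation; the layer widths and the label-conditioning scheme are illustrative assumptions, not the repository's exact code.

```python
# Hypothetical sketch of the "medium" depth CGAN generator: three
# Dense -> LeakyReLU -> BatchNorm blocks between the conditioned latent
# vector and the 28x28 output. Widths and conditioning are assumptions.
from tensorflow.keras.layers import (Input, Dense, LeakyReLU, BatchNormalization,
                                     Embedding, Flatten, Reshape, multiply)
from tensorflow.keras.models import Model

def build_medium_generator(latent_dim=100, n_classes=10):
    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')

    # Condition on the class label by embedding it and mixing it into the noise
    label_embedding = Flatten()(Embedding(n_classes, latent_dim)(label))
    x = multiply([noise, label_embedding])

    # Three Dense-LeakyReLU-BN blocks (the medium-depth architecture)
    for units in (256, 512, 1024):
        x = Dense(units)(x)
        x = LeakyReLU(alpha=0.2)(x)
        x = BatchNormalization(momentum=0.8)(x)

    img = Reshape((28, 28, 1))(Dense(28 * 28, activation='tanh')(x))
    return Model([noise, label], img)
```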
@@ -134,7 +134,7 @@ The image quality is better than the two examples reported earlier, proving that
The three dropout rates attempted do not affect performance significantly: as shown in figures \ref{fig:cg_drop1_1} (0.1), \ref{fig:cmed} (0.3) and \ref{fig:cg_drop2_1} (0.5), both
image quality and G-D losses are comparable.
-The biggest improvement in performance is obtained through one-sided label smoothing, shifting the true labels form 1 to 0.9 to incentivize the discriminator.
+The biggest improvement in performance is obtained through one-sided label smoothing, shifting the true labels from 1 to 0.9 to reinforce discriminator behaviour.
Using 0.1 instead of zero for the fake labels does not improve performance, as the discriminator loses incentive to do better (generator behaviour is reinforced). Performance results for
one-sided label smoothing with true labels = 0.9 are shown in figure \ref{fig:smooth}.
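As an illustration, here is a minimal sketch of one-sided label smoothing in the discriminator update, assuming a Keras-style training loop; every name is illustrative rather than taken from the repository.

```python
# Minimal sketch of one-sided label smoothing: real targets are lowered to
# 0.9 while fake targets stay at 0. Names are illustrative assumptions.
import numpy as np

def smoothed_targets(batch_size, smooth_real=0.9):
    real = np.full((batch_size, 1), smooth_real, dtype=np.float32)  # 1 -> 0.9
    fake = np.zeros((batch_size, 1), dtype=np.float32)              # kept at 0
    return real, fake

real_y, fake_y = smoothed_targets(64)
# In the discriminator update these replace the usual hard targets, e.g.:
# d_loss_real = discriminator.train_on_batch([real_imgs, labels], real_y)
# d_loss_fake = discriminator.train_on_batch([gen_imgs, sampled_labels], fake_y)
```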
@@ -188,21 +188,20 @@ Medium CGAN+VBN+LS & 0.763 & 3.91 & 19:43 \\
### Architecture
-We observe increased accruacy as we increase the depth of the GAN arhitecture at the cost of the training time. There appears to be diminishing returns with the deeper networks, and larger improvements are achievable with specific optimisation techniques.
+We observe increased accuracy as we increase the depth of the GAN architecture, at the cost of longer training time. There appear to be diminishing returns with deeper networks, and larger improvements are achievable through specific optimisation techniques. Despite the initial considerations about the G-D losses of the convolutional CGAN, it shows an improvement in Inception score and test accuracy with respect to the other analysed cases. One-sided label smoothing, however, did not improve this performance any further, suggesting that reinforcing discriminator behaviour does not benefit the system in this case.
### One-Sided Label Smoothing
-One sided label smoothing involves relaxing our confidence on the labels in our data. This lowers the loss target to below 1. Tim Salimans et. al. [@improved] show smoothing of the positive labels reduces the vulnerability of the neural network to adversarial examples. We observe significant improvements to the Inception score and classification accuracy.
+One-sided label smoothing involves relaxing our confidence in the labels of our data. Tim Salimans et al. [@improved] show that smoothing the positive labels reduces the vulnerability of the neural network to adversarial examples. We observe significant improvements to the Inception score and classification accuracy in the case of our baseline (Medium CGAN).
### Virtual Batch Normalisation
-Virtual Batch Noramlisation is a further optimisation technique proposed by Tim Salimans et. al. [@improved]. Virtual batch normalisation is a modification to the batch normalisation layer, which performs normalisation based on statistics from a reference batch. We observe that VBN improved the classification accuracy and the Inception score.
+Virtual Batch Normalisation is a further optimisation technique proposed by Tim Salimans et al. [@improved]. It is a modification of the batch normalisation layer which performs normalisation based on statistics from a reference batch. We observe that VBN improves both classification accuracy and Inception score, plausibly because fixing the normalisation statistics removes the dependence of each sample's output on the other samples in its minibatch [@improved].
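For illustration only, a simplified NumPy sketch of the idea follows; it normalises with statistics from the reference batch alone and omits the learnable scale and shift, so it is not the exact formulation from [@improved] nor the repository's implementation.

```python
# Simplified sketch of virtual batch normalisation: activations are
# normalised with statistics from a fixed reference batch chosen once,
# rather than from the current minibatch. Learnable scale/shift and the
# mixing with the current example are omitted for brevity.
import numpy as np

class VirtualBatchNorm:
    def __init__(self, reference_batch, eps=1e-5):
        self.ref_mean = reference_batch.mean(axis=0, keepdims=True)
        self.ref_var = reference_batch.var(axis=0, keepdims=True)
        self.eps = eps

    def __call__(self, x):
        # Every minibatch is normalised with the same reference statistics
        return (x - self.ref_mean) / np.sqrt(self.ref_var + self.eps)

# Pick the reference batch once at the start of training, then reuse it
reference = np.random.randn(64, 128).astype(np.float32)
vbn = VirtualBatchNorm(reference)
normalised = vbn(np.random.randn(64, 128).astype(np.float32))
```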
### Dropout
Dropout in the non-convolutional CGAN architecture does not affect performance as much as in DCGAN, nor does it noticeably change the quality of the images produced, and the G-D losses remain almost unchanged. Ultimately, judging from the Inception scores, a low dropout rate is preferable (in our case 0.1 achieves the best results).
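As a reference for how the rate sweep could be wired up, a hedged sketch of a Dense-LeakyReLU-BN block stack with a configurable dropout rate; the widths and the placement of the dropout layers are assumptions, only the swept rates come from the text.

```python
# Illustrative placement of dropout inside the Dense-LeakyReLU-BN blocks;
# the swept rates (0.1, 0.3, 0.5) come from the text, the widths do not.
from tensorflow.keras.layers import Dense, LeakyReLU, BatchNormalization, Dropout
from tensorflow.keras.models import Sequential

def dense_block_stack(dropout_rate, widths=(256, 512, 1024)):
    model = Sequential()
    for units in widths:
        model.add(Dense(units))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dropout(dropout_rate))
    return model

# The three variants compared above
variants = {rate: dense_block_stack(rate) for rate in (0.1, 0.3, 0.5)}
```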
-
# Re-training the handwritten digit classifier
## Results
@@ -274,6 +273,8 @@ boosted to 92%, making this technique the most successfull attempt of improvemen
Examples of classification failures are displayed in figure \ref{fig:retrain_fail}. The results indicate that the network we trained is actually performing quite well,
as most of the misclassified test images (mainly nines and fours) are genuinely ambiguous.
+\newpage
+
# Bonus Questions
## Relation to PCA