Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  30
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index a40a1e6..dc9f95a 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -31,8 +31,7 @@ DCGAN exploits convolutional stride to perform downsampling and transposed convo
We use batch normalization at the output of each convolutional layer (except for the output layer of the generator
and the input layer of the discriminator). The activation functions of the intermediate layers are `ReLU` (for the generator) and `LeakyReLU` with slope 0.2 (for the discriminator).
-The activation functions used for the output are `tanh` for the generator and `sigmoid` for the discriminator. The convolutional layers' output in
-the discriminator uses dropout before feeding the next layers. We noticed a significant improvement in performance, and estimated an optimal droput rate of 0.25.
+The activation functions used for the output layers are `tanh` for the generator and `sigmoid` for the discriminator. The output of each convolutional layer in the discriminator is passed through dropout before feeding the next layer. We noticed a significant improvement in performance and estimated an optimal dropout rate of 0.25.
The optimizer used for training is `Adam(learning_rate=0.002, beta=0.5)`.
The main architecture used can be observed in figure \ref{fig:dcganarc}.
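+
+As a minimal sketch of one discriminator block, assuming a Keras implementation (the kernel size and function name are our assumptions; batch normalization is omitted on the input layer as described above):
+
+```python
+# One intermediate DCGAN discriminator block: strided convolution for
+# downsampling, batch normalization, LeakyReLU(0.2) and dropout at 0.25.
+from tensorflow.keras import layers, optimizers
+
+def discriminator_block(x, filters):
+    x = layers.Conv2D(filters, kernel_size=4, strides=2, padding='same')(x)
+    x = layers.BatchNormalization()(x)
+    x = layers.LeakyReLU(alpha=0.2)(x)
+    x = layers.Dropout(0.25)(x)
+    return x
+
+# Optimizer as reported above (`beta` mapped to Keras's `beta_1`)
+opt = optimizers.Adam(learning_rate=0.002, beta_1=0.5)
+```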
@@ -49,11 +48,9 @@ The main architecture used can be observed in figure \ref{fig:dcganarc}.
We evaluate three different GAN architectures, varying the size of convolutional layers in the generator while retaining the structure presented in figure \ref{fig:dcganarc} (a code sketch follows the list):
-\begin{itemize}
-\item Shallow: Conv128-Conv64
-\item Medium: Conv256-Conv128
-\item Deep: Conv512-Conv256
-\end{itemize}
+* Shallow: Conv128-Conv64
+* Medium: Conv256-Conv128
+* Deep: Conv512-Conv256
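+
+A hypothetical builder for the three variants (the latent size, kernel sizes and the 7x7 seed shape are our assumptions for MNIST):
+
+```python
+from tensorflow.keras import layers, models
+
+# Conv sizes taken from the list above
+VARIANTS = {'shallow': (128, 64), 'medium': (256, 128), 'deep': (512, 256)}
+
+def build_generator(variant, latent_dim=100):
+    f1, f2 = VARIANTS[variant]
+    z = layers.Input(shape=(latent_dim,))
+    x = layers.Dense(7 * 7 * f1)(z)
+    x = layers.Reshape((7, 7, f1))(x)
+    for filters in (f1, f2):
+        # transposed convolutions upsample 7 -> 14 -> 28
+        x = layers.Conv2DTranspose(filters, 4, strides=2, padding='same')(x)
+        x = layers.BatchNormalization()(x)
+        x = layers.ReLU()(x)
+    out = layers.Conv2D(1, 7, padding='same', activation='tanh')(x)
+    return models.Model(z, out)
+```
+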
\begin{figure}
\begin{center}
@@ -89,6 +86,17 @@ While training the different proposed DCGAN architectures, we did not observe mo
## CGAN Architecture description
+CGAN is a conditional version of a generative adversarial network which utilises labeled data. Unlike DCGAN, CGAN is trained with explicitly provided labels, which allows it to associate features with specific labels. This has the intrinsic advantage of allowing us to specify the label of the generated data. The baseline CGAN which we evaluate is visible in figure \ref{fig:cganrc}. The baseline CGAN architecture presents a series of blocks, each containing a dense layer, a `ReLU` layer and a batch normalisation layer. The baseline discriminator uses dense layers, followed by `ReLU` and a dropout layer. A minimal sketch of this conditioning mechanism follows the list below.
+
+We evaluate permutations of the architecture involving:
+
+* Shallow CGAN
+* Deep CGAN
+* Deep Convolutional GAN
+* Label Smoothing (One Sided)
+* Various dropout rates
+* Virtual Batch Normalisation
+
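+A minimal sketch of the generator's conditioning mechanism, assuming a Keras implementation (the block widths, embedding size and latent dimension are our assumptions):
+
+```python
+from tensorflow.keras import layers, models
+
+def build_cgan_generator(latent_dim=100, n_classes=10):
+    z = layers.Input(shape=(latent_dim,))
+    label = layers.Input(shape=(1,), dtype='int32')
+    # embed the label and concatenate it with the noise vector
+    l = layers.Flatten()(layers.Embedding(n_classes, latent_dim)(label))
+    x = layers.Concatenate()([z, l])
+    for units in (256, 512, 1024):  # dense / ReLU / batch-norm blocks
+        x = layers.Dense(units)(x)
+        x = layers.ReLU()(x)
+        x = layers.BatchNormalization()(x)
+    img = layers.Dense(28 * 28, activation='tanh')(x)
+    return models.Model([z, label], layers.Reshape((28, 28, 1))(img))
+```
+
+One-sided label smoothing then only changes the discriminator's real targets (e.g. 0.9 instead of 1.0), leaving the fake targets at 0.
+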
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/CGAN_arch.pdf}
@@ -140,7 +148,7 @@ The effect of dropout for the non-convolutional CGAN architecture does not affec
## Results
We measure the inception score, i.e. we use the class labels to
-generate images in CGAN and compare them with the predicted labels of the generated images.
+generate images in CGAN and compare them with the predicted labels of the generated images.
We also report the recognition accuracies on the
MNIST real testing set (10K), in comparison to the inception scores.
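+
+A sketch of this evaluation, where `generator` and `classifier` are hypothetical handles for the trained CGAN generator and a classifier trained on real MNIST:
+
+```python
+import numpy as np
+
+def label_agreement(generator, classifier, latent_dim=100, n=10000):
+    labels = np.random.randint(0, 10, size=(n, 1))
+    noise = np.random.normal(size=(n, latent_dim))
+    fakes = generator.predict([noise, labels])
+    predictions = classifier.predict(fakes).argmax(axis=1)
+    # fraction of generated images classified as their conditioning label
+    return np.mean(predictions == labels.ravel())
+```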
@@ -179,10 +187,8 @@ injecting generated samples in the original training set to boost testing accura
As observed in figure \ref{fig:mix1}, we performed two experiments for performance evaluation, as sketched after the list:
-\begin{itemize}
-\item Keeping the same number of training samples while just changing the amount of real to generated data (55.000 samples in total).
-\item Keeping the whole training set from MNIST and adding generated samples from CGAN.
-\end{itemize}
+* Keeping the same number of training samples while varying the ratio of real to generated data (55,000 samples in total).
+* Keeping the whole training set from MNIST and adding generated samples from CGAN.
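+
+A sketch of the two mixing strategies, assuming NumPy arrays `x_real, y_real` for the real MNIST training set and `x_gen, y_gen` for CGAN samples (all names are ours):
+
+```python
+import numpy as np
+
+def fixed_budget_mix(x_real, y_real, x_gen, y_gen, frac_gen, total=55000):
+    # fixed budget: replace a fraction of the real data with generated data
+    n_gen = int(total * frac_gen)
+    x = np.concatenate([x_real[:total - n_gen], x_gen[:n_gen]])
+    y = np.concatenate([y_real[:total - n_gen], y_gen[:n_gen]])
+    return x, y
+
+def augmented_mix(x_real, y_real, x_gen, y_gen, n_gen):
+    # augmentation: keep the whole real training set, append generated samples
+    return (np.concatenate([x_real, x_gen[:n_gen]]),
+            np.concatenate([y_real, y_gen[:n_gen]]))
+```
+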
\begin{figure}
\begin{center}