Diffstat (limited to 'report/paper.md'):
 report/paper.md | 151
 1 file changed, 12 insertions(+), 139 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 2177177..d89a394 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -85,15 +85,15 @@ but no mode collapse was observed even with the shallow model.
## CGAN Architecture description
-CGAN is a conditional version of a GAN which utilises labeled data. Unlike DCGAN, CGAN is trained with explicitly provided labels which allow CGAN to associate features with specific labels. This has the intrinsic advantage of allowing us to specify the label of generated data. The baseline CGAN which we evaluate is visible in figure \ref{fig:cganarc}. The baseline CGAN architecture presents a series blocks each contained a dense layer, LeakyReLu layer (slope=0.2) and a Batch Normalisation layer. The baseline discriminator uses Dense layers, followed by LeakyReLu (slope=0.2) and a Droupout layer.
+CGAN is a conditional version of a GAN which utilises labeled data. Unlike DCGAN, CGAN is trained with explicitly provided labels, which allow it to associate features with specific labels. This has the intrinsic advantage of allowing us to specify the label of the generated data. The baseline CGAN which we evaluate is visible in figure \ref{fig:cganarc}. The baseline CGAN architecture presents a series of blocks, each containing a dense layer, a `LeakyReLU` layer (slope=0.2) and a Batch Normalisation layer. The baseline discriminator uses Dense layers, followed by `LeakyReLU` (slope=0.2) and a Dropout layer.
The optimizer used for training is `Adam(learning_rate=0.002, beta=0.5)`.
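
For concreteness, a minimal Keras sketch of one such `Dense-LeakyReLU-BN` generator block and the optimiser settings quoted above is given below. The 256-unit width, 100-dimensional input and `tanh` output layer are illustrative assumptions, not the exact baseline configuration.

```
# Illustrative sketch only (not the exact baseline): one Dense-LeakyReLU-BN
# block of the CGAN generator and the Adam settings quoted in the text.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU, BatchNormalization
from tensorflow.keras.optimizers import Adam

generator = Sequential([
    Dense(256, input_dim=100),          # noise (+ embedded label) vector in; sizes assumed
    LeakyReLU(alpha=0.2),               # slope = 0.2, as stated above
    BatchNormalization(),
    Dense(28 * 28, activation='tanh'),  # flattened 28x28 image out (assumption)
])

# "beta=0.5" in the text is read here as Adam's first-moment decay beta_1.
optimizer = Adam(learning_rate=0.002, beta_1=0.5)
```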
The Convolutional CGAN (CDCGAN) analysed follows the structure presented in the relevant Appendix section. It combines the convolutional generator and discriminator of DCGAN with a conditional label input; a sketch of this conditioning is given after the list below.
We evaluate permutations of the architecture involving:
-* Shallow CGAN - 1 Dense-LeakyReLu-BN block
-* Deep CGAN - 5 Dense-LeakyReLu-BN
+* Shallow CGAN - 1 `Dense-LeakyReLU-BN` block
+* Deep CGAN - 5 `Dense-LeakyReLU-BN` blocks
* Deep Convolutional GAN - DCGAN + conditional label input
* One-Sided Label Smoothing (LS)
* Various Dropout (DO) - using rates of 0.1, 0.3 and 0.5
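
The sketch below illustrates one way to realise the conditional label input: the label is embedded by a `Dense` layer, broadcast to the spatial size of the feature maps and merged by element-wise multiplication, mirroring the `Dense`, `Reshape`, `UpSampling2D`, `Multiply` pattern of the alternative CDCGAN listed in the appendix. The 28x28x64 feature-map shape and 64-unit embedding follow that listing; the surrounding layer names are hypothetical.

```
# Hypothetical illustration of conditioning feature maps on the class label,
# following the Dense -> Reshape -> UpSampling2D -> Multiply pattern of the
# alternative CDCGAN summary in the appendix.
from tensorflow.keras.layers import (Input, Dense, Reshape, UpSampling2D,
                                     Multiply, Conv2D)
from tensorflow.keras.models import Model

features = Input(shape=(28, 28, 64))     # convolutional feature maps
label = Input(shape=(1,))                # class label

l = Dense(64)(label)                     # embed the label (64 units, as in the summary)
l = Reshape((1, 1, 64))(l)
l = UpSampling2D(size=(28, 28))(l)       # broadcast to the spatial resolution
conditioned = Multiply()([features, l])  # gate the features by the label
out = Conv2D(64, kernel_size=3, padding='same')(conditioned)

conditioning_block = Model([features, label], out)
```

One-Sided Label Smoothing, listed above, only changes the training targets: the discriminator's real-sample label is softened from 1 to a value such as 0.9, while the fake-sample label stays at 0.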
@@ -382,8 +382,6 @@ $$ L_{\textrm{total}} = \alpha L_{\textrm{LeNet}} + \beta L_{\textrm{generator}}
<div id="refs"></div>
-\newpage
-
# Appendix
## DCGAN-Appendix
@@ -540,145 +538,20 @@ $$ L_{\textrm{total}} = \alpha L_{\textrm{LeNet}} + \beta L_{\textrm{generator}}
## CDCGAN Alternative Architecture
-### Generator
-```
-__________________________________________________________________________________________________
-Layer (type) Output Shape Param # Connected to
-==================================================================================================
-input_1 (InputLayer) (None, 100) 0
-__________________________________________________________________________________________________
-dense_2 (Dense) (None, 3136) 316736 input_1[0][0]
-__________________________________________________________________________________________________
-reshape_2 (Reshape) (None, 7, 7, 64) 0 dense_2[0][0]
-__________________________________________________________________________________________________
-conv2d_transpose_1 (Conv2DTrans (None, 14, 14, 64) 36928 reshape_2[0][0]
-__________________________________________________________________________________________________
-batch_normalization_1 (BatchNor (None, 14, 14, 64) 256 conv2d_transpose_1[0][0]
-__________________________________________________________________________________________________
-activation_1 (Activation) (None, 14, 14, 64) 0 batch_normalization_1[0][0]
-__________________________________________________________________________________________________
-input_2 (InputLayer) (None, 1) 0
-__________________________________________________________________________________________________
-conv2d_transpose_2 (Conv2DTrans (None, 28, 28, 64) 36928 activation_1[0][0]
-__________________________________________________________________________________________________
-dense_1 (Dense) (None, 64) 128 input_2[0][0]
-__________________________________________________________________________________________________
-batch_normalization_2 (BatchNor (None, 28, 28, 64) 256 conv2d_transpose_2[0][0]
-__________________________________________________________________________________________________
-reshape_1 (Reshape) (None, 1, 1, 64) 0 dense_1[0][0]
-__________________________________________________________________________________________________
-activation_2 (Activation) (None, 28, 28, 64) 0 batch_normalization_2[0][0]
-__________________________________________________________________________________________________
-up_sampling2d_1 (UpSampling2D) (None, 28, 28, 64) 0 reshape_1[0][0]
-__________________________________________________________________________________________________
-multiply_1 (Multiply) (None, 28, 28, 64) 0 activation_2[0][0]
- up_sampling2d_1[0][0]
-__________________________________________________________________________________________________
-conv2d_1 (Conv2D) (None, 28, 28, 64) 36928 multiply_1[0][0]
-__________________________________________________________________________________________________
-batch_normalization_3 (BatchNor (None, 28, 28, 64) 256 conv2d_1[0][0]
-__________________________________________________________________________________________________
-activation_3 (Activation) (None, 28, 28, 64) 0 batch_normalization_3[0][0]
-__________________________________________________________________________________________________
-multiply_2 (Multiply) (None, 28, 28, 64) 0 activation_3[0][0]
- up_sampling2d_1[0][0]
-__________________________________________________________________________________________________
-conv2d_2 (Conv2D) (None, 28, 28, 64) 36928 multiply_2[0][0]
-__________________________________________________________________________________________________
-batch_normalization_4 (BatchNor (None, 28, 28, 64) 256 conv2d_2[0][0]
-__________________________________________________________________________________________________
-activation_4 (Activation) (None, 28, 28, 64) 0 batch_normalization_4[0][0]
-__________________________________________________________________________________________________
-multiply_3 (Multiply) (None, 28, 28, 64) 0 activation_4[0][0]
- up_sampling2d_1[0][0]
-__________________________________________________________________________________________________
-conv2d_3 (Conv2D) (None, 28, 28, 1) 577 multiply_3[0][0]
-__________________________________________________________________________________________________
-activation_5 (Activation) (None, 28, 28, 1) 0 conv2d_3[0][0]
-==================================================================================================
-Total params: 466,177
-Trainable params: 465,665
-Non-trainable params: 512
-__________________________________________________________________________________________________
-```
-
-### Discriminator
-
-```
-__________________________________________________________________________________________________
-Layer (type) Output Shape Param # Connected to
-==================================================================================================
-input_3 (InputLayer) (None, 28, 28, 1) 0
-__________________________________________________________________________________________________
-input_2 (InputLayer) (None, 1) 0
-__________________________________________________________________________________________________
-conv2d_4 (Conv2D) (None, 28, 28, 64) 640 input_3[0][0]
-__________________________________________________________________________________________________
-dense_3 (Dense) (None, 64) 128 input_2[0][0]
-__________________________________________________________________________________________________
-batch_normalization_5 (BatchNor (None, 28, 28, 64) 256 conv2d_4[0][0]
-__________________________________________________________________________________________________
-reshape_3 (Reshape) (None, 1, 1, 64) 0 dense_3[0][0]
-__________________________________________________________________________________________________
-leaky_re_lu_1 (LeakyReLU) (None, 28, 28, 64) 0 batch_normalization_5[0][0]
-__________________________________________________________________________________________________
-up_sampling2d_2 (UpSampling2D) (None, 28, 28, 64) 0 reshape_3[0][0]
-__________________________________________________________________________________________________
-multiply_4 (Multiply) (None, 28, 28, 64) 0 leaky_re_lu_1[0][0]
- up_sampling2d_2[0][0]
-__________________________________________________________________________________________________
-conv2d_5 (Conv2D) (None, 28, 28, 64) 36928 multiply_4[0][0]
-__________________________________________________________________________________________________
-batch_normalization_6 (BatchNor (None, 28, 28, 64) 256 conv2d_5[0][0]
-__________________________________________________________________________________________________
-leaky_re_lu_2 (LeakyReLU) (None, 28, 28, 64) 0 batch_normalization_6[0][0]
-__________________________________________________________________________________________________
-multiply_5 (Multiply) (None, 28, 28, 64) 0 leaky_re_lu_2[0][0]
- up_sampling2d_2[0][0]
-__________________________________________________________________________________________________
-conv2d_6 (Conv2D) (None, 28, 28, 64) 36928 multiply_5[0][0]
-__________________________________________________________________________________________________
-batch_normalization_7 (BatchNor (None, 28, 28, 64) 256 conv2d_6[0][0]
-__________________________________________________________________________________________________
-leaky_re_lu_3 (LeakyReLU) (None, 28, 28, 64) 0 batch_normalization_7[0][0]
-__________________________________________________________________________________________________
-multiply_6 (Multiply) (None, 28, 28, 64) 0 leaky_re_lu_3[0][0]
- up_sampling2d_2[0][0]
-__________________________________________________________________________________________________
-conv2d_7 (Conv2D) (None, 14, 14, 64) 36928 multiply_6[0][0]
-__________________________________________________________________________________________________
-batch_normalization_8 (BatchNor (None, 14, 14, 64) 256 conv2d_7[0][0]
-__________________________________________________________________________________________________
-leaky_re_lu_4 (LeakyReLU) (None, 14, 14, 64) 0 batch_normalization_8[0][0]
-__________________________________________________________________________________________________
-conv2d_8 (Conv2D) (None, 7, 7, 64) 36928 leaky_re_lu_4[0][0]
-__________________________________________________________________________________________________
-batch_normalization_9 (BatchNor (None, 7, 7, 64) 256 conv2d_8[0][0]
-__________________________________________________________________________________________________
-leaky_re_lu_5 (LeakyReLU) (None, 7, 7, 64) 0 batch_normalization_9[0][0]
-__________________________________________________________________________________________________
-flatten_1 (Flatten) (None, 3136) 0 leaky_re_lu_5[0][0]
-__________________________________________________________________________________________________
-dropout_1 (Dropout) (None, 3136) 0 flatten_1[0][0]
-__________________________________________________________________________________________________
-dense_4 (Dense) (None, 1) 3137 dropout_1[0][0]
-==================================================================================================
-Total params: 152,897
-Trainable params: 152,257
-Non-trainable params: 640
-__________________________________________________________________________________________________
-```
-
-## Retrain-Appendix
+\begin{figure}[H]
+\begin{center}
+\includegraphics[width=24em]{fig/cdcgen.pdf}
+\end{center}
+\end{figure}
-\begin{figure}
+\begin{figure}[H]
\begin{center}
-\includegraphics[width=24em]{fig/train_few_real.png}
-\caption{Training with few real samples}
-\label{fig:few_real}
+\includegraphics[width=24em]{fig/cdcdesc.pdf}
\end{center}
\end{figure}
+## Retrain-Appendix
+
\begin{figure}[H]
\begin{center}
\includegraphics[width=24em]{fig/fake_only.png}