path: root/report
author    nunzip <np.scarh@gmail.com>  2019-03-07 19:43:39 +0000
committer nunzip <np.scarh@gmail.com>  2019-03-07 19:43:39 +0000
commit    a4181df0b5cd0cea323139e7407b8fe1b7d0ad73 (patch)
tree      47bdc375d5d6a4d967d1c86d679f1b42f94c84a7 /report
parent    95de6b8e13302311ae2923818a8ac224b2c9fcc8 (diff)
Writing more DCGAN
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  40
1 files changed, 29 insertions, 11 deletions
diff --git a/report/paper.md b/report/paper.md
index 100887f..371cd7f 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -11,7 +11,7 @@ GAN's employ two neural networks - a *discriminator* and a *generator* which con
Training a shallow GAN with no convolutional layers poses multiple problems: mode collapse and the generation of low-quality images due to unbalanced G-D losses.
-Mode collapse can be observed in figure \ref{fig:mode_collapse}, after 200.000 iterations of the GAN network **presented in appendix XXX**. The output of the generator only represents few of the labels originally fed. At that point the loss function of the generator stops
+Mode collapse can be observed in figure \ref{fig:mode_collapse}, after 200,000 iterations of the GAN network presented in the appendix (figure \ref{fig:vanilla_gan}). The output of the generator represents only a few of the labels originally fed. At that point the loss function of the generator stops
improving, as shown in figure \ref{fig:vanilla_loss}. We observe the discriminator loss tending to zero as it learns to classify the fake 1's, while the generator is stuck producing 1's.
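
For intuition, the sketch below shows one training iteration of such a shallow GAN, assuming a Keras setup with fully connected layers and binary cross-entropy losses (a hypothetical minimal model, not the exact code or architecture used here, which is given in figure \ref{fig:vanilla_gan}). It makes explicit the two losses that become unbalanced: once the discriminator separates real from generated digits with ease, its loss tends towards zero while the generator's loss stops improving.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers

latent_dim = 100  # dimensionality of the generator's noise input

# Hypothetical shallow (fully connected) generator and discriminator.
generator = models.Sequential([
    layers.Dense(256, activation="relu", input_dim=latent_dim),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28)),
])
discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")

# Stacked model used to train G; D's weights are frozen inside it.
discriminator.trainable = False
combined = models.Sequential([generator, discriminator])
combined.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)
    # Discriminator loss: real digits labelled 1, generated digits labelled 0.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Generator loss: reward fooling the (frozen) discriminator into predicting 1.
    g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
    return 0.5 * (d_loss_real + d_loss_fake), g_loss
```

When the returned discriminator loss collapses towards zero while the generator loss plateaus, the generator receives little useful gradient, and mode collapse of the kind shown in figure \ref{fig:mode_collapse} becomes more likely.
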
\begin{figure}
@@ -104,7 +104,17 @@ While training the different proposed DCGAN architectures, we did not observe mo
the simple GAN presented in the introduction.
Applying Virtual Batch Normalization to the Medium DCGAN does not produce observable changes in G-D balancing, but it reduces within-batch correlation. Although it
-is difficult to qualitatively assess the improvements, figure \ref{fig:} shows results of the introduction of this technique.
+is difficult to assess the improvements qualitatively, figure \ref{fig:vbn_dc} shows the results of introducing this technique.
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/vbn_dc.pdf}
+\caption{DCGAN Virtual Batch Normalization}
+\label{fig:vbn_dc}
+\end{center}
+\end{figure}
+
+
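As a rough illustration of the technique (a simplified sketch, not the exact layer used in our networks), virtual batch normalization replaces per-minibatch statistics with statistics computed on a fixed reference batch chosen once before training; the full formulation of Salimans et al. also mixes the current example into those statistics. Because each sample is no longer normalised against the other samples in its own minibatch, within-batch correlation is reduced.

```python
import numpy as np

class VirtualBatchNorm:
    """Simplified virtual batch normalisation.

    Normalisation statistics come from a fixed reference batch selected
    once before training, so a sample's normalisation does not depend on
    the other samples in its minibatch (reducing within-batch correlation).
    """

    def __init__(self, reference_batch, eps=1e-5):
        self.eps = eps
        # Per-feature statistics of the reference batch, frozen for training.
        self.ref_mean = reference_batch.mean(axis=0)
        self.ref_var = reference_batch.var(axis=0)
        # Learnable scale and shift, as in standard batch normalisation.
        self.gamma = np.ones_like(self.ref_mean)
        self.beta = np.zeros_like(self.ref_mean)

    def __call__(self, x):
        x_hat = (x - self.ref_mean) / np.sqrt(self.ref_var + self.eps)
        return self.gamma * x_hat + self.beta
```
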
# CGAN
@@ -138,15 +148,15 @@ with L2-Net logits.
$$ \textrm{IS}(x) = \exp\left( \mathbb{E}_x \left[ \textrm{KL}\left( p(y|x) \,\|\, p(y) \right) \right] \right) $$
-GAN type Inception Score (L2-Net)
-MNIST(ref) 9.67
-cGAN 6.01
-cGAN+VB 6.2
-cGAN+LS 6.3
-cGAN+VB+LS 6.4
-cDCGAN+VB 6.5
-cDCGAN+LS 6.8
-cDCGAN+VB+LS 7.3
+| GAN type     | Inception Score (L2-Net) | Test Accuracy (L2-Net) |
+|--------------|--------------------------|------------------------|
+| MNIST(ref)   | 9.67                     | 1%                     |
+| cGAN         | 6.01                     | 2%                     |
+| cGAN+VB      | 6.2                      | 3%                     |
+| cGAN+LS      | 6.3                      | .                      |
+| cGAN+VB+LS   | 6.4                      | .                      |
+| cDCGAN+VB    | 6.5                      | .                      |
+| cDCGAN+LS    | 6.8                      | .                      |
+| cDCGAN+VB+LS | 7.3                      | .                      |
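
Concretely, the score is obtained from the scoring network's per-image class probabilities $p(y|x)$: the marginal $p(y)$ is estimated by averaging over the generated set, and the score is the exponentiated mean KL divergence between the two. A minimal sketch, assuming the L2-Net softmax outputs for the generated images are available as a NumPy array:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: array of shape (N, num_classes) holding p(y|x), the scoring
    network's (here L2-Net's) softmax output for N generated images."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution p(y)
    # KL(p(y|x) || p(y)) for every generated image.
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))          # IS = exp(E_x[KL])
```
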
@@ -204,4 +214,12 @@ architecture and loss function?
# Appendix
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/vanilla_gan_arc.pdf}
+\caption{Vanilla GAN Architecture}
+\label{fig:vanilla_gan}
+\end{center}
+\end{figure}
+