 report/paper.md | 40
 1 file changed, 29 insertions(+), 11 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 100887f..371cd7f 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -11,7 +11,7 @@ GANs employ two neural networks - a *discriminator* and a *generator* which con
 
 Training a shallow GAN with no convolutional layers poses multiple problems: mode collapse and generating low quality images due to unbalanced G-D losses.
 
-Mode collapse can be observed in figure \ref{fig:mode_collapse}, after 200.000 iterations of the GAN network **presented in appendix XXX**. The output of the generator only represents few of the labels originally fed. At that point the loss function of the generator stops
+Mode collapse can be observed in figure \ref{fig:mode_collapse}, after 200,000 iterations of the GAN network presented in the appendix, figure \ref{fig:vanilla_gan}. The output of the generator represents only a few of the labels originally fed. At that point the loss function of the generator stops
 improving, as shown in figure \ref{fig:vanilla_loss}. We observe the discriminator loss tending to zero as it learns to classify the fake 1's, while the generator is stuck producing 1's.
 
 \begin{figure}
@@ -104,7 +104,17 @@ While training the different proposed DCGAN architectures, we did not observe mo
 the simple GAN presented in the introduction.
 
 Applying Virtual Batch Normalization to Medium DCGAN does not produce observable changes in G-D balancing, but reduces within-batch correlation. Although it
-is difficult to qualitatively assess the improvements, figure \ref{fig:} shows results of the introduction of this technique.
+is difficult to qualitatively assess the improvements, figure \ref{fig:vbn_dc} shows results of the introduction of this technique.
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/vbn_dc.pdf}
+\caption{DCGAN Virtual Batch Normalization}
+\label{fig:vbn_dc}
+\end{center}
+\end{figure}
 
 # CGAN
 
@@ -138,15 +148,15 @@ with L2-Net logits.
 
 $$ \textrm{IS}(x) = \exp\left(\mathbb{E}_x \left[ \textrm{KL} ( p(y|x) \| p(y) ) \right] \right) $$
 
-GAN type     Inception Score (L2-Net)
-MNIST(ref)   9.67
-cGAN         6.01
-cGAN+VB      6.2
-cGAN+LS      6.3
-cGAN+VB+LS   6.4
-cDCGAN+VB    6.5
-cDCGAN+LS    6.8
-cDCGAN+VB+LS 7.3
+GAN type     Inception Score (L2-Net)   Test Accuracy (L2-Net)
+MNIST(ref)   9.67                       1%
+cGAN         6.01                       2%
+cGAN+VB      6.2                        3%
+cGAN+LS      6.3                        .
+cGAN+VB+LS   6.4                        .
+cDCGAN+VB    6.5                        .
+cDCGAN+LS    6.8                        .
+cDCGAN+VB+LS 7.3                        .
@@ -204,4 +214,12 @@ architecture and loss function?
 
 # Appendix
 
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/vanilla_gan_arc.pdf}
+\caption{Vanilla GAN Architecture}
+\label{fig:vanilla_gan}
+\end{center}
+\end{figure}
+
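The diff above claims Virtual Batch Normalization reduces within-batch correlation without changing G-D balancing. As a reminder of why: each example is normalized with statistics from a fixed *reference* batch (extended by the example itself), so its normalized value does not depend on the other examples in the current minibatch. A minimal NumPy sketch of that idea, not the report's actual implementation (the class name and weighting are illustrative):

```python
import numpy as np

class VirtualBatchNorm:
    """Normalize activations with statistics from a fixed reference batch.

    Each example x_i is normalized using the reference batch extended by
    x_i itself, so the output for x_i is independent of the rest of the
    current minibatch -- this is what removes within-batch correlation.
    """

    def __init__(self, ref_batch, eps=1e-5):
        self.ref = np.asarray(ref_batch, dtype=np.float64)
        self.eps = eps

    def __call__(self, x):
        x = np.asarray(x, dtype=np.float64)
        n = len(self.ref)
        w = 1.0 / (n + 1)  # weight of the new example in the combined batch
        ref_mean = self.ref.mean(axis=0)
        out = np.empty_like(x)
        for i, xi in enumerate(x):
            # mean and variance of the combined batch (reference + x_i)
            mean = w * xi + (1 - w) * ref_mean
            var = w * (xi - mean) ** 2 + (1 - w) * ((self.ref - mean) ** 2).mean(axis=0)
            out[i] = (xi - mean) / np.sqrt(var + self.eps)
        return out
```

Because the statistics come from the frozen reference batch, normalizing a single example gives the same result as normalizing it inside any minibatch.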
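The Inception Score used in the table can be computed directly from the classifier's per-image class probabilities: average the per-image KL divergences between p(y|x) and the marginal p(y), then exponentiate. A self-contained sketch, assuming `p_yx` is an (N, K) array of p(y|x) rows from the L2-Net classifier (the function name and `eps` guard are my additions, not from the report):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ).

    p_yx: (N, K) array; row i is the classifier's p(y | x_i).
    """
    p_yx = np.asarray(p_yx, dtype=np.float64)
    p_y = p_yx.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
    # per-image KL divergence; eps guards log(0) for zero probabilities
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

Sanity checks: if every image gets the same distribution, KL is 0 and IS is 1 (mode collapse); if each of K classes is predicted confidently and equally often, IS approaches K, which is why the MNIST reference sits near the number of digit classes.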
