author    Vasil Zlatanov <v@skozl.com>  2019-03-07 17:54:06 +0000
committer Vasil Zlatanov <v@skozl.com>  2019-03-07 17:54:06 +0000
commit    95de6b8e13302311ae2923818a8ac224b2c9fcc8 (patch)
tree      57af0a794b5abdf1646e17abca3b2c2910fe07bc
parent    defc939aac1ba77f8cb87b97b1a111ce23d73c52 (diff)
Rewrite intro
 report/paper.md | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 55a0a63..100887f 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,14 +1,18 @@
# Introduction
-In this coursework we will present two variants of GAN architectures (DCGAN and CGAN) trained with the MNIST_dataset.
-The dataset contains 60.000 training images and 10.000 testing images of size 28x28, representing different digits (10 classes in total).
+In this coursework we present two variants of the GAN architecture (DCGAN and CGAN) applied to the MNIST dataset, and evaluate their performance across various optimisation techniques. The MNIST dataset contains 60,000 training images and 10,000 testing images of size 28x28, spread across ten classes representing the ten handwritten digits.
-Training a shallow GAN with no convolutional layers poses multiple problems: mode collapse, relatively low quality of images generated and unbalanced G-D losses.
+## GAN
+Generative Adversarial Networks present a system of models which learn to output data similar to their training data. A trained GAN takes noise as an input and produces output with the same dimensions as, and ideally the same features as, the samples it was trained on.
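+
+As a minimal illustration, such a generator for 28x28 MNIST-sized outputs could be sketched in Keras as below; the latent dimension and layer sizes here are illustrative assumptions, not the exact architecture used in this coursework.
+
+```python
+# Sketch of a GAN generator: noise vector in, 28x28 image out.
+# Latent dimension and layer widths are illustrative assumptions.
+from keras.models import Sequential
+from keras.layers import Dense, LeakyReLU, Reshape
+
+latent_dim = 100  # size of the input noise vector
+
+generator = Sequential([
+    Dense(256, input_dim=latent_dim),
+    LeakyReLU(0.2),
+    Dense(512),
+    LeakyReLU(0.2),
+    Dense(28 * 28, activation='tanh'),  # pixels scaled to [-1, 1]
+    Reshape((28, 28, 1)),
+])
+```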
-As it can be seen in \ref{fig:mode_collapse}, after 200.000 iterations the network (**presented in appendix XXX**) shows mode collapse
-as the output of the generator only represents few of the labels originally fed. At that point the loss function of the generator stops
-improving as shown in figure \ref{fig:vanilla_loss}. As we observe, G-D balance in not achieved as the discriminator loss almost reaches zero,
-while the generator loss keeps increasing.
+GANs employ two neural networks, a *discriminator* and a *generator*, which contest in a zero-sum game. The task of the *discriminator* is to distinguish generated images from real ones, while the task of the *generator* is to produce realistic images which are able to fool the discriminator.
+
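+This zero-sum game is commonly formalised as the minimax objective below (following the original GAN formulation), where the generator $G$ maps noise $z \sim p_z$ to samples and the discriminator $D$ outputs the probability that its input came from the training data:
+
+$$ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\textrm{data}}} \left[ \log D(x) \right] + \mathbb{E}_{z \sim p_z} \left[ \log (1 - D(G(z))) \right] $$
+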
+### Mode Collapse
+
+Training a shallow GAN with no convolutional layers poses multiple problems: mode collapse, and the generation of low quality images due to unbalanced generator-discriminator (G-D) losses.
+
+Mode collapse can be observed in figure \ref{fig:mode_collapse} after 200,000 iterations of the GAN network **presented in appendix XXX**. The output of the generator represents only a few of the labels originally fed. At that point the loss function of the generator stops
+improving, as shown in figure \ref{fig:vanilla_loss}. We observe the discriminator loss tending to zero as it learns to classify the fake 1s, while the generator is stuck producing 1s.
\begin{figure}
\begin{center}
@@ -129,13 +133,12 @@ MNIST real testing set (10K), in comparison to the inception scores.
**Please measure and discuss the inception scores for the different hyper-parameters/tricks and/or
architectures in Q2.**
-We measure the performance of the considered GAN's using the Inecption score [@inception], as calculated
+We measure the performance of the considered GANs using the Inception score [@inception], as calculated
with L2-Net logits.
$$ \textrm{IS}(x) = \exp \left( \mathbb{E}_x \left[ \textrm{KL} \left( p(y|x) \,\|\, p(y) \right) \right] \right) $$
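
A possible sketch of this computation from classifier soft-max outputs is shown below; `probs` stands for the per-sample class probabilities derived from the L2-Net logits, and the function name is ours.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception score from per-sample class probabilities p(y|x).

    probs: array of shape (n_samples, n_classes), rows summing to 1.
    """
    p_y = probs.mean(axis=0)  # marginal distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    # exp of the mean per-sample KL divergence KL(p(y|x) || p(y))
    return np.exp(kl.sum(axis=1).mean())
```
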
GAN type        Inception Score (L2-Net)
-------------   -------------------------
MNIST (ref)     9.67
cGAN            6.01
cGAN+VB         6.2