author    Vasil Zlatanov <v@skozl.com>  2019-03-14 23:45:27 +0000
committer Vasil Zlatanov <v@skozl.com>  2019-03-14 23:45:27 +0000
commit    acb5c19dcda7955b44112ecc9cc0794babc7cf4c (patch)
tree      9e959f1e4b6f3245871f86015676f7418cb7278c /report
parent    bac0bc27fcba5f2d59326cf327f16d5c2cc62809 (diff)
Push stuff around
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  |  4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index a028093..b4e812e 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -18,7 +18,7 @@ Training a shallow GAN with no convolutional layers poses problems such as mode
Some of the main challenges faced when training a GAN are: **mode collapse**, **low quality** of generated images and **mismatch** between the generator and discriminator losses. Mode collapse occurs with our naive *vanilla GAN* (Appendix-\ref{fig:vanilla_gan}) implementation after 200,000 batches. The generated images observed during a mode collapse can be seen in figure \ref{fig:mode_collapse}: the output of the generator represents only a few of the labels it was originally fed. Once mode collapse is reached, the loss function of the generator stops improving, as shown in figure \ref{fig:vanilla_loss}. We observe that the discriminator loss tends to zero as the discriminator learns to classify the fake 1s, while the generator is stuck producing 1s and is hence unable to improve.
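As an aside, a minimal sketch (not part of the diff or the repository's code) of how the collapse signature described above could be flagged during training; the function name, window size and thresholds are illustrative assumptions:

```python
from collections import deque

def mode_collapse_suspected(d_losses, g_losses, window=1000,
                            d_floor=1e-3, g_tol=1e-2):
    """Heuristic: discriminator loss pinned near zero while the
    generator loss has stopped improving over the last `window` batches."""
    if len(d_losses) < window or len(g_losses) < window:
        return False
    recent_d = list(d_losses)[-window:]
    recent_g = list(g_losses)[-window:]
    d_dead = max(recent_d) < d_floor                  # D classifies fakes perfectly
    g_flat = abs(recent_g[0] - recent_g[-1]) < g_tol  # G no longer improving
    return d_dead and g_flat

# Per-batch losses appended inside the training loop:
d_hist, g_hist = deque(maxlen=5000), deque(maxlen=5000)
# d_hist.append(d_loss); g_hist.append(g_loss)
# if mode_collapse_suspected(d_hist, g_hist): restart or rebalance training
```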
-A significant improvement to this vanilla architecture is Deep Convolutional Generative Adversarial Networks (DCGAN).
+An improvement to the vanilla architecture is the Deep Convolutional Generative Adversarial Network (DCGAN).
# DCGAN
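For context (outside the diff itself), a minimal sketch of a DCGAN-style generator for 28×28×1 MNIST images in Keras; the layer sizes and kernel choices are assumptions, not the exact architecture used in the paper:

```python
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    """Upsample a latent vector to a 28x28x1 image via
    fractionally-strided convolutions (7x7 -> 14x14 -> 28x28)."""
    return models.Sequential([
        layers.Dense(7 * 7 * 128, input_dim=latent_dim),
        layers.Reshape((7, 7, 128)),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding="same",
                               activation="tanh"),  # pixel values in [-1, 1]
    ])
```

The convolutional upsampling path is what distinguishes DCGAN from the dense-only vanilla generator discussed above.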
@@ -329,8 +329,6 @@ the overall performance.
\end{figure}
-\newpage
-
# Bonus Questions
## Relation to PCA