author     nunzip <np.scarh@gmail.com>  2019-03-14 19:44:05 +0000
committer  nunzip <np.scarh@gmail.com>  2019-03-14 19:44:05 +0000
commit     d09995baa87c27472bd7bbc5c1d4ccf05cb02f8a (patch)
tree       26c194a161cc3597027fbddb02849e365528a00c /report
parent     e45aef7b5df38948406b7a079b63cdedab94c423 (diff)
Fix section 1
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/report/paper.md b/report/paper.md
index f22ad3b..9aaa278 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -16,7 +16,7 @@ Training a shallow GAN with no convolutional layers poses problems such as mode
\end{center}
\end{figure}
-Mode collapse is achieved with our naive *vanilla GAN* (Appendix-\ref{fig:vanilla_gan}) implementation after 200,000 batches. The generated images observed during a mode collapse can be seen on figure \ref{fig:mode_collapse}. The output of the generator only represents few of the labels originally fed. When mode collapse is reached loss function of the generator stops improving as shown in figure \ref{fig:vanilla_loss}. We observe, the discriminator loss tends to zero as the discriminator learns to assume and classify the fake 1s, while the generator is stuck producing 1 and hence not able to improve.
+Some of the main challenges faced when training a GAN are **mode collapse**, **low quality** of the generated images, and **mismatch** between the generator and discriminator losses. Mode collapse occurs in our naive *vanilla GAN* implementation (Appendix-\ref{fig:vanilla_gan}) after 200,000 batches. The images generated during mode collapse can be seen in figure \ref{fig:mode_collapse}: the output of the generator represents only a few of the labels originally fed. Once mode collapse is reached, the loss function of the generator stops improving, as shown in figure \ref{fig:vanilla_loss}. We observe that the discriminator loss tends to zero as the discriminator learns to expect and classify the fake 1s, while the generator is stuck producing 1s and is hence unable to improve.
A significant improvement over this vanilla architecture is the Deep Convolutional Generative Adversarial Network (DCGAN).
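
To make the setup above concrete, here is a minimal sketch of a shallow, dense-only *vanilla GAN* of the kind the diff describes, assuming a Keras/MNIST setup. The layer sizes, optimizer, and logging are illustrative choices, not the repository's exact configuration; only the overall pattern (dense generator and discriminator, 200,000 training batches, generator loss plateauing while discriminator loss tends to zero under mode collapse) mirrors the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # illustrative latent size

# Generator: fully connected layers only -- no convolutions.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),  # MNIST-sized output in [-1, 1]
    layers.Reshape((28, 28, 1)),
])

# Discriminator: dense layers mirroring the generator.
discriminator = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the input is real
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model for generator updates; the discriminator is frozen here
# so only the generator's weights move when we train `gan`.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 127.5 - 1.0)[..., np.newaxis]

batch_size = 128
for step in range(200_000):  # trained for 200,000 batches, as in the text
    # Discriminator step: one real batch labelled 1, one fake batch labelled 0.
    real = x_train[np.random.randint(0, len(x_train), batch_size)]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_loss = discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    d_loss += discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))

    # Generator step: try to make the discriminator call the fakes real.
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    if step % 1000 == 0:
        # A plateauing g_loss while d_loss tends to zero is the
        # mode-collapse signature described in the paragraph above.
        print(step, d_loss, g_loss)
```

Monitoring the two printed losses over training is one simple way to spot the failure mode discussed above: under mode collapse the discriminator loss decays towards zero while the generator loss stops improving.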