path: root/report/paper.md
Diffstat (limited to 'report/paper.md')
 report/paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/report/paper.md b/report/paper.md
index b4a2a63..0227b1e 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -10,7 +10,7 @@ $$ V(D,G) = E_{x \sim p_{data}(x)}[\log D(x)] + E_{z \sim p_z(z)}[\log(1-D(G(z)))] $$
Shallow architectures (**present the example we used for mode collapse**) can obtain really fast training,
while producing overall good results.
-One of the main issues that raises from this kind of architectures is mode collapse. As the discriminator keeps getting
+One of the main issues encountered with GAN architectures is mode collapse. As the discriminator keeps getting
better, the generator tries to focus on a single class label to improve its loss. This issue can be observed in figure
\ref{fig:mode_collapse}, which shows that after 200 thousand iterations the output of the generator represents only a few
of the labels originally fed to train the network. At that point the loss function of the generator starts getting worse, as shown in figure
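The symptom described above can be quantified by measuring label diversity over a batch of generator samples. Below is a minimal, hypothetical sketch: it assumes a 10-class label space (e.g. MNIST) and that some external classifier has already assigned a label to each generated sample; `label_diversity` is an illustrative helper, not part of the paper's code.

```python
from collections import Counter

def label_diversity(labels, n_classes=10):
    """Fraction of the label space covered by the labels assigned
    to a batch of generator samples. A healthy generator covers
    most classes; a collapsed one concentrates on one or two."""
    return len(Counter(labels)) / n_classes

# A healthy generator produces samples spread across all 10 classes...
healthy = [i % 10 for i in range(100)]
# ...while a collapsed one emits (almost) a single class.
collapsed = [1] * 95 + [7] * 5

print(label_diversity(healthy))    # 1.0
print(label_diversity(collapsed))  # 0.2
```

Tracking this ratio alongside the generator loss during training would make the collapse around 200 thousand iterations visible as a sharp drop in coverage.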