| author | Vasil Zlatanov <vasil@netcraft.com> | 2019-03-06 23:49:46 +0000 |
|---|---|---|
| committer | Vasil Zlatanov <vasil@netcraft.com> | 2019-03-06 23:49:46 +0000 |
| commit | 5d779afb5a9511323e3402537af172d68930d85c | (patch) |
| tree | c31d546c7759c53b23948e170d690e727a295810 | /report |
| parent | b418990448f461da50a732b4e66dd8e9066199d8 | (diff) |
| download | e4-gan-5d779afb5a9511323e3402537af172d68930d85c.tar.gz e4-gan-5d779afb5a9511323e3402537af172d68930d85c.tar.bz2 e4-gan-5d779afb5a9511323e3402537af172d68930d85c.zip | |
Replace softmax with relu as we apply it in the function anyway
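The rationale in the commit message — the loss already applies softmax, so the layer itself should not — can be sketched numerically. This is a minimal illustration with a hypothetical loss helper, not the project's actual code: applying softmax both in the output layer and inside the loss squashes the logits and distorts the loss value.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_from_logits(logits, labels):
    # This loss applies softmax internally, so the network's final
    # activation should NOT be softmax as well.
    probs = softmax(logits)
    return -np.mean(np.sum(labels * np.log(probs + 1e-12), axis=-1))

# Hypothetical layer output (e.g. after a relu/linear activation).
logits = np.array([[2.0, 0.5, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])

loss_once = cross_entropy_from_logits(logits, labels)
# Softmax in the layer *and* in the loss: probabilities get squashed
# toward uniform, inflating the loss for a confident correct prediction.
loss_twice = cross_entropy_from_logits(softmax(logits), labels)
```

This mirrors the `from_logits`-style convention in common deep learning libraries: feed raw scores to the loss and let it normalise once.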
Diffstat (limited to 'report')
-rw-r--r-- | report/paper.md | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/report/paper.md b/report/paper.md
index b4a2a63..0227b1e 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -10,7 +10,7 @@
 $$ V (D,G) = E_{x~p_{data}(x)}[logD(x)] + E_{z~p_z(z)}[log(1-D(G(z)))] $$
 The issue with shallow architectures (**present the example we used for mode collapse**) can be ontain really fast training, while producing overall good results.
-One of the main issues that raises from this kind of architectures is mode collapse. As the discriminator keeps getting
+One of the main issues enctoured with GAN architectures is mode collapse. As the discriminator keeps getting
 better, the generator tries to focus on one single class label to improve its loss. This issue can be observed in figure \ref{fig:mode_collapse}, in which we can observe how after 200 thousand iterations, the output of the generator only represents few of the labels originally fed to train the network. At that point the loss function of the generator starts getting worse as shown in figure
```
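The value function V(D,G) quoted in the hunk above can be evaluated directly on samples. The sketch below is a minimal numpy illustration; the sigmoid "discriminator" and the Gaussian sample sets are assumptions for demonstration, not the networks from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def value_fn(D, real_x, fake_x):
    # V(D,G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    # fake_x plays the role of G(z) samples.
    return np.mean(np.log(D(real_x))) + np.mean(np.log(1.0 - D(fake_x)))

def D(x):
    # Toy discriminator: a sigmoid score (an assumption for
    # illustration only).
    return 1.0 / (1.0 + np.exp(-x))

real_x = rng.normal(2.0, 1.0, 1000)    # stand-in "data" samples
fake_x = rng.normal(-2.0, 1.0, 1000)   # stand-in "generator" samples

v = value_fn(D, real_x, fake_x)
# Both expectations are logs of probabilities, so V <= 0; a
# discriminator that separates the two sample sets drives V toward 0,
# which is the pressure on the generator described in the paragraph.
```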