author    nunzip <np.scarh@gmail.com>  2019-03-14 17:05:31 +0000
committer nunzip <np.scarh@gmail.com>  2019-03-14 17:05:31 +0000
commit    4570936bb6a5d713eba58dba8fc103c517754158 (patch)
tree      0110aa8d6f63d4ce7cb41862cea69c6b74404564 /report
parent    c917a6b70fa66f33a533edd90e8800105bc110ae (diff)
Minor changes
Diffstat (limited to 'report')
 report/paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/report/paper.md b/report/paper.md
index f1ebe7f..ee4a626 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -168,7 +168,7 @@ the same classes, indicating that mode collapse still did not occur.
The best performing architecture was CDCGAN. It is difficult to assess any potential improvement at this stage, since the samples produced
between 8,000 and 13,000 batches are indistinguishable from those of the MNIST dataset (as can be seen in figure \ref{fig:cdc}, middle). Training CDCGAN for more than
-15,000 batches is however not beneficial, as the discriminator will keep improving, leading the generator to produce bad samples as shown in the reported example.
+15,000 batches is, however, not beneficial: the discriminator loss drops almost to zero, leading the generator to oscillate and produce bad samples, as shown in the reported example.
We find a good balance at 12,000 batches.
\begin{figure}
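
The training-length trade-off described in the hunk above can be made concrete with a short sketch. The following is a minimal, hypothetical Keras loop that caps training at the 12,000 batches the paper settles on and flags the discriminator-loss collapse the new wording warns about. The tiny stand-in networks, the 0.05 loss threshold, and all names are illustrative assumptions, not code from the e4-gan repository; the CDCGAN conditioning labels are omitted for brevity.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.datasets import mnist

MAX_BATCHES = 12_000   # the "good balance" reported in the paper
BATCH_SIZE = 128
LATENT_DIM = 100
D_LOSS_FLOOR = 0.05    # assumed warning threshold, not from the repository

# Deliberately tiny stand-ins for the paper's networks.
generator = models.Sequential([
    layers.Dense(7 * 7 * 64, activation="relu", input_dim=LATENT_DIM),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])
discriminator = models.Sequential([
    layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(28, 28, 1)),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")

# Stacked model that trains the generator through a frozen discriminator.
discriminator.trainable = False
combined = models.Sequential([generator, discriminator])
combined.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")

(x_train, _), _ = mnist.load_data()
x_train = np.expand_dims((x_train.astype("float32") - 127.5) / 127.5, -1)

for batch in range(MAX_BATCHES):
    idx = np.random.randint(0, len(x_train), BATCH_SIZE)
    noise = np.random.normal(size=(BATCH_SIZE, LATENT_DIM))
    fake = generator.predict(noise, verbose=0)

    # Train the discriminator on real and generated samples.
    d_loss_real = discriminator.train_on_batch(x_train[idx], np.ones((BATCH_SIZE, 1)))
    d_loss_fake = discriminator.train_on_batch(fake, np.zeros((BATCH_SIZE, 1)))
    d_loss = 0.5 * (d_loss_real + d_loss_fake)

    # Train the generator to fool the (frozen) discriminator.
    combined.train_on_batch(noise, np.ones((BATCH_SIZE, 1)))

    if d_loss < D_LOSS_FLOOR:
        print(f"batch {batch}: d_loss={d_loss:.4f} near zero; "
              "further training is likely to destabilise the generator")
```

The threshold check only serves as an early warning: once the discriminator's binary cross-entropy approaches zero it stops providing a useful gradient to the generator, which matches the oscillating, low-quality samples the commit describes beyond 15,000 batches.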