From 4570936bb6a5d713eba58dba8fc103c517754158 Mon Sep 17 00:00:00 2001
From: nunzip
Date: Thu, 14 Mar 2019 17:05:31 +0000
Subject: Minor changes

---
 report/paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report/paper.md b/report/paper.md
index f1ebe7f..ee4a626 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -168,7 +168,7 @@ the same classes, indicating that mode collapse still did not occur.
 The best performing architecture was CDCGAN. It is difficult to assess any
 potential improvement at this stage, since the samples produced between 8,000 and 13,000
 batches are indistinguishable from the ones of the MNIST dataset (as it can be seen in
 figure \ref{fig:cdc}, middle). Training CDCGAN for more than
-15,000 batches is however not beneficial, as the discriminator will keep improving, leading the generator to produce bad samples as shown in the reported example.
+15,000 batches is, however, not beneficial, as the discriminator will almost reach a loss of zero, leading the generator to oscillate and produce bad samples, as shown in the reported example. We find a good balance at 12,000 batches.
 
 \begin{figure}
-- 
cgit v1.2.3-54-g00ecf
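
The `+` line caps useful training at roughly 12,000 batches because the discriminator loss collapses toward zero beyond 15,000, after which the generator oscillates. A minimal sketch of that training budget, assuming a Keras/TensorFlow setup with simplified stand-in models (the paper's CDCGAN is conditional and convolutional; none of the names below come from the repository):

```python
# Sketch only: cap GAN training at 12,000 batches and monitor the
# discriminator loss, since a near-zero d_loss precedes the generator
# oscillation reported for runs beyond 15,000 batches.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

latent_dim = 100

# Simplified stand-in models (the paper's CDCGAN is conditional/convolutional).
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])
discriminator = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")

# Combined model: the discriminator is frozen while the generator trains.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") - 127.5) / 127.5  # scale to [-1, 1]
x_train = x_train[..., None]

batch_size = 128
max_batches = 12_000  # the "good balance" found in the report

for step in range(max_batches):
    # Train the discriminator on a real batch and a generated batch.
    real = x_train[np.random.randint(0, len(x_train), batch_size)]
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))
    d_loss = 0.5 * (d_loss_real + d_loss_fake)

    # Train the generator to fool the (frozen) discriminator.
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    # A d_loss approaching zero is the failure signal described in the patch.
    if step % 1000 == 0:
        print(f"batch {step}: d_loss={d_loss:.4f} g_loss={g_loss:.4f}")
```

Logging `d_loss` this way also allows stopping even earlier than the fixed cap if the discriminator starts winning decisively before 12,000 batches.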