author     nunzip <np.scarh@gmail.com>    2019-03-07 21:20:11 +0000
committer  nunzip <np.scarh@gmail.com>    2019-03-07 21:20:11 +0000
commit     625a86af5f3bd63f5dccbb256eb3b3849cba9da6 (patch)
tree       4c9b70740cb45d312b92096a14838b5c8b215845 /report
parent     7b288397a51633c878631c0e6d48c96ffce09a84 (diff)
Fix DCGAN
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md | 37
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 371cd7f..02a689b 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -100,9 +100,6 @@ Examples of this can be observed for all the output groups reported above as som
specific issue is solved by training the network for more epochs or by introducing a deeper architecture, as can be deduced from a qualitative comparison
between figures \ref{fig:dcshort}, \ref{fig:dcmed} and \ref{fig:dclong}.
-While training the different proposed DCGAN architectures, we did not observe mode collapse, confirming that the architecture used performed better than
-the simple GAN presented in the introduction.
-
Applying Virtual Batch Normalization to Medium DCGAN does not produce observable changes in G-D balancing, but it does reduce within-batch correlation. Although the improvement
is difficult to assess qualitatively, figure \ref{fig:vbn_dc} shows the results obtained with this technique.
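
As a concrete illustration of the technique itself (a minimal sketch, not code from this repository), virtual batch normalization normalises each example with feature-wise statistics computed from a fixed reference batch plus the example alone, so its output no longer depends on the other examples in its minibatch:

```python
import numpy as np

def virtual_batch_norm(x, ref_batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each example in `x` using statistics from a fixed reference batch."""
    out = np.empty_like(x, dtype=np.float64)
    for i, example in enumerate(x):
        # Statistics come from the reference batch plus the current example only,
        # so the result is independent of the rest of the minibatch.
        combined = np.concatenate([ref_batch, example[None]], axis=0)
        mean = combined.mean(axis=0)
        var = combined.var(axis=0)
        out[i] = gamma * (example - mean) / np.sqrt(var + eps) + beta
    return out

# Example: normalise a minibatch of 4 flattened images against a reference batch of 64.
rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 784))
batch = rng.normal(size=(4, 784))
normalised = virtual_batch_norm(batch, ref)
```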
@@ -114,7 +111,11 @@ is difficult to qualitatively assess the improvements, figure \ref{fig:vbn_dc} s
\end{center}
\end{figure}
+We evaluated the effect of different dropout rates (results in the appendix, figures \ref{fig:dcdrop1_1}, \ref{fig:dcdrop1_2}, \ref{fig:dcdrop2_1} and \ref{fig:dcdrop2_2}) and concluded that tuning
+this parameter is essential for good performance: a high dropout rate leaves the discriminator too weak relative to the generator, so DCGAN produces only artifacts that do not match any specific class. Conversely, a low dropout rate initially stabilises the G-D losses, but leads to oscillation when training for a large number of epochs.
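
To show where this hyperparameter acts, the sketch below builds a hypothetical Keras-style DCGAN discriminator with a tunable dropout rate (the layer sizes and optimiser settings are assumptions, not the exact architecture used in this work); raising `dropout_rate` weakens the discriminator relative to the generator, producing the imbalance described above:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, LeakyReLU
from tensorflow.keras.optimizers import Adam

def build_discriminator(dropout_rate=0.25):
    """DCGAN-style discriminator for 28x28x1 images with a tunable dropout rate."""
    model = Sequential([
        Conv2D(32, kernel_size=3, strides=2, padding='same', input_shape=(28, 28, 1)),
        LeakyReLU(alpha=0.2),
        Dropout(dropout_rate),           # dropout after each convolutional block
        Conv2D(64, kernel_size=3, strides=2, padding='same'),
        LeakyReLU(alpha=0.2),
        Dropout(dropout_rate),
        Flatten(),
        Dense(1, activation='sigmoid'),  # probability that the input is real
    ])
    model.compile(loss='binary_crossentropy',
                  optimizer=Adam(0.0002, 0.5),
                  metrics=['accuracy'])
    return model

# The two settings compared in the appendix figures:
weak_discriminator = build_discriminator(dropout_rate=0.5)
stable_discriminator = build_discriminator(dropout_rate=0.1)
```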
+While training the different proposed DCGAN architectures, we did not observe mode collapse, confirming that the architecture used performed better than
+the simple GAN presented in the introduction.
# CGAN
@@ -222,4 +223,34 @@ architecture and loss function?
\end{center}
\end{figure}
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/dcgan_dropout01_gd.png}
+\caption{DCGAN Dropout 0.1 G-D Losses}
+\label{fig:dcdrop1_1}
+\end{center}
+\end{figure}
+\begin{figure}
+\begin{center}
+\includegraphics[width=14em]{fig/dcgan_dropout01.png}
+\caption{DCGAN Dropout 0.1 Generated Images}
+\label{fig:dcdrop1_2}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/dcgan_dropout05_gd.png}
+\caption{DCGAN Dropout 0.5 G-D Losses}
+\label{fig:dcdrop2_1}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=14em]{fig/dcgan_dropout05.png}
+\caption{DCGAN Dropout 0.5 Generated Images}
+\label{fig:dcdrop2_2}
+\end{center}
+\end{figure}