| author | nunzip <np.scarh@gmail.com> | 2019-03-07 21:16:06 +0000 |
|---|---|---|
| committer | nunzip <np.scarh@gmail.com> | 2019-03-07 21:16:06 +0000 |
| commit | f44c9ef0bcdaee1ed8ea0716299dfa6608b86972 (patch) | |
| tree | cb6723cf2a923cc89e250eca93d140befaea041e | |
| parent | 3bd669ae1bddd0dfeda3ced44c9042222bb809a2 (diff) | |
Add more DCGAN
-rw-r--r-- | report/paper.md | 37 |
1 file changed, 34 insertions, 3 deletions
diff --git a/report/paper.md b/report/paper.md
index 371cd7f..02a689b 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -100,9 +100,6 @@
 Examples of this can be observed for all the output groups reported above, as some
 specific issue is solved by training the network for more epochs or introducing a deeper architecture, as can be deduced from a qualitative comparison between figures \ref{fig:dcshort}, \ref{fig:dcmed} and \ref{fig:dclong}.
 
-While training the different proposed DCGAN architectures, we did not observe mode collapse, confirming that the architecture used performed better than
-the simple GAN presented in the introduction.
-
 Applying Virtual Batch Normalization to Medium DCGAN does not produce observable changes in G-D balancing, but it reduces within-batch correlation. Although it
 is difficult to qualitatively assess the improvements, figure \ref{fig:vbn_dc} shows results of the introduction of this technique.
@@ -114,7 +111,11 @@ is difficult to qualitatively assess the improvements, figure \ref{fig:vbn_dc} s
 \end{center}
 \end{figure}
 
+We evaluated the effect of different dropout rates (results in the appendix, figures \ref{fig:dcdrop1_1}, \ref{fig:dcdrop1_2}, \ref{fig:dcdrop2_1} and \ref{fig:dcdrop2_2}) and concluded that tuning
+this parameter is essential for good performance: a high dropout rate leads DCGAN to produce only artifacts that do not match any specific class, because the generator ends up performing better than the discriminator. Conversely, a low dropout rate initially stabilises the G-D losses, but leads to oscillation when training for a large number of epochs.
+
+While training the different proposed DCGAN architectures, we did not observe mode collapse, confirming that the architectures used perform better than
+the simple GAN presented in the introduction.
 
 # CGAN
@@ -222,4 +223,34 @@ architecture and loss function?
 \end{center}
 \end{figure}
 
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/dcgan_dropout01_gd.png}
+\caption{DCGAN Dropout 0.1 G-D Losses}
+\label{fig:dcdrop1_1}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=14em]{fig/dcgan_dropout01.png}
+\caption{DCGAN Dropout 0.1 Generated Images}
+\label{fig:dcdrop1_2}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/dcgan_dropout05_gd.png}
+\caption{DCGAN Dropout 0.5 G-D Losses}
+\label{fig:dcdrop2_1}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=14em]{fig/dcgan_dropout05.png}
+\caption{DCGAN Dropout 0.5 Generated Images}
+\label{fig:dcdrop2_2}
+\end{center}
+\end{figure}
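The Virtual Batch Normalization paragraph in the patch above can be illustrated with a toy sketch. This is not the repository's implementation: it works on scalar activations for brevity, and it omits the learned scale/offset parameters of the real technique (introduced by Salimans et al. for GAN training). The point it demonstrates is the claimed decorrelation: each example is normalised against a fixed reference batch plus itself, so other members of the current minibatch cannot influence its output.

```python
import math

def virtual_batch_norm(batch, ref_batch, eps=1e-5):
    """Normalise each example using the statistics of a fixed reference
    batch combined with the example itself, so examples in the current
    minibatch do not influence each other's normalisation."""
    n = len(ref_batch)
    normed = []
    for x in batch:
        # the example counts as one extra member of the reference batch
        combined = ref_batch + [x]
        mu = sum(combined) / (n + 1)
        var = sum((v - mu) ** 2 for v in combined) / (n + 1)
        normed.append((x - mu) / math.sqrt(var + eps))
    return normed

ref = [0.0, 2.0, 4.0]   # reference batch, chosen once before training
out = virtual_batch_norm([2.0, 10.0], ref)
```

Because only `ref` and the example itself enter the statistics, `virtual_batch_norm([2.0], ref)` and `virtual_batch_norm([2.0, 99.0], ref)` give the same result for the first example, which is exactly the within-batch correlation that ordinary batch normalisation would introduce and VBN removes.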
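The dropout-rate discussion added by this patch can also be sketched minimally. This is plain Python, not the repository's actual discriminator layers; the function name and the stand-in activations are illustrative only. It shows the mechanism being tuned: inverted dropout preserves the mean activation in expectation at any rate, but the noise it injects into the discriminator grows with the rate, which is why 0.1 and 0.5 behave so differently in the experiments.

```python
import random

def dropout(acts, rate, rng, training=True):
    """Inverted dropout: zero each activation with probability `rate`,
    rescaling survivors by 1/(1-rate) so the expected value is unchanged."""
    if not training or rate == 0.0:
        return list(acts)
    return [a / (1.0 - rate) if rng.random() >= rate else 0.0
            for a in acts]

rng = random.Random(0)
acts = [1.0] * 8192                # stand-in for discriminator activations
low = dropout(acts, 0.1, rng)      # mild regularisation of D
high = dropout(acts, 0.5, rng)     # strong regularisation of D

# Both means stay close to 1.0, but the surviving values in `high`
# are twice as large and half as frequent: much noisier gradients for D.
print(sum(low) / len(low), sum(high) / len(high))
```

At inference time (`training=False`) the activations pass through unchanged, which is the standard way dropout layers behave when generating samples.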