author    Vasil Zlatanov <vz215@eews506a-047.ee.ic.ac.uk>  2019-03-11 15:37:58 +0000
committer nunzip <np.scarh@gmail.com>  2019-03-11 15:41:14 +0000
commit    5dc4f974d373b6b7dc51c64da351872e907619bf (patch)
tree      288ae4adc25a5eed45c7aadd872e983852ae5fc5
parent    20d397edb973314ba327230f2a403ee495f38335 (diff)
Move some comments
-rw-r--r--  report/paper.md  50
1 file changed, 29 insertions(+), 21 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index eaaed12..11d8c36 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -113,15 +113,7 @@ We evaluate permutations of the architecture involving:
\end{center}
\end{figure}
-
-## Tests on MNIST
-
-Try **different architectures, hyper-parameters**, and, if necessary, the aspects of **one-sided label
-smoothing**, **virtual batch normalization**, balancing G and D.
-Please perform qualitative analyses on the generated images, and discuss, with results, what
-challenge and how they are specifically addressing. Is there the **mode collapse issue?**
-
-The effect of dropout for the non-convolutional CGAN architecture does not affect performance as much as in DCGAN, as the images produced, together with the G-D loss remain almost unchanged. Results are presented in figures \ref{fig:cg_drop1_1}, \ref{fig:cg_drop1_2}, \ref{fig:cg_drop2_1}, \ref{fig:cg_drop2_2}.
+## Tests on MNIST
\begin{figure}
\begin{center}
@@ -132,23 +124,14 @@ The effect of dropout for the non-convolutional CGAN architecture does not affec
\end{center}
\end{figure}
-\begin{figure}
-\begin{center}
-\includegraphics[width=24em]{fig/smoothing_ex.pdf}
-\includegraphics[width=24em]{fig/smoothing.png}
-\caption{One sided label smoothing}
-\label{fig:smooth}
-\end{center}
-\end{figure}
-
-# Inception Score
+### Inception Score
Inception score is calculated as introduced by Tim Salimans et al. [@improved]. However, as we are evaluating MNIST, we use LeNet as the basis of the inception score.
-Inception score is calculated with the logits of the LeNet
+We use the logits extracted from LeNet:
$$ \textrm{IS} = \exp\left(\mathbb{E}_x \left[ \textrm{KL} \left( p(y\mid x) \,\|\, p(y) \right) \right]\right) $$
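+
+A minimal NumPy sketch of this computation (illustrative only; `logits` is assumed to be the $N \times 10$ array of LeNet outputs on $N$ generated samples):
+
+```python
+import numpy as np
+
+def inception_score(logits, eps=1e-12):
+    # p(y|x): softmax over the classifier logits for each sample
+    e = np.exp(logits - logits.max(axis=1, keepdims=True))
+    p_yx = e / e.sum(axis=1, keepdims=True)
+    # p(y): marginal distribution, estimated by averaging over the batch
+    p_y = p_yx.mean(axis=0, keepdims=True)
+    # KL(p(y|x) || p(y)) per sample, then mean over x and exponentiate
+    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
+    return float(np.exp(kl.mean()))
+```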
-## Classifier Architecture Used
+### Classifier Architecture Used
\begin{table}[]
\begin{tabular}{llll}
@@ -167,9 +150,34 @@ Medium CGAN+VBN+LS & 0.783 & 4.31 & 10:38 \\
\end{tabular}
\end{table}
+## Discussion
+
+### Architecture
+
+### One-Sided Label Smoothing
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/smoothing_ex.pdf}
+\includegraphics[width=24em]{fig/smoothing.png}
+\caption{One-sided label smoothing}
+\label{fig:smooth}
+\end{center}
+\end{figure}
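+
+A minimal sketch of the trick as applied in a training step (hypothetical Keras-style code; `discriminator`, `real_images`, `fake_images` and `batch_size` are assumed names): only the targets for real samples are smoothed, from 1.0 down to 0.9, while fake targets stay at 0.
+
+```python
+import numpy as np
+
+# One-sided: smooth only the positive (real) labels
+real_targets = np.full((batch_size, 1), 0.9)
+fake_targets = np.zeros((batch_size, 1))
+
+d_loss_real = discriminator.train_on_batch(real_images, real_targets)
+d_loss_fake = discriminator.train_on_batch(fake_images, fake_targets)
+```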
+
+
+
+### Virtual Batch Normalisation
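+
+A simplified sketch of the idea (the reference batch `ref_batch` is assumed to be drawn once at the start of training; the full formulation in [@improved] also folds the current example into the statistics):
+
+```python
+import numpy as np
+
+class VirtualBatchNorm:
+    """Normalise activations with the statistics of a fixed reference
+    batch, so per-minibatch statistics do not leak between samples."""
+    def __init__(self, ref_batch, eps=1e-5):
+        self.mu = ref_batch.mean(axis=0, keepdims=True)
+        self.var = ref_batch.var(axis=0, keepdims=True)
+        self.eps = eps
+
+    def __call__(self, x):
+        return (x - self.mu) / np.sqrt(self.var + self.eps)
+```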
+
+
+### Dropout
+Dropout affects performance of the non-convolutional CGAN architecture less than it does DCGAN: the quality of the images produced and the G-D loss remain almost unchanged when dropout is added. Results are presented in figures \ref{fig:cg_drop1_1}, \ref{fig:cg_drop1_2}, \ref{fig:cg_drop2_1}, \ref{fig:cg_drop2_2}.
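+
+For reference, a sketch of where dropout sits in a discriminator block of this kind (illustrative Keras layout, not our exact architecture; the rates swept in the figures above correspond to `rate` here):
+
+```python
+from keras.layers import Dense, Dropout, LeakyReLU
+from keras.models import Sequential
+
+def discriminator_block(rate=0.3):
+    model = Sequential()
+    model.add(Dense(512, input_dim=784))       # flattened 28x28 MNIST input
+    model.add(LeakyReLU(alpha=0.2))
+    model.add(Dropout(rate))                   # removed in the no-dropout runs
+    model.add(Dense(1, activation='sigmoid'))
+    return model
+```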
+
**Please measure and discuss the inception scores for the different hyper-parameters/tricks and/or
+
+
# Re-training the handwritten digit classifier
## Results