-rw-r--r-- | report/bibliography.bib |  7
-rw-r--r-- | report/paper.md         | 38
2 files changed, 22 insertions, 23 deletions
diff --git a/report/bibliography.bib b/report/bibliography.bib
index 0defd2d..3ccece5 100644
--- a/report/bibliography.bib
+++ b/report/bibliography.bib
@@ -1,3 +1,10 @@
+@misc{improved,
+Author = {Tim Salimans and Ian Goodfellow and Wojciech Zaremba and Vicki Cheung and Alec Radford and Xi Chen},
+Title = {Improved Techniques for Training GANs},
+Year = {2016},
+Eprint = {arXiv:1606.03498},
+}
+
 @misc{inception-note,
 Author = {Shane Barratt and Rishi Sharma},
 Title = {A Note on the Inception Score},
diff --git a/report/paper.md b/report/paper.md
index dc9f95a..34e9b6a 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -86,16 +86,16 @@ While training the different proposed DCGAN architectures, we did not observe mo
 
 ## CGAN Architecture description
 
-CGAN is a conditional version foa Generative adversarial network which utilises labeled data. Unlike DCGAN, CGAN is trained with explicitly provided labels which allows CGAN to associate features with specific labels. This has the intrinsic advantage of allowing us to specify the label of generated data. The baseline CGAN which we evaluate is visible in figure \ref{fig:cganrc}. The baseline GAN arhitecture presents a series blocks each contained a dense layer, ReLu layer and a Batch Normalisation layer. The baseline discriminator use Dense layers, followed by ReLu and a Droupout layer.
+CGAN is a conditional version of a GAN which utilises labeled data. Unlike DCGAN, CGAN is trained with explicitly provided labels, which allows it to associate features with specific labels. This has the intrinsic advantage of letting us specify the label of the generated data. The baseline CGAN which we evaluate is shown in figure \ref{fig:cganrc}. The baseline generator architecture consists of a series of blocks, each containing a Dense layer, a ReLU layer and a Batch Normalisation layer. The baseline discriminator uses Dense layers, each followed by ReLU and a Dropout layer.
 
 We evaluate permutations of the architecture involving:
 
-* Shallow CGAN
-* Deep CGAN
-* Deep Convolutional GAN
-* Label Smoothing (One Sided)
-* Various Dropout
-* Virtual Batch Normalisation
+* Shallow CGAN - 1 Dense-ReLU-BN block
+* Deep CGAN - 5 Dense-ReLU-BN blocks
+* Deep Convolutional GAN - DCGAN with a conditional label input
+* Label Smoothing (One Sided) - fake and real targets set to 0 and $1-\alpha$ (0.9) respectively
+* Various Dropout - dropout rates of 0.1 and 0.5
+* Virtual Batch Normalisation - normalisation statistics computed from a fixed reference batch [@improved]
 
 \begin{figure}
 \begin{center}
@@ -143,24 +143,13 @@ The effect of dropout for the non-convolutional CGAN architecture does not affec
 
 # Inception Score
 
-## Classifier Architecture Used
-
-## Results
-
-Measure the inception scores i.e. we use the class labels to
-generate images in CGAN and compare them with the predicted labels of the generated images.
-
-Also report the recognition accuracies on the
-MNIST real testing set (10K), in comparison to the inception scores.
-
-**Please measure and discuss the inception scores for the different hyper-parameters/tricks and/or
-architectures in Q2.**
-
-We measure the performance of the considered GAN's using the Inecption score [-inception], as calculated
-with L2-Net logits.
+The Inception score is calculated as introduced by Salimans et al. [@improved]. However, as we are evaluating MNIST, we use LeNet as the basis of the score.
+The Inception score is computed from the LeNet logits as $$ \textrm{IS} = \exp\left(\mathbb{E}_x\left[\textrm{KL}\left(p(y \mid x) \,\|\, p(y)\right)\right]\right) $$
+## Classifier Architecture Used
+
 \begin{table}[]
 \begin{tabular}{llll}
                      & Accuracy & Inception Sc. & GAN Tr. Time \\ \hline
@@ -174,10 +163,13 @@ Medium CGAN DO=0.1 & 0.761 & 3.836 & 10:36 \\
 Medium CGAN DO=0.5 & 0.725 & 3.677 & 10:36 \\
 Medium CGAN+VBN & ? & ? & ? \\
 Medium CGAN+VBN+LS & ? & ? & ? \\
-*MNIST original & 0.9846 & 9.685 & N/A
+*MNIST original & 0.9846 & 9.685 & N/A \\ \hline
 \end{tabular}
 \end{table}
+
+**Please measure and discuss the inception scores for the different hyper-parameters/tricks and/or architectures in Q2.**
+
 # Re-training the handwritten digit classifier
 
 ## Results
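As an aside on the Dense-ReLU-BN generator blocks described in the first paper.md hunk: below is a minimal sketch of one such block, assuming a Keras functional-style model. The framework choice and layer width are assumptions, not taken from this repository.

```python
from keras.layers import Dense, Activation, BatchNormalization

def dense_relu_bn_block(x, units):
    # One baseline CGAN generator block as described in the report:
    # Dense -> ReLU -> Batch Normalisation.
    # `units` is illustrative; the actual layer widths are not
    # visible in this diff.
    x = Dense(units)(x)
    x = Activation('relu')(x)
    x = BatchNormalization()(x)
    return x
```

Per the bullet list in the hunk, the shallow CGAN generator would chain one such block before the output layer, and the deep variant five.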
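The one-sided label smoothing bullet amounts to changing only the real ("truth") targets fed to the discriminator. A sketch under assumed names: `discriminator`, `real_images` and `fake_images` are placeholders, not identifiers from this repository.

```python
import numpy as np

alpha = 0.1       # smoothing amount: real targets become 1 - alpha = 0.9
batch_size = 64   # illustrative

# One-sided: only the real targets are smoothed; fake targets stay at 0.
real_targets = np.full((batch_size, 1), 1.0 - alpha)
fake_targets = np.zeros((batch_size, 1))

# Hypothetical training step with a compiled Keras discriminator:
# d_loss_real = discriminator.train_on_batch(real_images, real_targets)
# d_loss_fake = discriminator.train_on_batch(fake_images, fake_targets)
```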
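The Inception score formula added in the second paper.md hunk translates directly into a few lines of NumPy. A sketch assuming `probs` holds the softmax of the LeNet logits over a batch of generated digits; the function name and shapes are illustrative.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    # IS = exp( E_x [ KL( p(y|x) || p(y) ) ] )
    # probs: (N, 10) array of class probabilities p(y|x) for N
    # generated MNIST images.
    p_y = probs.mean(axis=0, keepdims=True)  # marginal p(y) over the batch
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Sanity check: uniform predictions give the minimum possible score of 1.0.
print(inception_score(np.full((100, 10), 0.1)))
```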