Diffstat (limited to 'report')
-rw-r--r--  report/build/cw1_vz215_np1915.pdf         bin  233980 -> 0 bytes
-rw-r--r--  report/fig/generic_gan_loss.png           bin  0 -> 32275 bytes
-rw-r--r--  report/fig/generic_gan_mode_collapse.pdf  bin  0 -> 205946 bytes
-rw-r--r--  report/fig/large_dcgan_ex.pdf             bin  0 -> 329497 bytes
-rw-r--r--  report/fig/long_dcgan.png                 bin  0 -> 18557 bytes
-rw-r--r--  report/fig/med_dcgan.png                  bin  0 -> 18041 bytes
-rw-r--r--  report/fig/med_dcgan_ex.pdf               bin  0 -> 363034 bytes
-rw-r--r--  report/fig/mix_scores.PNG                 bin  0 -> 6133 bytes
-rw-r--r--  report/fig/mix_zoom.png                   bin  0 -> 23623 bytes
-rw-r--r--  report/fig/short_dcgan.png                bin  0 -> 22431 bytes
-rw-r--r--  report/fig/short_dcgan_ex.pdf             bin  0 -> 327312 bytes
-rw-r--r--  report/paper.md                           43
12 files changed, 43 insertions, 0 deletions
diff --git a/report/build/cw1_vz215_np1915.pdf b/report/build/cw1_vz215_np1915.pdf
deleted file mode 100644
index 3a6e8a5..0000000
--- a/report/build/cw1_vz215_np1915.pdf
+++ /dev/null
Binary files differ
diff --git a/report/fig/generic_gan_loss.png b/report/fig/generic_gan_loss.png
new file mode 100644
index 0000000..701b191
--- /dev/null
+++ b/report/fig/generic_gan_loss.png
Binary files differ
diff --git a/report/fig/generic_gan_mode_collapse.pdf b/report/fig/generic_gan_mode_collapse.pdf
new file mode 100644
index 0000000..fef0c9b
--- /dev/null
+++ b/report/fig/generic_gan_mode_collapse.pdf
Binary files differ
diff --git a/report/fig/large_dcgan_ex.pdf b/report/fig/large_dcgan_ex.pdf
new file mode 100644
index 0000000..9dac5e5
--- /dev/null
+++ b/report/fig/large_dcgan_ex.pdf
Binary files differ
diff --git a/report/fig/long_dcgan.png b/report/fig/long_dcgan.png
new file mode 100644
index 0000000..4e12495
--- /dev/null
+++ b/report/fig/long_dcgan.png
Binary files differ
diff --git a/report/fig/med_dcgan.png b/report/fig/med_dcgan.png
new file mode 100644
index 0000000..9a809c9
--- /dev/null
+++ b/report/fig/med_dcgan.png
Binary files differ
diff --git a/report/fig/med_dcgan_ex.pdf b/report/fig/med_dcgan_ex.pdf
new file mode 100644
index 0000000..1acbd71
--- /dev/null
+++ b/report/fig/med_dcgan_ex.pdf
Binary files differ
diff --git a/report/fig/mix_scores.PNG b/report/fig/mix_scores.PNG
new file mode 100644
index 0000000..85d3a4f
--- /dev/null
+++ b/report/fig/mix_scores.PNG
Binary files differ
diff --git a/report/fig/mix_zoom.png b/report/fig/mix_zoom.png
new file mode 100644
index 0000000..0e40cab
--- /dev/null
+++ b/report/fig/mix_zoom.png
Binary files differ
diff --git a/report/fig/short_dcgan.png b/report/fig/short_dcgan.png
new file mode 100644
index 0000000..ea8199b
--- /dev/null
+++ b/report/fig/short_dcgan.png
Binary files differ
diff --git a/report/fig/short_dcgan_ex.pdf b/report/fig/short_dcgan_ex.pdf
new file mode 100644
index 0000000..d12fa5b
--- /dev/null
+++ b/report/fig/short_dcgan_ex.pdf
Binary files differ
diff --git a/report/paper.md b/report/paper.md
index 77d3db7..b4a2a63 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,7 +1,50 @@
+# Introduction
+
+A Generative Adversarial Network is a system in which two blocks, the discriminator and the generator, compete in a "minimax game":
+the discriminator aims to maximize and the generator to minimize the value function presented below,
+until an equilibrium is reached. During the weight updates performed through the optimization process, the generator and discriminator are
+updated in alternating cycles.
+
+$$ V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))] $$
+
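+As a minimal sketch of these alternating cycles, assuming a Keras setup in which `generator`, `discriminator` and a combined
+model `gan` (the generator followed by a frozen discriminator, both compiled with binary cross-entropy), together with `x_train`,
+`batch_size`, `latent_dim` and `n_steps`, are defined elsewhere; this is an illustration rather than the exact training loop used here:
+
+```python
+import numpy as np
+
+for step in range(n_steps):
+    # Sample a batch of real images and a batch of generated (fake) images
+    idx = np.random.randint(0, x_train.shape[0], batch_size)
+    real_batch = x_train[idx]
+    z = np.random.normal(size=(batch_size, latent_dim))
+    fake_batch = generator.predict(z)
+
+    # Discriminator step: maximize V(D,G) by pushing D(x) -> 1 and D(G(z)) -> 0
+    discriminator.train_on_batch(real_batch, np.ones((batch_size, 1)))
+    discriminator.train_on_batch(fake_batch, np.zeros((batch_size, 1)))
+
+    # Generator step: with D frozen, push D(G(z)) -> 1
+    # (the usual non-saturating surrogate for minimizing log(1 - D(G(z))))
+    z = np.random.normal(size=(batch_size, latent_dim))
+    gan.train_on_batch(z, np.ones((batch_size, 1)))
+```
+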
+Shallow architectures (**present the example we used for mode collapse**) can achieve very fast training
+while producing overall good results.
+
+One of the main issues that arises from this kind of architecture is mode collapse. As the discriminator keeps getting
+better, the generator tries to focus on one single class label to improve its loss. This issue can be observed in figure
+\ref{fig:mode_collapse}, which shows how, after 200 thousand iterations, the output of the generator represents only a few
+of the labels originally fed to train the network. At that point the loss function of the generator starts getting worse, as shown in figure
+\ref{fig:vanilla_loss}. As we can observe, G-D balance is not achieved: the discriminator loss almost reaches zero, while the generator loss keeps
+increasing.
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/generic_gan_loss.png}
+\caption{Shallow GAN D-G Loss}
+\label{fig:vanilla_loss}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=24em]{fig/generic_gan_mode_collapse.pdf}
+\caption{Shallow GAN mode collapse}
+\label{fig:mode_collapse}
+\end{center}
+\end{figure}
+
+
# DCGAN
## DCGAN Architecture description
+Insert schematic of the architecture and reference it here.
+
+The typical structure of the generator for DCGAN consists of a sequential model in which the input is fed through a dense layer and upsampled.
+The following block involves convolution, batch normalization and ReLU activation. The output is then upsampled again and fed to another
+convolution, batch normalization and ReLU activation block. The final output is obtained through a convolution and tanh activation layer.
+The depth of the convolutional layers decreases from input to output.
+
+The discriminator is designed through blocks that involve convolution, batch normalization, LeakyReLU activation and dropout.
+The depth of the convolutional layers increases from input to output.
+
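+As an illustration of these blocks, a possible Keras definition is sketched below (layer counts and filter sizes are placeholders
+for 28x28 MNIST-style images, not necessarily the exact configuration used for the results in this report):
+
+```python
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import (Dense, Reshape, UpSampling2D, Conv2D,
+                                      BatchNormalization, Activation,
+                                      LeakyReLU, Dropout, Flatten)
+
+def build_generator(latent_dim=100):
+    # Dense projection, then two upsample + conv + batch norm + ReLU blocks,
+    # finishing with a conv + tanh layer; filter depth decreases (128 -> 64 -> 1)
+    return Sequential([
+        Dense(128 * 7 * 7, input_dim=latent_dim),
+        Reshape((7, 7, 128)),
+        UpSampling2D(),
+        Conv2D(128, 3, padding='same'), BatchNormalization(), Activation('relu'),
+        UpSampling2D(),
+        Conv2D(64, 3, padding='same'), BatchNormalization(), Activation('relu'),
+        Conv2D(1, 3, padding='same'), Activation('tanh'),
+    ])
+
+def build_discriminator(img_shape=(28, 28, 1)):
+    # Conv + batch norm + LeakyReLU + dropout blocks; filter depth increases (32 -> 64 -> 128)
+    return Sequential([
+        Conv2D(32, 3, strides=2, padding='same', input_shape=img_shape),
+        BatchNormalization(), LeakyReLU(0.2), Dropout(0.25),
+        Conv2D(64, 3, strides=2, padding='same'),
+        BatchNormalization(), LeakyReLU(0.2), Dropout(0.25),
+        Conv2D(128, 3, strides=2, padding='same'),
+        BatchNormalization(), LeakyReLU(0.2), Dropout(0.25),
+        Flatten(),
+        Dense(1, activation='sigmoid'),
+    ])
+```
+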
## Tests on MNIST
Try some **different architectures, hyper-parameters**, and, if necessary, the aspects of **virtual batch