Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  8
1 file changed, 4 insertions, 4 deletions
diff --git a/report/paper.md b/report/paper.md
index 11d8c36..fbc4eb3 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -220,7 +220,7 @@ almost reaches 100%.
\end{center}
\end{figure}
-We conduct one experiment, feeding the test set to a L2-Net trained exclusively on data generated from our CGAN. It is noticeable that training
+We conduct one experiment, feeding the test set to a LeNet trained exclusively on data generated from our CGAN. It is noticeable that training
for the first 5 epochs gives good results (figure \ref{fig:fake_only}) when compared to the learning curve obtained while training the network with only the few real samples. This
indicates that we can use the generated data to train the first steps of the network (initial weights) and then apply the real samples for 300 epochs to obtain
a finer tuning. As observed in figure \ref{fig:few_init}, the first steps of retraining show oscillation, since the fine tuning tries to adapt to the newly fed data. The maximum accuracy reached before the validation curve plateaus is 88.6%, indicating that this strategy proved to be somewhat successful at
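A minimal sketch of this two-stage schedule is given below, assuming a Keras-style LeNet and pre-built arrays `x_fake`/`y_fake` (CGAN output), `x_real`/`y_real` (the few real samples) and `x_test`/`y_test`; the names and exact layer layout are illustrative, not the code behind the reported figures.

```python
# Illustrative sketch (assumed names): pretrain LeNet on CGAN-generated
# digits for a few epochs, then fine-tune on the few real MNIST samples.
from tensorflow import keras

def build_lenet(num_classes=10):
    # LeNet-style CNN for 28x28 grayscale digit images.
    return keras.Sequential([
        keras.layers.Conv2D(6, 5, padding="same", activation="relu",
                            input_shape=(28, 28, 1)),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(16, 5, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(120, activation="relu"),
        keras.layers.Dense(84, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

lenet = build_lenet()
lenet.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Stage 1: a few epochs on generated data only, to set the initial weights.
lenet.fit(x_fake, y_fake, epochs=5, validation_data=(x_test, y_test))

# Stage 2: 300 epochs of fine tuning on the few real samples.
lenet.fit(x_real, y_real, epochs=300, validation_data=(x_test, y_test))
```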
@@ -235,7 +235,7 @@ improving testing accuracy.
\end{figure}
-We try to improve the results obtained earlier by retraining L2-Net with mixed data: few real samples and plenty of generated samples (160.000)
+We try to improve the results obtained earlier by retraining LeNet with mixed data: few real samples and plenty of generated samples (160.000)
(learning curve shown in figure \ref{fig:training_mixed}). The peak accuracy reached is 91%. We then try to remove the generated
samples and apply fine tuning using only the real samples. After 300 more epochs (figure \ref{fig:training_mixed}) the test accuracy is
boosted to 92%, making this technique the most successful attempt at improvement while using a limited amount of data from the MNIST dataset.
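The mixed-then-real schedule can be sketched in the same assumed setting as above; the stage-1 epoch count here is arbitrary, since only the 300 fine-tuning epochs are stated.

```python
# Illustrative sketch (assumed names): train on mixed real + generated data,
# then drop the generated samples and fine-tune on the real ones only.
import numpy as np

# Stage 1: mix the few real samples with the generated ones.
# (Keras shuffles each epoch by default, so the order of concatenation
#  does not matter.)
x_mixed = np.concatenate([x_real, x_fake])
y_mixed = np.concatenate([y_real, y_fake])
lenet.fit(x_mixed, y_mixed, epochs=100,        # epoch count illustrative
          validation_data=(x_test, y_test))

# Stage 2: 300 more epochs on the real samples only (fine tuning).
lenet.fit(x_real, y_real, epochs=300,
          validation_data=(x_test, y_test))
```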
@@ -298,10 +298,10 @@ separate graphs. Explain the trends in the graphs.
## Factoring classification loss into the GAN
-Classification accuracy and Inception score can be factored into the GAN to attemp to produce more realistic images. Shane Barrat and Rishi Sharma are able to indirectly optimise the inception score to over 900, and note that directly optimising for maximised Inception score produces adversarial examples [@inception-note].
+Classification accuracy and Inception score can be factored into the GAN to attempt to produce more realistic images. Shane Barratt and Rishi Sharma are able to indirectly optimise the Inception score to over 900, and note that directly optimising for a maximised Inception score produces adversarial examples [@inception-note].
Nevertheless, a pretrained static classifier may be added to the GAN model, with its classification loss incorporated into the total loss of the GAN:
-$$ L_{\textrm{total}} = \alpha L_{2-\textrm{LeNet}} + \beta L_{\textrm{generator}} $$
+$$ L_{\textrm{total}} = \alpha L_{\textrm{LeNet}} + \beta L_{\textrm{generator}} $$
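A rough sketch of how such a combined objective could be wired up, again under assumed names (a frozen, pretrained `lenet` classifier and a conditional `generator`/`discriminator` pair), with $\alpha$ and $\beta$ as free weighting hyperparameters:

```python
# Illustrative sketch (assumed names): add a frozen, pretrained LeNet
# classification loss to the usual CGAN generator loss,
# L_total = alpha * L_LeNet + beta * L_generator.
import tensorflow as tf

alpha, beta = 1.0, 1.0                    # relative weights of the two terms
cce = tf.keras.losses.CategoricalCrossentropy()
bce = tf.keras.losses.BinaryCrossentropy()
lenet.trainable = False                   # static classifier, never updated

def total_generator_loss(noise, labels):
    fake_images = generator([noise, labels], training=True)
    # Adversarial term: the discriminator should label the fakes as real.
    d_out = discriminator([fake_images, labels], training=False)
    l_generator = bce(tf.ones_like(d_out), d_out)
    # Classification term: the frozen LeNet should recover the conditioning label.
    l_lenet = cce(labels, lenet(fake_images, training=False))
    return alpha * l_lenet + beta * l_generator
```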
# References