author    nunzip <np.scarh@gmail.com>  2018-11-19 22:27:08 +0000
committer nunzip <np.scarh@gmail.com>  2018-11-19 22:27:08 +0000
commit    fcc4990e364ab0df19cec513cda90f3f49e2efae (patch)
tree      c48d1a5f71e1635918d42feedf2792e7a3323709 /report
parent    d7f69b70517148af31369f77a78296a0dc85b9c2 (diff)
More report shrinking
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md | 39
1 file changed, 31 insertions(+), 8 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index b9392e6..e67b3d0 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -399,7 +399,7 @@ So far we have established a combined PCA-LDA model which has good recognition w
## Committee Machine Design
-Since each model in the ensemble outputs its own predicted labels, we need to defined a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
+Since each model in the ensemble outputs its own predicted labels, we need to define a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
### Majority Voting
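A hard majority vote keeps, for each test sample, the label predicted by the largest number of ensemble members. A minimal sketch of this combination rule (illustrative only; `majority_vote` is not from the report's code):

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model predicted labels (n_models x n_samples)
    by keeping, for each sample, the most frequent label."""
    predictions = np.asarray(predictions)
    # bincount over each column (one column per sample) gives per-label
    # vote counts; argmax picks the winning label.
    return np.array([np.bincount(col).argmax() for col in predictions.T])
```

Ties are broken towards the smaller label by `argmax`; a weighted variant could scale each model's vote by its validation accuracy.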
@@ -422,14 +422,26 @@ The first strategy which we may use when using ensemble learning is randomisatio
Bagging is performed by generating each ensemble member's dataset by randomly sampling with replacement. We chose to perform bagging independently for each face such that we maintain the same training and testing split ratio with and without bagging.
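The per-face bootstrap described above can be sketched as follows (a sketch under our reading of the procedure; `bag_per_face` and its signature are illustrative, not the report's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def bag_per_face(X, y, n_models):
    """Build n_models bootstrap datasets, resampling with replacement
    independently within each face (class) so that every class keeps
    its original number of training samples."""
    datasets = []
    for _ in range(n_models):
        idx = []
        for label in np.unique(y):
            face_idx = np.flatnonzero(y == label)
            # Sample with replacement within this face only.
            idx.extend(rng.choice(face_idx, size=face_idx.size, replace=True))
        idx = np.asarray(idx)
        datasets.append((X[idx], y[idx]))
    return datasets
```

Because each class is resampled separately, no bootstrap replicate can over- or under-represent a face, which is what preserves the split ratio.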
-![Ensemble size effect on accuracy with bagging\label{fig:bagging-e}](fig/bagging.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/bagging.pdf}
+\caption{Ensemble size effect on accuracy with bagging}
+\label{fig:bagging-e}
+\end{center}
+\end{figure}
## Feature Space Randomisation
Feature space randomisation involves randomising the features which are analysed by the model. In the case of PCA-LDA this can be achieved by randomising the eigenvectors used when performing the PCA step. For instance, instead of choosing the 120 most variant eigenfaces, we may choose the 90 eigenvectors with the largest variance and pick 70 of the remaining non-zero eigenvectors at random.
-![Ensemble size effect on accraucy with 160 eeigen values (m_c=90,m_r=70)\label{fig:random-e}](fig/random-ensemble.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/random-ensemble.pdf}
+\caption{Ensemble size effect on accuracy with 160 eigenvalues ($m_c=90$, $m_r=70$)}
+\label{fig:random-e}
+\end{center}
+\end{figure}
In figure \ref{fig:random-e} we can see the effect of ensemble size when using the biggest 90 eigenvalues and 70 random eigenvalues.
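The eigenvector selection just described can be sketched as below, assuming the eigenfaces are stored as columns sorted by descending eigenvalue (the function name and defaults are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_eigenface_subset(eigvecs, m_c=90, m_r=70):
    """Keep the m_c most variant eigenvectors (columns, sorted by
    descending eigenvalue) plus m_r of the remaining non-zero
    eigenvectors picked at random without replacement."""
    constant = eigvecs[:, :m_c]
    pool = np.arange(m_c, eigvecs.shape[1])
    random_cols = rng.choice(pool, size=m_r, replace=False)
    return np.hstack([constant, eigvecs[:, random_cols]])
```

Each ensemble member draws its own random tail, so the models share the most variant directions but differ in the rest of their PCA basis.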
@@ -441,18 +453,29 @@ The randomness hyper-parameter regarding feature space randomsiation can be defi
The optimal number of constant and random eigenvectors to use is therefore an interesting question.
-![Optimal M and Randomness Hyperparameter\label{fig:opti-rand}](fig/vaskplot1.pdf)
-![Optimal M and Randomness Hyperparameter\label{fig:opti-rand2}](fig/vaskplot3.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=23em]{fig/vaskplot3.pdf}
+\caption{Recognition accuracy varying M and Randomness Parameter}
+\label{fig:opti-rand}
+\end{center}
+\end{figure}
The optimal randomness found by an exhaustive search, as seen in figure \ref{fig:opti-rand}, peaks at 95 randomised eigenvalues out of 155 total eigenvalues, i.e. 60 static and 95 random eigenvalues. The value of $M_{\textrm{lda}}$ in the figure is the maximum of 51.
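The exhaustive search over the randomness hyper-parameter can be sketched as a simple grid search; the `evaluate` callback, standing in for training and testing one ensemble, is a hypothetical placeholder:

```python
def search_randomness(evaluate, m_total=155, step=5):
    """Try every split of m_total eigenvectors into m_c constant and
    m_r random ones; return the best (m_c, m_r) and its accuracy."""
    best_split, best_acc = None, -1.0
    for m_r in range(0, m_total + 1, step):
        acc = evaluate(m_total - m_r, m_r)  # assumed to return test accuracy
        if acc > best_acc:
            best_split, best_acc = (m_total - m_r, m_r), acc
    return best_split, best_acc
```

In practice each `evaluate` call is expensive, so a coarse `step` followed by a finer pass around the best split keeps the search tractable.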
## Comparison
-Combining bagging and feature space we are able to achieve higher test accuracy then individual model.
+Combining bagging and feature space randomisation we are able to achieve higher test accuracy than the individual models.
-### Ensemmble Confusion Matrix
+### Ensemble Confusion Matrix
-![Ensemble confusion matrix\label{fig:ens-cm}](fig/ensemble-cm.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/ensemble-cm.pdf}
+\caption{Ensemble confusion matrix}
+\label{fig:ens-cm}
+\end{center}
+\end{figure}
# References