Diffstat (limited to 'report')
-rwxr-xr-x | report/paper.md | 39
1 file changed, 31 insertions, 8 deletions
diff --git a/report/paper.md b/report/paper.md
index b9392e6..e67b3d0 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -399,7 +399,7 @@ So far we have established a combined PCA-LDA model which has good recognition w
 
 ## Committee Machine Design
 
-Since each model in the ensemble outputs its own predicted labels, we need to defined a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
+Since each model in the ensemble outputs its own predicted labels, we need to define a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
 
 ### Majority Voting
 
@@ -422,14 +422,26 @@ The first strategy which we may use when using ensemble learning is randomisatio
 
 Bagging is performed by generating each dataset for the ensembles by randomly picking with replacement. We chose to perform bagging independently for each face such that we can maintain the same training and testing split ratio with and without bagging.
 
-![Ensemble size effect on accuracy with bagging\label{fig:bagging-e}](fig/bagging.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/bagging.pdf}
+\caption{Ensemble size effect on accuracy with bagging}
+\label{fig:bagging-e}
+\end{center}
+\end{figure}
 
 ## Feature Space Randomisation
 
 Feature space randomisation involves randomising the features which are analysed by the model. In the case of PCA-LDA this can be achieved by randomising the eigenvectors used when performing the PCA step. For instance, instead of choosing the 120 most variant eigenfaces, we may choose to use the 90 eigenvectors with the biggest variance and pick 70 of the remaining non-zero eigenvectors randomly.
-![Ensemble size effect on accraucy with 160 eeigen values (m_c=90,m_r=70)\label{fig:random-e}](fig/random-ensemble.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/random-ensemble.pdf}
+\caption{Ensemble size effect on accuracy with 160 eigenvalues ($m_c=90$, $m_r=70$)}
+\label{fig:random-e}
+\end{center}
+\end{figure}
 
 In figure \ref{fig:random-e} we can see the effect of ensemble size when using the biggest 90 eigenvalues and 70 random eigenvalues.
 
@@ -441,18 +453,29 @@ The randomness hyper-parameter regarding feature space randomsiation can be defi
 
 The optimal number of constant and random eigenvectors to use is therefore an interesting question.
 
-![Optimal M and Randomness Hyperparameter\label{fig:opti-rand}](fig/vaskplot1.pdf)
-![Optimal M and Randomness Hyperparameter\label{fig:opti-rand2}](fig/vaskplot3.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=23em]{fig/vaskplot3.pdf}
+\caption{Recognition accuracy varying M and Randomness Parameter}
+\label{fig:opti-rand}
+\end{center}
+\end{figure}
 
 The optimal randomness found by an exhaustive search, as seen in figure \ref{fig:opti-rand}, peaks at 95 randomised eigenvalues out of 155 total eigenvalues, i.e. 60 static and 95 random eigenvalues. The value of $M_{\textrm{lda}}$ in the figures is the maximum of 51.
 
 ## Comparison
 
-Combining bagging and feature space we are able to achieve higher test accuracy then individual model.
+Combining bagging and feature space randomisation we are able to achieve higher test accuracy than the individual models.
 
-### Ensemmble Confusion Matrix
+### Ensemble Confusion Matrix
 
-![Ensemble confusion matrix\label{fig:ens-cm}](fig/ensemble-cm.pdf)
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/ensemble-cm.pdf}
+\caption{Ensemble confusion matrix}
+\label{fig:ens-cm}
+\end{center}
+\end{figure}
 
 # References
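The majority-voting combination described in the patch above could be sketched as follows. This is a minimal illustration, not code from the repository; the function name and array shapes are assumptions.

```python
import numpy as np

def majority_vote(predictions):
    """Combine label predictions from an ensemble by majority vote.

    predictions: (n_models, n_samples) array of predicted class labels.
    Returns the most frequent label per sample (ties go to the lowest label).
    """
    predictions = np.asarray(predictions)
    n_samples = predictions.shape[1]
    combined = np.empty(n_samples, dtype=predictions.dtype)
    for i in range(n_samples):
        labels, counts = np.unique(predictions[:, i], return_counts=True)
        combined[i] = labels[np.argmax(counts)]
    return combined

# Three models vote on four test samples.
votes = [[1, 2, 3, 0],
         [1, 2, 0, 0],
         [2, 2, 3, 1]]
print(majority_vote(votes))  # [1 2 3 0]
```

The combined response improves on an individual model whenever the members' errors are sufficiently uncorrelated, which is exactly what the randomisation strategies below aim for.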
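The per-face bagging described in the diff (resampling with replacement independently for each identity, so the train/test split ratio is preserved) could be sketched like this; the helper name and the column-per-image layout are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_per_face(X, y):
    """Resample the training set with replacement, independently per face.

    X: (D, N) data matrix, one image per column; y: (N,) identity labels.
    Keeping the per-face sample count fixed preserves the split ratio.
    """
    idx = []
    for label in np.unique(y):
        members = np.flatnonzero(y == label)
        idx.extend(rng.choice(members, size=members.size, replace=True))
    idx = np.array(idx)
    return X[:, idx], y[idx]

# Toy example: 2-dimensional features, 3 training images per face.
X = np.arange(12, dtype=float).reshape(2, 6)
y = np.array([0, 0, 0, 1, 1, 1])
Xb, yb = bag_per_face(X, y)  # each face still contributes 3 columns
```

An ensemble is then built by calling the resampler once per member and training an independent PCA-LDA model on each bagged set.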
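The feature-space randomisation step (keep the $m_c$ most variant eigenvectors, add $m_r$ drawn at random from the remaining non-zero ones) could be sketched as below; the function name is illustrative and the identity matrix stands in for a real eigenvector basis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_eigen_subset(eigvecs, m_c=90, m_r=70):
    """Pick the m_c most variant eigenvectors plus m_r random ones.

    eigvecs: (D, M) matrix of eigenvectors sorted by decreasing eigenvalue.
    Returns a (D, m_c + m_r) basis for the randomised PCA step.
    """
    fixed = eigvecs[:, :m_c]
    rest = rng.choice(np.arange(m_c, eigvecs.shape[1]), size=m_r, replace=False)
    return np.hstack([fixed, eigvecs[:, rest]])

# Placeholder basis of 200 non-zero eigenvectors for illustration.
V = np.eye(200)
W = random_eigen_subset(V)
print(W.shape)  # (200, 160)
```

Each ensemble member gets its own draw of the random $m_r$ eigenvectors, so the members see different projections of the same data.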