From c97e60b3422007554e108919f9dd19b002541442 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Tue, 20 Nov 2018 18:06:54 +0000
Subject: Many space savings

---
 report/paper.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

(limited to 'report')

diff --git a/report/paper.md b/report/paper.md
index bd7ef71..734b001 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -28,7 +28,7 @@ as a sudden drop for eigenvalues after the 363rd.
 The mean image is calculated by averaging the features of the training data. Changing the randomisation seed gives
-very similar values, since the vast majority of the training
+similar values, since the majority of the training
 faces used for averaging are the same. Two mean faces obtained with different seeds for split can be seen in figure \ref{fig:mean_face}.
@@ -69,7 +69,7 @@ and eigenvectors of the matrices A\textsuperscript{T}A (NxN) and AA\textsuperscr
 are shown in Appendix, table \ref{tab:eigen}. It can be proven that the eigenvalues obtained are mathematically the same [@lecture-notes],
-and the there is a relation between the eigenvectors obtained: $\boldsymbol{u\textsubscript{i}} = A\boldsymbol{v\textsubscript{i}}$. (*Proof in appendix A*).
+and there is a relation between the eigenvectors obtained: $\boldsymbol{u\textsubscript{i}} = A\boldsymbol{v\textsubscript{i}}$. (*Proof: Appendix A*).
 Experimentally there is no consequential loss of data calculating the eigenvectors for PCA when using the low dimensional method. The main advantages of it are reduced computation time,
@@ -282,8 +282,7 @@ In this section we will perform PCA-LDA recognition with NN classification.
 Varying the values of $M_{\textrm{pca}}$ and $M_{\textrm{lda}}$ we obtain the average recognition accuracies reported in figure \ref{fig:ldapca_acc}. Peak accuracy of 93% can be observed for $M_{\textrm{pca}}=115$, $M_{\textrm{lda}}=41$;
-howeverer accuracies above 90% can be observed for $M_{\textrm{pca}}$ values between 90 and 130 and
-$M_{\textrm{lda}}$ values between 30 and 50.
+however accuracies above 90% can be observed for $90 < M_{\textrm{pca}} < 130$ and $30 < M_{\textrm{lda}} < 50$.
 Recognition accuracy is significantly higher than PCA, and the run time is roughly the same, varying between 0.11s (low $M_{\textrm{pca}}$) and 0.19s (high $M_{\textrm{pca}}$). Execution times
@@ -368,7 +367,7 @@ Fusion rules may either take the label with the highest associated confidence, o
 This technique is reliant on the model producing a confidence score for the label(s) it guesses. For K-Nearest neighbours where $K > 1$ we may produce a confidence based on the proportion of the K nearest neighbours which are the same class. For instance if $K = 5$ and 3 out of the 5 nearest neighbours are of class "C" and the other two are class "B" and "D", then we may say that the predictions are classes C, B and D, with confidence of 60%, 20% and 20% respectively. Using this technique with a large K however may be detrimental, as distance is not considered. An alternative approach of generating confidence based on the distance to the nearest neighbour may yield better results.
-In our testing we have elected to use a committee machine employing majority voting, as we identified that looking a nearest neighbour strategy with only **one** neighbour ($K=1$) performed best. Future research may attempt using weighted labeling based on neighbour distance based confidence.
+In our testing we have elected to use a committee machine employing majority voting, as we identified that a nearest neighbour strategy with only **one** neighbour ($K=1$) performed best. Future work may investigate weighted labelling using neighbour-distance-based confidence.
 
 ## Data Randomisation (Bagging)
@@ -401,9 +400,9 @@ use the 90 eigenvectors with biggest variance and picking 70 of the rest non-zer
 \end{figure}
 In figure \ref{fig:random-e} we can see the effect of ensemble size when using the biggest
-90 eigenvectors and 70 random eigenvectors. Feature space randomisation is able to increase accuracy by approximately 2% for our data. However, this improvement is dependent on the number of eigenvectors used and the number of random eigenvectors. For example, using a small fully random set of eigenvectors is detrimental to the performance (seen on \ref{fig:vaskoplot3}).
+90 constant and 70 random eigenvectors. Feature space randomisation is able to increase accuracy by approximately 2% for our data. This improvement depends on the number of eigenvectors used and on how many of them are random, e.g. a small, fully random set of eigenvectors is detrimental to performance.
 
-We noticed that an ensemble size of around 27 is the point where accuracy or error plateaus. We will use this number when performing an exhaustive search on the optimal randomness parameter.
+An ensemble size of around 27 is the point where accuracy plateaus. We will use this number when performing an exhaustive search on the optimal randomness parameter.
 
 ### Optimal randomness hyper-parameter
@@ -440,7 +439,7 @@ The red peaks on the 3d-plot represent the proportion of randomised eigenvectors
 \begin{figure}
 \begin{center}
-\includegraphics[width=17em]{fig/ensemble-cm.pdf}
+\includegraphics[width=15em]{fig/ensemble-cm.pdf}
 \caption{Ensemble confusion matrix (pre-committee)}
 \label{fig:ens-cm}
 \end{center}
-- cgit v1.2.3-54-g00ecf
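
For reference, a minimal sketch of the ensemble the patched section describes: eigenfaces computed with the low-dimensional A\textsuperscript{T}A trick, each committee member classifying with 1-NN on the 90 highest-variance eigenvectors plus 70 randomly chosen remaining non-zero ones, combined by majority voting. This is not the report's actual code; it assumes numpy, integer class labels starting at 0, and illustrative function names.

```python
import numpy as np

def pca_eigenfaces(X):
    """Mean face and eigenfaces of X (n_samples x n_features), sorted by
    decreasing eigenvalue, via the low-dimensional trick (u_i = A v_i)."""
    mean = X.mean(axis=0)
    A = (X - mean).T                        # n_features x n_samples
    w, V = np.linalg.eigh(A.T @ A)          # small n_samples x n_samples problem
    nz = int((w > 1e-10 * w.max()).sum())   # keep only non-zero eigenvalues
    order = np.argsort(w)[::-1][:nz]        # eigh returns ascending eigenvalues
    U = A @ V[:, order]                     # lift back to image space: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)          # unit-norm eigenfaces
    return mean, U

def random_subspace(U, m_const=90, m_rand=70, rng=None):
    """Keep the m_const leading eigenvectors plus m_rand random ones from the rest."""
    rng = np.random.default_rng() if rng is None else rng
    rest = rng.choice(np.arange(m_const, U.shape[1]), size=m_rand, replace=False)
    return U[:, np.concatenate([np.arange(m_const), rest])]

def nn_predict(W_train, y_train, W_test):
    """1-NN (K=1) classification in the projected subspace."""
    d = np.linalg.norm(W_test[:, None, :] - W_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

def committee_predict(X_train, y_train, X_test, n_members=27, seed=0):
    """Majority-voting committee of 1-NN members on randomised eigenvector subsets."""
    rng = np.random.default_rng(seed)
    y_train = np.asarray(y_train)
    mean, U = pca_eigenfaces(X_train)
    votes = []
    for _ in range(n_members):
        B = random_subspace(U, rng=rng)     # each member sees a different subspace
        votes.append(nn_predict((X_train - mean) @ B, y_train, (X_test - mean) @ B))
    votes = np.stack(votes)                 # n_members x n_test
    return np.array([np.bincount(v).argmax() for v in votes.T])
```

The defaults (90 constant plus 70 random eigenvectors, 27 members, $K=1$) mirror the values quoted in the section and assume enough training images for at least 160 non-zero eigenvalues; everything else is an assumption for illustration.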