From fe208b4e6615269422260fe87dab42cfba887498 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Tue, 20 Nov 2018 16:31:42 +0000
Subject: Lots of grammar improvements

---
 report/paper.md | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index bcb2386..809af3a 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -111,25 +111,24 @@ eigenvalues.
 
 ## Classification
 
 The analysed classification methods used for face recognition are Nearest Neighbor and
-alternative method through reconstruction error.
+an alternative method utilising reconstruction error.
 
 Nearest Neighbor projects the test data onto the generated subspace and finds the closest
-element to the projected test image, assigning the same class as the neighbor found.
+training sample to the projected test image, assigning the same class as that of the nearest neighbor.
 Recognition accuracy of NN classification can be observed in figure \ref{fig:accuracy}.
 
-A confusion matrix showing success and failure cases for Nearest Neighbor classfication
-can be observed in figure \ref{fig:cm}:
+A confusion matrix showing success and failure cases for Nearest Neighbor classification when using PCA can be observed in figure \ref{fig:cm}:
 
 \begin{figure}
 \begin{center}
 \includegraphics[width=15em]{fig/pcacm.pdf}
-\caption{Confusion Matrix NN, M=99}
+\caption{Confusion Matrix for PCA and NN, M=99}
 \label{fig:cm}
 \end{center}
 \end{figure}
 
-Two examples of the outcome of Nearest Neighbor Classification are presented in figures \ref{fig:nn_fail} and \ref{fig:nn_succ},
+Two examples of the outcome of Nearest Neighbor classification are presented in figures \ref{fig:nn_fail} and \ref{fig:nn_succ},
 respectively one example of classification failure and an example of successful
 classification.
 
@@ -153,8 +152,8 @@ classification.
 
 It is possible to use a NN classification that takes into account majority voting.
 With this method recognition is based on the K closest neighbors of the projected
-test image. Such method anyways showed the best recognition accuracies for PCA with
-K=1, as it can be observed from figure \ref{fig:k-diff}.
+test image. The method that showed the highest recognition accuracy for PCA used
+K=1, as can be seen in figure \ref{fig:k-diff}.
 
 \begin{figure}
 \begin{center}
@@ -231,8 +230,8 @@ represents the mean of each class.
 
 It can be proven that when we have a singular S\textsubscript{W} we obtain [@lecture-notes]:
 $W\textsubscript{opt} = arg\underset{W}max\frac{|W\textsuperscript{T}S\textsubscript{B}W|}{|W\textsuperscript{T}S\textsubscript{W}W|} = S\textsubscript{W}\textsuperscript{-1}(\mu\textsubscript{1} - \mu\textsubscript{2})$.
 However S\textsubscript{W} is often singular since the rank of S\textsubscript{W}
-is at most N-c and usually N is smaller than D. In such case it is possible to use
-Fisherfaces. The optimal solution to such problem lays in W\textsuperscript{T}\textsubscript{opt}
+is at most N-c and usually N is smaller than D. In this case it is possible to use
+Fisherfaces. The optimal solution to this problem lies in W\textsuperscript{T}\textsubscript{opt}
 = W\textsuperscript{T}\textsubscript{lda}W\textsuperscript{T}\textsubscript{pca}, where
 W\textsubscript{pca} is chosen to maximize the determinant of the total scatter matrix
@@ -267,7 +266,7 @@ Being $\nabla F\textsubscript{t}(e)= (1-t)Se+\frac{t}{<S\textsubscript{W}e,
 e>+\epsilon)\textsuperscript{2}S\textsubscript{W}e}$, we obtain that our goal is to
 find $\nabla F\textsubscript{t}(e)=\lambda e$, which means making $\nabla
 F\textsubscript{t}(e)$ parallel to *e*. Since S is positive semi-definite, $<\nabla
 F\textsubscript{t}(e),e> \geq 0$.
-It means that $\lambda$ needs to be greater than zero. In such case, normalizing both sides we
+This means that $\lambda$ needs to be greater than zero. Normalizing both sides, we
 obtain $\frac{\nabla F\textsubscript{t}(e)}{||\nabla F\textsubscript{t}(e)||}=e$. We can express *T(e)* as $T(e) = \frac{\alpha e+ \nabla F\textsubscript{t}(e)}{||\alpha e+\nabla F\textsubscript{t}(e)||}$
 (adding a positive multiple of *e*, $\alpha e$ to prevent $\lambda$ from vanishing).
@@ -301,8 +300,7 @@ S\textsubscript{W}(within-class scatter matrix), respectively show ranks of at m
 N-c(312 maximum for our standard 70-30 split). The rank of S\textsubscript{W} will
 have the same value of $M_{\textrm{pca}}$ for $M_{\textrm{pca}}\leq N-c$.
 
-Testing with $M_{\textrm{lda}}=50$ and $M_{\textrm{pca}}=115$ gives 92.9% accuracy. The results of such test can be
-observed in the confusion matrix shown in figure \ref{fig:ldapca_cm}.
+Testing with $M_{\textrm{lda}}=50$ and $M_{\textrm{pca}}=115$ gives 92.9% accuracy. The results of this test can be seen in the confusion matrix shown in figure \ref{fig:ldapca_cm}.
 
 \begin{figure}
 \begin{center}
@@ -334,7 +332,7 @@ Two recognition examples are reported: success in figure \ref{fig:succ_ldapca} a
 
 The PCA-LDA method allows to obtain a much higher recognition accuracy compared to
 PCA. The achieved separation between classes and reduction between inner class-distance
-that makes such results possible can be observed in figure \ref{fig:subspaces}, in which
+that makes these results possible can be observed in figure \ref{fig:subspaces}, in which
 the 3 features of the subspaces obtained are graphed.
 
 \begin{figure}
@@ -352,7 +350,7 @@ So far we have established a combined PCA-LDA model which has good recognition w
 
 ## Committee Machine Design
 
-Since each model in the ensemble outputs its own predicted labels, we need to define a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
+Since each model in the ensemble outputs its own predicted labels, we need to define a strategy for combining the predictions such that we obtain a combined response which is better than that of an individual model. For this project, we consider two committee machine designs.
 
 ### Majority Voting
-- 
cgit v1.2.3-54-g00ecf
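
To make the pipeline this patch keeps polishing concrete, below is a minimal numpy sketch of PCA-LDA followed by Nearest Neighbor classification, in the spirit of the report's description. It is an illustration only: the names (`pca_lda_nn`, `m_pca`, `m_lda`), the Gram-matrix PCA trick, and the `scipy.linalg.eigh` generalized eigensolver are assumptions, not the report's actual code; inputs are assumed to be numpy arrays with one image per row.

```python
# Illustrative sketch only -- names, shapes and solver choices are assumptions,
# not the report's actual implementation.
import numpy as np
from scipy.linalg import eigh


def pca_lda_nn(X_train, y_train, X_test, m_pca, m_lda):
    """Classify X_test (rows are images) by PCA -> LDA -> 1-NN."""
    # PCA via the N x N Gram matrix: cheaper than the D x D covariance
    # when N < D, and rank(S) <= N - 1 anyway.
    mean = X_train.mean(axis=0)
    A = X_train - mean                              # N x D
    vals, vecs = np.linalg.eigh(A @ A.T)            # N x N eigenproblem
    order = np.argsort(vals)[::-1][:m_pca]
    W_pca = A.T @ vecs[:, order]                    # D x m_pca "eigenfaces"
    W_pca /= np.linalg.norm(W_pca, axis=0)          # unit-norm columns

    P_train = A @ W_pca
    P_test = (X_test - mean) @ W_pca

    # Scatter matrices in the PCA subspace; S_W is invertible there
    # provided m_pca <= N - c, as the report notes.
    mu = P_train.mean(axis=0)
    S_B = np.zeros((m_pca, m_pca))
    S_W = np.zeros((m_pca, m_pca))
    for c in np.unique(y_train):
        Pc = P_train[y_train == c]
        d = (Pc.mean(axis=0) - mu)[:, None]
        S_B += len(Pc) * (d @ d.T)
        S_W += (Pc - Pc.mean(axis=0)).T @ (Pc - Pc.mean(axis=0))

    # Generalized eigenproblem S_B w = lambda S_W w; keep top m_lda directions.
    vals, vecs = eigh(S_B, S_W)
    W_lda = vecs[:, np.argsort(vals)[::-1][:m_lda]]

    F_train = P_train @ W_lda
    F_test = P_test @ W_lda

    # Nearest Neighbor: label of the closest projected training sample.
    dists = np.linalg.norm(F_test[:, None, :] - F_train[None, :, :], axis=2)
    return y_train[np.argmin(dists, axis=1)]
```

A K > 1 variant would take a majority vote over the K nearest training labels instead of the single closest one, which is the majority-voting NN scheme the section describes.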