Diffstat (limited to 'report')
-rwxr-xr-x  report/metadata.yaml    2
-rwxr-xr-x  report/paper.md        76
2 files changed, 40 insertions, 38 deletions
diff --git a/report/metadata.yaml b/report/metadata.yaml
index 30fd7fa..99de501 100755
--- a/report/metadata.yaml
+++ b/report/metadata.yaml
@@ -4,7 +4,7 @@ author:
- name: Vasil Zlatanov, Nunzio Pucci
affilation: Imperial College
location: London, UK
- email: vz215@ic.ac.uk, np@ic.ac.uk
+ email: vz215@ic.ac.uk, np1915@ic.ac.uk
numbersections: yes
lang: en
babel-lang: english
diff --git a/report/paper.md b/report/paper.md
index 4a5e90a..d887919 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -20,14 +20,14 @@ PCA is applied. The covariance matrix, S, of dimension
2576x2576 (features x features), will have 2576 eigenvalues
and eigenvectors. The number of non-zero eigenvalues and
eigenvectors obtained will only be equal to the number of
-training samples minus one. This can be observed in figure \ref{logeig}
+training samples minus one. This can be observed in figure \ref{fig:logeig}
as a sudden drop in the eigenvalues after the 363rd.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/eigenvalues.pdf}
\caption{Log plot of all eigenvalues}
-\label{logeig}
+\label{fig:logeig}
\end{center}
\end{figure}
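To illustrate why only N-1 eigenvalues are non-zero, a minimal numpy sketch is given below. This is our own illustration rather than the code used for the report; the matrix `X_train` (D = 2576 features by N = 364 training images, one flattened image per column) is a hypothetical stand-in for the face data.

```python
import numpy as np

# Hypothetical stand-in for the training data: D = 2576 features, N = 364 images,
# stored one flattened image per column (our convention for this sketch).
D, N = 2576, 364
X_train = np.random.rand(D, N)

mean_face = X_train.mean(axis=1, keepdims=True)
A = X_train - mean_face              # centred data, D x N
S = (A @ A.T) / N                    # covariance matrix, D x D (2576 x 2576)

eigvals = np.linalg.eigvalsh(S)[::-1]        # all 2576 eigenvalues, largest first
print(np.sum(eigvals > 1e-8))                # at most N - 1 = 363 are non-zero
```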
@@ -35,14 +35,14 @@ The mean image is calculated averaging the features of the
training data. Changing the randomization seed will give
very similar values, since the vast majority of the training
faces used for averaging will be the same. Two mean faces
-obtained with different seeds for split can be observed in figure \ref{mean_face}.
+obtained with different seeds for the split can be observed in figure \ref{fig:mean_face}.
\begin{figure}
\begin{center}
\includegraphics[width=8em]{fig/mean_face.pdf}
\includegraphics[width=8em]{fig/mean2.pdf}
\caption{Mean Faces}
-\label{mean_face}
+\label{fig:mean_face}
\end{center}
\end{figure}
@@ -56,7 +56,7 @@ to flaten.
\begin{center}
\includegraphics[width=20em]{fig/accuracy.pdf}
\caption{NN Recognition Accuracy varying M}
-\label{accuracy}
+\label{fig:accuracy}
\end{center}
\end{figure}
@@ -73,7 +73,7 @@ and eigenvectors of the matrices A\textsuperscript{T}A (NxN) and AA\textsuperscr
(DxD)).
The ten largest eigenvalues obtained with each method
-are shown in table \ref{table_eigen}.
+are shown in table \ref{tab:eigen}.
\begin{table}[ht]
\centering
@@ -91,7 +91,7 @@ PCA &Fast PCA\\
2.4396E+04 &2.4339E+04\\
\end{tabular}
\caption{Comparison of eigenvalues obtained with the two computation methods}
-\label{table_eigen}
+\label{tab:eigen}
\end{table}
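A minimal sketch of the low-dimensional computation follows (again illustrative numpy with placeholder data, not the report's actual training matrix): the non-zero eigenvalues of A\textsuperscript{T}A and AA\textsuperscript{T} coincide, and the D-dimensional eigenvectors are recovered by mapping the small eigenvectors through A.

```python
import numpy as np

D, N = 2576, 364
A = np.random.rand(D, N)                     # placeholder training data
A -= A.mean(axis=1, keepdims=True)           # centre it

# Fast PCA: eigendecomposition of the small N x N matrix (1/N) A^T A
vals_small, V = np.linalg.eigh((A.T @ A) / N)
vals_small, V = vals_small[::-1], V[:, ::-1]         # sort descending

# Recover the D-dimensional eigenvectors: u_i = A v_i, then normalise.
# Only the N - 1 eigenvectors with non-zero eigenvalue are kept.
U = (A @ V)[:, :N - 1]
U /= np.linalg.norm(U, axis=0)

# Slow PCA: eigenvalues of the large D x D matrix (1/N) A A^T
vals_big = np.linalg.eigvalsh((A @ A.T) / N)[::-1]
print(np.allclose(vals_small[:10], vals_big[:10]))   # leading eigenvalues coincide
```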
It can be proven that the eigenvalues obtained are mathematically the same,
@@ -126,13 +126,13 @@ the covariance matrix, whereas method 2 requires an additional projection step.
Using the computational method for fast PCA, face reconstruction is then performed.
The quality of reconstruction will depend on the number of eigenvectors picked.
-The results of varying M can be observed in fig.\ref{face160rec}. Two faces from classes
-number 21 and 2 respectively, are reconstructed as shown in fig.\ref{face10rec} with respective M values
+The results of varying M can be observed in figure \ref{fig:face160rec}. Two faces, from classes
+21 and 2 respectively, are reconstructed as shown in figure \ref{fig:face10rec} with M values
of M=10, M=100, M=200, M=300. The last picture is the original face.
-![Reconstructed Face C21\label{face160rec}](fig/face160rec.pdf)
+![Reconstructed Face C21\label{fig:face160rec}](fig/face160rec.pdf)
-![Reconstructed Face C2\label{face10rec}](fig/face10rec.pdf)
+![Reconstructed Face C2\label{fig:face10rec}](fig/face10rec.pdf)
It is already observable that the improvement in reconstruction is marginal for M=200
and M=300. For this reason, choosing M close to 100 is sufficient for this purpose.
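Reconstruction from the M leading eigenfaces can be sketched as follows; `U` and `mean_face` are assumed to be the eigenvector matrix and mean face from the fast-PCA sketch above, and the function is an illustration rather than the report's implementation.

```python
import numpy as np

def reconstruct(face, mean_face, U, M):
    """Project a face onto the M leading eigenfaces and reconstruct it.

    face and mean_face are D x 1 column vectors; U holds unit-norm eigenfaces
    as columns (as computed in the fast-PCA sketch above).
    """
    face = face.reshape(-1, 1)
    W = U[:, :M]                        # D x M basis
    weights = W.T @ (face - mean_face)  # M projection coefficients
    return mean_face + W @ weights      # reconstructed face

# e.g. compare reconstructions for the M values used in the figures:
# for M in (10, 100, 200, 300):
#     rec = reconstruct(test_face, mean_face, U, M)
```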
@@ -140,14 +140,14 @@ Observing in fact the variance ratio of the principal components, the contributi
they will have will be very low for values above 100, hence we would require a much larger
number of components to improve reconstruction quality. With M=100 we are able to
effectively use 97% of the information from our initial training data for reconstruction.
-Refer to figure \ref{eigvariance} for the data variance associated with each of the M
+Refer to figure \ref{fig:eigvariance} for the data variance associated with each of the M
eigenvalues.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/variance.pdf}
\caption{Data variance carried by each of M eigenvalues}
-\label{eigvariance}
+\label{fig:eigvariance}
\end{center}
\end{figure}
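The 97% figure corresponds to the cumulative explained-variance ratio of the leading eigenvalues; a short sketch of how such a ratio can be computed is shown below (placeholder eigenvalues are used purely for illustration).

```python
import numpy as np

# eigvals: the non-zero covariance eigenvalues, sorted in descending order
eigvals = np.sort(np.random.rand(363))[::-1]      # placeholder values

variance_ratio = eigvals / eigvals.sum()
cumulative = np.cumsum(variance_ratio)

M = np.searchsorted(cumulative, 0.97) + 1         # smallest M reaching 97%
print(M, cumulative[99])                          # variance retained with M = 100
```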
@@ -159,20 +159,20 @@ alternative method through reconstruction error.
Nearest Neighbor projects the test data onto the generated subspace and finds the closest
element to the projected test image, assigning the same class as the neighbor found.
-Recognition accuracy of NN classification can be observed in figure \ref{accuracy}.
+Recognition accuracy of NN classification can be observed in figure \ref{fig:accuracy}.
A confusion matrix showing success and failure cases for Nearest Neighbor classification
-can be observed in figure \label{cm}:
+can be observed in figure \ref{fig:cm}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/pcacm.pdf}
-\label{cm}
+\label{fig:cm}
\caption{Confusion Matrix NN, M=99}
\end{center}
\end{figure}
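A minimal sketch of the NN classification step described above is given below (illustrative only; `U` and `mean_face` are assumed to come from the PCA step, and data matrices are stored with one image per column).

```python
import numpy as np

def nn_classify(X_train, y_train, X_test, mean_face, U, M):
    """1-NN in the M-dimensional eigenspace (illustrative sketch).

    X_train, X_test: D x N matrices, one image per column.
    y_train: numpy array of class labels for the training images.
    """
    W = U[:, :M]
    train_proj = W.T @ (X_train - mean_face)      # M x N_train
    test_proj = W.T @ (X_test - mean_face)        # M x N_test
    predictions = []
    for t in test_proj.T:
        dists = np.linalg.norm(train_proj.T - t, axis=1)
        predictions.append(y_train[np.argmin(dists)])
    return np.array(predictions)
```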
-Two examples of the outcome of Nearest Neighbor Classification are presented in figures \ref{nn_fail} and \ref{nn_succ},
+Two examples of the outcome of Nearest Neighbor Classification are presented in figures \ref{fig:nn_fail} and \ref{fig:nn_succ},
respectively one example of classification failure and an example of successful
classification.
@@ -181,7 +181,7 @@ classification.
\includegraphics[width=7em]{fig/face2.pdf}
\includegraphics[width=7em]{fig/face5.pdf}
\caption{Failure case for NN. Test face left. NN right}
-\label{nn_fail}
+\label{fig:nn_fail}
\end{center}
\end{figure}
@@ -190,20 +190,20 @@ classification.
\includegraphics[width=7em]{fig/success1.pdf}
\includegraphics[width=7em]{fig/success1t.pdf}
\caption{Success case for NN. Test face left. NN right}
-\label{nn_succ}
+\label{fig:nn_succ}
\end{center}
\end{figure}
It is possible to use a NN classification scheme that takes majority voting into account.
With this method recognition is based on the K closest neighbors of the projected
test image. However, this method showed the best recognition accuracy for PCA with
-K=1, as it can be observed from figure \ref{k-diff}.
+K=1, as can be observed in figure \ref{fig:k-diff}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/kneighbors_diffk.pdf}
\caption{NN recognition accuracy varying K. Split: 80-20}
-\label{k-diff}
+\label{fig:k-diff}
\end{center}
\end{figure}
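A majority-voting variant can be sketched as follows, reusing the projected training and test data from the previous sketch; an equivalent result could presumably be obtained with an off-the-shelf K-NN classifier, but the sketch keeps the voting explicit.

```python
import numpy as np
from collections import Counter

def knn_classify(train_proj, y_train, test_proj, K=1):
    """K-NN with majority voting in the projected eigenspace (sketch).

    train_proj, test_proj: M x N matrices of projected images; y_train: labels.
    """
    predictions = []
    for t in test_proj.T:
        dists = np.linalg.norm(train_proj.T - t, axis=1)
        nearest = y_train[np.argsort(dists)[:K]]          # labels of K closest
        predictions.append(Counter(nearest).most_common(1)[0][0])
    return np.array(predictions)
```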
@@ -212,7 +212,7 @@ subspace is generated for each class. These subspaces are then used for reconstr
of the test image and the class of the subspace that generated the minimum reconstruction
error is assigned.
-The alternative method shows overall a better performance (see figure \ref{altacc}), with peak accuracy of 69%
+The alternative method shows better overall performance (see figure \ref{fig:altacc}), with a peak accuracy of 69%
for M=5. The maximum number of non-zero eigenvectors that can be used will in this case be at most
the number of training samples per class minus one, since the same number of eigenvectors
will be used for each generated class-subspace.
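A sketch of the alternative method is given below; this is our own illustration, under the assumption that images are stored as columns and that M does not exceed the per-class sample count minus one.

```python
import numpy as np

def class_subspaces(X_train, y_train, M):
    """Per-class eigenspace (class mean + M leading eigenvectors). Sketch only.

    M must not exceed the number of training samples per class minus one.
    """
    subspaces = {}
    for c in np.unique(y_train):
        Xc = X_train[:, y_train == c]
        mean_c = Xc.mean(axis=1, keepdims=True)
        Ac = Xc - mean_c
        _, V = np.linalg.eigh(Ac.T @ Ac)       # small Nc x Nc problem, as before
        U = Ac @ V[:, ::-1][:, :M]             # M leading eigenvectors, D x M
        U /= np.linalg.norm(U, axis=0)
        subspaces[c] = (mean_c, U)
    return subspaces

def classify_by_reconstruction(face, subspaces):
    """Assign the class whose subspace reconstructs the face with minimum error."""
    face = face.reshape(-1, 1)
    errors = {}
    for c, (mean_c, U) in subspaces.items():
        rec = mean_c + U @ (U.T @ (face - mean_c))
        errors[c] = np.linalg.norm(face - rec)
    return min(errors, key=errors.get)
```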
@@ -221,22 +221,22 @@ will be used for each generated class-subspace.
\begin{center}
\includegraphics[width=20em]{fig/alternative_accuracy.pdf}
\caption{Accuracy of Alternative Method varying M}
-\label{altacc}
+\label{fig:altacc}
\end{center}
\end{figure}
A confusion matrix showing success and failure cases for the alternative method classification
-can be observed in figure \ref{cm-alt}.
+can be observed in figure \ref{fig:cm-alt}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/altcm.pdf}
\caption{Confusion Matrix for alternative method, M=5}
-\label{cm-alt}
+\label{fig:cm-alt}
\end{center}
\end{figure}
-Similarly to the NN case, we present two cases, respectively failure (figure \ref{altfail}) and success (figure \ref{altsucc}).
+Similarly to the NN case, we present two cases, respectively failure (figure \ref{fig:altfail}) and success (figure \ref{fig:altsucc}).
\begin{figure}
\begin{center}
@@ -244,7 +244,7 @@ Similarly to the NN case, we present two cases, respectively failure (figure \re
\includegraphics[width=7em]{fig/FR.JPG}
\includegraphics[width=7em]{fig/FL.JPG}
\caption{Alternative method failure. Respectively test image, reconstructed image, class assigned}
-\label{altfail}
+\label{fig:altfail}
\end{center}
\end{figure}
@@ -254,7 +254,7 @@ Similarly to the NN case, we present two cases, respectively failure (figure \re
\includegraphics[width=7em]{fig/SR.JPG}
\includegraphics[width=7em]{fig/SL.JPG}
\caption{Alternative method success. Respectively test image, reconstructed image, class assigned}
-\label{altsucc}
+\label{fig:altsucc}
\end{center}
\end{figure}
@@ -318,7 +318,7 @@ LDA and it improves recognition performances with respect to PCA and LDA.
In this section we will perform PCA-LDA recognition with NN classification.
Varying the values of M_pca and M_lda, we obtain the average recognition accuracies
-reported in figure \ref{ldapca_acc}. Peak accuracy of 93% can be observed for M_pca=115, M_lda=41;
+reported in figure \ref{fig:ldapca_acc}. Peak accuracy of 93% can be observed for M_pca=115, M_lda=41;
however, accuracies above 90% can be observed for M_pca values between 90 and 130 and
M_lda values between 30 and 50.
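A possible way to assemble the PCA-LDA-NN pipeline is sketched below using scikit-learn; this toolchain is an assumption made for illustration, not necessarily the one used to produce the results in this report.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def pca_lda_nn(m_pca=115, m_lda=41):
    """PCA -> LDA -> 1-NN pipeline, with the hyper-parameters found above."""
    return make_pipeline(
        PCA(n_components=m_pca),
        LinearDiscriminantAnalysis(n_components=m_lda),
        KNeighborsClassifier(n_neighbors=1),
    )

# scikit-learn expects one sample per row, hence the transposes:
# model = pca_lda_nn().fit(X_train.T, y_train)
# accuracy = model.score(X_test.T, y_test)
```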
@@ -329,7 +329,7 @@ vaying between 0.11s(low M_pca) and 0.19s(high M_pca).
\begin{center}
\includegraphics[width=20em]{fig/ldapca3dacc.pdf}
\caption{PCA-LDA NN Recognition Accuracy varying hyper-parameters}
-\label{ldapca_acc}
+\label{fig:ldapca_acc}
\end{center}
\end{figure}
@@ -341,24 +341,24 @@ The rank of S\textsubscript{W} will have the same value of M_pca for M_pca$\leq$
NEED MORE SCATTER MATRIX CONTENT
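For reference, a sketch of how the within-class and between-class scatter matrices can be computed (in the M_pca-dimensional PCA space) is given below; the rank bounds noted in the closing comment are standard results, included only as an illustration.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (S_W) and between-class (S_B) scatter matrices (sketch).

    X: d x N data, e.g. already projected onto the M_pca-dimensional PCA subspace.
    """
    overall_mean = X.mean(axis=1, keepdims=True)
    d = X.shape[0]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[:, y == c]
        mean_c = Xc.mean(axis=1, keepdims=True)
        S_W += (Xc - mean_c) @ (Xc - mean_c).T
        diff = mean_c - overall_mean
        S_B += Xc.shape[1] * (diff @ diff.T)
    return S_W, S_B

# rank(S_W) is at most min(M_pca, N - C) and rank(S_B) at most C - 1,
# where C is the number of classes; this is what bounds the usable M_lda.
```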
Testing with M_lda=50 and M_pca=115 gives 92.9% accuracy. The results of this test can be
-observed in the confusion matrix shown in figure \ref{ldapca_cm}.
+observed in the confusion matrix shown in figure \ref{fig:ldapca_cm}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/cmldapca.pdf}
\caption{PCA-LDA NN Recognition Confusion Matrix Mlda=50, Mpca=115}
-\label{ldapca_cm}
+\label{fig:ldapca_cm}
\end{center}
\end{figure}
-Two recognition examples are reported: success in figure \ref{succ_ldapca} and failure in figure \ref{fail_ldapca}.
+Two recognition examples are reported: success in figure \ref{fig:succ_ldapca} and failure in figure \ref{fig:fail_ldapca}.
\begin{figure}
\begin{center}
\includegraphics[width=7em]{fig/ldapcaf2.pdf}
\includegraphics[width=7em]{fig/ldapcaf1.pdf}
\caption{Failure case for PCA-LDA. Test face left. NN right}
-\label{fail_ldapca}
+\label{fig:fail_ldapca}
\end{center}
\end{figure}
@@ -367,13 +367,13 @@ Two recognition examples are reported: success in figure \ref{succ_ldapca} and f
\includegraphics[width=7em]{fig/ldapcas1.pdf}
\includegraphics[width=7em]{fig/ldapcas2.pdf}
\caption{Success case for PCA-LDA. Test face left. NN right}
-\label{succ_ldapca}
+\label{fig:succ_ldapca}
\end{center}
\end{figure}
The PCA-LDA method allows us to obtain a much higher recognition accuracy compared to PCA.
The achieved separation between classes and reduction in within-class distance
-that makes such results possible can be observed in figure \ref{subspaces}, in which
+that makes such results possible can be observed in figure \ref{fig:subspaces}, in which
the 3 features of the subspaces obtained are graphed.
\begin{figure}
@@ -381,11 +381,13 @@ the 3 features of the subspaces obtained are graphed.
\includegraphics[width=12em]{fig/SubspaceQ1.pdf}
\includegraphics[width=12em]{fig/SubspaceQL1.pdf}
\caption{Generated Subspaces (3 features). PCA on the left. PCA-LDA on the right}
-\label{subspaces}
+\label{fig:subspaces}
\end{center}
\end{figure}
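A sketch of how such a 3-feature scatter plot can be produced with matplotlib is shown below; `W_pca`, `W_fisher`, `X_test` and `y_test` are hypothetical names for the projection matrices and test data, not identifiers from the report's code.

```python
import matplotlib.pyplot as plt

def plot_subspace(proj, labels, title):
    """Scatter the first 3 projected features, coloured by class (sketch)."""
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.scatter(proj[0], proj[1], proj[2], c=labels, cmap='tab20', s=8)
    ax.set_title(title)
    plt.show()

# plot_subspace(W_pca[:, :3].T @ (X_test - mean_face), y_test, 'PCA subspace')
# plot_subspace(W_fisher[:, :3].T @ (X_test - mean_face), y_test, 'PCA-LDA subspace')
```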
# Question 3, LDA Ensemble for Face Recognition, PCA-LDA Ensemble
+
+
# References