 report/paper.md | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 11ed9fd..a31885c 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -17,6 +17,7 @@ test_data.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/partition.pdf}
\caption{NN Recognition Accuracies for different data partitions}
+\label{accuracy}
\end{center}
\end{figure}
@@ -26,14 +27,14 @@ PCA is applied. The covariance matrix, S, of dimension
2576x2576 (features x features), will have 2576 eigenvalues
and eigenvectors. The number of non-zero eigenvalues and
eigenvectors obtained will be at most the number of
-training samples minus one. This can be observed in the
-graph below as a sudden drop for eigenvalues after the
-363rd.
+training samples minus one. This can be observed in figure \ref{logeig}
+as a sudden drop in the eigenvalues after the 363rd.
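+
+A minimal numpy sketch of this rank argument (placeholder data; the variable
+names are ours, not the report pipeline's): centring N samples leaves at most
+N-1 non-zero eigenvalues.
+
+```python
+import numpy as np
+
+# Placeholder stand-in for the training set: N=364 faces, D=2576 features
+rng = np.random.default_rng(0)
+X = rng.standard_normal((364, 2576))
+
+# Subtracting the mean face removes one degree of freedom,
+# so the covariance matrix has rank at most N-1 = 363
+A = X - X.mean(axis=0)
+S = (A.T @ A) / A.shape[0]      # D x D covariance matrix
+
+eigvals = np.linalg.eigvalsh(S)
+nonzero = np.sum(eigvals > 1e-10 * eigvals.max())
+print(nonzero)                  # 363; later eigenvalues are numerically zero
+```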
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/eigenvalues.pdf}
\caption{Log plot of all eigenvalues}
+\label{logeig}
\end{center}
\end{figure}
@@ -41,12 +42,13 @@ The mean image is calculated by averaging the features of the
training data. Changing the randomization seed will give
very similar values, since the vast majority of the training
faces used for averaging will be the same. The mean face
-for our standard seed can be observed below.
+for our standard seed can be observed in figure \ref{mean_face}.
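+
+A one-line sketch of this step (hypothetical names; assumes 46x56 images
+flattened to 2576 features):
+
+```python
+import numpy as np
+
+# Placeholder training matrix: one vectorised face per row
+rng = np.random.default_rng(0)
+X_train = rng.standard_normal((364, 2576))
+
+mean_face = X_train.mean(axis=0)          # per-feature average over the rows
+mean_image = mean_face.reshape((46, 56))  # back to image shape for display
+```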
\begin{figure}
\begin{center}
\includegraphics[width=8em]{fig/mean_face.pdf}
\caption{Mean Face}
+\label{mean_face}
\end{center}
\end{figure}
@@ -76,7 +78,7 @@ and eigenvectors of the matrices A\textsuperscript{T}A (NxN) and AA\textsuperscript{T}
(DxD)).
The ten largest eigenvalues obtained with each method
-are shown in the table below.
+are shown in table \ref{table_eigen}.
\begin{table}[ht]
\centering
@@ -94,6 +96,7 @@ PCA &Fast PCA\\
2.4396E+04 &2.4339E+04\\
\end{tabular}
\caption{Comparison of eigenvalues obtained with the two computation methods}
+\label{table_eigen}
\end{table}
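+
+The equivalence behind the fast method can be sketched in numpy (placeholder
+data; names are assumptions): every eigenvector v of A\textsuperscript{T}A
+maps to the eigenvector Av of AA\textsuperscript{T} with the same eigenvalue.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+D, N = 2576, 364
+A = rng.standard_normal((D, N))       # centred faces as columns (placeholder)
+
+S_small = (A.T @ A) / N               # N x N: cheap to decompose
+S_big = (A @ A.T) / N                 # D x D: the expensive route
+
+w, V = np.linalg.eigh(S_small)        # eigenvalues in ascending order
+U = A @ V                             # map small-space eigenvectors up
+U /= np.linalg.norm(U, axis=0)        # renormalise column-wise
+
+# Sanity check on the largest eigenpair: S_big @ u == lambda * u
+u, lam = U[:, -1], w[-1]
+print(np.allclose(S_big @ u, lam * u))   # True
+```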
It can be proven that the eigenvalues obtained are mathematically the same,
@@ -128,12 +131,12 @@ the covariance matrix, whereas method 2 requires an additional projection step.
Using the computational method for fast PCA, face reconstruction is then performed.
The quality of reconstruction will depend on the number of eigenvectors picked.
-The results of varying M can be observed in the picture below. Two faces from classes
-number 21 and 2 respectively, are reconstructed as shown below with respective M values
+The results of varying M can be observed in figures \ref{face160rec} and \ref{face10rec}. Two faces, from classes
+21 and 2 respectively, are reconstructed with M values
of M=10, M=100, M=200, M=300. The last picture is the original face.
-![Reconstructed Face C21](fig/face160rec.pdf)
-![Reconstructed Face C2](fig/face10rec.pdf)
+![Reconstructed Face C21\label{face160rec}](fig/face160rec.pdf)
+![Reconstructed Face C2\label{face10rec}](fig/face10rec.pdf)
It is already observable that the improvement in reconstruction is marginal for M=200
and M=300. For this reason, choosing M close to 100 is sufficient for this purpose.
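+
+A sketch of the reconstruction step (hypothetical names; assumes U holds the
+eigenfaces column-wise, sorted by decreasing eigenvalue):
+
+```python
+import numpy as np
+
+def reconstruct(face, mean_face, U, M):
+    """Project a vectorised face onto the first M eigenfaces and map back.
+
+    face, mean_face: (D,) arrays; U: (D, K) eigenface matrix with K >= M.
+    """
+    weights = U[:, :M].T @ (face - mean_face)  # M projection coefficients
+    return mean_face + U[:, :M] @ weights      # reconstruction from M terms
+
+# e.g. reconstruct(face, mean_face, U, M=100)
+```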
@@ -157,19 +160,20 @@ alternative method through reconstruction error.
Nearest Neighbor projects the test data onto the generated subspace and finds the closest
element to the projected test image, assigning the same class as the neighbor found.
-Recognition accuracy of NN classification can be observed in Figure 4 (CHANGE TO ALWAYS POINT AT THE GRAPH, DUNNO HOW).
+Recognition accuracy of NN classification can be observed in figure \ref{accuracy}.
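+
+A compact sketch of this classifier (names are ours; U is assumed to be the
+(D, M) eigenface matrix from the PCA step):
+
+```python
+import numpy as np
+
+def nn_classify(test_faces, train_faces, train_labels, mean_face, U):
+    """1-NN in the eigenspace: the nearest projected training face wins."""
+    W_train = (train_faces - mean_face) @ U   # (N, M) projections
+    W_test = (test_faces - mean_face) @ U     # (T, M) projections
+    # Euclidean distance between every test and training projection
+    d = np.linalg.norm(W_test[:, None, :] - W_train[None, :, :], axis=2)
+    return train_labels[np.argmin(d, axis=1)]
+```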
A confusion matrix showing success and failure cases for Nearest Neighbor classification
-can be observed below:
+can be observed in figure \ref{cm}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/pcacm.pdf}
\caption{Confusion Matrix NN, M=99}
+\label{cm}
\end{center}
\end{figure}
-Two examples of the outcome of Nearest Neighbor Classification are presented below,
+Two examples of the outcome of Nearest Neighbor Classification are presented in figures \ref{nn-fail} and \ref{nn-succ},
respectively an example of classification failure and an example of successful
classification.
@@ -177,6 +181,7 @@ classification.
\begin{center}
\includegraphics[width=7em]{fig/face2.pdf}
\includegraphics[width=7em]{fig/face5.pdf}
\caption{Failure case for NN. Test face left. NN right}
+\label{nn-fail}
\end{center}
\end{figure}
@@ -185,6 +190,7 @@ classification.
\begin{center}
\includegraphics[width=7em]{fig/success1.pdf}
\includegraphics[width=7em]{fig/success1t.pdf}
\caption{Success case for NN. Test face left. NN right}
+\label{nn-succ}
\end{center}
\end{figure}
@@ -192,11 +198,12 @@ classification.
It is possible to use an NN classification that takes majority voting into account.
With this method recognition is based on the K closest neighbors of the projected
test image. This method nevertheless showed the best recognition accuracy for PCA with
-K=1, as it can be observed from the graph below.
+K=1, as can be observed in figure \ref{k-diff}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/kneighbors_diffk.pdf}
\caption{NN recognition accuracy varying K. Split: 80-20}
+\label{k-diff}
\end{center}
\end{figure}
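+
+Majority voting extends the previous sketch with a vote over the K nearest
+projections (assumes integer class labels; names remain hypothetical):
+
+```python
+import numpy as np
+
+def knn_classify(W_test, W_train, train_labels, k=1):
+    """K-NN with majority voting on already-projected faces."""
+    d = np.linalg.norm(W_test[:, None, :] - W_train[None, :, :], axis=2)
+    knn_idx = np.argsort(d, axis=1)[:, :k]   # k closest training faces
+    votes = train_labels[knn_idx]            # (T, k) label votes
+    # Majority vote per test face; ties resolve to the smallest label
+    return np.array([np.bincount(v).argmax() for v in votes])
+```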
@@ -219,11 +226,12 @@ will be used for each generated class-subspace.
\end{figure}
A confusion matrix showing success and failure cases for alternative method classification
-can be observed below:
+can be observed in figure \ref{cm-alt}.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/altcm.pdf}
\caption{Confusion Matrix for alternative method, M=5}
+\label{cm-alt}
\end{center}
\end{figure}
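+
+The alternative method can be sketched analogously (hypothetical names;
+assumes a per-class mean and a per-class M=5 eigenface matrix with
+orthonormal columns have already been computed): the face is assigned to the
+class whose subspace reconstructs it with the smallest error.
+
+```python
+import numpy as np
+
+def alt_classify(face, class_means, class_subspaces):
+    """class_means: label -> (D,) mean; class_subspaces: label -> (D, M)."""
+    errors = {}
+    for label, U in class_subspaces.items():
+        centred = face - class_means[label]
+        recon = U @ (U.T @ centred)          # projection onto class subspace
+        errors[label] = np.linalg.norm(centred - recon)
+    return min(errors, key=errors.get)       # smallest error wins
+```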