author    nunzip <np.scarh@gmail.com>  2018-11-15 01:57:20 +0000
committer nunzip <np.scarh@gmail.com>  2018-11-15 01:57:20 +0000
commit    8ec8cc9aa9a9910786327e8273fb12a8256ba32d (patch)
tree      945be45b15968b814372499a9ac4c8f5f55857cd /report
parent    70692e20466329b41ea73192e177f2fb5150f1aa (diff)
Captions added
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  68
1 file changed, 55 insertions(+), 13 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 65bacca..60e1f7f 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -12,9 +12,12 @@ for training as a standard. This will allow us to give more than one
example of success and failure for each class when classifying the
test_data.
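+
+A minimal sketch of such a per-class partition (assuming the faces are stored
+one per column in a $D \times N$ numpy array `X` with integer labels `y`; the
+names and the 80/20 ratio are illustrative assumptions):
+
+```python
+import numpy as np
+
+def split_per_class(X, y, train_fraction=0.8, seed=13):
+    """Split every class separately so each appears in both sets."""
+    rng = np.random.default_rng(seed)
+    train_idx, test_idx = [], []
+    for c in np.unique(y):
+        idx = rng.permutation(np.flatnonzero(y == c))
+        cut = int(train_fraction * len(idx))
+        train_idx.extend(idx[:cut])
+        test_idx.extend(idx[cut:])
+    return X[:, train_idx], y[train_idx], X[:, test_idx], y[test_idx]
+```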
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/partition.pdf}
+\caption{NN Recognition Accuracies for different data partitions}
\end{center}
+\end{figure}
After partitioning the data into training and testing sets,
PCA is applied. The covariance matrix, S, of dimension
@@ -25,30 +28,38 @@ training samples minus one. This can be observed in the
graph below as a sudden drop for eigenvalues after the
363rd.
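+
+A sketch of this computation (assuming `X_train` holds the training faces as
+columns of a $D \times N$ matrix; all names are hypothetical):
+
+```python
+import numpy as np
+
+mean_face = X_train.mean(axis=1, keepdims=True)
+A = X_train - mean_face                # centred training data, D x N
+S = (A @ A.T) / A.shape[1]             # D x D covariance matrix
+eigvals = np.linalg.eigvalsh(S)[::-1]  # eigenvalues sorted descending
+# rank(S) <= N - 1: with N = 364 training faces only the first 363
+# eigenvalues are numerically non-zero, hence the sudden drop
+```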
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/eigenvalues.pdf}
+\caption{Log plot of all eigenvalues}
\end{center}
+\end{figure}
The mean image is calculated by averaging the features of the
training data. Changing the randomization seed will give
very similar values, since the vast majority of the training
faces used for averaging will be the same. The mean face
for our standard seed can be observed below.
-
+
+\begin{figure}
\begin{center}
\includegraphics[width=8em]{fig/mean_face.pdf}
+\caption{Mean Face}
\end{center}
+\end{figure}
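+
+The averaging itself is a single reduction over the training columns; reusing
+the names above, with `H` and `W` the (assumed) image dimensions:
+
+```python
+mean_face = X_train.mean(axis=1)    # per-pixel average of the training faces
+mean_img = mean_face.reshape(H, W)  # reshape the D-vector back into an image
+```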
To perform face recognition we choose the best M eigenvectors
associated with the largest eigenvalues. We tried
different values of M, and we found an optimal point for
M=99 with accuracy=57%. Beyond this value the accuracy starts
-to flaten, with some exceptions for points at which accuracy decreases.
-WE NEED TO ADD PHYSICAL MEANINGS
+to flatten.
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/accuracy.pdf}
+\caption{NN Recognition Accuracy varying M}
\end{center}
+\end{figure}
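+
+A sketch of the Nearest Neighbor classifier behind this curve (assuming
+`eigvecs` holds unit eigenvectors sorted by decreasing eigenvalue and that
+`A_train`/`A_test` are centred with the training mean; names hypothetical):
+
+```python
+def nn_accuracy(A_train, y_train, A_test, y_test, eigvecs, M):
+    """1-NN in the M-dimensional PCA subspace, Euclidean distance."""
+    W = eigvecs[:, :M]                 # D x M projection basis
+    P_train = W.T @ A_train            # M x N_train coefficients
+    P_test = W.T @ A_test              # M x N_test coefficients
+    dists = np.linalg.norm(P_test[:, :, None] - P_train[:, None, :], axis=0)
+    pred = y_train[np.argmin(dists, axis=1)]
+    return np.mean(pred == y_test), pred
+```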
# Question 1, Application of eigenfaces
@@ -100,6 +111,14 @@ We know that $S\boldsymbol{u}_i = \lambda_i \boldsymbol{u}_i$
From here it follows that $AA^T$ and $A^TA$ have the same non-zero eigenvalues, and that their eigenvectors are related by $\boldsymbol{u}_i = A\boldsymbol{v}_i$
+It can be noticed that we effectively lose no information by calculating the
+eigenvectors for PCA with the second method. Its main advantages are speed
+(the two methods take respectively 3.4s and 0.14s) and computational
+complexity (the eigenvectors found with the first method are extracted from
+a significantly larger matrix).
+
+The only drawback is that method 1 generates the eigenfaces directly from
+the covariance matrix, whereas method 2 requires an additional projection step.
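+
+A sketch of the fast method, reusing the centred matrix `A` from above: the
+eigenproblem is solved on the small $N \times N$ matrix and the eigenfaces are
+recovered through the projection $\boldsymbol{u}_i = A\boldsymbol{v}_i$:
+
+```python
+N = A.shape[1]
+lam, V = np.linalg.eigh((A.T @ A) / N)  # N x N problem, same non-zero eigenvalues
+order = np.argsort(lam)[::-1]           # sort descending
+lam, V = lam[order], V[:, order]
+U = A @ V[:, :N - 1]                    # u_i = A v_i: the extra projection step
+U /= np.linalg.norm(U, axis=0)          # re-normalise the N - 1 non-zero modes
+```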
Using the computational method for fast PCA, face reconstruction is then performed.
The quality of the reconstruction will depend on the number of eigenvectors picked.
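+
+Reconstruction projects a face onto the first M eigenfaces and maps the
+coefficients back to image space; a sketch, with `U` and `mean_face` as above:
+
+```python
+def reconstruct(x, mean_face, U, M):
+    """Approximate face x (a D x 1 column) from its first M PCA coefficients."""
+    W = U[:, :M]                # D x M basis of top eigenfaces
+    a = W.T @ (x - mean_face)   # M projection coefficients
+    return mean_face + W @ a    # reconstructed face
+```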
@@ -114,11 +133,15 @@ It is already observable that the improvement in reconstruction is marginal for
and M=300. For this reason, choosing M close to 100 is good enough for this purpose.
In fact, observing the variance ratio of the principal components, their
contribution is very low for components above 100, hence we will require a much higher
-quantity of components to improve reconstruction quality.
+quantity of components to improve reconstruction quality. With M=100 we can
+effectively use 97% of the information from our initial training data for reconstruction.
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/variance.pdf}
+\caption{Data variance carried by each of the M eigenvalues}
\end{center}
+\end{figure}
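+
+The 97% figure can be read off the cumulative variance ratio; a sketch,
+reusing `eigvals` sorted in descending order:
+
+```python
+ratio = np.cumsum(eigvals) / np.sum(eigvals)  # cumulative explained variance
+M_97 = int(np.searchsorted(ratio, 0.97)) + 1  # smallest M retaining 97%
+```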
The analysed classification methods used for face recognition are **Nearest Neighbor** and
the **alternative method** based on reconstruction error.
@@ -129,46 +152,65 @@ REFER TO ACCURACY GRAPH 1 FOR NN. MAYBE WE CAN ALSO ADD SAME GRAPH WITH DIFFEREN
A confusion matrix showing success and failure cases for Nearest Neighbor classification
can be observed below:
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/pcacm.pdf}
-%![Confusion Matrix NN, K=1](fig/pcacm.pdf)
+\caption{Confusion Matrix NN, M=99}
\end{center}
+\end{figure}
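+
+The matrix itself is a tally of (true, predicted) pairs; a sketch, assuming
+integer labels in 0..K-1 and `pred` from the NN classifier above:
+
+```python
+K = len(np.unique(y_test))
+cm = np.zeros((K, K), dtype=int)
+for t, p in zip(y_test, pred):
+    cm[t, p] += 1    # rows: true class, columns: predicted class
+```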
-An example of failed classification is a test face from class 2, wrongly labeled as class 5:
+Two examples of the outcome of Nearest Neighbor classification are presented below.
+The first is a failed classification of a test face from class 2, wrongly labeled as
+class 5. The second is an example of success, in which a face from class 1 was
+classified correctly.
+\begin{figure}
\begin{center}
\includegraphics[width=7em]{fig/face2.pdf}
\includegraphics[width=7em]{fig/face5.pdf}
-%![Class 2 (left) labeled as class 5 (right)](fig/failure_2_5.pdf)
+\caption{Failure case for NN: test face (left), nearest neighbor (right)}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=7em]{fig/success1.pdf}
+\includegraphics[width=7em]{fig/success1t.pdf}
+\caption{Success case for NN: test face (left), nearest neighbor (right)}
\end{center}
+\end{figure}
The alternative method shows better performance overall, with a peak accuracy of 69%
for M=5. The maximum number of non-zero eigenvectors that can be used is in this case
the number of training samples per class minus one, since the same number of eigenvectors
is used for each generated class subspace.
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/alternative_accuracy.pdf}
-%![Accuracy of Alternative Method varying M](fig/alternative_accuracy.pdf)
+\caption{Accuracy of Alternative Method varying M}
\end{center}
+\end{figure}
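+
+A sketch of this alternative classifier, with one PCA subspace per class built
+from that class's training faces only (names hypothetical):
+
+```python
+def alt_classify(x, class_means, class_bases):
+    """Label x with the class whose subspace reconstructs it best."""
+    errors = []
+    for mu, W in zip(class_means, class_bases):  # W: D x M per-class eigenfaces
+        a = W.T @ (x - mu)                       # project onto class subspace
+        errors.append(np.linalg.norm((x - mu) - W @ a))
+    return int(np.argmin(errors))                # minimum reconstruction error
+```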
A confusion matrix showing success and failure cases for alternative method classification
can be observed below:
+\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/altcm.pdf}
-%![Confusion Matrix alternative method, M=3](fig/altcm.pdf)
+\caption{Confusion Matrix for alternative method, M=5}
\end{center}
+\end{figure}
-It can be observed that even with this more accurate classification, there is one
-instance of mislabel of the same face of class 2 as class 5. An additional classification
-failure of class 6 labeled as class 7 can be observed below:
+A classification failure of class 6 labeled as class 7 can be observed below:
+\begin{figure}
\begin{center}
\includegraphics[width=14em]{fig/failure_6_7.pdf}
-%![Class 6 (left) labeled as class 7 (right)](fig/failure_6_7.pdf)
\includegraphics[width=7em]{fig/rec_6.pdf}
+\caption{Alternative method failure. From left to right: test image, labeled class, reconstructed image}
\end{center}
+\end{figure}
# Question 2, Generative and Discriminative Subspace Learning