 report/paper.md | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 9cb14c7..7b46b53 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,5 +1,7 @@
# Question 1, Eigenfaces
+## Partition and Standard PCA
+
The data is partitioned to allow random selection of the
same number of samples for each class.
In this way, each training vector space will be generated with
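
A minimal numpy sketch of such a per-class partition follows; the function name, array shapes, and the per-class count `n_train` are illustrative assumptions, not the report's actual code:

```python
import numpy as np

def partition(X, y, n_train, seed=0):
    """Randomly select the same number of training samples from each class.

    X: (n_samples, n_features) flattened face images; y: (n_samples,) labels.
    n_train: training samples drawn per class (illustrative parameter).
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))
        train_idx.extend(idx[:n_train])   # same number for every class
        test_idx.extend(idx[n_train:])    # remainder goes to the test set
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```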
@@ -61,7 +63,7 @@ to flatten.
\end{center}
\end{figure}
-# Question 1, Application of eigenfaces
+## Low-dimensional Computation of the Eigenspace
Performing the low-dimensional computation of the
eigenspace for PCA, we obtain the same accuracy results
@@ -113,13 +115,17 @@ From here it follows that AA\textsuperscript{T} and A\textsuperscript{T}A have t
It can be noticed that we effectively lose no information by calculating the eigenvectors
for PCA with the second method. Its main advantages are in terms of speed
-(since the two methods require respectively 3.4s and 0.14s), and complexity of computation
+(since the two methods require on average 3.4s and 0.14s respectively), and complexity of computation
(since the eigenvectors found with the first method are extracted from a significantly
larger matrix).
The only drawback is that with method 1 the eigenfaces are generated directly through
the covariance matrix, whereas method 2 requires an additional projection step.
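
The speed advantage comes from the size of the eigenproblem. A minimal numpy sketch of method 2, where the (D, N) layout of the centred data matrix and the name `eigenfaces_fast` are assumptions of this illustration:

```python
import numpy as np

def eigenfaces_fast(A, M):
    """Method 2: low-dimensional eigenspace computation.

    A: (D, N) mean-centred training faces with N << D. Rather than
    eigendecomposing the D x D matrix A @ A.T (method 1), decompose the
    small N x N matrix A.T @ A and map each eigenvector v back to an
    eigenface u = A @ v; both matrices share the same non-zero eigenvalues.
    """
    eigval, V = np.linalg.eigh(A.T @ A)     # N x N problem, ascending order
    order = np.argsort(eigval)[::-1][:M]    # keep the M largest eigenvalues
    U = A @ V[:, order]                     # the additional projection step
    U /= np.linalg.norm(U, axis=0)          # re-normalise the eigenfaces
    return eigval[order], U
```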
+# Question 1, Application of eigenfaces
+
+## Image Reconstruction
+
Face reconstruction is then performed using the fast PCA computation.
The quality of the reconstruction depends on the number of eigenvectors picked.
The results of varying M can be observed in the figure below. Two faces from classes
@@ -143,10 +149,12 @@ use effectively 97% of the information from our initial training data for recons
\end{center}
\end{figure}
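
The reconstruction itself reduces to a projection and a weighted sum of eigenfaces; a minimal sketch, assuming `U` and `mean_face` come from the fast PCA above:

```python
import numpy as np

def reconstruct(x, mean_face, U, M):
    """Rebuild a face from its first M eigenface coefficients.

    x: flattened face (D,); U: (D, M_max) eigenfaces. A larger M keeps
    more of the training variance and gives a sharper reconstruction.
    """
    W = U[:, :M]                  # truncate the basis to M eigenvectors
    a = W.T @ (x - mean_face)     # project onto the eigenspace
    return mean_face + W @ a      # linear combination of eigenfaces
```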
+## Classification
+
The classification methods analysed for face recognition are Nearest Neighbor and
an alternative method based on the reconstruction error.
-Nearest Neighbor projects the test data onto the generated subspace and find the closest
+Nearest Neighbor projects the test data onto the generated subspace and finds the closest
element to the projected test image, assigning it the class of the neighbor found.
The recognition accuracy of NN classification can be observed in Figure 4.
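
A minimal sketch of this NN rule, assuming the training faces were projected once onto the same eigenspace (`train_proj` and `train_labels` are illustrative names):

```python
import numpy as np

def nn_classify(x, mean_face, W, train_proj, train_labels):
    """Nearest Neighbour in the eigenspace.

    x: flattened test face (D,); W: (D, M) eigenfaces;
    train_proj: (N, M) projected training faces; train_labels: (N,).
    """
    a = W.T @ (x - mean_face)                       # project the test face
    dists = np.linalg.norm(train_proj - a, axis=1)  # L2 distance to each sample
    return train_labels[np.argmin(dists)]           # class of the closest one
```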
@@ -248,7 +256,7 @@ $$ J(W) = \frac{W\textsuperscript{T}S\textsubscript{B}W}{W\textsuperscript{T}S\t
Here S\textsubscript{B} is the between-class scatter matrix, S\textsubscript{W}
the within-class scatter matrix, and W the set of projection vectors. $\mu_c$
-represents the mean vector(???) of each class.
+represents the mean of class $c$, while $\overline{x}$ denotes the overall mean of the data.
$$ S_B = \sum\limits_{c}(\mu_c - \overline{x})(\mu_c - \overline{x})^T $$
$$ S_W = \sum\limits_{c}\sum\limits_{i\in c}(x_i - \mu_c)(x_i - \mu_c)^T $$
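
These definitions translate directly into numpy; the helper below is a sketch with an assumed row-wise (N, D) data layout:

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (S_B) and within-class (S_W) scatter, as defined above.

    X: (N, D) samples as rows; y: (N,) class labels.
    """
    D = X.shape[1]
    x_bar = X.mean(axis=0)               # overall mean of the data
    S_B = np.zeros((D, D))
    S_W = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)           # mean of class c
        d = (mu_c - x_bar)[:, None]
        S_B += d @ d.T                   # (mu_c - x_bar)(mu_c - x_bar)^T
        Xc = Xc - mu_c
        S_W += Xc.T @ Xc                 # sum_i (x_i - mu_c)(x_i - mu_c)^T
    return S_B, S_W
```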
@@ -304,7 +312,7 @@ through the reduced space obtained through PCA without losing information
according to Fisher's criterion.
In conclusion, such a method is theoretically better than LDA or PCA alone.
-The Fisherfaces method requires less computation complexity and less time than
+The Fisherfaces method has lower computational complexity and requires less time than
LDA, and it improves recognition performance with respect to PCA and LDA.
The Fisherfaces method is effective because it requires less computation
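
Combining the sketches above gives a minimal Fisherfaces pipeline; reducing with PCA first makes S\textsubscript{W} invertible, so the generalised eigenproblem can be solved directly (function names and dimensions are again illustrative):

```python
import numpy as np

def fisherfaces(X, y, M_pca, M_lda):
    """Fisherfaces sketch: PCA down to M_pca dimensions, then LDA.

    Reuses eigenfaces_fast and scatter_matrices from the sketches above.
    Returns the training mean and a (D, M_lda) projection matrix W that
    maximises Fisher's criterion J(W) on the PCA-reduced data.
    """
    mean = X.mean(axis=0)
    _, U = eigenfaces_fast((X - mean).T, M_pca)  # PCA basis, (D, M_pca)
    Z = (X - mean) @ U                           # PCA-reduced data, (N, M_pca)
    S_B, S_W = scatter_matrices(Z, y)
    eigval, V = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigval.real)[::-1][:M_lda]
    return mean, U @ V[:, order].real            # compose the two projections
```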