author     nunzip <np.scarh@gmail.com>  2018-11-08 14:51:56 +0000
committer  nunzip <np.scarh@gmail.com>  2018-11-08 14:51:56 +0000
commit     eec46cfafdfd221023a3e5314c1446d2d4a36b73 (patch)
tree       124c6274d2e69dc20e89efce0f38238451808a3b /report
parent     89f3737a721e3566f57a67eacb44099a37b5a3b6 (diff)
Revised part 1
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  37
1 file changed, 27 insertions(+), 10 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 78e7191..a806a58 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -2,7 +2,7 @@
The data is partitioned to allow random selection of the
same amount of samples for each class. This is done to
-prevent overfitting (?) of some classes with respect to others. In
+provide each class with equal representation in the training subspace. In
such way, each training vector space will be generated with
the same amount of elements. The test data will instead
be taken from the remaining samples. Testing on accuracy
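+
+A minimal sketch of such a balanced partition (the names below are
+illustrative assumptions, with a data matrix `X` of shape DxN and integer
+labels `y`):
+
+```python
+import numpy as np
+
+def balanced_split(X, y, per_class, seed=0):
+    # Randomly pick the same number of samples from every class for
+    # training; all remaining samples form the test set.
+    rng = np.random.default_rng(seed)
+    train = []
+    for c in np.unique(y):
+        idx = rng.permutation(np.flatnonzero(y == c))
+        train.extend(idx[:per_class])
+    train = np.sort(np.asarray(train))
+    test = np.setdiff1d(np.arange(y.size), train)
+    return X[:, train], y[train], X[:, test], y[test]
+```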
@@ -46,15 +46,17 @@ WE NEED TO ADD PHYSICAL MEANINGS
# Question 1, Application of eigenfaces
-rming the low-dimensional computation of the
+Performing the low-dimensional computation of the
eigenspace for PCA, we obtain the same accuracy results
as with the high-dimensional computation previously used. A
-comparison between eigenvalues and eigenvectors of the
+comparison between eigenvalues of the
two computation techniques used shows that the difference
-is very small. The difference we observed is due to rounding
+is very small (due to rounding
of the np.linalg.eigh function when calculating the eigenvalues
-and eigenvectors of the matrices ATA (DxD) and AAT
-(NxN).
+and eigenvectors of the matrices A\textsuperscript{T}A (NxN) and AA\textsuperscript{T}
+(DxD)).
+
The ten largest eigenvalues obtained with each method
are shown in the table below.
@@ -77,8 +79,23 @@ PCA &Fast PCA\\
\caption{Comparison of the eigenvalues obtained with the two computation methods}
\end{table}
-It can be proven that the eigenvalues and eigenvectors
-obtain are the same: ##PROVE
+It can be proven that the eigenvalues obtained are mathematically the same,
+and that there is a direct relationship between the corresponding eigenvectors:
+
+Computing the eigenvectors **u\textsubscript{i}** of the DxD matrix AA\textsuperscript{T}
+requires handling a very large matrix, and the computation becomes very expensive when D>>N.
+
+For this reason we compute the eigenvectors **v\textsubscript{i}** of the NxN
+matrix A\textsuperscript{T}A. From the computation it follows that $$ A^TA\boldsymbol{v_i} = \lambda_i\boldsymbol{v_i} $$
+
+Multiplying both sides by A we obtain: $$ AA^TA\boldsymbol{v_i} = \lambda_i A\boldsymbol{v_i} $$
+
+Defining S = AA\textsuperscript{T}, this becomes: $$ SA\boldsymbol{v_i} = \lambda_i A\boldsymbol{v_i} $$
+
+We know that $$ S\boldsymbol{u_i} = \lambda_i\boldsymbol{u_i} $$
+
+From here it follows that AA\textsuperscript{T} and A\textsuperscript{T}A share the same non-zero eigenvalues, and that their eigenvectors are related by $$ \boldsymbol{u_i} = A\boldsymbol{v_i} $$ (up to a normalisation factor, since $A\boldsymbol{v_i}$ is not in general of unit length).
+
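+A minimal numpy sketch of this equivalence (sizes and names here are
+illustrative assumptions, with `A` standing in for the mean-centred data
+matrix):
+
+```python
+import numpy as np
+
+D, N = 2576, 416           # illustrative sizes: D pixels per image, N images
+A = np.random.randn(D, N)  # stand-in for the mean-centred data matrix
+
+# Fast route: eigendecomposition of the small NxN matrix A^T A.
+# np.linalg.eigh returns eigenvalues in ascending order; flip to descending.
+l_small, V = np.linalg.eigh(A.T @ A)
+l_small, V = l_small[::-1], V[:, ::-1]
+
+# Recover the high-dimensional eigenvectors u_i = A v_i and normalise them.
+U = A @ V
+U /= np.linalg.norm(U, axis=0)
+
+# The N largest eigenvalues of the DxD matrix AA^T agree up to rounding.
+l_big = np.linalg.eigvalsh(A @ A.T)
+print(np.allclose(l_big[::-1][:N], l_small))
+```
+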
Using the computational method for fast PCA, face reconstruction is then performed.
The quality of the reconstruction will depend on the number of eigenvectors used.
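+
+As a minimal sketch (reusing the illustrative `U` from above, assuming its
+columns are ordered by decreasing eigenvalue, and a training mean
+`mean_face`):
+
+```python
+def reconstruct(x, U, mean_face, M):
+    # Express the centred face in the first M principal components,
+    # then map the coefficients back to pixel space.
+    phi = x - mean_face
+    w = U[:, :M].T @ phi
+    return mean_face + U[:, :M] @ w
+```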
@@ -97,8 +114,8 @@ quantity of components to improve reconstruction quality.
![Variance Ratio](fig/variance.pdf)
-The analysed classification methods used for face recognition are *Nearest Neighbor* and
-*alternative method* through reconstruction error.
+The classification methods analysed for face recognition are **Nearest Neighbor** and
+an **alternative method** based on reconstruction error (both sketched below).
EXPLAIN THE METHODS
REFER TO ACCURACY GRAPH 1 FOR NN. MAYBE WE CAN ALSO ADD SAME GRAPH WITH DIFFERENT K
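+
+A sketch of both classifiers under stated assumptions: `U` and `mean_face`
+are as above; `X_train`, `y_train` and `class_models` are illustrative
+names, and the **alternative method** is read here as per-class
+reconstruction error:
+
+```python
+import numpy as np
+
+def nn_classify(x, X_train, y_train, U, mean_face, M):
+    # Nearest Neighbour: label of the closest training face in PCA space.
+    P = U[:, :M]                                    # first M eigenfaces
+    W_train = (X_train - mean_face[:, None]).T @ P  # (N_train, M) projections
+    w = P.T @ (x - mean_face)                       # test projection, (M,)
+    return y_train[np.argmin(np.linalg.norm(W_train - w, axis=1))]
+
+def recon_error_classify(x, class_models):
+    # Alternative method: choose the class whose subspace reconstructs x
+    # with the smallest error. class_models maps label -> (mean_c, U_c).
+    errors = {}
+    for label, (mean_c, U_c) in class_models.items():
+        phi = x - mean_c
+        recon = U_c @ (U_c.T @ phi)   # projection onto the class subspace
+        errors[label] = np.linalg.norm(phi - recon)
+    return min(errors, key=errors.get)
+```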