author     nunzip <np.scarh@gmail.com>    2018-11-15 14:53:59 +0000
committer  nunzip <np.scarh@gmail.com>    2018-11-15 14:53:59 +0000
commit     0495464c57874e7a9ab68175f87f3dba7a6ed80d (patch)
tree       273f97827a49105cf46f3f231a4132db14dbb563 /report
parent     16bdf5149292f8f2a8c6e5957a1f644c3d81c9ab (diff)
Fix Part 2: Eliminate useless comments
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  33
1 file changed, 14 insertions, 19 deletions
diff --git a/report/paper.md b/report/paper.md
index de62e4f..9cb14c7 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -229,6 +229,8 @@ The pictures on the right show the reconstructed images.
\end{center}
\end{figure}
+From the failure and success cases analysed, it is noticeable that the parameters
+that affect recognition the most are glasses, hair, sex and brightness of the picture.
# Question 2, Generative and Discriminative Subspace Learning
@@ -282,20 +284,13 @@ are respectively positive semi-definite and positive definite. $\widetilde{J}(X)$,
similarly to the original J(X), applies Fisher's criterion in a PCA-generated subspace.
This makes it possible to perform LDA while minimizing the loss of data.
-*Proof:*
+R\textsuperscript{n} contains the optimal discriminant vectors for LDA.
+However, S\textsubscript{t} is singular and finding these vectors directly in
+such a high-dimensional space is computationally expensive. The solution to
+this issue is to derive them from a lower-dimensional space.
-REWRITE FROM HERE
-
-The set of optimal discriminant vectors can be found in R\textsuperscript{n}
-for LDA. But, this is a difficult computation because the dimension is very
-high. Besides, S\textsubscript{t} is always singular. Fortunately, it is possible
-to derive it to a lower dimensional space.
-
-Suppose **b\textsubscript{n}** are the eigenvectors of S\textsubscript{t}.
-The M biggest eigenvectors are positive: M = *rank*(S\textsubscript{t}).
-The other M+1 to compose the null space of S\textsubscript{t}.
-
-YO WTF(???)
+The M largest eigenvalues of S\textsubscript{t} are positive and non-zero, with
+*rank*(S\textsubscript{t})=M.
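
As a purely illustrative sketch (not part of the report's implementation, and with hypothetical names), the procedure described above — projecting the data onto the $M = \textrm{rank}(S_t)$ leading eigenvectors of $S_t$ and then applying Fisher's criterion inside that reduced space — could look as follows:

```python
import numpy as np

def fisher_in_pca_subspace(X, y):
    """Illustrative sketch: LDA performed in the PCA-generated subspace.

    X: (N, D) row-wise data matrix, y: (N,) class labels.
    """
    A = X - X.mean(axis=0)                        # centre the data
    St = A.T @ A                                  # total scatter matrix S_t
    _, eigvec = np.linalg.eigh(St)                # eigenvectors, ascending eigenvalue order
    M = np.linalg.matrix_rank(St)                 # number of non-zero eigenvalues
    W_pca = eigvec[:, -M:]                        # M leading eigenvectors of S_t
    Z = A @ W_pca                                 # data projected into the M-dim subspace

    # Within- and between-class scatter computed in the reduced space.
    Sw = np.zeros((M, M))
    Sb = np.zeros((M, M))
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc, mc)          # global mean is zero after centring

    # Fisher's criterion as a generalised eigenproblem (pinv used in case Sw is singular).
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return W_pca, vecs[:, order].real
```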
**Theorem:**
*$$ \textrm{For any arbitrary } \varphi \in R^{n}, \varphi \textrm{ can be uniquely expressed as } \varphi = X + \xi, $$*
@@ -304,14 +299,14 @@ *$$ \textrm{where } X \in \phi_{t} \textrm{ and } \xi \in \phi_{t}^{\perp}, \textrm{ and it satisfies } J(\varphi)=J(X). $$*
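
The step behind $J(\varphi)=J(X)$ can be sketched as follows (an added illustration, assuming the standard decomposition $S_t = S_b + S_w$ with both scatter matrices positive semi-definite):

$$ \xi \in \phi_{t}^{\perp} \Rightarrow \xi^{T} S_{t} \xi = 0 \Rightarrow S_{b}\xi = S_{w}\xi = 0, $$
$$ \textrm{hence } \varphi^{T} S_{b} \varphi = X^{T} S_{b} X \textrm{, } \varphi^{T} S_{w} \varphi = X^{T} S_{w} X \textrm{, and therefore } J(\varphi)=J(X). $$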
-According to the theorem, it is possible to find optimal discriminant
-vectors, reducing the problem dimension without any loss of information
-with respect to Fisher’s criterion.
+The theorem indicates that the optimal discriminant vectors can be derived
+in the reduced space obtained through PCA, without losing information
+with respect to Fisher's criterion.
+In conclusion, such a method is theoretically better than PCA and LDA alone.
+The Fisherfaces method requires lower computational complexity and less time
+than regular LDA, and it improves recognition performance with respect to PCA and LDA.
Fisherfaces method is effective because it requires less computation
-time than regular LDA. It also has lower error rates compared to
-Eigenfaces method. Thus, it combines the performances of discriminative
-and reconstructive tools.
# Question 3, LDA Ensemble for Face Recognition, PCA-LDA