| author | nunzip <np.scarh@gmail.com> | 2018-11-12 19:18:26 +0000 |
|---|---|---|
| committer | nunzip <np.scarh@gmail.com> | 2018-11-12 19:18:26 +0000 |
| commit | c18c0a099dbff0b7eba527cdcb31c86aba3de754 (patch) | |
| tree | 98d71abea7d662102a968735dfd9f88098ec026f | |
| parent | eec46cfafdfd221023a3e5314c1446d2d4a36b73 (diff) | |
Write part 2
-rwxr-xr-x | report/paper.md | 49 |
1 file changed, 35 insertions, 14 deletions
diff --git a/report/paper.md b/report/paper.md
index a806a58..c84530e 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,9 +1,8 @@
 # Question 1, Eigenfaces
 
 The data is partitioned to allow random selection of the
-same amount of samples for each class. This is done to
-provide the subspace with (???). In
-such way, each training vector space will be generated with
+same amount of samples for each class.
+In this way, each training vector space will be generated with
 the same amount of elements. The test data will instead
 be taken from the remaining samples. Testing on accuracy
 with respect to data partition indicates that the maximum
@@ -34,7 +33,6 @@ for our standard seed can be observed below.
 
 ![Mean Face](fig/mean_face.pdf){ width=1em }
 
-
 To perform face recognition we choose the best M eigenvectors
 associated with the largest eigenvalues. We tried
 different values of M, and we found an optimal point for
@@ -139,16 +137,39 @@ will be used for each generated class-subspace. A confusion matrix showing
 success and failure cases for alternative method classification
 can be observed below:
 
-![Confusion Matrix alternative method, M=3](fig/altcm.pdf)
-
-It can be observed that even with this more accurate classification, there is one instance
-of mislabel of the same face of class 2 as class 5. An additional classification failure
-of class 6 labeled as class 7 can be observed below:
-
-![Class 6 (left) labeled as class 7 (right)](fig/failure_6_7.pdf)
-
-# Cites
-
+![Confusion Matrix alternative method, M=3](fig/altcm.pdf)
+
+It can be observed that even with this more accurate classification, there is one
+instance where the same face of class 2 is mislabeled as class 5. An additional
+classification failure of class 6 labeled as class 7 can be observed below:
+
+![Class 6 (left) labeled as class 7 (right)](fig/failure_6_7.pdf)
+
+# Part 2
+
+We maximize the function $J(W)$ (Fisher's Criterion):
+$$ J(W) = \frac{W^T S_B W}{W^T S_W W} \textrm{ or } J(W) = \frac{W^T S_B W}{W^T S_t W} $$
+Here $S_B$ is the between-class scatter matrix, $S_W$ the within-class scatter
+matrix, and $W$ the set of projection vectors; $\mu_c$ is the mean vector of
+class $c$ and $\overline{x}$ the overall mean:
+$$ S_B = \sum_{c}(\mu_c - \overline{x})(\mu_c - \overline{x})^T $$
+$$ S_W = \sum_{c}\sum_{i \in c}(x_i - \mu_c)(x_i - \mu_c)^T $$
+To maximize $J(W)$ we differentiate with respect to $W$ and equate to zero:
+$$ \frac{d}{dW}J(W) = \frac{d}{dW}\left(\frac{W^T S_B W}{W^T S_W W}\right) = 0 $$
+$$ (W^T S_W W)\frac{d(W^T S_B W)}{dW} - (W^T S_B W)\frac{d(W^T S_W W)}{dW} = 0 $$
+$$ (W^T S_W W)2S_B W - (W^T S_B W)2S_W W = 0 $$
+Dividing through by $2(W^T S_W W)$ and writing $J$ for the ratio:
+$$ S_B W - JS_W W = 0 $$
+Multiplying by the inverse of $S_W$ we obtain:
+$$ S_W^{-1}S_B W - JW = 0 $$
+For the two-class case it follows:
+$$ W^{*} = \arg\max\left(\frac{W^T S_B W}{W^T S_W W}\right) = S_W^{-1}(\mu_1 - \mu_2) $$
+By isomorphic mapping, where $P$ contains the eigenvectors generated
+through PCA:
+$$ W = PX $$
+We can substitute for $W$ in the $J(W)$ expression, obtaining:
+$$ J(W) = \frac{X^T P^T S_B PX}{X^T P^T S_t PX} $$
+We can rewrite this expression substituting:
+$$ P^T S_B P = \widetilde{S}_B \textrm{ and } P^T S_t P = \widetilde{S}_t $$
+$$ J(W) = \widetilde{J}(X) = \frac{X^T \widetilde{S}_B X}{X^T \widetilde{S}_t X} $$
+
 # Conclusion
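The two-class closed form $W^{*} = S_W^{-1}(\mu_1 - \mu_2)$ and the PCA pre-projection $W = PX$ derived in Part 2 can be sketched numerically. This is an illustrative NumPy sketch only: the synthetic Gaussian data, the feature dimensionality, and the choice of $M = 3$ principal components are assumptions, not the report's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (a stand-in for face vectors, assumed):
# rows are samples, columns are features.
X1 = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
X2 = rng.normal(loc=1.5, scale=1.0, size=(50, 4))
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)

# Within-class scatter: S_W = sum_c sum_{i in c} (x_i - mu_c)(x_i - mu_c)^T
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

# Two-class Fisher direction: W* = S_W^{-1} (mu_1 - mu_2)
w = np.linalg.solve(Sw, mu1 - mu2)

# Projected class means separate along w by construction, since
# p1.mean() - p2.mean() = (mu_1 - mu_2)^T S_W^{-1} (mu_1 - mu_2) > 0.
p1, p2 = X1 @ w, X2 @ w

# Isomorphic mapping W = PX: project onto M PCA eigenvectors first,
# then solve the same Fisher problem in the reduced space.
X = np.vstack([X1, X2])
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:3].T                           # top M = 3 principal directions (M assumed)
Z1, Z2 = (X1 - mu) @ P, (X2 - mu) @ P
nu1, nu2 = Z1.mean(axis=0), Z2.mean(axis=0)
Sw_r = (Z1 - nu1).T @ (Z1 - nu1) + (Z2 - nu2).T @ (Z2 - nu2)
x = np.linalg.solve(Sw_r, nu1 - nu2)   # Fisher direction in PCA space
```

Projecting each sample onto `w` (or onto `x` after the PCA step) reduces the two-class decision to a one-dimensional threshold between the projected class means.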