Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md | 84
1 file changed, 46 insertions(+), 38 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 94a0e7c..0d91bfe 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -70,46 +70,11 @@ two computation techniques used shows that the difference
is very small (due to rounding
of the np.linalg.eigh function when calculating the eigenvalues
and eigenvectors of the matrices A\textsuperscript{T}A (NxN) and AA\textsuperscript{T}
-(DxD)).
-
-The first ten biggest eigenvalues obtained with each method
-are shown in table \ref{tab:eigen}.
-
-\begin{table}[ht]
-\centering
-\begin{tabular}[t]{cc}
-PCA &Fast PCA\\
-2.9755E+05 &2.9828E+05\\
-1.4873E+05 &1.4856E+05\\
-1.2286E+05 &1.2259E+05\\
-7.5084E+04 &7.4950E+04\\
-6.2575E+04 &6.2428E+04\\
-4.7024E+04 &4.6921E+04\\
-3.7118E+04 &3.7030E+04\\
-3.2101E+04 &3.2046E+04\\
-2.7871E+04 &2.7814E+04\\
-2.4396E+04 &2.4339E+04\\
-\end{tabular}
-\caption{Comparison of eigenvalues obtain with the two computation methods}
-\label{tab:eigen}
-\end{table}
+(DxD)). The ten largest eigenvalues obtained with each method
+are shown in Table \ref{tab:eigen} in the appendix.
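+
+As a quick illustration, the agreement can be checked numerically. A minimal
+sketch, assuming A is the DxN matrix of mean-centred samples; the dimensions
+and data below are random stand-ins, not the report's actual face data:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+D, N = 500, 100                      # stand-in dimensions with D >> N
+A = rng.standard_normal((D, N))      # stand-in for the centred data matrix
+
+# Standard PCA: eigenvalues of the DxD matrix AA^T.
+w_pca = np.linalg.eigh(A @ A.T)[0]
+
+# Fast PCA: eigenvalues of the NxN matrix A^T A.
+w_fast = np.linalg.eigh(A.T @ A)[0]
+
+# The ten largest eigenvalues agree up to rounding.
+print(np.sort(w_pca)[::-1][:10])
+print(np.sort(w_fast)[::-1][:10])
+```
+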
It can be proven that the eigenvalues obtained are mathematically the same [@lecture-notes],
-and the there is a relation between the eigenvectors obtained:
-
-Computing the eigenvectors **u\textsubscript{i}** for the DxD matrix AA\textsuperscript{T}
-we obtain a very large matrix. The computation process can get very expensive when $D \gg N$.
-
-For such reason we compute the eigenvectors **v\textsubscript{i}** of the NxN
-matrix A\textsuperscript{T}A. From the computation it follows that $A\textsuperscript{T}A\boldsymbol{v\textsubscript{i}} = \lambda \textsubscript{i}\boldsymbol{v\textsubscript{i}}$.
-
-Multiplying both sides by A we obtain:
-
-$$ AA\textsuperscript{T}A\boldsymbol{v\textsubscript{i}} = \lambda \textsubscript{i}A\boldsymbol{v\textsubscript{i}} \rightarrow SA\boldsymbol{v\textsubscript{i}} = \lambda \textsubscript{i}A\boldsymbol{v\textsubscript{i}} $$
-
-We know that $S\boldsymbol{u\textsubscript{i}} = \lambda \textsubscript{i}\boldsymbol{u\textsubscript{i}}$.
-
-From here it follows that AA\textsuperscript{T} and A\textsuperscript{T}A have the same eigenvalues and their eigenvectors follow the relationship $\boldsymbol{u\textsubscript{i}} = A\boldsymbol{v\textsubscript{i}}$
+and that there is a relation between the eigenvectors obtained: $\boldsymbol{u}_i = A\boldsymbol{v}_i$ (*proof in the appendix*).
It can be noticed that we effectively lose no information by calculating the eigenvectors
for PCA with the second method. Its main advantages are in terms of speed,
@@ -486,3 +451,46 @@ We can compute an ensemble confusion matrix before the committee machines as sho
Combining bagging and feature space randomization, we are able to achieve higher test accuracy than the individual models. Here is a comparison for various splits.
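+
+A minimal sketch of the combined scheme, using scikit-learn and a stand-in
+dataset (the report's own face data, models and splits differ):
+
+```python
+from sklearn.datasets import load_digits
+from sklearn.ensemble import BaggingClassifier
+from sklearn.model_selection import train_test_split
+
+# Stand-in data; the report uses face images instead.
+X, y = load_digits(return_X_y=True)
+X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
+
+# Each committee member is trained on a bootstrap sample (bagging) and a
+# random subset of the features (feature space randomization).
+ensemble = BaggingClassifier(n_estimators=20,
+                             max_samples=0.8,    # bagging
+                             max_features=0.5,   # feature randomization
+                             bootstrap=True,
+                             random_state=0)
+ensemble.fit(X_tr, y_tr)
+print("ensemble test accuracy:", ensemble.score(X_te, y_te))
+```
+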
# References
+
+# Appendix
+
+## Eigenvectors and Eigenvalues in fast PCA
+
+**Table showing eigenvalues obtained with each method**
+
+\begin{table}[ht]
+\centering
+\begin{tabular}[t]{cc}
+PCA &Fast PCA\\
+2.9755E+05 &2.9828E+05\\
+1.4873E+05 &1.4856E+05\\
+1.2286E+05 &1.2259E+05\\
+7.5084E+04 &7.4950E+04\\
+6.2575E+04 &6.2428E+04\\
+4.7024E+04 &4.6921E+04\\
+3.7118E+04 &3.7030E+04\\
+3.2101E+04 &3.2046E+04\\
+2.7871E+04 &2.7814E+04\\
+2.4396E+04 &2.4339E+04\\
+\end{tabular}
+\caption{Comparison of eigenvalues obtained with the two computation methods}
+\label{tab:eigen}
+\end{table}
+
+**Proof of relationship between eigenvalues and eigenvectors in the different methods**
+
+Computing the eigenvectors **u\textsubscript{i}** of the DxD matrix AA\textsuperscript{T}
+directly means diagonalizing a very large matrix, which becomes very expensive when $D \gg N$.
+
+For this reason we instead compute the eigenvectors **v\textsubscript{i}** of the NxN
+matrix A\textsuperscript{T}A, which by definition satisfy $A^TA\boldsymbol{v}_i = \lambda_i\boldsymbol{v}_i$.
+
+Multiplying both sides by A, we obtain:
+
+$$ AA^TA\boldsymbol{v}_i = \lambda_i A\boldsymbol{v}_i \rightarrow SA\boldsymbol{v}_i = \lambda_i A\boldsymbol{v}_i $$
+
+We also know that $S\boldsymbol{u}_i = \lambda_i\boldsymbol{u}_i$, so $A\boldsymbol{v}_i$ is an eigenvector of $S$ with the same eigenvalue $\lambda_i$.
+
+From here it follows that AA\textsuperscript{T} and A\textsuperscript{T}A share the same non-zero eigenvalues, and that their eigenvectors are related by $\boldsymbol{u}_i = A\boldsymbol{v}_i$ (up to normalization).
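+
+The relationship can be verified numerically. A minimal sketch, using a random
+stand-in for the centred data matrix A with $D \gg N$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+D, N = 200, 20
+A = rng.standard_normal((D, N))
+
+w_v, V = np.linalg.eigh(A.T @ A)   # small NxN problem (fast PCA)
+w_u, U = np.linalg.eigh(A @ A.T)   # large DxD problem (standard PCA)
+
+# The N non-zero eigenvalues of AA^T match those of A^T A.
+assert np.allclose(np.sort(w_u)[::-1][:N], np.sort(w_v)[::-1])
+
+# For the largest eigenvalue, A v_i is (after normalization, up to sign)
+# the corresponding eigenvector u_i of AA^T.
+u = A @ V[:, np.argmax(w_v)]
+u /= np.linalg.norm(u)
+assert np.allclose(np.abs(u), np.abs(U[:, np.argmax(w_u)]), atol=1e-6)
+```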
+
+