author    Vasil Zlatanov <v@skozl.com>  2018-11-20 19:11:15 +0000
committer Vasil Zlatanov <v@skozl.com>  2018-11-20 19:11:15 +0000
commit    057f26f539c2d2003be11d49793898daf2031d91 (patch)
tree      13bf5b00eeb6ff40f15c67a8908a8cc5b21972ba /report
parent    71a011acba32d2a184567ae12d66c6963ba7c3b9 (diff)
Cover Ms in $s
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md | 18
1 file changed, 9 insertions, 9 deletions
diff --git a/report/paper.md b/report/paper.md
index 7c91d0f..531322e 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -93,10 +93,10 @@ in fig.\ref{fig:face10rec} with respective $M$ values of $M=10, M=100, M=200, M=
![Reconstructed Face C2\label{fig:face10rec}](fig/face10rec.pdf)
-It is visible that the improvement in reconstruction is marginal for M=200
-and M=300. For this reason choosing $M$ larger than 100 gives very marginal returns.
+It is visible that the improvement in reconstruction is marginal for $M=200$
+and $M=300$. For this reason, choosing $M$ larger than 100 gives diminishing returns.
This is evident when looking at the variance ratio of the principal components, as their contribution is very low for values above 100.
-With M=100 we are be able to reconstruct effectively 97% of the information from our initial training data.
+With $M=100$ we are able to reconstruct effectively 97% of the information from our initial training data.
Refer to figure \ref{fig:eigvariance} for the data variance associated with each of the $M$
eigenvalues.
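
As a sketch of how this cut-off can be read off the variance ratio (a minimal NumPy version, assuming the training faces are the rows of an array `X`; the function name and the 97% target are illustrative):

```python
import numpy as np

def smallest_m(X, target=0.97):
    """Smallest M whose cumulative variance ratio reaches the target."""
    A = X - X.mean(axis=0)                  # centre the training faces
    s = np.linalg.svd(A, compute_uv=False)  # singular values of centred data
    ratio = np.cumsum(s**2) / np.sum(s**2)  # cumulative eigenvalue ratio
    return int(np.searchsorted(ratio, target)) + 1
```

If the 97% figure quoted above holds, this would return a value near $M=100$ for the training set used.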
@@ -122,7 +122,7 @@ A confusion matrix showing success and failure cases for Nearest Neighbor classi
\begin{figure}
\begin{center}
\includegraphics[width=15em]{fig/pcacm.pdf}
-\caption{Confusion Matrix PCA and NN, M=99}
+\caption{Confusion Matrix PCA and NN, $M=99$}
\label{fig:cm}
\end{center}
\end{figure}
@@ -168,7 +168,7 @@ of the test image and the class of the subspace that generated the minimum recon
error is assigned.
The alternative method shows better overall performance (see figure \ref{fig:altacc}), with a peak accuracy of 69%
-for M=5. The maximum M non zero eigenvectors that can be used will in this case be at most
+for $M=5$. The maximum $M$ non zero eigenvectors that can be used will in this case be at most
the amount of training samples per class minus one, since the same amount of eigenvectors
will be used for each generated class-subspace.
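
The alternative method lends itself to a short sketch (a minimal NumPy version, assuming `X_by_class` maps each class label to a matrix of flattened training faces; all names are illustrative):

```python
import numpy as np

def class_subspaces(X_by_class, m):
    """Per-class mean and top-m eigenvectors (m at most samples per class - 1)."""
    bases = {}
    for c, X in X_by_class.items():
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        bases[c] = (mu, Vt[:m])          # rows of Vt span the class subspace
    return bases

def classify(x, bases):
    """Assign the class whose subspace reconstructs x with minimum error."""
    def err(item):
        mu, V = item[1]
        rec = mu + (x - mu) @ V.T @ V    # reconstruction in the class subspace
        return np.linalg.norm(x - rec)
    return min(bases.items(), key=err)[0]
```
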
A major drawback is the increase in execution time (from table \ref{tab:time}, 1.1s on average). However, the total memory used with the alternative
@@ -178,7 +178,7 @@ memory associated with storing the different eigenvectors is deallocated, the to
\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/alternative_accuracy.pdf}
-\caption{Accuracy of Alternative Method varying M}
+\caption{Accuracy of Alternative Method varying $M$}
\label{fig:altacc}
\end{center}
\end{figure}
@@ -189,7 +189,7 @@ can be observed in figure \ref{fig:cm-alt}.
\begin{figure}
\begin{center}
\includegraphics[width=15em]{fig/altcm.pdf}
-\caption{Confusion Matrix for alternative method, M=5}
+\caption{Confusion Matrix for alternative method, $M=5$}
\label{fig:cm-alt}
\end{center}
\end{figure}
@@ -281,7 +281,7 @@ In this section we will perform PCA-LDA recognition with NN classification.
Varying the values of $M_{\textrm{pca}}$ and $M_{\textrm{lda}}$, we obtain the average recognition accuracies
reported in figure \ref{fig:ldapca_acc}. Peak accuracy of 93% can be observed for $M_{\textrm{pca}}=115$, $M_{\textrm{lda}}=41$;
-howeverer accuracies above 90% can be observed for $130 > M_{\textrm{pca}} 90$ and $ 50 > M_{\textrm{lda}} > 30$ values between 30 and 50.
+however, accuracies above 90% can be observed for $130 > M_{\textrm{pca}} > 90$ and $50 > M_{\textrm{lda}} > 30$.
Recognition accuracy is significantly higher than with PCA alone, and the run time is roughly the same,
varying between 0.11s (low $M_{\textrm{pca}}$) and 0.19s (high $M_{\textrm{pca}}$). Execution times
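
As a sketch of the PCA-LDA pipeline with NN classification, written here with scikit-learn for brevity (the report's own implementation may differ; `X_train`, `y_train`, `X_test`, `y_test` are assumed to hold the flattened faces and their labels):

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# peak-accuracy values quoted above: M_pca = 115, M_lda = 41
model = make_pipeline(
    PCA(n_components=115),
    LinearDiscriminantAnalysis(n_components=41),
    KNeighborsClassifier(n_neighbors=1),   # NN classification
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))         # mean recognition accuracy
```
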
@@ -415,7 +415,7 @@ The optimal number of constant and random eigenvectors to use is therefore an in
\begin{figure}
\begin{center}
\includegraphics[width=19em]{fig/vaskplot3.pdf}
-\caption{Accuracy when varying M and Randomness Parameter}
+\caption{Accuracy when varying $M$ and Randomness Parameter}
\label{fig:opti-rand}
\end{center}
\end{figure}
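
One way the randomness parameter can be realised is sketched below: each ensemble member keeps the top eigenvectors constant and draws the rest at random from the remainder (this interpretation and all names are assumptions, not the report's exact scheme):

```python
import numpy as np

def pick_eigenvectors(V, m, r, rng=None):
    """Select m rows of V (eigenvectors sorted by eigenvalue):
    the top m - r are kept constant, the other r are drawn at
    random from the remainder; r is the randomness parameter."""
    if rng is None:
        rng = np.random.default_rng()
    pool = np.arange(m - r, len(V))          # indices beyond the constant block
    rand = rng.choice(pool, size=r, replace=False)
    return np.vstack([V[: m - r], V[rand]])
```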