author     nunzip <np.scarh@gmail.com>  2018-11-20 17:53:14 +0000
committer  nunzip <np.scarh@gmail.com>  2018-11-20 17:53:14 +0000
commit     bf63217316c979ec234fc51a87e08cc50dde045b
tree       41be77e8e87da66a90a08ff7ae8b568927ae9cd0  /report/paper.md
parent     1baddbf59d71015c3314af78345c194b1e14352d
Add references to the timing table
Diffstat (limited to 'report/paper.md')
-rwxr-xr-x  report/paper.md | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 01984cc..a01c9b2 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -73,7 +73,7 @@ and there is a relation between the eigenvectors obtained: $\boldsymbol{u\te
Experimentally there is no consequential loss of data when calculating the eigenvectors
for PCA with the low-dimensional method. Its main advantages are reduced computation time,
-(since the two methods require on average respectively 3.7s and 0.11s), and complexity of computation
+(since the two methods require on average 3.7s and 0.11s respectively, as shown in table \ref{tab:time}), and complexity of computation
(since the eigenvectors found with the first method are extracted from a significantly
bigger matrix).
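
The low-dimensional method referred to above can be sketched in a few lines of numpy: the eigenvectors $\boldsymbol{v}_i$ of the small $N \times N$ matrix $A^{T}A$ are mapped back to eigenfaces via $\boldsymbol{u}_i = A\boldsymbol{v}_i$. The snippet below is only an illustration under assumed shapes (a $D \times N$ data matrix with one image per column) and is not the repository's implementation.

```python
import numpy as np

def low_dim_eigenfaces(X, M):
    """Top-M eigenfaces via eig(A^T A) (N x N) instead of eig(A A^T) (D x D).

    X is assumed to be D x N: one image per column, D pixels, N samples."""
    mean = X.mean(axis=1, keepdims=True)
    A = X - mean                                  # centred data, D x N
    w, V = np.linalg.eigh(A.T @ A / A.shape[1])   # small N x N eigenproblem
    idx = np.argsort(w)[::-1][:M]                 # indices of the M largest eigenvalues
    U = A @ V[:, idx]                             # u_i = A v_i are the eigenfaces
    U /= np.linalg.norm(U, axis=0)                # renormalise each eigenface
    return mean, U, w[idx]
```
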
@@ -172,6 +172,7 @@ The alternative method shows overall a better performance (see figure \ref{fig:a
for M=5. The maximum number M of non-zero eigenvectors that can be used will in this case be at most
the number of training samples per class minus one, since the same number of eigenvectors
will be used for each generated class-subspace.
+A major drawback is the increased execution time (1.1s on average, as shown in table \ref{tab:time}).
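
The class-subspace method described above can be sketched as follows; the minimum-reconstruction-error classification rule and all names are assumptions made for illustration, with the only constraint taken from the text being that at most (samples per class minus one) eigenvectors are kept per class.

```python
import numpy as np

def fit_class_subspaces(X, y, M):
    """One PCA subspace per class; X is D x N, y is an array of N labels,
    and M is at most (training samples per class) - 1."""
    subspaces = {}
    for c in np.unique(y):
        Xc = X[:, y == c]                         # samples of class c
        mean = Xc.mean(axis=1, keepdims=True)
        A = Xc - mean
        w, V = np.linalg.eigh(A.T @ A)            # low-dimensional eigenproblem
        U = A @ V[:, np.argsort(w)[::-1][:M]]     # class-specific eigenvectors
        U /= np.linalg.norm(U, axis=0)
        subspaces[c] = (mean, U)
    return subspaces

def classify(x, subspaces):
    """Assign a test image x (1-D array of D pixels) to the class whose
    subspace reconstructs it with the smallest error."""
    errors = {}
    for c, (mean, U) in subspaces.items():
        phi = x - mean.ravel()                    # centre with the class mean
        errors[c] = np.linalg.norm(phi - U @ (U.T @ phi))
    return min(errors, key=errors.get)
```
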
\begin{figure}
\begin{center}
@@ -285,7 +286,8 @@ however accuracies above 90% can be observed for $M_{\textrm{pca}}$ values bet
$M_{\textrm{lda}}$ values between 30 and 50.
Recognition accuracy is significantly higher than with PCA, and the run time is roughly the same,
-vaying between 0.11s(low $M_{\textrm{pca}}$) and 0.19s(high $M_{\textrm{pca}}$).
+varying between 0.11s (low $M_{\textrm{pca}}$) and 0.19s (high $M_{\textrm{pca}}$). Execution times
+are displayed in table \ref{tab:time}.
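
For reference, a minimal sketch of a PCA-then-LDA pipeline of the kind evaluated here is shown below; it uses scikit-learn with placeholder $M_{\textrm{pca}}$ and $M_{\textrm{lda}}$ values and a 1-NN classifier, none of which are taken from the repository code.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def pca_lda_classifier(m_pca=100, m_lda=40):
    """PCA to m_pca dimensions, LDA to m_lda dimensions, then 1-NN matching."""
    return make_pipeline(
        PCA(n_components=m_pca),
        LinearDiscriminantAnalysis(n_components=m_lda),
        KNeighborsClassifier(n_neighbors=1),
    )

# Usage sketch (samples as rows): pca_lda_classifier().fit(X_train, y_train)
```
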
\begin{figure}
\begin{center}
@@ -456,6 +458,7 @@ Seed & Individual$(M=120)$ & Bag + Feature Ens.$(M=60+95)$\\ \hline
1 & 0.929 & 0.942 \\
5 & 0.897 & 0.910 \\ \hline
\end{tabular}
+\caption{Comparison of individual and ensemble accuracy}
\label{tab:compare}
\end{table}
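
The ensemble column of the table above fuses committee predictions; a minimal majority-vote sketch is given below (how the members themselves are built, e.g. bagging and random feature subsets, is assumed and not reproduced here).

```python
import numpy as np
from collections import Counter

def majority_vote(member_predictions):
    """Fuse committee predictions by majority vote.

    member_predictions: list of 1-D label arrays, one array per member."""
    stacked = np.stack(member_predictions)        # shape: members x test samples
    return np.array([Counter(col).most_common(1)[0][0] for col in stacked.T])
```
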
@@ -508,7 +511,7 @@ From here it follows that AA\textsuperscript{T} and A\textsuperscript{T}A have t
\begin{table}[ht]
\centering
-\begin{tabular}[t]{l|lll}
+\begin{tabular}[t]{llll}
\hline
& Best(s) & Worst(s) & Average(s) \\ \hline
PCA & 3.5 & 3.8 & 3.7 \\