author     nunzip <np.scarh@gmail.com>    2019-02-12 23:51:38 +0000
committer  nunzip <np.scarh@gmail.com>    2019-02-12 23:51:38 +0000
commit     1eee30a72578bc3983b4122b16da8f6c37529303 (patch)
tree       8b19745a4735f0c3580f7f1d8ac2bf778fc44cb5 /report
parent     84e0d358af61d42d9d39ac0eabf2f7a5b5c1c703 (diff)
Fit to 3 pages
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md  |  53
1 file changed, 24 insertions(+), 29 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 9c7e5bc..57c54f4 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -54,7 +54,7 @@ Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of tree
\begin{center}
\includegraphics[width=12em]{fig/error_depth_kmean100.pdf}
\includegraphics[width=12em]{fig/trees_kmean.pdf}
-\caption{Classification error varying tree depth (left) and forest size (right)}
+\caption{K-means classification error varying tree depth (left) and forest size (right)}
\label{fig:km-tree-param}
\end{center}
\end{figure}
@@ -63,8 +63,9 @@ Random forests will select a random number of features on which to apply a weak
\begin{figure}[H]
\begin{center}
-\includegraphics[width=18em]{fig/new_kmean_random.pdf}
-\caption{Classification error for different number of random features}
+\includegraphics[width=12em]{fig/new_kmean_random.pdf}
+\includegraphics[width=12em]{fig/p3_rand.pdf}
+\caption{Classification error for different numbers of random features; K-means (left), RF-codebooks (right)}
\label{fig:kmeanrandom}
\end{center}
\end{figure}
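The hunk above concerns the number of random features each node is allowed to try when fitting a weak learner (figure \ref{fig:kmeanrandom}). A minimal sketch of that selection step, assuming a plain NumPy implementation with an axis-aligned weak learner; the function names and the `n_candidates` parameter are illustrative and not taken from the repository:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def axis_aligned_split(X, y, n_candidates):
    # X: (n_samples, K) bag-of-words histograms reaching this node.
    # y: (n_samples,) class labels.
    # n_candidates: how many randomly drawn features the node may try.
    features = np.random.choice(X.shape[1], size=n_candidates, replace=False)
    best = None
    for f in features:
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            # Information gain of splitting on feature f at threshold t.
            gain = entropy(y) - (len(left) * entropy(left) +
                                 len(right) * entropy(right)) / len(y)
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best  # (gain, feature index, threshold)
```

Larger values of this parameter give stronger individual splits but reduce the decorrelation between trees; that is the parameter swept in figure \ref{fig:kmeanrandom}.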
@@ -80,7 +81,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{figure}[H]
\begin{center}
-\includegraphics[width=18em]{fig/2pixels_kmean.pdf}
+\includegraphics[width=14em]{fig/2pixels_kmean.pdf}
\caption{K-means classification accuracy changing the type of weak learners}
\label{fig:2pt}
\end{center}
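The hunk above attributes the extra training cost to the two-pixels test. A minimal sketch of the two kinds of node test compared in figure \ref{fig:2pt}, assuming each weak learner thresholds either one histogram bin or the difference of two; the names are illustrative, not the repository's implementation:

```python
def axis_aligned_test(x, feature, threshold):
    # One-dimensional weak learner: route the sample left if a single bin
    # of the histogram x falls below the threshold.
    return x[feature] < threshold

def two_pixel_test(x, feature_a, feature_b, threshold):
    # Two-dimensional ("two-pixels") weak learner: threshold the difference
    # of two bins. Searching over pairs of features at training time is what
    # adds the extra complexity mentioned above.
    return x[feature_a] - x[feature_b] < threshold
```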
@@ -88,7 +89,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{figure}[H]
\begin{center}
-\includegraphics[width=18em]{fig/e100k256d5_cm.pdf}
+\includegraphics[width=14em]{fig/e100k256d5_cm.pdf}
\caption{Confusion Matrix: K=256, ClassifierForestSize=100, Depth=5}
\label{fig:km_cm}
\end{center}
@@ -96,8 +97,8 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{figure}[H]
\begin{center}
-\includegraphics[width=10em]{fig/success_km.pdf}
-\includegraphics[width=10em]{fig/fail_km.pdf}
+\includegraphics[width=8em]{fig/success_km.pdf}
+\includegraphics[width=8em]{fig/fail_km.pdf}
\caption{K-means + RF Classifier: Success (left); Failure (right)}
\label{fig:km_succ}
\end{center}
@@ -110,7 +111,7 @@ which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mappi
\begin{figure}[H]
\begin{center}
-\includegraphics[width=18em]{fig/256t1_e200D5_cm.pdf}
+\includegraphics[width=14em]{fig/256t1_e200D5_cm.pdf}
\caption{Confusion Matrix: CodeBookForestSize=256; ClassifierForestSize=200; Depth=5}
\label{fig:p3_cm}
\end{center}
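For a sense of scale, plugging purely illustrative values into the two costs quoted in the hunk header above ($D = 128$-dimensional descriptors, $N = 10^{5}$ descriptors, $K = 256$ codewords, $\log$ taken base 2; these numbers are assumptions, not figures from the report):

\[
\underbrace{D\,N\,K}_{\text{K-means assignment}} = 128 \cdot 10^{5} \cdot 256 \approx 3.3 \times 10^{9},
\qquad
\underbrace{\sqrt{D}\,N \log K}_{\text{RF codebook}} \approx 11.3 \cdot 10^{5} \cdot 8 \approx 9.1 \times 10^{6}.
\]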
@@ -118,9 +119,9 @@ which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mappi
\begin{figure}[H]
\begin{center}
-\includegraphics[width=10em]{fig/success_3.pdf}
-\includegraphics[width=10em]{fig/fail_3.pdf}
-\caption{Part3: Success (left) and Failure (right)}
+\includegraphics[width=8em]{fig/success_3.pdf}
+\includegraphics[width=8em]{fig/fail_3.pdf}
+\caption{RF Codebooks + RF Classifier: Success (left); Failure (right)}
\label{fig:p3_succ}
\end{center}
\end{figure}
@@ -129,36 +130,20 @@ which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mappi
\begin{center}
\includegraphics[width=12em]{fig/error_depth_p3.pdf}
\includegraphics[width=12em]{fig/trees_p3.pdf}
-\caption{Classification error varying trees depth (left) and numbers of trees (right)}
+\caption{RF-codebook classification error varying tree depth (left) and number of trees (right)}
\label{fig:p3_trees}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
-\includegraphics[width=18em]{fig/p3_rand.pdf}
-\caption{Effect of randomness parameter on classification error}
-\label{fig:p3_rand}
-\end{center}
-\end{figure}
-
-\begin{figure}[H]
-\begin{center}
\includegraphics[width=12em]{fig/p3_vocsize.pdf}
\includegraphics[width=12em]{fig/p3_time.pdf}
-\caption{Effect of vocabulary size; classification error (left) and time (right)}
+\caption{RF-codebook: effect of vocabulary size on classification error (left) and time (right)}
\label{fig:p3_voc}
\end{center}
\end{figure}
-\begin{figure}[H]
-\begin{center}
-\includegraphics[width=18em]{fig/p3_colormap.pdf}
-\caption{Varying leaves and estimators: effect on accuracy}
-\label{fig:p3_colormap}
-\end{center}
-\end{figure}
-
# Comparison of methods and conclusions
Overall we observe slightly higher accuracy when using K-means codebooks compared to RF codebooks, at the expense of higher execution time for both training and testing.
@@ -174,6 +159,16 @@ is the one that gets misclassified the most, both in k-means and RF-codebook (re
from this class do not guarantee very discriminative splits, hence the first splits in the trees
will prioritize features taken from other classes.
+# Appendix
+
+\begin{figure}[H]
+\begin{center}
+\includegraphics[width=14em]{fig/p3_colormap.pdf}
+\caption{Varying the number of leaves and estimators: effect on accuracy}
+\label{fig:p3_colormap}
+\end{center}
+\end{figure}
+
# References