author: nunzip <np.scarh@gmail.com> 2019-02-13 16:44:59 +0000
committer: nunzip <np.scarh@gmail.com> 2019-02-13 16:44:59 +0000
commit: cf75eb15eb17ab2b55caeba1e29f877fdb3eef3f (patch)
tree: 4dc4933c896cba22c4e1a37bc902522ea369a632 /report
parent: 1eee30a72578bc3983b4122b16da8f6c37529303 (diff)
Add graphs explanation to part 2
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md | 10
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 57c54f4..63e20d0 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -49,6 +49,8 @@ We use a random forest classifier to label images based on the bag-of-words hist
## Hyperparameters tuning
Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of trees, when classifying a bag-of-words created by K-means with 100 cluster centers.
+Optimal values for tree depth and number of trees were found to be 5 and 100 respectively, as shown in figure \ref{fig:km-tree-param}. Running over multiple seeds shows an average accuracy of 80% for these two parameters, peaking at 84% in specific cases.
+We expect a large tree depth to lead to overfitting; however, for the data analysed we only observe a plateau in classification performance.
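The seed-averaged grid search described above can be sketched as follows. This is a minimal illustration using scikit-learn; the synthetic data stands in for the bag-of-words histograms, and the `mean_accuracy` helper and parameter grid are our own naming, not the report's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the bag-of-words histograms (hypothetical data).
X, y = make_classification(n_samples=300, n_features=100, n_informative=30,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def mean_accuracy(depth, n_trees, seeds=range(3)):
    """Average test accuracy over several forest seeds."""
    scores = []
    for seed in seeds:
        clf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                     random_state=seed)
        clf.fit(X_train, y_train)
        scores.append(clf.score(X_test, y_test))
    return float(np.mean(scores))

# Sweep tree depth and forest size, keeping the best-scoring pair.
grid = {(d, n): mean_accuracy(d, n) for d in (2, 5, 10) for n in (10, 100)}
best = max(grid, key=grid.get)
```

Averaging over several `random_state` values, as above, is what smooths out the seed-to-seed variation mentioned in the text.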
\begin{figure}[H]
\begin{center}
@@ -60,6 +62,7 @@ Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of tree
\end{figure}
Random forests select a random subset of features on which to apply a weak learner (such as an axis-aligned split) and then choose the best of the sampled features to split on, based on a given criterion (our results use the *Gini index*). The fewer features compared at each split, the quicker the trees are built and the more random they are. The randomness parameter can therefore be considered the number of features used when making splits. We evaluate accuracy for different levels of randomness when using a K-means vocabulary in figure \ref{fig:kmeanrandom}. The results in figure \ref{fig:kmeanrandom} use a forest size of 100, as we inferred that this is the estimator count at which performance gains plateau (when selecting $\sqrt{n}$ random features).
+This parameter also affects the correlation between trees: we expect trees to be more correlated when a larger number of features is used for splits.
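In scikit-learn terms, the randomness parameter corresponds to `max_features`. A minimal sketch of the sweep, again on synthetic stand-in data rather than the report's histograms:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the K-means bag-of-words features.
X, y = make_classification(n_samples=300, n_features=100, n_informative=30,
                           random_state=0)

# Sweep the number of features compared at each split:
# 1 is maximally random, None (all features) is least random.
results = {}
for m in (1, "sqrt", 50, None):
    clf = RandomForestClassifier(n_estimators=100, max_features=m,
                                 criterion="gini", random_state=0)
    results[m] = cross_val_score(clf, X, y, cv=3).mean()
```

With `max_features=None` every tree sees the same candidate splits, so trees differ only through bootstrap sampling and are more correlated, matching the expectation stated above.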
\begin{figure}[H]
\begin{center}
@@ -74,10 +77,9 @@ Changing the randomness parameter had no significant effect on execution time. T
## Weak Learner comparison
-In figure \ref{fig:2pt} it is possible to notice an improvement in recognition accuracy by 1%,
+In figure \ref{fig:2pt} it is possible to notice an improvement in recognition accuracy of 2%
with the two-pixel test, which achieves better results than its axis-aligned counterpart. The two-pixel
-test however brings a slight deacrease in time performance which has been measured to be on average 1 second
-more. This is due to the complexity added by the two-pixels test, since it adds one dimension to the computation.
+test, however, is expected to bring a slight decrease in time performance due to the added complexity, since it adds one dimension to the computation. The difference is hard to measure in our case, as it amounts to less than a second.
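One way to see where the extra dimension comes from: a two-pixel test thresholds the *difference* of two feature dimensions instead of a single dimension, which an axis-aligned learner can emulate by augmenting the data with pairwise differences. This is a sketch of that idea only; the pair sampling and function name are our own, not the report's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_pixel_features(X, n_pairs=50):
    """Augment X with differences of randomly chosen feature pairs, so an
    axis-aligned split on a difference column acts as a two-pixel test."""
    i = rng.integers(0, X.shape[1], size=n_pairs)
    j = rng.integers(0, X.shape[1], size=n_pairs)
    # One extra subtraction per candidate split: the added "dimension"
    # of computation mentioned in the text.
    return np.hstack([X, X[:, i] - X[:, j]])

X = rng.normal(size=(10, 100))
X_aug = two_pixel_features(X)  # shape (10, 150): original + 50 differences
```

Since the extra cost is a single subtraction per candidate split, a sub-second overall difference is plausible.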
\begin{figure}[H]
\begin{center}
@@ -87,6 +89,8 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\end{center}
\end{figure}
+Figure \ref{fig:km_cm} shows a confusion matrix for the K-means+RF classifier with 256 centroids, a forest size of 100 and a tree depth of 5. The reported accuracy for this case is 82%. Figure \ref{fig:km_succ} reports examples of failure and success cases obtained from this test, with the top-performing classes being *trilobite* and *windsor_chair*. *Water_lilly* performed worst on average.
+
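A confusion matrix for such a setup can be produced as below. This is a hedged sketch with synthetic stand-in data (the report's 82% figure comes from the real Caltech histograms, not from this toy setup); the dimensions mirror the configuration in the text.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the 256-centroid bag-of-words data.
X, y = make_classification(n_samples=300, n_features=256, n_informative=40,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Forest size 100, tree depth 5, as in the reported configuration.
clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

cm = confusion_matrix(y_te, pred, labels=[0, 1, 2, 3])  # rows: true, cols: predicted
acc = accuracy_score(y_te, pred)  # equals trace(cm) / cm.sum()
```

The per-class success and failure cases in figure \ref{fig:km_succ} correspond to the diagonal and off-diagonal entries of `cm` respectively.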
\begin{figure}[H]
\begin{center}
\includegraphics[width=14em]{fig/e100k256d5_cm.pdf}