author    Vasil Zlatanov <v@skozl.com>    2019-02-12 18:05:25 +0000
committer Vasil Zlatanov <v@skozl.com>    2019-02-12 18:05:25 +0000
commit    ba8a1b942685ce6eb1a85d2594ada107dc2b888c
tree      b6fbb0de40d0353b94987aed0a619b3e1d39ab8f /report
parent    4d92df7d253d0262eb8dbb854cd0afbfff4969f7
Add random forest text
Diffstat (limited to 'report')
-rw-r--r--  report/paper.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/report/paper.md b/report/paper.md
index a75a7c9..1333c31 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -59,7 +59,7 @@ for K-means 100 cluster centers.
 \end{center}
 \end{figure}
 
-Figure \ref{fig:kmeanrandom} shows randomness parameter for K-means 100.
+Random forests select a random subset of the features at each node, apply a weak learner (such as an axis-aligned split) to each, and then split on the best of these candidate features according to some criterion (our results use the *Gini index*). The fewer features compared at each split, the faster the trees are built and the more random they are. The randomness parameter can therefore be taken to be the number of features considered at each split. We evaluate accuracy for different randomness values with a K-means vocabulary in Figure \ref{fig:kmeanrandom}.
 
 \begin{figure}[H]
 \begin{center}
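Purely as an illustration of the paragraph added above, and not part of this repository or the report's actual experiments, the sketch below shows the same idea in Python with scikit-learn (an assumed toolchain): max_features plays the role of the randomness parameter, i.e. the number of candidate features drawn at each split, and criterion="gini" picks the best candidate by Gini impurity. The feature matrix and labels are random placeholder data standing in for the K-means histogram descriptors.

    # Illustrative sketch only (assumed scikit-learn setup, placeholder data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.RandomState(0)
    X = rng.rand(200, 100)            # placeholder for 100-bin K-means histograms
    y = rng.randint(0, 10, size=200)  # placeholder class labels

    # Sweep the "randomness parameter": fewer candidate features per split
    # means faster tree construction and more random trees.
    for max_features in (1, 5, 25, 100):
        clf = RandomForestClassifier(n_estimators=100,
                                     max_features=max_features,
                                     criterion="gini",
                                     random_state=0)
        clf.fit(X, y)
        print(max_features, clf.score(X, y))

With real descriptors, plotting accuracy against max_features would reproduce the kind of curve the paper reports in Figure \ref{fig:kmeanrandom}.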