author    Vasil Zlatanov <v@skozl.com>    2019-02-12 17:34:41 +0000
committer Vasil Zlatanov <v@skozl.com>    2019-02-12 17:34:41 +0000
commit    63e908514ce57e1bc03301c950ffb360976dace9 (patch)
tree      98a439c781dd12977487ae17cf61da0669313129
parent    801b30cc67dee6071101320bc9ac1f6edde655f9 (diff)
Write section for RF codebook
-rw-r--r--  report/paper.md | 7
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index bae8979..b6d56dd 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -110,12 +110,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
# RF codebook
-In Q1, replace the K-means with the random forest codebook, i.e. applying RF to 128 dimensional
-descriptor vectors with their image category labels, and using the RF leaves as the visual
-vocabulary. With the bag-of-words representations of images obtained by the RF codebook, train
-and test Random Forest classifier similar to Q2. Try different parameters of the RF codebook and
-RF classifier, and show/discuss the results in comparison with the results of Q2, including the
-vector quantisation complexity.
+An alternative to codebook creation via *K-means* is to use an ensemble of totally random trees. Each descriptor is coded according to the leaf it reaches in every tree of the ensemble. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and by the ensemble size.
\begin{figure}[H]
\begin{center}
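The RF-codebook idea added in this commit can be sketched with scikit-learn's `RandomTreesEmbedding`, which fits totally random trees and one-hot codes each sample by the leaf it lands in per tree. This is an illustrative assumption, not the coursework's own implementation; the random descriptor array stands in for real 128-dimensional SIFT descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# Hypothetical stand-in for 128-dimensional local descriptors.
rng = np.random.default_rng(0)
descriptors = rng.random((500, 128))

# Ensemble of totally random trees: each descriptor is coded by the leaf
# it reaches in every tree, giving one sparse one-hot block per tree.
embedder = RandomTreesEmbedding(n_estimators=10, max_depth=5, random_state=0)
codes = embedder.fit_transform(descriptors)  # sparse (n_samples, n_total_leaves)

# Every row has exactly one non-zero entry per tree, so the representation
# is high-dimensional but very sparse; vocabulary size = total leaf count.
assert codes.shape[0] == 500
assert (codes.sum(axis=1) == 10).all()
```

A bag-of-words image histogram then follows by summing the leaf codes of all descriptors belonging to one image, mirroring the K-means pipeline with leaves in place of cluster centres.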