path: root/report/paper.md
author    nunzip <np.scarh@gmail.com>  2019-02-12 17:37:06 +0000
committer nunzip <np.scarh@gmail.com>  2019-02-12 17:37:06 +0000
commit    72d0a8f45a08088d0f2b490d2458982079320f61 (patch)
tree      7801247c7376cc8f08296ff056205a111af2c5c3  /report/paper.md
parent    fe86f4392872a3e11dfb7e975762bd8e0ea616f0 (diff)
parent    63e908514ce57e1bc03301c950ffb360976dace9 (diff)
Merge branch 'master' of skozl.com:e4-vision
Diffstat (limited to 'report/paper.md')
-rw-r--r--  report/paper.md | 7
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index a696cd6..e8d9709 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -116,12 +116,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
# RF codebook
-In Q1, replace the K-means with the random forest codebook, i.e. applying RF to 128 dimensional
-descriptor vectors with their image category labels, and using the RF leaves as the visual
-vocabulary. With the bag-of-words representations of images obtained by the RF codebook, train
-and test Random Forest classifier similar to Q2. Try different parameters of the RF codebook and
-RF classifier, and show/discuss the results in comparison with the results of Q2, including the
-vector quantisation complexity.
+An alternative to codebook creation via *K-means* is to use an ensemble of totally random trees. We encode each descriptor according to the leaf of each tree in the ensemble into which it is sorted. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and by the ensemble size.
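The leaf-coding scheme described above can be sketched with scikit-learn's `RandomTreesEmbedding`, which fits totally random trees and one-hot encodes the leaf each sample lands in per tree; this is an illustrative stand-in, not the report's actual implementation, and the descriptor data here is randomly generated.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.RandomState(0)
descriptors = rng.rand(500, 128)  # stand-in for 128-D SIFT descriptors

# 10 totally random trees, each capped at 32 leaves: the vocabulary
# size is at most n_estimators * max_leaf_nodes = 320.
codebook = RandomTreesEmbedding(n_estimators=10, max_leaf_nodes=32,
                                random_state=0)

# Each row of `codes` is sparse with exactly one nonzero per tree:
# the index of the leaf that tree sorted the descriptor into.
codes = codebook.fit_transform(descriptors)
print(codes.shape)
```

Summing these sparse codes over all descriptors of an image yields its bag-of-words histogram over the RF vocabulary.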
\begin{figure}[H]
\begin{center}