From 63e908514ce57e1bc03301c950ffb360976dace9 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Tue, 12 Feb 2019 17:34:41 +0000
Subject: Write section for RF codebook

---
 report/paper.md | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

(limited to 'report')

diff --git a/report/paper.md b/report/paper.md
index bae8979..b6d56dd 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -110,12 +110,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds

 # RF codebook

-In Q1, replace the K-means with the random forest codebook, i.e. applying RF to 128 dimensional
-descriptor vectors with their image category labels, and using the RF leaves as the visual
-vocabulary. With the bag-of-words representations of images obtained by the RF codebook, train
-and test Random Forest classifier similar to Q2. Try different parameters of the RF codebook and
-RF classifier, and show/discuss the results in comparison with the results of Q2, including the
-vector quantisation complexity.
+An alternative to codebook creation via *K-means* involves using an ensemble of totally random trees. We encode each descriptor according to the leaf of each tree in the ensemble into which it is sorted. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and the ensemble size.

 \begin{figure}[H]
 \begin{center}
--
cgit v1.2.3-54-g00ecf
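The RF-codebook paragraph added by this patch can be sketched in code. The snippet below is a minimal illustration, not the paper's actual pipeline: it uses scikit-learn's `RandomTreesEmbedding` as the ensemble of totally random trees, and random vectors stand in for real 128-dimensional SIFT descriptors. Each descriptor is encoded by the one leaf per tree into which it falls, giving a sparse high-dimensional code whose dimension is bounded by (number of trees) × (leaves per tree); summing codes over an image's descriptors yields its bag-of-words histogram.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# Hypothetical stand-in for real SIFT data: 1000 random 128-D descriptors.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(1000, 128))

# Ensemble of totally random trees; the vocabulary is the union of all leaves,
# so its size is at most n_estimators * 2**max_depth = 10 * 32 = 320.
embedder = RandomTreesEmbedding(n_estimators=10, max_depth=5, random_state=0)
leaf_codes = embedder.fit_transform(descriptors)  # sparse one-hot leaf indicators

# Each descriptor activates exactly one leaf per tree in the ensemble.
assert (leaf_codes.sum(axis=1) == 10).all()

# A bag-of-words image representation: sum the codes of that image's
# descriptors (here, pretend the first 100 descriptors form one image).
bow = np.asarray(leaf_codes[:100].sum(axis=0)).ravel()
print(bow.shape)  # one count per leaf in the vocabulary
```

Because the trees are grown without using the category labels, this quantiser is unsupervised, and assigning a descriptor to a leaf costs only `max_depth` threshold comparisons per tree, which is the vector-quantisation complexity the patch's paragraph alludes to.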