author    nunzip <np.scarh@gmail.com>  2019-02-12 17:37:06 +0000
committer nunzip <np.scarh@gmail.com>  2019-02-12 17:37:06 +0000
commit 72d0a8f45a08088d0f2b490d2458982079320f61 (patch)
tree   7801247c7376cc8f08296ff056205a111af2c5c3
parent fe86f4392872a3e11dfb7e975762bd8e0ea616f0 (diff)
parent 63e908514ce57e1bc03301c950ffb360976dace9 (diff)
Merge branch 'master' of skozl.com:e4-vision
-rw-r--r--  report/fig/2pixels_kmean.pdf  bin 14662 -> 15084 bytes
-rw-r--r--  report/fig/km-histogram.pdf   bin 13076 -> 13511 bytes
-rw-r--r--  report/fig/km-histtest.pdf    bin 13919 -> 14352 bytes
-rw-r--r--  report/paper.md               7
4 files changed, 1 insertion, 6 deletions
diff --git a/report/fig/2pixels_kmean.pdf b/report/fig/2pixels_kmean.pdf
index 1a95bad..f75d200 100644
--- a/report/fig/2pixels_kmean.pdf
+++ b/report/fig/2pixels_kmean.pdf
Binary files differ
diff --git a/report/fig/km-histogram.pdf b/report/fig/km-histogram.pdf
index f459978..99d1658 100644
--- a/report/fig/km-histogram.pdf
+++ b/report/fig/km-histogram.pdf
Binary files differ
diff --git a/report/fig/km-histtest.pdf b/report/fig/km-histtest.pdf
index c7da428..ccc09f0 100644
--- a/report/fig/km-histtest.pdf
+++ b/report/fig/km-histtest.pdf
Binary files differ
diff --git a/report/paper.md b/report/paper.md
index a696cd6..e8d9709 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -116,12 +116,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
# RF codebook
-In Q1, replace the K-means with the random forest codebook, i.e. applying RF to 128 dimensional
-descriptor vectors with their image category labels, and using the RF leaves as the visual
-vocabulary. With the bag-of-words representations of images obtained by the RF codebook, train
-and test Random Forest classifier similar to Q2. Try different parameters of the RF codebook and
-RF classifier, and show/discuss the results in comparison with the results of Q2, including the
-vector quantisation complexity.
+An alternative to codebook creation via *K-means* is to use an ensemble of totally random trees. We code each descriptor according to the leaf of each tree in the ensemble into which it is sorted. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and by the ensemble size.
\begin{figure}[H]
\begin{center}
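The RF-codebook paragraph added above can be sketched in code. This is a minimal illustration, not the report's actual pipeline: it assumes scikit-learn's `RandomTreesEmbedding` as a stand-in for the ensemble of totally random trees, and random vectors in place of real 128-D descriptors.

```python
# Sketch of an RF codebook: each descriptor is coded by the leaf it
# reaches in every tree of a totally random ensemble, yielding a
# sparse high-dimensional representation (hypothetical parameters).
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.default_rng(0)
descriptors = rng.random((200, 128))  # toy stand-in for 128-D descriptors

embedder = RandomTreesEmbedding(n_estimators=10, max_depth=5, random_state=0)
codes = embedder.fit_transform(descriptors)  # sparse one-hot leaf codes

# Vocabulary size = total number of leaves across the ensemble,
# so it grows with both tree depth and ensemble size.
print(codes.shape)
# Exactly one leaf fires per tree per descriptor:
print(codes.nnz == 200 * 10)
```

Quantisation cost here is one root-to-leaf traversal per tree (logarithmic in the leaf count), rather than a distance computation against every K-means centre.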