author    Vasil Zlatanov <v@skozl.com>  2019-02-12 20:22:21 +0000
committer Vasil Zlatanov <v@skozl.com>  2019-02-12 20:22:21 +0000
commit    a6e5ced5c4dddd76af5e8ffd6950b1b2b7836fd0 (patch)
tree      0b6d60e0cc46c12ddd227d5b797343c7d91ea653
parent    288e28070d27c496d6ac4af5676734451f8430e9 (diff)
RF Class intro
-rw-r--r-- report/paper.md | 11
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 06d8357..4ab5924 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -16,13 +16,10 @@ The number of clusters or the number of centroids determines the vocabulary size
Example histograms for training and testing images are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. Histograms of the same class appear to have comparable magnitudes for their respective keywords, demonstrating that a similar number of descriptors mapped to each of the clusters. The effect of the vocabulary size (as determined by the number of K-means centroids) on the classification accuracy is shown in figure \ref{fig:km_vocsize}. A small vocabulary size tends to misrepresent the information contained in the different patches, resulting in poor classification accuracy. Conversely, a large vocabulary size (many K-means centroids) may lead to overfitting. In our tests, we observe a plateau after a cluster count of 60 in figure \ref{fig:km_vocsize}.
-The time complexity of quantisation with a K-means codebooks is $O(DNK)$, where N is the number of entities to be clustered (descriptors), D is the dimension (of the descriptors) and K is the cluster count [@km-complexity]. As the computation time is high, the tests we use a subsample of descriptors to compute the centroids (a random selection of 100 thousand descriptors). An alternative method we tried is applying PCA to the descriptors vectors to improve time performance. However in this case the descriptors' size is relatively small, and for such reason we opted to avoid PCA for further training.
+The time complexity of quantisation with a K-means codebook is $O(DNK)$, where N is the number of entities to be clustered (descriptors), D is the dimension (of the descriptors) and K is the cluster count [@km-complexity]. As the computation time is high, for our tests we use a subsample of descriptors to compute the centroids (a random selection of 100 thousand descriptors). An alternative method we tried is applying PCA to the descriptor vectors to improve time performance. However, the descriptor dimension of 128 is relatively small and as such we found PCA to be unnecessary.
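
As a rough illustration of this quantisation step, the sketch below fits the codebook on a random subsample of descriptors and builds a normalised bag-of-words histogram per image. It assumes scikit-learn and NumPy; the names `descriptors` (all training descriptors stacked as an (N, 128) array) and `image_descriptors` (a per-image list of descriptor arrays) are hypothetical placeholders, not names from the paper's codebase.

```python
import numpy as np
from sklearn.cluster import KMeans

K = 100  # vocabulary size (number of centroids)

# Fit the codebook on a random subsample of 100k descriptors to keep the O(DNK) cost manageable
rng = np.random.default_rng(0)
subsample = descriptors[rng.choice(len(descriptors), size=100_000, replace=False)]
codebook = KMeans(n_clusters=K, n_init=1, random_state=0).fit(subsample)

def bow_histogram(desc, codebook, K):
    """Quantise one image's descriptors to the nearest centroid and count occurrences."""
    words = codebook.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(K + 1))
    return hist / hist.sum()  # normalise so images with different descriptor counts are comparable

histograms = np.array([bow_histogram(d, codebook, K) for d in image_descriptors])
```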
-K-means is a process that converges to local optima and heavilly depends on the initialization values of the centroids.
-Initializing k-means is an expensive process, based on sequential attempts of centroids placement.
-Running for multiple instances significantly affects the computation process, leading to a linear increase in execution time.
-Attempting centroid initialization more than once didn't cause significant improvements in terms of accuracy for the data analysed in
-this coursework, only leading to an increase in execution time.
+K-means is a process that converges to local optima and heavily depends on the initialization values of the centroids.
+Initializing K-means is an expensive process, based on sequential attempts of centroid placement. Performing multiple initializations increases execution time roughly linearly with the number of attempts. We did not observe an increase in accuracy when using more than one initialization, and therefore report accuracy and execution time results for a single K-means initialization.
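
In scikit-learn terms this corresponds to the `n_init` argument of `KMeans`, which controls how many centroid initializations are attempted before keeping the best one. The snippet below is only a sketch of the timing behaviour described above, using random data as a stand-in for a descriptor subsample; actual timings will vary.

```python
import time
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100_000, 128)  # stand-in for a descriptor subsample

for n_init in (1, 5, 10):
    start = time.time()
    KMeans(n_clusters=100, n_init=n_init, random_state=0).fit(X)
    print(f"n_init={n_init}: {time.time() - start:.1f}s")  # grows roughly linearly with n_init
```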
\begin{figure}[H]
\begin{center}
@@ -35,6 +32,8 @@ this coursework, only leading to an increase in execution time.
# RF classifier
+We use a Random Forest classifier to label images based on their bag-of-words histograms. A random forest is an ensemble of randomly generated decision trees. Its classification performance depends on the ensemble size, the tree depth, the degree of randomness, and the weak learner used.
+
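As a minimal sketch of this classification stage (assuming scikit-learn, and reusing the hypothetical `histograms` array from the earlier snippet together with a `labels` array of class ids), the hyperparameters discussed below map directly onto the classifier's arguments:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# histograms: (n_images, K) bag-of-words features; labels: (n_images,) class ids (hypothetical names)
X_train, X_test, y_train, y_test = train_test_split(histograms, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,     # ensemble size (number of trees)
    max_depth=8,          # tree depth
    max_features="sqrt",  # randomness: number of features considered at each split
    random_state=0,
)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```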
## Hyperparameter tuning
Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of trees