author     nunzip <np.scarh@gmail.com>  2019-02-12 20:04:36 +0000
committer  nunzip <np.scarh@gmail.com>  2019-02-12 20:04:36 +0000
commit     4046ef55a352bdfaa238f0499a280f4844c705f0 (patch)
tree       24001277738ce94e42fb15a937cbf29152455d19
parent     c7c97e8ce1e6281c1c16213e1b4b7dfd7ba7d3a4 (diff)
parent     17528eb77d8f7c888186a85f17fa60ad0ffd6cf8 (diff)
Merge branch 'master' of skozl.com:e4-vision
-rw-r--r--  report/metadata.yaml  2
-rw-r--r--  report/paper.md       9
2 files changed, 5 insertions, 6 deletions
diff --git a/report/metadata.yaml b/report/metadata.yaml
index d54434c..4c84d71 100644
--- a/report/metadata.yaml
+++ b/report/metadata.yaml
@@ -8,7 +8,5 @@ lang: en
babel-lang: english
fontsize: 10pt
-abstract: |
- This is a very good abstract.
...
diff --git a/report/paper.md b/report/paper.md
index b4429f2..2fa93e7 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -17,7 +17,7 @@ The number of clusters or the number of centroids determines the vocabulary size
Example histograms for training and testing images are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. The histograms appear to have similar counts for the same words, demonstrating that the images contain descriptors which match the *keywords* in similar proportions. We later look at the effect of the vocabulary size (as determined by the number of K-means centroids) on the classification accuracy in figure \ref{fig:km_vocsize}. A small vocabulary size turns out to misrepresent the information contained in the different patches, resulting in poor classification accuracy. When the vocabulary size gets too big (too many K-means centroids), the result is instead overfitting. Figure \ref{fig:km_vocsize} shows a plateau after 60 cluster centers.
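As a concrete illustration of how such a histogram is obtained, the sketch below (using scikit-learn, with randomly generated stand-ins for our descriptors) assigns each descriptor of an image to its nearest centroid and counts occurrences of each visual word; the variable names and sizes are hypothetical, not the exact pipeline used for the figures.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-ins: 128-D descriptors for one image and a pool of
# training descriptors used to fit a 100-word vocabulary.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(5000, 128))
image_descriptors = rng.normal(size=(500, 128))

# Fit the visual vocabulary (the K-means centroids act as the visual words).
kmeans = KMeans(n_clusters=100, n_init=3, random_state=0).fit(train_descriptors)

# Assign each of the image's descriptors to its nearest centroid and count
# occurrences of each visual word to obtain the bag-of-words histogram.
words = kmeans.predict(image_descriptors)
histogram = np.bincount(words, minlength=kmeans.n_clusters)
```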
-The time complexity of quantisation with a K-means codebooks is $O(n^{dk+1})$ , where n is the number of entities to be clustered, d is the dimension and k is the cluster count @cite[km-complexity]. As the computation time is high, the tests we use a subsample of descriptors to compute the centroids. An alternative method we tried is applying PCA to the descriptors vecotrs to improve time performance. However in this case the descriptors' size is relatively small, and for such reason we opted to avoid PCA for further training.
+The time complexity of quantisation with a K-means codebook is $O(DNK)$, where $N$ is the number of entities to be clustered (descriptors), $D$ is the descriptor dimension and $K$ is the cluster count @cite[km-complexity]. As the computation time is high, in our tests we use a subsample of the descriptors to compute the centroids. An alternative method we tried is applying PCA to the descriptor vectors to improve time performance. However, in this case the descriptors are relatively low-dimensional, so we opted to avoid PCA for further training.
K-means is a process that converges to local optima and heavily depends on the initialisation values of the centroids.
Initialising K-means is an expensive process, based on sequential attempts of centroid placement.
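A minimal sketch of the subsampling described above, assuming scikit-learn's KMeans with its default k-means++ seeding and a few restarts; the array shapes and subsample size are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pool of training descriptors (e.g. stacked 128-D patch descriptors).
rng = np.random.default_rng(0)
all_descriptors = rng.normal(size=(100_000, 128))

# Cluster only a random subsample to keep the O(DNK) cost manageable.
subset = all_descriptors[rng.choice(len(all_descriptors), size=20_000, replace=False)]

# k-means++ seeding with a few restarts reduces the sensitivity to initialisation.
vocabulary = KMeans(n_clusters=100, init="k-means++", n_init=5,
                    random_state=0).fit(subset)
```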
@@ -50,7 +50,7 @@ for K-means 100 cluster centers.
\end{center}
\end{figure}
-Random forests will select a random number of features on which to apply a weak learner (such as axis aligned split) and then select the best feature of the selected ones to perform the split on, based on some criteria (our results use the *Gini index*). The fewer features that are compared for each split the quicker the trees are built and the more random they are. Therefore the randomness parameter can be considered the number of features used when making splits. We evaluate accuracy given different randomness when using a K-means vocabulary in \ref{fig:kmeanrandom}.
+Random forests select a random subset of features on which to apply a weak learner (such as an axis-aligned split) and then select the best of these features to perform the split on, based on some criterion (our results use the *Gini index*). The fewer features that are compared for each split, the quicker the trees are built and the more random they are. The randomness parameter can therefore be considered the number of features used when making splits. We evaluate accuracy for different randomness when using a K-means vocabulary in figure \ref{fig:kmeanrandom}. The results in the figure use a forest size of 100, as we found that beyond this estimator count the performance gains (with $\sqrt{n}$ feature randomness) tend to plateau.
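In scikit-learn terms this randomness parameter corresponds to `max_features`; a minimal sketch of such a classifier under these settings follows, with hypothetical bag-of-words histograms standing in for our training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: bag-of-words histograms (1000 images x 100 words)
# and class labels for a 10-class problem.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 50, size=(1000, 100))
y_train = rng.integers(0, 10, size=1000)

# n_estimators is the forest size; max_features is the number of features
# compared at each split (the randomness parameter); splits use the Gini index.
forest = RandomForestClassifier(n_estimators=100, criterion="gini",
                                max_features=16, random_state=0)
forest.fit(X_train, y_train)
```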
\begin{figure}[H]
\begin{center}
@@ -77,7 +77,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\end{center}
\end{figure}
-## Impact of the vocabulary size on classification accuracy.
+## Impact of RF vocabulary size on classification accuracy.
\begin{figure}[H]
\begin{center}
@@ -109,7 +109,8 @@ more. This is due to the complexity added by the two-pixels test, since it adds
# RF codebook
-An alternative to codebook creation via K-means involves using an ensemble of totally random trees. We code each decriptor according to which leaf of each tree in the ensemble it is sorted. This effectively performs and unsupervised transformation of our dataset to a high-dimensional spare representation. The dimension of the vocubulary size is determined by the number of leaves in each random tree and the ensemble size.
+An alternative to codebook creation via K-means involves using an ensemble of totally random trees. We code each descriptor according to which leaf of each tree in the ensemble it is sorted into. This effectively performs an unsupervised transformation of our dataset to a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and the ensemble size. Comparing the execution times of K-means in figure \ref{fig:km_vocsize} and of the RF codebook in figure \ref{fig:p3_voc}, we observe considerable speed gains from utilising the RF codebook. This may be attributed to the reduced complexity of RF codebook creation,
+which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mapping given a created vocabulary is also quicker than with K-means: $O(\log K)$ (assuming a balanced tree) vs $O(KD)$.
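One way to realise such a codebook is scikit-learn's `RandomTreesEmbedding`, which fits an ensemble of totally random trees and codes each sample by the leaves it falls into; the sketch below uses random stand-in descriptors, and the tree and leaf counts are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# Hypothetical pool of 128-D descriptors standing in for the patch descriptors.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(20_000, 128))

# Ensemble of totally random trees; the vocabulary size is roughly
# (leaves per tree) x (number of trees).
embedder = RandomTreesEmbedding(n_estimators=10, max_leaf_nodes=100,
                                random_state=0).fit(descriptors)

# Each descriptor is coded by the leaves it is sorted into, giving a sparse,
# high-dimensional representation (a scipy.sparse matrix).
codes = embedder.transform(descriptors)
```

A per-image histogram can then be obtained by summing the sparse codes of that image's descriptors.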
\begin{figure}[H]
\begin{center}