author | Vasil Zlatanov <v@skozl.com> | 2019-02-12 17:19:18 +0000
---|---|---
committer | Vasil Zlatanov <v@skozl.com> | 2019-02-12 17:19:18 +0000
commit | 66010e7c039bd5cc7879b1beb80ba860188fcbe9 (patch) |
tree | 2af2dcf36d4b3013c95d517677d65dd682086344 |
parent | 333d158dd0bac1e1fee86c6399f763dea22a90ea (diff) |
Typo fixes
-rw-r--r-- | report/paper.md | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index ac72f2b..1b03992 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -15,9 +15,9 @@ The number of clusters or the number of centroids determine the vocabulary size
 
 ## Bag-of-words histogram quantisation of descriptor vectors
 
-An example histogram for training image shown on figure {fig:histo_tr}, computed with a vocubulary size of 100. A corresponding testing image of the same class is shown in figure \ref{fig:histo_te}. The histograms appear to have similar counts for the same words, demonstrating they had a descriptors which matched the *keywowrds* in similar proportions. We later look at the effect of the vocubalary size (as determined by the number of K-mean centroids) on the classificaiton accuracy in figure \ref{fig:km_vocsize}.
+An example histogram for training image shown on figure \ref{fig:histo_tr}, computed with a vocubulary size of 100. A corresponding testing image of the same class is shown in figure \ref{fig:histo_te}. The histograms appear to have similar counts for the same words, demonstrating they had a descriptors which matched the *keywowrds* in similar proportions. We later look at the effect of the vocubalary size (as determined by the number of K-mean centroids) on the classificaiton accuracy in figure \ref{fig:km_vocsize}.
 
-The time complexity of quantisation with a K-means codebooks is $O(n^{dk+1))$ , where n is the number of entities to be clustered, d is the dimension and k is the cluster count @cite[km-complexity]. As the computation time is high, the tests we use a subsample of descriptors to compute the centroids. An alternative method is NUNZIO PUCCI WRITE HERE
+The time complexity of quantisation with a K-means codebooks is $O(n^{dk+1})$ , where n is the number of entities to be clustered, d is the dimension and k is the cluster count @cite[km-complexity]. As the computation time is high, the tests we use a subsample of descriptors to compute the centroids. An alternative method is NUNZIO PUCCI WRITE HERE
 
 \begin{figure}[H]
 \begin{center}
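The quantisation pipeline the patched paragraphs describe (fit a K-means codebook on a subsample of descriptors, then count nearest-centroid assignments into a bag-of-words histogram) can be sketched as follows. This is a minimal NumPy sketch, not the paper's actual implementation; the function names, the plain Lloyd's-iteration K-means, and the subsample/vocabulary sizes are illustrative assumptions.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, subsample=1000, seed=0):
    """Fit k centroids (the visual vocabulary) on a random subsample
    of descriptors using plain Lloyd's K-means iterations."""
    rng = np.random.default_rng(seed)
    n = len(descriptors)
    sample = descriptors[rng.choice(n, size=min(subsample, n), replace=False)]
    # Initialise centroids from random sampled descriptors.
    centroids = sample[rng.choice(len(sample), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sampled descriptor to its nearest centroid.
        dists = np.linalg.norm(sample[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = sample[labels == j].mean(axis=0)
    return centroids

def bow_histogram(descriptors, centroids):
    """Quantise an image's descriptors to visual words and return a
    normalised bag-of-words histogram over the vocabulary."""
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()
```

With this sketch, a training and a testing image of the same class would each be reduced to a length-k histogram via `bow_histogram`, which is the comparison the patched text makes between figures \ref{fig:histo_tr} and \ref{fig:histo_te}.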