From f8f7f8f692bc960ef5c5be74936a6d1de5a5caac Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Tue, 12 Feb 2019 17:22:22 +0000
Subject: Cite correctly

---
 report/paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report/paper.md b/report/paper.md
index 1b03992..3e5ec48 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -17,7 +17,7 @@ The number of clusters or the number of centroids determine the vocabulary size
 
 An example histogram for a training image, computed with a vocabulary size of 100, is shown in figure \ref{fig:histo_tr}. A corresponding testing image of the same class is shown in figure \ref{fig:histo_te}. The histograms appear to have similar counts for the same words, demonstrating that they had descriptors which matched the *keywords* in similar proportions. We later look at the effect of the vocabulary size (as determined by the number of K-means centroids) on the classification accuracy in figure \ref{fig:km_vocsize}.
 
-The time complexity of quantisation with a K-means codebook is $O(n^{dk+1})$, where $n$ is the number of entities to be clustered, $d$ is the dimension and $k$ is the cluster count @cite[km-complexity]. As the computation time is high, for the tests we use a subsample of descriptors to compute the centroids. An alternative method is NUNZIO PUCCI WRITE HERE
+The time complexity of quantisation with a K-means codebook is $O(n^{dk+1})$, where $n$ is the number of entities to be clustered, $d$ is the dimension and $k$ is the cluster count [@km-complexity]. As the computation time is high, for the tests we use a subsample of descriptors to compute the centroids. An alternative method is NUNZIO PUCCI WRITE HERE
 
 \begin{figure}[H]
 \begin{center}
--
cgit v1.2.3-54-g00ecf
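
For reference, below is a minimal sketch of the K-means vocabulary and histogram-quantisation step described in the paragraph touched by this patch: fit centroids on a subsample of descriptors (since full K-means clustering is expensive), then bin each image's descriptors against the resulting codebook. The use of scikit-learn and NumPy, the descriptor dimensionality, and all function names are assumptions for illustration and are not taken from the paper itself.

```python
# Sketch only: library choice (scikit-learn), 128-D descriptors and the
# helper names below are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, vocab_size=100, subsample=10000, seed=0):
    """Fit K-means centroids (the visual vocabulary) on a random subsample
    of descriptors, mirroring the subsampling mentioned in the paragraph."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptors),
                     size=min(subsample, len(descriptors)),
                     replace=False)
    return KMeans(n_clusters=vocab_size, random_state=seed).fit(descriptors[idx])

def to_histogram(codebook, image_descriptors):
    """Quantise one image's descriptors into a normalised histogram of
    visual-word counts (the histograms compared for train/test images)."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Usage with random stand-in "descriptors" (shaped like SIFT, 128-D):
all_desc = np.random.rand(50000, 128).astype(np.float32)
codebook = build_codebook(all_desc, vocab_size=100)
print(to_histogram(codebook, all_desc[:500]))
```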