From 46a277bf740ca2631255a7344ef8736a6c7ba34b Mon Sep 17 00:00:00 2001
From: nunzip
Date: Tue, 12 Feb 2019 17:58:25 +0000
Subject: Fix histogram figures

---
 report/fig/testhist.pdf  | Bin 0 -> 28694 bytes
 report/fig/trainhist.pdf | Bin 0 -> 23744 bytes
 report/paper.md          | 17 ++++-------------
 3 files changed, 4 insertions(+), 13 deletions(-)
 create mode 100644 report/fig/testhist.pdf
 create mode 100644 report/fig/trainhist.pdf

diff --git a/report/fig/testhist.pdf b/report/fig/testhist.pdf
new file mode 100644
index 0000000..271ee29
Binary files /dev/null and b/report/fig/testhist.pdf differ

diff --git a/report/fig/trainhist.pdf b/report/fig/trainhist.pdf
new file mode 100644
index 0000000..3872951
Binary files /dev/null and b/report/fig/trainhist.pdf differ

diff --git a/report/paper.md b/report/paper.md
index e8d9709..439ab37 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -15,7 +15,7 @@ The number of clusters or the number of centroids determines the vocabulary size
 
 ## Bag-of-words histogram quantisation of descriptor vectors
 
-An example histogram for training image shown on figure \ref{fig:histo_tr}, computed with a vocubulary size of 100. A corresponding testing image of the same class is shown in figure \ref{fig:histo_te}. The histograms appear to have similar counts for the same words, demonstrating they had a descriptors which matched the *keywowrds* in similar proportions. We later look at the effect of the vocubalary size (as determined by the number of K-mean centroids) on the classificaiton accuracy in figure \ref{fig:km_vocsize}. A small vocabulary size turns out to misrepresent the information contained in the different patches, resulting in poor classification accuracy. When the vocabulary size gets too big (too many k-mean centroids), the result is instead overfitting. Figure \ref{fig:km_vocsize} shows a plateau after 60 cluster centers.
+Example histograms for a training and a testing image of the same class are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. The histograms appear to have similar counts for the same words, demonstrating that their descriptors matched the *keywords* in similar proportions. We later look at the effect of vocabulary size (as determined by the number of K-means centroids) on classification accuracy in figure \ref{fig:km_vocsize}. A small vocabulary size misrepresents the information contained in the different patches, resulting in poor classification accuracy, while a vocabulary size that is too large (too many K-means centroids) instead leads to overfitting. Figure \ref{fig:km_vocsize} shows a plateau after 60 cluster centers.
 
 The time complexity of quantisation with a K-means codebook is $O(n^{dk+1})$, where n is the number of entities to be clustered, d is the dimension and k is the cluster count [@km-complexity]. As the computation time is high, for the tests we use a subsample of descriptors to compute the centroids. An alternative method we tried was applying PCA to the descriptor vectors to improve time performance. However, in this case the descriptors' dimension is relatively small, so we opted to avoid PCA for further training.
 
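As an aside to the quantisation paragraph above, here is a minimal sketch of the pipeline it describes: a K-means codebook fitted on a subsample of descriptors, then a per-image word histogram. The variable names, data shapes, and the use of scikit-learn are assumptions for illustration only, not the report's actual code.

```python
# Illustrative bag-of-words quantisation sketch (hypothetical names/shapes).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for per-image SIFT-like descriptors:
# one (n_i x 128) array per training image.
train_desc = [rng.normal(size=(rng.integers(200, 400), 128)) for _ in range(20)]

vocab_size = 100  # number of K-means centroids, i.e. the vocabulary size

# Fit the codebook on a subsample of all descriptors to keep K-means
# tractable, mirroring the subsampling described in the report.
all_desc = np.vstack(train_desc)
idx = rng.choice(len(all_desc), size=min(100_000, len(all_desc)), replace=False)
codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(all_desc[idx])

def bow_histogram(desc):
    """Assign each descriptor to its nearest centroid and count the words."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=vocab_size)
    # Normalise so images with different descriptor counts are comparable.
    return hist / hist.sum()

X_train = np.stack([bow_histogram(d) for d in train_desc])
```

Sweeping `vocab_size` in a sketch like this is how one would reproduce the accuracy-versus-vocabulary-size trend the text attributes to figure \ref{fig:km_vocsize}.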
@@ -27,22 +27,13 @@ this coursework, only leading to an increase in execution time.
 
 \begin{figure}[H]
 \begin{center}
-\includegraphics[height=4em]{fig/hist_test.jpg}
-\includegraphics[width=20em]{fig/km-histogram.pdf}
-\caption{Bag-of-words Training histogram}
+\includegraphics[width=12em]{fig/trainhist.pdf}
+\includegraphics[width=12em]{fig/testhist.pdf}
+\caption{Bag-of-words histograms: training (left), testing (right)}
 \label{fig:histo_tr}
 \end{center}
 \end{figure}
 
-\begin{figure}[H]
-\begin{center}
-\includegraphics[height=4em]{fig/hist_train.jpg}
-\includegraphics[width=20em]{fig/km-histtest.pdf}
-\caption{Bag-of-words Testing histogram}
-\label{fig:histo_te}
-\end{center}
-\end{figure}
-
 # RF classifier
 
 ## Hyperparameter tuning