 report/paper.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index b0c29bc..a45f2c1 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -14,7 +14,7 @@ The number of clusters or the number of centroids determines the vocabulary size
## Bag-of-words histogram quantisation of descriptor vectors
-Example histograms for training and testing images are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. The histograms of the same class appear to have comparable magnitudes for their respective keywords, indicating that a similar number of descriptors mapped to each of the clusters. The effect of vocabulary size (as determined by the number of K-means centroids) on classification accuracy is shown in figure \ref{fig:km_vocsize}. A small vocabulary size tends to misrepresent the information contained in the different patches, resulting in poor classification accuracy. Conversely, a large vocabulary size (many K-means centroids) may lead to overfitting. In our tests, we observe a plateau after a cluster count of 60 in figure \ref{fig:km_vocsize}.
+Example histograms for training and testing images are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. The histograms of the same class appear to have comparable magnitudes for their respective keywords, indicating that a similar number of descriptors mapped to each of the clusters. The effect of vocabulary size (as determined by the number of K-means centroids) on classification accuracy is shown in figure \ref{fig:km_vocsize}. A small vocabulary size tends to misrepresent the information contained in the different patches, resulting in poor classification accuracy. Conversely, a large vocabulary size (many K-means centroids) may lead to overfitting. In our tests, we observe a plateau after a cluster count of 60 in figure \ref{fig:km_vocsize}. This process of partitioning the input space into K distinct clusters is a form of **vector quantisation**.
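The quantisation step can be sketched as follows; this is a minimal illustration under stated assumptions, not the repository code (`bow_histogram`, the toy descriptors, and the centroids are hypothetical):

```python
import numpy as np

def bow_histogram(descriptors, centroids):
    """Quantise each descriptor to its nearest K-means centroid and
    accumulate a normalised bag-of-words histogram."""
    # Pairwise squared distances, shape (n_descriptors, n_centroids)
    d = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()  # normalise so images of different sizes compare

# Toy example: 5 two-dimensional descriptors, vocabulary of 3 visual words
rng = np.random.default_rng(0)
desc = rng.normal(size=(5, 2))
cents = rng.normal(size=(3, 2))
h = bow_histogram(desc, cents)
```

Each image is thereby represented by a fixed-length vector whose dimension equals the vocabulary size, regardless of how many descriptors the image produced.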
\begin{figure}
\begin{center}
@@ -89,7 +89,7 @@ test theoretically brings a slight decrease in time performance due to complexi
\end{center}
\end{figure}
-Figure \ref{fig:km_cm} shows the confusion matrix for the K-means+RF classifier with 256 centroids, a forest size of 100 and a tree depth of 5. The reported accuracy for this case is 82%. Figure \ref{fig:km_succ} reports examples of failure and success cases obtained from this test, with the top-performing classes being `trilobite` and `windsor_chair`; `water_lilly` performed worst on average.
+Figure \ref{fig:km_cm} shows the confusion matrix for RF classification on K-means coded descriptors with 256 centroids, a forest size of 100 and a tree depth of 5. The reported accuracy for this case is 82%. Figure \ref{fig:km_succ} reports examples of failure and success cases obtained from this test, with the top-performing classes being `trilobite` and `windsor_chair`; `water_lilly` performed worst on average.
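A confusion matrix and the accuracy it implies can be computed from predicted labels as sketched below; the function name and toy labels are illustrative and not taken from the repository code:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels for 3 classes, 6 test samples (4 correct, 2 wrong)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred, 3)
acc = np.trace(cm) / cm.sum()  # diagonal = correct predictions
```

Off-diagonal mass in a given row shows which classes a true class is confused with, which is how the weak classes above were identified.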
\begin{figure}
\begin{center}
@@ -150,10 +150,6 @@ For the Caltech_101 dataset, a RF codebook seems to be the most suitable method
The `water_lilly` class is the most misclassified, both for the K-means and the RF codebook (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This indicates that the features obtained from this class do not yield very discriminative splits, resulting in the prioritisation of other features in the first nodes of the decision trees.
-All code/graphs and configurable scripts can be found on our repository:
-
-``git clone https://git.skozl.com/e4-vision/``
-
# References
<div id="refs"></div>