From 28e951c4e7c590cfeded709539a59cfae8519e81 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Tue, 12 Feb 2019 20:45:18 +0000
Subject: Slight changes to conclusion

---
 report/paper.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index 112bd6e..4c146c8 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -159,7 +159,7 @@ which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mappi
 
 # Comparison of methods and conclusions
 
-Overall K-means achieves slightly better accuracy that the RF-codebook at the expense of a higher execution time for training **(and testing???)**.
+Overall we observe slightly higher accuracy when using K-means codebooks compared to RF codebooks, at the expense of higher execution time for both training and testing.
 As discussed in section I, due to the initialization process for optimal centroid placement,
 K-means can prove unsuitable for large descriptor sizes (in the absence of methods for dimensionality reduction),
@@ -168,7 +168,7 @@ and in many cases the increase in training time would not justify the minimum in
 For Caltech_101 the RF codebook seems to be the most suitable method for performing RF classification.
 
 It is observable that for the particular dataset we are analysing, the class *water_lilly*
-is the one that gets misclassified the most, both in k-means and RF-codebook (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}. This means that the features obtained
+is the one that gets misclassified the most, both in K-means and RF codebooks (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This means that the features obtained
 from this class do not guarantee very discriminative splits, hence the first splits in the trees will prioritize features taken from other classes.
-- 
cgit v1.2.3
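
As an illustrative aside (not part of the patch above): a minimal sketch of the two codebook pipelines the conclusion compares, written against scikit-learn. The descriptors, codebook size, and forest parameters below are stand-ins, and `RandomTreesEmbedding` only approximates the paper's RF codebook; the commit does not show the actual implementation.

```python
# Minimal sketch, assuming scikit-learn; all data and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(10_000, 128))  # stand-in for N local descriptors, D = 128
K = 256                                       # codebook size (number of visual words)

# K-means codebook: assigning a descriptor to a visual word compares it against
# all K centroids in D dimensions, hence the O(DNK) cost cited in the paper.
kmeans = KMeans(n_clusters=K, n_init=1, random_state=0).fit(descriptors)
km_hist = np.bincount(kmeans.predict(descriptors), minlength=K)  # bag-of-words histogram

# RF codebook (approximated here by RandomTreesEmbedding): each descriptor follows
# a root-to-leaf path of single-feature tests, so assignment costs O(log K) per
# tree instead of K full distance computations.
forest = RandomTreesEmbedding(n_estimators=10, max_depth=8, random_state=0).fit(descriptors)
rf_hist = np.asarray(forest.transform(descriptors).sum(axis=0)).ravel()  # leaf-occupancy histogram
```

Either histogram would then feed the downstream RF classifier; the cheap tree-based assignment is what makes the RF codebook's lower training and testing time in the conclusion plausible.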