author    Vasil Zlatanov <v@skozl.com>  2019-02-12 20:12:36 +0000
committer Vasil Zlatanov <v@skozl.com>  2019-02-12 20:12:36 +0000
commit    5ebf5cafe3e6b5ab711ddb3b95299f04c0314333 (patch)
tree      11a85bb572a63f64c091ae968acbac7d29c747f7
parent    5465197675ff82c54b477cebff3be7beccfec560 (diff)
parent    4046ef55a352bdfaa238f0499a280f4844c705f0 (diff)
Fixes to intro
 report/paper.md | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index bbd7d73..af3f8d3 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -6,7 +6,7 @@ A common technique for codebook generation involves utilising K-means clustering
image descriptors. In this way, descriptors may be mapped to *visual* words which lend themselves to
binning, and therefore the creation of bag-of-words histograms for use in classification.
-In this courseworok 100-thousand random SIFT descriptors of the Caltech dataset are used to build the K-means visual vocabulary.
+In this coursework, 100,000 random SIFT descriptors from the Caltech_101 dataset are used to build the K-means visual vocabulary.
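+
+A minimal sketch of this pipeline, assuming scikit-learn and precomputed SIFT
+descriptors held in a NumPy array (the helper names here are illustrative, not
+from the coursework code):
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def build_vocabulary(descriptors, k=256, n_samples=100_000, seed=0):
+    """Cluster a random sample of SIFT descriptors into k visual words."""
+    rng = np.random.default_rng(seed)
+    idx = rng.choice(len(descriptors), size=min(n_samples, len(descriptors)),
+                     replace=False)
+    return KMeans(n_clusters=k, random_state=seed).fit(descriptors[idx])
+
+def bow_histogram(image_descriptors, vocabulary):
+    """Map one image's descriptors to visual words and bin them."""
+    words = vocabulary.predict(image_descriptors)
+    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
+    return hist / hist.sum()  # normalise so images of different sizes compare
+```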
## Vocabulary size
@@ -65,7 +65,7 @@ Changing the randomness parameter had no significant effect on execution time. T
In figure \ref{fig:2pt} an improvement in recognition accuracy of 1% can be observed
with the two-pixels test, which achieves better results than its axis-aligned counterpart. The two-pixels
-test however brings a slight deacrease in time performance which has been measured to be on average 3 seconds
+test, however, brings a slight decrease in time performance, which has been measured to be on average 1 second
more. This is due to the complexity added by the two-pixels test, which introduces one extra dimension to the computation.
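
The cost difference is easy to see from the form of the two weak learners; a
minimal sketch (illustrative helper names, assuming descriptors are flat NumPy
feature vectors):

```python
def axis_aligned_test(x, i, t):
    """Standard weak learner: threshold a single feature."""
    return x[i] > t

def two_pixel_test(x, i, j, t):
    """Two-pixels test: threshold the difference of two features,
    sampling one extra dimension (j) at every split."""
    return x[i] - x[j] > t
```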
\begin{figure}[H]
@@ -164,6 +164,19 @@ which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mappi
# Comparison of methods and conclusions
+Overall, K-means achieves slightly better accuracy than the RF-codebook at the expense of a higher execution time for training (and possibly testing).
+
+As discussed in section I, due to the initialisation process required for optimal centroid placement, K-means can be unsuitable for large
+descriptor sizes (in the absence of dimensionality-reduction methods),
+and in many cases the increase in training time would not justify the marginal increase in classification performance.
+
+For Caltech_101, the RF-codebook therefore seems the most suitable method for RF classification.
+
+For the particular dataset analysed here, the class *water_lilly*
+is the one misclassified most often, with both the K-means and RF codebooks (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This suggests that the features obtained
+from this class do not yield very discriminative splits, hence the first splits in the trees
+will prioritise features taken from other classes.
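+
+A minimal sketch of how this can be read off the confusion matrix, assuming
+scikit-learn (`class_names` and the helper name are illustrative):
+
+```python
+import numpy as np
+from sklearn.metrics import confusion_matrix
+
+def most_misclassified(y_true, y_pred, class_names):
+    """Return the class with the lowest per-class recall
+    (diagonal entry divided by its row sum)."""
+    cm = confusion_matrix(y_true, y_pred)
+    recall = np.diag(cm) / cm.sum(axis=1)
+    return class_names[int(np.argmin(recall))]
+```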
+
# References