author | nunzip <np.scarh@gmail.com> | 2019-02-12 20:04:20 +0000
---|---|---
committer | nunzip <np.scarh@gmail.com> | 2019-02-12 20:04:20 +0000
commit | c7c97e8ce1e6281c1c16213e1b4b7dfd7ba7d3a4 | (patch)
tree | 6659b39f58fac1684eee7e04ea22b36028fa00cb | /report
parent | 0f1da531317cfab7479fef0fed0166387f0fa13e | (diff)
Add comparison
Diffstat (limited to 'report')
-rw-r--r-- | report/paper.md | 17
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index fbac659..b4429f2 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -7,7 +7,7 @@
 image descriptors. In this way descriptors may be mapped to *visual* words, allowing
 binning and therefore the creation of bag-of-words histograms for use in
 classification. In this coursework 100,000 descriptors have been extracted
 through SIFT to build the visual vocabulary from the
-Caltech dataset.
+Caltech_101 dataset.
 
 ## Vocabulary size
@@ -66,7 +66,7 @@ Changing the randomness parameter had no significant effect on execution time. T
 In figure \ref{fig:2pt} it is possible to notice an improvement in recognition accuracy
 of 1% with the two-pixel test, achieving better results than the axis-aligned
 counterpart. The two-pixel
-test however brings a slight deacrease in time performance which has been measured to be on average 3 seconds
+test, however, brings a slight decrease in time performance, measured to be on average 1 second
 more. This is due to the complexity added by the two-pixel test, since it adds one
 dimension to the computation.
 
 \begin{figure}[H]
@@ -164,6 +164,19 @@ An alternative to codebook creation via K-means involves using an ensemble of to
 # Comparison of methods and conclusions
 
+Overall, K-means achieves slightly better accuracy than the RF-codebook at the expense of a higher execution time for training **(and testing???)**.
+
+As discussed in section I, due to the initialization process required for optimal centroid placement, K-means can prove unsuitable for large
+descriptor sizes (in the absence of methods for dimensionality reduction),
+and in many cases the increase in training time would not justify the minimal increase in classification performance.
+
+For Caltech_101, the RF-codebook seems to be the most suitable method for RF classification.
+
+It is observable that for the particular dataset we are analysing, the class *water_lilly*
+is the one misclassified the most, with both K-means and the RF-codebook (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This means that the features obtained
+from this class do not guarantee very discriminative splits, hence the first splits in the trees
+will prioritize features taken from other classes.
+
 # References
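
For context on the pipeline the first hunk describes (SIFT descriptors clustered by K-means into a visual vocabulary, then binned into bag-of-words histograms), a minimal sketch follows. It assumes pre-extracted 128-D SIFT descriptors; the function and parameter names (`build_vocabulary`, `vocab_size`, etc.) are illustrative, not taken from the repository.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, vocab_size=256, seed=0):
    # Cluster the pooled SIFT descriptors (shape: n_descriptors x 128)
    # into `vocab_size` centroids; each centroid is one visual word.
    return KMeans(n_clusters=vocab_size, n_init=10, random_state=seed).fit(descriptors)

def bow_histogram(image_descriptors, vocabulary):
    # Assign each descriptor of one image to its nearest visual word,
    # then bin the assignments into an L1-normalised histogram.
    words = vocabulary.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()
```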
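The two-pixel test mentioned in the second hunk is not defined in this diff; a common formulation thresholds the difference of two feature dimensions rather than a single dimension, and the sketch below assumes that form. `x` is a hypothetical (n_samples, n_features) array of descriptors reaching a split node.

```python
import numpy as np

def axis_aligned_split(x, d, t):
    # Standard weak learner: threshold a single feature dimension.
    return x[:, d] > t

def two_pixel_split(x, d1, d2, t):
    # Two-pixel test: threshold the difference of two dimensions.
    return x[:, d1] - x[:, d2] > t
```

Searching over pairs of dimensions enlarges the candidate-split space at each node, which is consistent with the roughly 1 second training overhead reported in the hunk above.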
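Similarly, the RF-codebook compared in the conclusions can be sketched as an ensemble of randomised trees whose leaves act as visual words, avoiding K-means' centroid initialization. The supervision scheme below (each descriptor labelled with the class of its source image) and all names are assumptions, not the repository's implementation.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def build_rf_codebook(descriptors, descriptor_labels, n_trees=10, max_depth=5):
    # Fit an ensemble of randomised trees directly on the descriptors;
    # unlike K-means, no iterative centroid refinement is required.
    forest = ExtraTreesClassifier(n_estimators=n_trees, max_depth=max_depth,
                                  random_state=0)
    forest.fit(descriptors, descriptor_labels)
    return forest

def rf_bow_histogram(image_descriptors, forest):
    # Each descriptor falls into one leaf per tree; the concatenated
    # leaf-occupancy counts form the image's bag-of-words histogram.
    leaves = forest.apply(image_descriptors)          # (n_desc, n_trees)
    hists = [np.bincount(leaves[:, t], minlength=est.tree_.node_count)
             for t, est in enumerate(forest.estimators_)]
    hist = np.concatenate(hists).astype(float)
    return hist / hist.sum()
```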