authorVasil Zlatanov <v@skozl.com>2019-02-14 17:10:01 +0000
committerVasil Zlatanov <v@skozl.com>2019-02-14 17:10:01 +0000
commitda43c2140963a1369ba0565b5d434df8b88fe78e (patch)
treecba09946585546da03d95cd5bc11716d9dd28f3f
parentddb42abe861dc88215f28fc5ec7528b906f250b1 (diff)
downloade4-vision-da43c2140963a1369ba0565b5d434df8b88fe78e.tar.gz
e4-vision-da43c2140963a1369ba0565b5d434df8b88fe78e.tar.bz2
e4-vision-da43c2140963a1369ba0565b5d434df8b88fe78e.zip
Some small touches
-rw-r--r--report/paper.md12
1 file changed, 6 insertions, 6 deletions
diff --git a/report/paper.md b/report/paper.md
index 4d9afdf..dc7345a 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -128,9 +128,9 @@ Similarly to K-means codebook, we find that for the RF codebook the optimal tree
\end{center}
\end{figure}
-Varying the randomness parameter of the RF classifier (as in figure \ref{fig:kmeanrandom}) when using a RF codebook gives similar results as using the K-Means codebook.
+Varying the randomness parameter of the RF classifier (as in figure \ref{fig:kmeanrandom}) when using a RF codebook gives similar results to using the K-Means codebook.
-Figure \ref{fig:p3_cm} shows the confusion matrix for results with Codebook Forest Size=256, Classifier Forest Size=100, Trees Depth=5 (examples of success and failure in figure \ref{fig:p3_succ}). The classification accuracy for this case is 79%, with the top performing class being `windsor_chair`. In our tests, we observed poorest performance with the `water_lilly` class. The per class accuracy of classification with the RF codebook is similar to that of K-Means coded data, but we observe a significant speedup in training performance when building RF tree based vocabulary.
+Figure \ref{fig:p3_cm} shows the confusion matrix for results with Codebook Forest Size=256, Classifier Forest Size=100, Classifier Depth=5 (examples of success and failure in figure \ref{fig:p3_succ}). The classification accuracy for this case is 79%, with the top performing class being `windsor_chair`. In our tests, we observed the poorest performance with the `water_lilly` class. The per-class accuracy of classification with the RF codebook is similar to that of the K-Means coded data, but we observe a significant speedup in training performance when building the RF-based vocabulary.
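As a rough illustration of the classifier settings quoted above (Classifier Forest Size=100, Classifier Depth=5), a minimal scikit-learn sketch might look like the following; the bag-of-words histograms here are synthetic stand-ins (random Poisson counts over 256 bins, matching Codebook Forest Size=256), not real Caltech_101 features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical 256-bin bag-of-words histograms for a toy two-class problem;
# in the real pipeline these come from quantising image descriptors
# against the codebook.
X = rng.poisson(3.0, size=(200, 256)).astype(float)
y = rng.integers(0, 2, size=200)

# Forest size 100, tree depth 5, as in the reported configuration.
clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)  # one class label per histogram
```

A confusion matrix like figure \ref{fig:p3_cm} would then be computed from `pred` against the true labels.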
\begin{figure}
\begin{center}
@@ -142,15 +142,15 @@ Figure \ref{fig:p3_cm} shows the confusion matrix for results with Codebook Fore
# Comparison of methods and conclusions
-Overall we observe marginally higher accuracy when using K-means codebooks compared to RF codebook at the expense of a higher training execution time. Testing time is similar in both methods, with RF-codebooks being slightly faster as explained in section III.
+Overall we observe marginally higher accuracy when using a K-means codebook compared to an RF codebook, at the expense of a higher training execution time. Testing time is similar for both methods, with RF codebooks being slightly faster, as explained in section III.
As discussed in section I, due to the initialization process required for optimal centroid placement, K-means can be a poor choice for large
-descriptors' counts (and in absence of methods for dimensionality reduction).
-In many applications the increase in training time would not justify the minimum increase in classification performance.
+descriptor counts (in the absence of dimensionality-reduction methods).
+In many applications the increase in training time would not justify the small increase in classification performance.
For the Caltech_101 dataset, an RF codebook seems to be the most suitable method for RF classification.
-The `water_lilly` is the most misclassified class, both in k-means and RF codebook (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This indicates that the features obtained from the class do not provide for very discriminative splits, resulting in the prioritsation of other features in the first nodes of the decision trees.
+The `water_lilly` class is the most misclassified, for both the K-means and RF codebooks (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This indicates that the features obtained from this class do not produce very discriminative splits, so other features are prioritised in the first nodes of the decision trees.
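The two codebook constructions compared throughout can be sketched as follows, assuming synthetic 128-D descriptors as stand-ins for real local image descriptors; the K-means codebook maps each descriptor to its nearest cluster centre, while the RF codebook (here via scikit-learn's unsupervised `RandomTreesEmbedding`) treats each tree leaf as a visual word:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.default_rng(0)
# Synthetic stand-in for local image descriptors (e.g. 128-D SIFT-like);
# the real pipeline extracts these from Caltech_101 images.
descriptors = rng.normal(size=(500, 128))

# K-means codebook: each cluster centre is one visual word.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(descriptors)
km_words = kmeans.predict(descriptors)           # word index per descriptor
km_hist = np.bincount(km_words, minlength=32)    # bag-of-words histogram

# RF codebook: each leaf of the unsupervised forest is one visual word.
rf = RandomTreesEmbedding(n_estimators=8, max_depth=4,
                          random_state=0).fit(descriptors)
leaf_onehot = rf.transform(descriptors)          # sparse one-hot leaf encoding
rf_hist = np.asarray(leaf_onehot.sum(axis=0)).ravel()
```

Assigning a descriptor with the forest costs only a few comparisons per tree (depth-bounded), whereas K-means assignment computes distances to every centre, which is consistent with the training and testing speedups noted above.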
# References