author     nunzip <np.scarh@gmail.com>    2019-02-15 16:38:22 +0000
committer  nunzip <np.scarh@gmail.com>    2019-02-15 16:38:22 +0000
commit     da7d061e29bd62de42f9b7e8e7cc8e3a24e9240f (patch)
tree       0e62f6e89270cd3cc255567b137f75d2a0034acf
parent     8fd410bcadd8b3fa3cb0896784b1b3beac542d01 (diff)
Grammar Adjustments
-rw-r--r--  report/paper.md  12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 81800cb..e5f3ee7 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -10,7 +10,7 @@ Both training and testing use 15 randomly selected images from the 10 available
## Vocabulary size
-The number of clusters or the number of centroids determines the vocabulary size when creating the codebook with the K-means the method. Each descriptor is mapped to the nearest centroid, and each descriptor belonging to that cluster is mapped to the same *visual word*. This allows similar descriptors to be mapped to the same word, allowing for comparison through bag-of-words techniques.
+The number of clusters (centroids) determines the vocabulary size when creating the codebook with the K-means method. Each descriptor is mapped to its nearest centroid, and all descriptors belonging to that cluster are mapped to the same *visual word*. Similar descriptors are therefore mapped to the same word, enabling comparison through bag-of-words techniques.
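As an illustration of this mapping, a minimal sketch follows using scikit-learn's `KMeans`; the array names, the placeholder data and the vocabulary size of 100 are assumptions for illustration only and do not come from the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder descriptors: N SIFT-like 128-dimensional vectors pooled from the training images.
train_descriptors = np.random.rand(5000, 128).astype(np.float32)

vocabulary_size = 100  # number of centroids = number of visual words
codebook = KMeans(n_clusters=vocabulary_size, n_init=10, random_state=0).fit(train_descriptors)

# Each descriptor of an image is assigned to its nearest centroid (its visual word).
image_descriptors = np.random.rand(300, 128).astype(np.float32)
words = codebook.predict(image_descriptors)

# Counting word occurrences gives the bag-of-words histogram used for comparison.
histogram = np.bincount(words, minlength=vocabulary_size)
```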
## Bag-of-words histogram quantisation of descriptor vectors
@@ -71,7 +71,7 @@ This parameter also affects correlation between trees. We expect in fact trees t
\end{center}
\end{figure}
-Changing the randomness parameter had no significant effect on execution time. This may be attributed to increased required tree depth to purify the training set.
+Changing the randomness parameter had no significant effect on execution time. This may be attributed to the increased tree depth required to purify the training set.
Effects of vocabulary size on accuracy and time performance are shown in section I, figure \ref{fig:km_vocsize}. Time increases linearly with vocabulary size. The optimal number of cluster centers was found to be around 100, giving a good tradeoff between time and accuracy performance. As shown in figure \ref{fig:km_vocsize}, the classification error does not in fact plateau completely, despite a significant decrease in gradient.
@@ -101,7 +101,7 @@ Figure \ref{fig:km_cm} shows a confusion matrix for RF Classification on K-means
# RF codebook
-An alternative to codebook creation via K-means involves using an ensemble of totally random trees. We code each decriptor according to which leaf of each tree in the ensemble it is sorted. This effectively performs an unsupervised quantization of our descriptors. The vocabulary size is determined by the number of leaves in each random tree multiplied by the ensemble size. From comparing execution times of K-means in figure \ref{fig:km_vocsize} and the RF codebook in \ref{fig:p3_voc} we observe considerable speed gains from utilising the RF codebook. This may be attributed to the reduced complexity of RF Codebook creation,
+An alternative to codebook creation via K-means involves using an ensemble of totally random trees. We code each descriptor according to the leaf of each tree in the ensemble into which it is sorted. This effectively performs an unsupervised quantisation of our descriptors. The vocabulary size is determined by the number of leaves in each random tree multiplied by the ensemble size. From comparing execution times of K-means in figure \ref{fig:km_vocsize} and the RF codebook in figure \ref{fig:p3_voc} we observe considerable speed gains from utilising the RF codebook. This may be attributed to the reduced complexity of RF codebook creation,
which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mapping, given a created vocabulary, is also quicker than for K-means: $O(\log K)$ (assuming a balanced tree) vs $O(KD)$.
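A minimal sketch of this coding step is given below, assuming scikit-learn's `RandomTreesEmbedding` (an ensemble of totally random trees) as a stand-in for the forest used in the paper; the parameter values and placeholder data are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# Placeholder training descriptors, as in the K-means case.
train_descriptors = np.random.rand(5000, 128).astype(np.float32)

# Totally random trees: split features and thresholds are chosen at random, no labels are used.
rf_codebook = RandomTreesEmbedding(n_estimators=10, max_leaf_nodes=26,
                                   random_state=0).fit(train_descriptors)

# Each descriptor is coded by the leaf it reaches in every tree, so the vocabulary
# size is (leaves per tree) x (ensemble size).
image_descriptors = np.random.rand(300, 128).astype(np.float32)
codes = rf_codebook.transform(image_descriptors)  # sparse (n_descriptors, total_leaves) indicator matrix

# Summing the leaf indicators over an image gives its bag-of-words histogram.
histogram = np.asarray(codes.sum(axis=0)).ravel()
```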
The effect of vocabulary size on classification accuracy can be observed both in figure \ref{fig:p3_voc}, in which we independently vary the number of leaves and the ensemble size, and in figure \ref{fig:p3_colormap}, in which both parameters are varied simultaneously. These two parameters make classification accuracy plateau for *leaves*$>80$ and *estimators*$>100$. The peaks of 82% accuracy visible on the heatmap in figure \ref{fig:p3_colormap} are highly dependent on the seed and indicate the range of *good* hyperparameters.
@@ -126,7 +126,7 @@ Similarly to K-means codebook, we find that for the RF codebook the optimal tree
\end{center}
\end{figure}
-Varying the randomness parameter of the RF classifier (as in figure \ref{fig:kmeanrandom}) when using a RF codebook gives similar results to using the K-Means codebook.
+Varying the randomness parameter of the RF classifier (as seen in figure \ref{fig:kmeanrandom}) when using an RF codebook gives similar results to using the K-means codebook.
Figure \ref{fig:p3_cm} shows the confusion matrix for results with Codebook Forest Size=256, Classifier Forest Size=100, Classifier Depth=5 (examples of success and failure in figure \ref{fig:p3_succ}). The classification accuracy for this case is 79%, with the top-performing class being `windsor_chair`. In our tests, we observed the poorest performance with the `water_lilly` class. The per-class accuracy of classification with the RF codebook is similar to that of K-means coded data, but we observe a significant speedup in training performance when building the RF tree-based vocabulary.
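For reference, a confusion matrix and the per-class accuracies of this kind can be computed along the following lines; this is a generic sketch using scikit-learn's `confusion_matrix`, with placeholder labels rather than the paper's actual predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder ground-truth and predicted labels for the test images.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

cm = confusion_matrix(y_true, y_pred)

# Per-class accuracy is the diagonal of the row-normalised confusion matrix;
# overall accuracy is the trace divided by the total number of test samples.
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
overall_accuracy = cm.trace() / cm.sum()
```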
@@ -140,13 +140,13 @@ Figure \ref{fig:p3_cm} shows the confusion matrix for results with Codebook Fore
# Comparison of methods and conclusions
-Overall we observe marginally higher accuracy when using a K-means codebook compared to RF codebook at the expense of a higher training execution time. Testing time is similar in both methods, with RF-codebooks being slightly faster as explained in section III.
+Overall, we observe marginally higher accuracy when using a K-means codebook compared to an RF codebook, at the expense of a higher training execution time. Testing time is similar for both methods, with RF codebooks being slightly faster as explained in section III.
As discussed in section I, due to the initialization process for optimal centroid placement, K-means can be less preferable for large
descriptor counts (and in the absence of methods for dimensionality reduction).
In many applications the increase in training time would not justify the small increase in classification performance.
-For the Caltech_101 dataset, a RF codebook seems to be the most suitable method to perform RF-classification.
+For the Caltech_101 dataset, an RF codebook seems to be the most suitable method to perform RF classification.
The `water_lilly` class is the most misclassified, for both the K-means and RF codebooks (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This indicates that the features obtained from this class do not provide very discriminative splits, resulting in the prioritisation of other features in the first nodes of the decision trees.