# Codebooks

## K-means codebook 

A common technique for codebook generation is to apply K-means clustering to a sample of the
image descriptors. Descriptors are thereby mapped to *visual words*, which lend themselves to
binning and hence to the creation of bag-of-words histograms for classification.

In this coursework, 100,000 randomly sampled SIFT descriptors from the Caltech_101 dataset are used to build the K-means visual vocabulary.
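
As a minimal sketch of this step (using scikit-learn; the variable names and the stand-in descriptor array are our own illustration, not necessarily the exact pipeline used), the vocabulary can be built as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for the pool of 128-dimensional SIFT descriptors extracted from the training images
descriptors = rng.normal(size=(200_000, 128)).astype(np.float32)

# Randomly subsample 100,000 descriptors to keep clustering tractable
sample_idx = rng.choice(len(descriptors), size=100_000, replace=False)
sample = descriptors[sample_idx]

# Single initialisation (n_init=1): repeated restarts gave no accuracy gain in our tests
kmeans = KMeans(n_clusters=100, n_init=1, random_state=0).fit(sample)
codebook = kmeans.cluster_centers_   # (100, 128) visual vocabulary
```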

## Vocabulary size 

The number of clusters (centroids) determines the vocabulary size when creating the codebook with the K-means method. Each descriptor is mapped to its nearest centroid, so all descriptors belonging to a cluster are mapped to the same *visual word*. Similar descriptors are therefore assigned the same word, allowing images to be compared through bag-of-words techniques.
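
The nearest-centroid assignment can be written explicitly as below (a sketch; `codebook` denotes the array of centroids from the previous step, and the vectorised distance computation is for illustration only):

```python
import numpy as np

def assign_visual_words(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each descriptor to the index of its nearest centroid (its visual word)."""
    # Squared Euclidean distance from every descriptor to every centroid
    # (memory-hungry but clear; fine for small batches of descriptors)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```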

## Bag-of-words histogram quantisation of descriptor vectors

Example histograms for training and testing images are shown in figure \ref{fig:histo_tr}, computed with a vocabulary size of 100. Histograms of the same class have comparable magnitudes for their respective keywords, indicating that a similar number of descriptors mapped to each of the clusters. The effect of the vocabulary size (as determined by the number of K-means centroids) on classification accuracy is shown in figure \ref{fig:km_vocsize}. A small vocabulary tends to misrepresent the information contained in the different patches, resulting in poor classification accuracy. Conversely, a large vocabulary (many K-means centroids) may lead to overfitting. In our tests, we observe a plateau after a cluster count of 60 in figure \ref{fig:km_vocsize}.
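
Each image is then represented by the histogram of its visual-word assignments, as in the sketch below (`words` stands for the output of the nearest-centroid assignment above; the normalisation step is our own illustrative choice, not a detail stated in the experiments):

```python
import numpy as np

def bow_histogram(words: np.ndarray, vocab_size: int = 100) -> np.ndarray:
    """Bin an image's visual-word assignments into a bag-of-words histogram."""
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / max(hist.sum(), 1.0)   # normalise, since descriptor counts vary per image
```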

The time complexity of quantisation with a K-means codebook is $O(DNK)$, where $N$ is the number of entities to be clustered (descriptors), $D$ is the descriptor dimension and $K$ is the cluster count @cite[km-complexity]. As the computation time is high, our tests use a subsample of descriptors to compute the centroids (a random selection of 100,000 descriptors). An alternative we tried is applying PCA to the descriptor vectors to improve time performance. However, the descriptors here are relatively low-dimensional, so we opted not to use PCA for further training.
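
For reference, dimensionality reduction of this kind could be sketched as follows (stand-in data; the target dimension of 64 is an arbitrary illustrative choice, not a value used in our experiments):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100_000, 128)).astype(np.float32)  # stand-in for SIFT descriptors

# Project 128-dimensional descriptors onto their first 64 principal components
pca = PCA(n_components=64).fit(descriptors)
reduced = pca.transform(descriptors)   # (100000, 64); clustering these lowers the D factor in O(DNK)
```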

K-means converges to a local optimum and depends heavily on the initialisation of the centroids.
Initialising K-means is an expensive process based on sequential attempts at centroid placement,
and running multiple initialisations increases execution time roughly linearly with the number of restarts.
For the data analysed in this coursework, attempting centroid initialisation more than once did not
significantly improve accuracy and only increased execution time.
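
The effect of multiple restarts on run time can be checked directly, for instance via scikit-learn's `n_init` parameter (a sketch on stand-in data; the sizes and restart counts are illustrative):

```python
import time
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(20_000, 128)).astype(np.float32)  # stand-in for sampled descriptors

for n_init in (1, 5):
    start = time.perf_counter()
    KMeans(n_clusters=100, n_init=n_init, random_state=0).fit(data)
    # Fit time grows roughly linearly with the number of restarts
    print(f"n_init={n_init}: {time.perf_counter() - start:.1f}s")
```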

\begin{figure}[H]
\begin{center}
\includegraphics[width=12em]{fig/trainhist.pdf}
\includegraphics[width=12em]{fig/testhist.pdf}
\caption{Bag-of-words histograms; Training left, Testing right}
\label{fig:histo_tr}
\end{center}
\end{figure}

# RF classifier 

## Hyperparameters tuning

Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of trees
for a K-means codebook with 100 cluster centres.

\begin{figure}[H]
\begin{center}
\includegraphics[width=12em]{fig/error_depth_kmean100.pdf}
\includegraphics[width=12em]{fig/trees_kmean.pdf}
\caption{Classification error varying tree depth (left) and number of trees (right)}
\label{fig:km-tree-param}
\end{center}
\end{figure}

Random forests select a random subset of features on which to apply a weak learner (such as an axis-aligned split) and then choose the best of the sampled features to split on, based on a given criterion (our results use the *Gini index*). The fewer features compared at each split, the quicker the trees are built and the more random they are. The randomness parameter can therefore be taken to be the number of features considered when making splits. We evaluate accuracy for different randomness values when using a K-means vocabulary in figure \ref{fig:kmeanrandom}. The results in the figure use a forest size of 100, as we found that beyond this estimator count (with the default $\sqrt{n}$ features per split) the performance gains tend to plateau.
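
In scikit-learn terms the randomness parameter corresponds to `max_features`; a minimal sketch of such a sweep is shown below (the stand-in histograms, labels and feature counts are our own illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.random((150, 100)), rng.integers(0, 10, 150)  # stand-in BoW histograms / labels
X_test, y_test = rng.random((150, 100)), rng.integers(0, 10, 150)

# max_features = number of features sampled at each split (the "randomness" parameter)
for m in (1, 5, 10, "sqrt", None):
    rf = RandomForestClassifier(n_estimators=100, max_features=m, criterion="gini",
                                random_state=0).fit(X_train, y_train)
    print(f"max_features={m}: accuracy={rf.score(X_test, y_test):.3f}")
```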

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/new_kmean_random.pdf}
\caption{Effect of the randomness parameter on classification accuracy (K-means codebook)}
\label{fig:kmeanrandom}
\end{center}
\end{figure}

Changing the randomness parameter had no significant effect on execution time. This can partly be explained by the increased tree depth required to purify the training set when fewer features are considered at each split.

## Weak Learner comparison

In figure \ref{fig:2pt} an improvement in recognition accuracy of about 1% can be observed for the
two-pixel test, which achieves better results than its axis-aligned counterpart. The two-pixel
test, however, brings a slight decrease in time performance, measured to be on average one second
more. This is due to the complexity added by the two-pixel test, since it adds one dimension to the computation.
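
One common way to realise a two-pixel (two-point) test with an off-the-shelf axis-aligned forest is to augment each histogram with differences of randomly chosen bin pairs, so that an axis-aligned split on an augmented feature acts as a two-point comparison. The sketch below illustrates that idea on stand-in data; it is not necessarily the exact implementation used here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 100))           # stand-in BoW histograms
y = rng.integers(0, 10, 300)         # stand-in class labels

# Two-point test features: difference of two randomly selected histogram bins
n_pairs = 200
pairs = rng.integers(0, X.shape[1], size=(n_pairs, 2))
X_two_point = X[:, pairs[:, 0]] - X[:, pairs[:, 1]]

# An axis-aligned threshold on a difference feature is equivalent to a two-point comparison
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_two_point, y)
```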

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/2pixels_kmean.pdf}
\caption{K-means classification accuracy changing the type of weak learners}
\label{fig:2pt}
\end{center}
\end{figure}

## Impact of RF vocabulary size on classification accuracy

\begin{figure}[H]
\begin{center}
\includegraphics[width=12em]{fig/kmeans_vocsize.pdf}
\includegraphics[width=12em]{fig/time_kmeans.pdf}
\caption{Effect of vocabulary size; classification error left, time right}
\label{fig:km_vocsize}
\end{center}
\end{figure}

## Confusion matrix for case XXX, with examples of failure and success 

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/e100k256d5_cm.pdf}
\caption{K-means confusion matrix (e100k256d5)}
\label{fig:km_cm}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=10em]{fig/success_km.pdf}
\includegraphics[width=10em]{fig/fail_km.pdf}
\caption{K-means: Success on the left; Failure on the right}
\label{fig:km_succ}
\end{center}
\end{figure}

# RF codebook

An alternative to codebook creation via K-means is to use an ensemble of totally random trees. We encode each descriptor according to the leaf of each tree in the ensemble into which it is sorted. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and the ensemble size. Comparing the execution times of K-means in figure \ref{fig:km_vocsize} and of the RF codebook in figure \ref{fig:p3_voc}, we observe considerable speed gains from using the RF codebook. This may be attributed to the reduced complexity of RF codebook creation,
which is $O(\sqrt{D} N \log K)$ compared to $O(DNK)$ for K-means. Codebook mapping given a created vocabulary is also quicker than for K-means: $O(\log K)$ (assuming a balanced tree) vs $O(KD)$.
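
scikit-learn's `RandomTreesEmbedding` provides this kind of totally random, leaf-index encoding; the sketch below (with stand-in descriptors and illustrative tree and depth settings) shows how an image's descriptors could be pooled into an RF-codebook histogram, without claiming it matches our exact implementation:

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(50_000, 128)).astype(np.float32)  # stand-in pooled SIFT descriptors
image_descriptors = rng.normal(size=(400, 128)).astype(np.float32)     # stand-in descriptors of one image

# Totally random forest: vocabulary size = (leaves per tree) x (number of trees)
embedder = RandomTreesEmbedding(n_estimators=100, max_depth=5, sparse_output=True,
                                random_state=0).fit(train_descriptors)

# Each descriptor is coded by the leaf it reaches in every tree;
# summing these sparse codes over an image's descriptors gives its histogram
leaf_codes = embedder.transform(image_descriptors)       # sparse (400, n_leaves) one-hot-per-tree matrix
histogram = np.asarray(leaf_codes.sum(axis=0)).ravel()   # bag-of-words over RF-codebook leaves
```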

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/256t1_e200D5_cm.pdf}
\caption{Part 3 (RF codebook) confusion matrix (e100k256d5)}
\label{fig:p3_cm}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=10em]{fig/success_3.pdf}
\includegraphics[width=10em]{fig/fail_3.pdf}
\caption{Part 3: Success on the left; Failure on the right}
\label{fig:p3_succ}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=12em]{fig/error_depth_p3.pdf}
\includegraphics[width=12em]{fig/trees_p3.pdf}
\caption{Classification error varying tree depth (left) and number of trees (right)}
\label{fig:p3_trees}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/p3_rand.pdf}
\caption{Effect of randomness parameter on classification error}
\label{fig:p3_rand}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=12em]{fig/p3_vocsize.pdf}
\includegraphics[width=12em]{fig/p3_time.pdf}
\caption{Effect of vocabulary size; classification error left, time right}
\label{fig:p3_voc}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/p3_colormap.pdf}
\caption{Varying leaves and estimators: effect on accuracy}
\label{fig:p3_colormap}
\end{center}
\end{figure}

# Comparison of methods and conclusions

Overall, K-means achieves slightly better accuracy than the RF codebook, at the expense of higher execution time for both training and testing (codebook mapping with K-means is also slower, as noted in the previous section).

As discussed in section I, because of the initialisation process required to place the centroids well, K-means can be
unattractive for large descriptor dimensions (in the absence of dimensionality-reduction methods),
and in many cases the increase in training time would not justify the marginal increase in classification performance.

For Caltech_101, the RF codebook therefore seems to be the more suitable method for RF classification.

For the particular dataset we are analysing, the class *water_lilly*
is the one misclassified the most, with both the K-means and the RF codebook (refer to figures \ref{fig:km_cm} and \ref{fig:p3_cm}). This means that the features obtained
from this class do not yield very discriminative splits, hence the first splits in the trees
will prioritise features taken from other classes.

# References