# Question 1, Eigenfaces 

## Partition and Standard PCA

The data is partitioned so that each class contributes an equal number of training samples; since every class contains the same number of images,
each training vector space is generated from
the same number of elements. The test data is taken from the remaining samples.
We use 70% of the data for training, as 80% and 90% splits give misleadingly large and highly seed-dependent accuracies.
This split also allows us to observe more than one
success and failure case for each class when classifying the
test data.

After partitioning the data into training and testing sets,
PCA is applied. The covariance matrix $S$, of dimension
$2576\times2576$ (features $\times$ features), has 2576 eigenvalues
and eigenvectors. However, the number of non-zero eigenvalues (and
corresponding eigenvectors) is only equal to the number of
training samples minus one, $N-1$. This can be observed in figure \ref{fig:logeig}
as a sudden drop in the eigenvalues after the 363rd.
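
A minimal sketch of this step follows; the variable names are illustrative (not those of our implementation), and the training images are assumed to be stacked as columns of a $D\times N$ NumPy array:

```python
import numpy as np

def pca_eigendecomposition(X):
    """X: D x N matrix whose columns are the flattened training faces."""
    mean_face = X.mean(axis=1, keepdims=True)
    A = X - mean_face                      # centred data, D x N
    S = (A @ A.T) / A.shape[1]             # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # returned in ascending order
    return eigvals[::-1], eigvecs[:, ::-1], mean_face

# Only N-1 of the returned eigenvalues are numerically non-zero,
# which produces the sudden drop visible in the log plot.
```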

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/eigenvalues.pdf}
\caption{Log plot of all eigenvalues}
\label{fig:logeig}
\end{center}
\end{figure}

The mean image is calculated by averaging the features of the
training data. Changing the randomisation seed gives
similar values, since the majority of the training
faces used for averaging are the same. Two mean faces
obtained with different split seeds can be seen in 
figure \ref{fig:mean_face}.

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/mean_face.pdf}
\includegraphics[width=5em]{fig/mean2.pdf}
\caption{Mean Faces}
\label{fig:mean_face}
\end{center}
\end{figure}

To perform face recognition, the best $M$ eigenvectors, associated with the
largest eigenvalues (carrying the largest data variance, fig. \ref{fig:eigvariance}), are chosen. We found that the optimal value of $M$
when performing PCA with NN classification is $M=99$, with an accuracy of 57%. For larger $M$
the accuracy plateaus.

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/accuracy.pdf}
\caption{NN Recognition Accuracy varying M}
\label{fig:accuracy}
\end{center}
\end{figure}

## Low dimensional computation of eigenspace

Performing the low-dimensional computation of the
eigenspace for PCA, we obtain the same accuracy results
as with the high-dimensional computation used previously. A
comparison between the eigenvalues obtained with the
two computation techniques shows that the difference
is very small (due to floating-point rounding in
`numpy.linalg.eigh` when computing the eigenvalues
and eigenvectors of the matrices $A^{T}A$ ($N\times N$) and $AA^{T}$
($D\times D$)). The ten largest eigenvalues obtained with each method
are shown in the Appendix, table \ref{tab:eigen}.

It can be proven that the eigenvalues obtained are mathematically the same [@lecture-notes],
and that the eigenvectors are related by $\boldsymbol{u}_i = A\boldsymbol{v}_i$ (*Proof: Appendix A*).

Experimentally there is no consequential loss of information when calculating the eigenvectors
for PCA with the low-dimensional method. Its main advantages are reduced computation time
(the two methods take on average 3.7s and 0.11s respectively, table \ref{tab:time}) and reduced computational
complexity, since with the high-dimensional method the eigenvectors are extracted from a significantly
bigger matrix.

The drawback of the low-dimensional technique is the extra left-multiplication by the training data needed to recover the eigenfaces, but it is almost always much quicker than performing the eigendecomposition directly for a large number of features.
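
A sketch of the low-dimensional computation, under the same illustrative conventions as above ($A$ is the centred $D\times N$ training matrix):

```python
import numpy as np

def low_dimensional_pca(A):
    """Low-dimensional PCA: eigendecompose the N x N matrix A^T A instead of
    the D x D covariance matrix. A is the centred D x N training matrix."""
    N = A.shape[1]
    eigvals, V = np.linalg.eigh((A.T @ A) / N)     # small N x N problem
    eigvals, V = eigvals[::-1], V[:, ::-1]         # sort descending
    eigvals, V = eigvals[:N - 1], V[:, :N - 1]     # keep the N-1 non-zero ones
    U = A @ V                                      # u_i = A v_i
    U /= np.linalg.norm(U, axis=0)                 # re-normalise the eigenfaces
    return eigvals, U
```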

# Question 1, Application of eigenfaces

## Image Reconstruction

Face reconstruction is performed with the faster low-dimensional PCA computation.
The quality of the reconstruction depends on the number of eigenvectors used.
Two faces, from classes 21 and 2 respectively, are reconstructed in
figures \ref{fig:face160rec} and \ref{fig:face10rec} for $M=10, 100, 200, 300$. The rightmost picture in each figure is the original face.

![Reconstructed Face C21\label{fig:face160rec}](fig/face160rec.pdf)

![Reconstructed Face C2\label{fig:face10rec}](fig/face10rec.pdf)

The improvement in reconstruction for $M=200$ and $M=300$ is marginal, so choosing $M$ larger than 100 gives diminishing returns.
This is evident from the variance ratio of the principal components: the contribution of components beyond the 100th is very low.
With $M=100$ we are able to reconstruct effectively 97% of the information in our initial training data.
Refer to figure \ref{fig:eigvariance} for the data variance associated with each of the $M$
eigenvalues.
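
A reconstruction sketch along these lines, again with illustrative names (`U` holds the unit-norm eigenfaces as columns, `mean_face` the training mean):

```python
import numpy as np

def reconstruct(face, mean_face, U, M):
    """Reconstruct a face from its projection onto the first M eigenfaces.
    face, mean_face: D x 1 column vectors; U: D x (N-1) matrix of unit eigenfaces."""
    phi = face - mean_face
    weights = U[:, :M].T @ phi             # project onto the top-M eigenfaces
    return mean_face + U[:, :M] @ weights  # reconstruction in image space
```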

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/variance.pdf}
\caption{Data variance carried by each of $M$ eigenvalues}
\label{fig:eigvariance}
\end{center}
\end{figure}

## Classification

The classification methods analysed for face recognition are Nearest Neighbor (NN) and
an alternative method utilising per-class reconstruction error.

Nearest Neighbor projects the test data onto the generated subspace and finds the closest 
training sample to the projected test image, assigning the same class as that of the nearest neighbor.
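
A minimal sketch of this classifier, reusing the illustrative names from the earlier snippets (`U`, `mean_face`); it is not necessarily the exact implementation used for the reported results:

```python
import numpy as np

def nn_classify(test_faces, train_faces, train_labels, mean_face, U, M):
    """1-NN classification in the M-dimensional eigenspace.
    test_faces, train_faces: D x N matrices; train_labels: NumPy array of labels."""
    W = U[:, :M]
    train_proj = W.T @ (train_faces - mean_face)    # M x N_train
    test_proj = W.T @ (test_faces - mean_face)      # M x N_test
    # Euclidean distances between every test and training projection
    dists = np.linalg.norm(test_proj[:, :, None] - train_proj[:, None, :], axis=0)
    return train_labels[np.argmin(dists, axis=1)]   # label of the nearest neighbour
```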

Recognition accuracy of NN classification can be observed in figure \ref{fig:accuracy}.

A confusion matrix showing success and failure cases for Nearest Neighbor classification when using PCA can be observed in figure \ref{fig:cm}:

\begin{figure}
\begin{center}
\includegraphics[width=15em]{fig/pcacm.pdf}
\caption{Confusion Matrix PCA and NN, M=99}
\label{fig:cm}
\end{center}
\end{figure}

Two examples of the outcome of Nearest Neighbor classification are presented in figures \ref{fig:nn_fail} and \ref{fig:nn_succ}:
respectively an example of classification failure and one of successful
classification.

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/face2.pdf}
\includegraphics[width=5em]{fig/face5.pdf}
\caption{Failure case for NN. Test face left. NN right}
\label{fig:nn_fail}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/success1.pdf}
\includegraphics[width=5em]{fig/success1t.pdf}
\caption{Success case for NN. Test face left. NN right}
\label{fig:nn_succ}
\end{center}
\end{figure}

It is also possible to perform NN classification with majority voting:
recognition is then based on the K closest neighbors of the projected
test image. The configuration that showed the highest recognition accuracy for PCA used
K=1, as visible in figure \ref{fig:k-diff}.

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/kneighbors_diffk.pdf}
\caption{NN recognition accuracy varying K. Split: 80-20}
\label{fig:k-diff}
\end{center}
\end{figure}

The alternative method is somewhat similar to LDA in that it uses class-specific information: a different
subspace is generated for each class. These subspaces are then used to reconstruct
the test image, and the class whose subspace yields the minimum reconstruction
error is assigned.
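
A sketch of this classifier, assuming the per-class eigenface matrices and means have already been computed (names are illustrative):

```python
import numpy as np

def alternative_classify(test_faces, class_subspaces, class_means, classes):
    """Assign each test face the class whose subspace reconstructs it with
    minimum error. class_subspaces[c]: D x M eigenface matrix for class c,
    class_means[c]: D x 1 mean face of class c."""
    labels = []
    for x in test_faces.T:                        # iterate over test faces
        x = x[:, None]
        errors = []
        for c in classes:
            U, mu = class_subspaces[c], class_means[c]
            recon = mu + U @ (U.T @ (x - mu))     # reconstruct in the class subspace
            errors.append(np.linalg.norm(x - recon))
        labels.append(classes[int(np.argmin(errors))])
    return np.array(labels)
```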

The alternative method shows overall better performance (see figure \ref{fig:altacc}), with a peak accuracy of 69%
for M=5. The maximum number of non-zero eigenvectors that can be used is in this case
the number of training samples per class minus one, since the same number of eigenvectors
is used for each generated class subspace.
A major drawback is the increase in execution time (1.1s on average, table \ref{tab:time}).

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/alternative_accuracy.pdf}
\caption{Accuracy of Alternative Method varying M}
\label{fig:altacc}
\end{center}
\end{figure}

A confusion matrix showing success and failure cases for alternative method classification
can be observed in figure \ref{fig:cm-alt}.

\begin{figure}
\begin{center}
\includegraphics[width=15em]{fig/altcm.pdf}
\caption{Confusion Matrix for alternative method, M=5}
\label{fig:cm-alt}
\end{center}
\end{figure}

Similarly to the NN case, we present two cases, respectively failure (figure \ref{fig:altfail}) and success (figure \ref{fig:altsucc}).

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/FO.JPG}
\includegraphics[width=5em]{fig/FR.JPG}
\includegraphics[width=5em]{fig/FL.JPG}
\caption{Alternative method failure. Respectively test image, reconstructed image, class assigned}
\label{fig:altfail}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/SO.JPG}
\includegraphics[width=5em]{fig/SR.JPG}
\includegraphics[width=5em]{fig/SL.JPG}
\caption{Alternative method success. Respectively test image, reconstructed image, class assigned}
\label{fig:altsucc}
\end{center}
\end{figure}

From the failure and success cases analysed, it is noticeable that the factors that
affect recognition the most are glasses, hair, sex and brightness of the picture.

# Question 2, Generative and Discriminative Subspace Learning

One way to combine generative and discriminative learning is to perform LDA on a generative subspace created by PCA. In order to
maximize class separation and minimize the distance between elements of the same class, it is necessary to
maximize the generalized Rayleigh quotient $J(W) = \frac{W^{T}S_{B}W}{W^{T}S_{W}W}$.

Here $S_{B}$ is the between-class scatter matrix, $S_{W}$
the within-class scatter matrix and $W$ the set of projection vectors; $\mu_{i}$
denotes the mean of class $i$.

It can be proven that, when $S_{W}$ is non-singular, the optimum for the two-class case is [@lecture-notes]: $W_{\textrm{opt}} = \arg\max_{W}\frac{|W^{T}S_{B}W|}{|W^{T}S_{W}W|} = S_{W}^{-1}(\mu_{1} - \mu_{2})$.

However, $S_{W}$ is often singular, since the rank of $S_{W}$
is at most $N-c$ and usually $N$ is smaller than $D$. In this case it is possible to use
Fisherfaces. The optimal solution to this problem lies in
$W_{\textrm{opt}}^{T} = W_{\textrm{lda}}^{T}W_{\textrm{pca}}^{T}$,

where $W_{\textrm{pca}}$ is chosen to maximize the determinant of the total scatter matrix
of the projected samples, $W_{\textrm{pca}} = \arg\max_{W}|W^{T}S_{T}W|$, and $W_{\textrm{lda}}
= \arg\max_{W}\frac{|W^{T}W_{\textrm{pca}}^{T}S_{B}W_{\textrm{pca}}W|}{|W^{T}W_{\textrm{pca}}^{T}S_{W}W_{\textrm{pca}}W|}$.

Performing PCA followed by LDA carries a loss of discriminative information. This problem can
be avoided through a linear combination of the two [@pca-lda]. In what follows we consider a
one-dimensional subspace spanned by a unit vector $e$. The cost functions associated with PCA and LDA (with $\epsilon$ being a very
small number) are $H_{\textrm{pca}}(e)=\langle e, Se\rangle$ and $H_{\textrm{lda}}(e)=\frac{\langle e, S_{B}e\rangle}
{\langle e,(S_{W} + \epsilon I)e\rangle}=
\frac{\langle e, S_{B}e\rangle}{\langle e,S_{W}e\rangle + \epsilon}$.

Through linear interpolation, for $0\leq t \leq 1$: $F_{t}(e)=\frac{1-t}{2}
H_{\textrm{pca}}(e)+\frac{t}{2}H_{\textrm{lda}}(e)=
\frac{1-t}{2}\langle e,Se\rangle+\frac{t}{2}\frac{\langle e, S_{B}e\rangle}{\langle e,S_{W}e\rangle + \epsilon}$.

The objective is to find a unit vector $e_{t}$ in $\mathbf{R}^{n}$
(with $n$ being the number of samples) such that $e_{t}=\arg\max_{e} F_{t}(e)$.

We can formulate the Lagrangian optimization problem under the constraint $\|e\|^{2}=1$ as $L(e,\lambda)=F_{t}(e)+\lambda(\|e\|^{2}-1)$.

To find the stationary points we take the derivative with respect to $e$ and set it to zero: $\frac
{\partial L(e,\lambda)}{\partial e}=\frac{\partial F_{t}(e)}{\partial e}
+\frac{\partial\lambda(\|e\|^{2}-1)}{\partial e}=0$.

Since $\nabla F_{t}(e)= (1-t)Se+\frac{t}{\langle e,S_{W}e\rangle
+\epsilon}S_{B}e-\frac{t\langle e,S_{B}e\rangle}{(\langle e,S_{W}e\rangle+\epsilon)^{2}}S_{W}e$, our goal is to
find $e$ such that $\nabla F_{t}(e)=\lambda e$, i.e. to make $\nabla F_{t}(e)$
parallel to $e$. Since $S$, $S_{B}$ and $S_{W}$ are positive semi-definite, $\langle\nabla F_{t}(e),e\rangle \geq 0$,
which means that $\lambda$ must be greater than zero. Normalizing both sides we
obtain $\frac{\nabla F_{t}(e)}{\|\nabla F_{t}(e)\|}=e$.

We can then define the map $T(e) = \frac{\alpha e+ \nabla F_{t}(e)}{\|\alpha e+\nabla F_{t}(e)\|}$, where a positive multiple of $e$, $\alpha e$, is added to prevent $\lambda$ from vanishing.

It is then possible to solve the optimization problem with an iterative, gradient-based fixed-point procedure, updating $e_{n+1}=T(e_{n})$
at each step until convergence.
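
A sketch of this iteration, assuming the scatter matrices have already been computed in a suitable basis; the parameter values and names are illustrative:

```python
import numpy as np

def combined_direction(S, Sb, Sw, t, alpha=1.0, eps=1e-6, iters=1000):
    """Fixed-point iteration e_{n+1} = T(e_n) for the interpolated cost F_t."""
    rng = np.random.default_rng(0)
    e = rng.standard_normal(S.shape[0])
    e /= np.linalg.norm(e)
    for _ in range(iters):
        den = e @ Sw @ e + eps
        grad = ((1 - t) * (S @ e)
                + (t / den) * (Sb @ e)
                - (t * (e @ Sb @ e) / den**2) * (Sw @ e))
        e_next = alpha * e + grad
        e_next /= np.linalg.norm(e_next)
        if np.allclose(e_next, e, atol=1e-9):   # converged: gradient parallel to e
            break
        e = e_next
    return e
```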

# Question 3, LDA Ensemble for Face Recognition, PCA-LDA 

In this section we will perform PCA-LDA recognition with NN classification.

Varying the values of $M_{\textrm{pca}}$ and $M_{\textrm{lda}}$ we obtain the average recognition accuracies
reported in figure \ref{fig:ldapca_acc}. A peak accuracy of 93% can be observed for $M_{\textrm{pca}}=115$, $M_{\textrm{lda}}=41$;
however, accuracies above 90% can be observed for $90 < M_{\textrm{pca}} < 130$ and $30 < M_{\textrm{lda}} < 50$.

Recognition accuracy is significantly higher than with PCA, and the run time is roughly the same,
varying between 0.11s (low $M_{\textrm{pca}}$) and 0.19s (high $M_{\textrm{pca}}$). Execution times
are displayed in table \ref{tab:time}.
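
As an illustrative sketch, the whole pipeline can be assembled with scikit-learn; this is one possible implementation, not necessarily the one used to produce the reported timings:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def pca_lda_nn(train_X, train_y, test_X, test_y, m_pca=115, m_lda=41):
    """PCA -> LDA -> 1-NN pipeline. Rows of train_X/test_X are flattened faces."""
    model = make_pipeline(
        PCA(n_components=m_pca),
        LinearDiscriminantAnalysis(n_components=m_lda),
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(train_X, train_y)
    return model.score(test_X, test_y)   # recognition accuracy
```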

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/ldapca3dacc.pdf}
\caption{PCA-LDA NN Recognition Accuracy varying hyper-parameters}
\label{fig:ldapca_acc}
\end{center}
\end{figure}

The scatter matrices obtained, $S_{B}$ (between-class scatter matrix) and
$S_{W}$ (within-class scatter matrix), have ranks of at most $c-1$ (51) and
$N-c$ (at most 312 for our standard 70-30 split) respectively.
After the PCA projection, the rank of $S_{W}$ equals $M_{\textrm{pca}}$ for $M_{\textrm{pca}}\leq N-c$.

Testing with $M_{\textrm{lda}}=50$ and $M_{\textrm{pca}}=115$ gives 92.9% accuracy. The results of this test can be seen in the confusion matrix shown in figure \ref{fig:ldapca_cm}.

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/cmldapca.pdf}
\caption{PCA-LDA NN Recognition Confusion Matrix, $M_{\textrm{lda}}=50$, $M_{\textrm{pca}}=115$}
\label{fig:ldapca_cm}
\end{center}
\end{figure}

Two recognition examples are reported: success in figure \ref{fig:succ_ldapca} and failure in figure \ref{fig:fail_ldapca}.

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/ldapcaf2.pdf}
\includegraphics[width=5em]{fig/ldapcaf1.pdf}
\caption{Failure case for PCA-LDA. Test face left. NN right}
\label{fig:fail_ldapca}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=5em]{fig/ldapcas1.pdf}
\includegraphics[width=5em]{fig/ldapcas2.pdf}
\caption{Success case for PCA-LDA. Test face left. NN right}
\label{fig:succ_ldapca}
\end{center}
\end{figure}

The PCA-LDA method achieves a much higher recognition accuracy than PCA alone.
The improved separation between classes and the reduction of within-class distance
that make these results possible can be observed in figure \ref{fig:subspaces}, in which
the first three features of the obtained subspaces are plotted.

\begin{figure}
\begin{center}
\includegraphics[width=12em]{fig/SubspaceQ1.pdf}
\includegraphics[width=12em]{fig/SubspaceQL1.pdf}
\caption{Generated Subspaces (3 features). PCA on the left. PCA-LDA on the right}
\label{fig:subspaces}
\end{center}
\end{figure}

# Question 3, LDA Ensemble for Face Recognition, PCA-LDA Ensemble 

So far we have established a combined PCA-LDA model which achieves good recognition accuracy while maintaining relatively low execution times, and we have looked at varying its hyper-parameters. We now look to further reduce testing error through the use of ensemble learning.

## Committee Machine Design and Fusion Rules

As each model in the ensemble outputs its own predicted labels, we need to define a strategy for joining the predictions such that we obtain a combined response which is better than that of the individual models. For this project, we consider two committee machine designs.

### Majority Voting

In simple majority voting the committee label is the most popular label among those given by the models. This can be achieved by counting the votes produced by the ensemble for each label and classifying the test case as the class with the most votes.

This technique is not biased towards statistically better models and values all models in the ensemble equally. It is useful when models have similar accuracies and are not specialised in their classification.
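
A minimal sketch of this fusion rule, assuming the ensemble predictions are collected in a NumPy array (names are illustrative):

```python
import numpy as np

def majority_vote(ensemble_labels):
    """Fuse predictions by majority voting.
    ensemble_labels: (n_models, n_samples) array of predicted class labels."""
    fused = []
    for votes in ensemble_labels.T:               # votes for one test sample
        values, counts = np.unique(votes, return_counts=True)
        fused.append(values[np.argmax(counts)])   # most popular label wins
    return np.array(fused)
```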

### Confidence and Weighted labels

Given that the model can output confidences about the labels it predicts, we can factor the confidence of the model towards the final output of the committee machine. For instance, if a specialised model says with 95% confidence the label for the test case is "A", and two other models only classify it as "B" with 40% confidence, we would be inclined to trust the first model and classify the result as "A".

Fusion rules may either take the label with the highest associated confidence, or otherwise look at the sum of all produced confidences for a given label and trust the label with the highest confidence sum.

This technique relies on the model producing a confidence score for the label(s) it predicts. For K-Nearest Neighbours with $K > 1$ we may produce a confidence based on the proportion of the K nearest neighbours which share a class. For instance, if $K = 5$ and 3 out of the 5 nearest neighbours are of class "C" while the other two are of classes "B" and "D", then the predictions are classes C, B and D, with confidences of 60%, 20% and 20% respectively. Using this technique with a large K may however be detrimental, as distance is not considered. An alternative approach of generating confidence based on the distance to the nearest neighbours may yield better results.
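
As a sketch of how such confidences could be produced and fused (the helper names and array shapes are assumptions, not our implementation):

```python
import numpy as np

def knn_confidences(neighbour_labels, n_classes):
    """Per-class confidences from the integer labels of one test sample's K
    nearest neighbours, e.g. K=5 with three 'C' votes gives 60% for 'C'."""
    counts = np.bincount(neighbour_labels, minlength=n_classes)
    return counts / counts.sum()

def confidence_sum_fusion(ensemble_confidences):
    """Fuse by summing confidences across models and taking the best class.
    ensemble_confidences: (n_models, n_samples, n_classes) array."""
    return np.argmax(ensemble_confidences.sum(axis=0), axis=1)
```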

In our testing we have elected to use a committee machine employing majority voting, as we identified that a nearest neighbour strategy with only **one** neighbour ($K=1$) performed best. Future work may investigate weighted labelling using distance-based confidence.

## Data Randomisation (Bagging)

The first strategy we may use for ensemble learning is randomisation of the data, while keeping the model fixed.

Bagging is performed by generating each dataset of the ensemble by randomly sampling from the training set with replacement. We chose to perform bagging independently for each class, so that the training-testing split ratio is maintained with and without bagging. The performance of ensemble classification via a majority-voting committee machine for various ensemble sizes is evaluated in figure \ref{fig:bagging-e}. We find that for our dataset bagging tends to reach the accuracy of an individual non-bagged model at an ensemble size of around 30 and achieves marginally better testing error, improving accuracy by approximately 1%.
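
A sketch of the per-class resampling step (names are illustrative):

```python
import numpy as np

def bag_per_class(train_X, train_y, rng=None):
    """Resample the training set with replacement, independently for each
    class, preserving the per-class sample count. Rows of train_X are samples."""
    if rng is None:
        rng = np.random.default_rng()
    indices = []
    for c in np.unique(train_y):
        class_idx = np.flatnonzero(train_y == c)
        indices.append(rng.choice(class_idx, size=class_idx.size, replace=True))
    indices = np.concatenate(indices)
    return train_X[indices], train_y[indices]
```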

\begin{figure}
\begin{center}
\includegraphics[width=22em]{fig/bagging.pdf}
\caption{Ensemble size effect on accuracy with bagging}
\label{fig:bagging-e}
\end{center}
\end{figure}


## Feature Space Randomisation

Feature space randomisation involves randomising the features which are analysed by the model.
In the case of PCA-LDA this can be achieved by randomising the eigenvectors used when performing
the PCA step. For instance, instead of choosing the 120 eigenfaces with the largest variance, we may choose to
use the 90 eigenvectors with the largest variance and pick 70 of the remaining non-zero eigenvectors at random.
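
A sketch of this selection step, assuming the eigenfaces are stored as columns sorted by decreasing eigenvalue (names are illustrative):

```python
import numpy as np

def random_eigenvector_subset(U, m_c=90, m_r=70, rng=None):
    """Keep the m_c most significant eigenfaces and add m_r eigenfaces drawn
    at random from the remaining non-zero ones. U: D x (N-1) matrix whose
    columns are sorted by decreasing eigenvalue."""
    if rng is None:
        rng = np.random.default_rng()
    random_idx = rng.choice(np.arange(m_c, U.shape[1]), size=m_r, replace=False)
    return np.hstack([U[:, :m_c], U[:, random_idx]])
```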

\begin{figure}
\begin{center}
\includegraphics[width=23em]{fig/random-ensemble.pdf}
\caption{Ensemble size - feature randomisation ($m_c=90$,$m_r=70$)}
\label{fig:random-e}
\end{center}
\end{figure}

In figure \ref{fig:random-e} we can see the effect of ensemble size when using the 90 largest
eigenvectors together with 70 random ones. Feature space randomisation is able to increase accuracy by approximately 2% for our data. This improvement depends on the number of eigenvectors used and on how many of them are random; for example, using a small, fully random set of eigenvectors is detrimental to performance.

Accuracy plateaus at an ensemble size of around 27. We will use this number when performing an exhaustive search for the optimal randomness parameter.

### Optimal randomness hyper-parameter

The randomness hyper-parameter for feature space randomisation can be defined as the number of
features we choose to randomise. For instance, in figure \ref{fig:random-e} we chose 70 out of 160
eigenvectors to be random. We could choose to use more than 70 random eigenvectors, thereby increasing
the randomness. Conversely, we could decrease the randomness parameter, randomising fewer of the eigenvectors.

The optimal number of constant and random eigenvectors to use is therefore an interesting question.

\begin{figure}
\begin{center}
\includegraphics[width=19em]{fig/vaskplot3.pdf}
\caption{Recognition accuracy varying M and Randomness Parameter}
\label{fig:opti-rand}
\end{center}
\end{figure}

The exhaustive search, shown in figure \ref{fig:opti-rand}, indicates that the optimal randomness peaks at
95 randomised eigenvectors out of 155 total eigenvectors, i.e. 60 fixed and 95 random eigenvectors. The value of $M_{\textrm{lda}}$ in the figures is 51.

The red peaks on the 3D plot mark the proportions of randomised eigenvectors that achieve the optimal accuracy; these are further plotted in figure \ref{fig:opt-2d}. We found that for our data, the optimal ratio of random eigenvectors for a given $M$ is between $0.6$ and $0.9$.

\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/nunzplot1.pdf}
\caption{Optimal randomness ratio}
\label{fig:opt-2d}
\end{center}
\end{figure}


### Ensemble Confusion Matrix

\begin{figure}
\begin{center}
\includegraphics[width=15em]{fig/ensemble-cm.pdf}
\caption{Ensemble confusion matrix (pre-committee)}
\label{fig:ens-cm}
\end{center}
\end{figure}

We can compute an ensemble confusion matrix before applying the committee machine, as shown in figure \ref{fig:ens-cm}. This confusion matrix combines the outputs of all the models in the ensemble. As can be seen from the figure, the individual models in the ensemble make more mistakes than a single non-randomised model. When the ensemble size is large enough, these errors are rectified by the committee machine, resulting in the low error observed in figure \ref{fig:random-e}.

## Comparison

Combining bagging and feature space randomisation we are able to consistently achieve higher test accuracy than the individual models, as shown in table \ref{tab:compare}.

\begin{table}[ht]
\begin{tabular}{lrr} \hline
Seed & Individual$(M=120)$ & Bag + Feature Ens.$(M=60+95)$\\ \hline
0    & 0.916      & 0.923                  \\
1    & 0.929      & 0.942                  \\
5    & 0.897      & 0.910                  \\ \hline
\end{tabular}
\caption{Accuracy of the individual model and the bagging + feature ensemble for different seeds}
\label{tab:compare}
\end{table}

# References

<div id="refs"></div>

# Appendix

## Eigenvectors and Eigenvalues in fast PCA

### Table showing eigenvalues obtained with each method

\begin{table}[ht]
\centering
\begin{tabular}[t]{cc} \hline
PCA &Fast PCA\\ \hline
2.9755E+05 &2.9828E+05\\
1.4873E+05 &1.4856E+05\\
1.2286E+05 &1.2259E+05\\
7.5084E+04 &7.4950E+04\\
6.2575E+04 &6.2428E+04\\
4.7024E+04 &4.6921E+04\\
3.7118E+04 &3.7030E+04\\
3.2101E+04 &3.2046E+04\\
2.7871E+04 &2.7814E+04\\
2.4396E+04 &2.4339E+04\\ \hline
\end{tabular}
\caption{Comparison of the eigenvalues obtained with the two computation methods}
\label{tab:eigen}
\end{table}

### Proof of relationship between eigenvalues and eigenvectors in the different methods

Computing the eigenvectors $\boldsymbol{u}_i$ of the $D\times D$ matrix $AA^{T}$
requires decomposing a very large matrix; the computation becomes very expensive when $D \gg N$.

For this reason we instead compute the eigenvectors $\boldsymbol{v}_i$ of the $N\times N$
matrix $A^{T}A$, for which $A^{T}A\boldsymbol{v}_i = \lambda_i\boldsymbol{v}_i$.

Multiplying both sides by $A$ we obtain:

$$ AA^{T}A\boldsymbol{v}_i = \lambda_i A\boldsymbol{v}_i \rightarrow SA\boldsymbol{v}_i = \lambda_i A\boldsymbol{v}_i $$

We know that $S\boldsymbol{u}_i = \lambda_i\boldsymbol{u}_i$.

It follows that $AA^{T}$ and $A^{T}A$ share the same non-zero eigenvalues, and that their eigenvectors are related by $\boldsymbol{u}_i = A\boldsymbol{v}_i$.

### Table of execution times of different methods

\begin{table}[ht]
\centering
\begin{tabular}[t]{llll} 
\hline
	& Best (s)	& Worst (s)	& Average (s)	\\ \hline
PCA 	& 3.5		& 3.8		& 3.7		\\
PCA-F 	& 0.10		& 0.24		& 0.11		\\
PCA-ALT & 1.0		& 1.3		& 1.1		\\
LDA 	& 5.0		& 5.8		& 5.2		\\
LDA-PCA & 0.11		& 0.19		& 0.13		\\ \hline
\end{tabular}
\caption{Comparison of execution times between different methods}
\label{tab:time}
\end{table}

## Code

All code and \LaTeX{} sources are available at:

[https://git.skozl.com/e4-pattern/](https://git.skozl.com/e4-pattern/).