Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  25
1 file changed, 12 insertions, 13 deletions
diff --git a/report/paper.md b/report/paper.md
index 6442b74..94a0e7c 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -8,7 +8,7 @@ In such way, each training vector space will be generated with
the same number of elements. The test data will instead
be taken from the remaining samples. Testing accuracy
with respect to the data partition indicates that the maximum
-accuracy is obtained when using a 90% of the data for
+accuracy is obtained when using 90% of the data for
training. Despite these results we will be using 70% of the data
for training as a standard. This will allow us to give more than one
example of success and failure for each class when classifying the
@@ -399,9 +399,9 @@ Since each model in the ensemble outputs its own predicted labels, we need to de
### Majority Voting
-In simple majority voting the comitee label is the most pouplar label given by them models. This can be achieved by binning all labels produced by the ensemble and classifying the test case as the class with the most bins.
+In simple majority voting the committee label is the most popular label given by the models. This can be achieved by binning all labels produced by the ensemble and classifying the test case as the class whose bin receives the most votes.
-This technique is not bias towards statistically better models and values all models in the ensemble equally. It is useful when models have similar accuracies and are not specialised in their classification.
+This technique is not biased towards statistically better models and values all models in the ensemble equally. It is useful when models have similar accuracies and are not specialised in their classification.
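+
+A minimal sketch of this voting scheme (an illustration, not the report's code), assuming each model's predictions are collected row-wise in a NumPy array:
+
+```python
+import numpy as np
+
+def majority_vote(ensemble_labels):
+    """Committee label per test case from an (n_models, n_samples) array."""
+    ensemble_labels = np.asarray(ensemble_labels)
+    committee = np.empty(ensemble_labels.shape[1], dtype=ensemble_labels.dtype)
+    for i in range(ensemble_labels.shape[1]):
+        # Bin the labels predicted for test case i and take the fullest bin
+        # (ties resolve to the smallest label)
+        values, counts = np.unique(ensemble_labels[:, i], return_counts=True)
+        committee[i] = values[np.argmax(counts)]
+    return committee
+```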
### Confidence Weighted Averaging
@@ -416,7 +416,7 @@ In our testing we have elected to use a committee machine employing majority vot
The first strategy we may use in ensemble learning is randomisation of the data, while keeping the model fixed.
-Bagging is performed by generating each dataset for the ensembles by randomly picking with replacement. We chose to perform bagging independently for each face such that we can maintain the split training and testing split ratio used with and without bagging. The performance of ensemble classificatioen via a majority voting comittee machine for various ensemble sizes is evaluated in figure \label{fig:bagging-e}. We find that for our dataset bagging tends to reach the same accuracy as an indivudual non-bagged model after an ensemble size of around 30 and achieves marginally better testing error, improving accuracy by approximately 1%.
+Bagging is performed by generating each ensemble member's dataset by sampling with replacement. We chose to perform bagging independently for each face such that we can maintain the training and testing split ratio used with and without bagging. The performance of ensemble classification via a majority voting committee machine for various ensemble sizes is evaluated in figure \ref{fig:bagging-e}. We find that for our dataset bagging tends to reach the same accuracy as an individual non-bagged model after an ensemble size of around 30 and achieves marginally better testing error, improving accuracy by approximately 1%.
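+
+A minimal sketch of the per-face bagging described above (an assumption of how it could be coded, not the report's implementation), where `X_train` holds the training vectors and `y_train` the face labels:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng()
+
+def bag_per_class(X, y):
+    """Resample the training set with replacement, independently per face."""
+    idx = []
+    for c in np.unique(y):
+        members = np.flatnonzero(y == c)
+        # Keep the per-face sample count fixed so the split ratio is preserved
+        idx.extend(rng.choice(members, size=members.size, replace=True))
+    idx = np.asarray(idx)
+    return X[idx], y[idx]
+
+# One bagged dataset per ensemble member, e.g. for an ensemble of 30:
+# datasets = [bag_per_class(X_train, y_train) for _ in range(30)]
+```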
\begin{figure}
\begin{center}
@@ -429,7 +429,7 @@ Bagging is performed by generating each dataset for the ensembles by randomly pi
## Feature Space Randomisation
-Feature space randomisations involves randomising the features which are analysed by the model.
+Feature space randomisation involves randomising the features which are analysed by the model.
In the case of PCA-LDA this can be achieved by randomising the eigenvectors used when performing
the PCA step. For instance, instead of choosing the 120 most variant eigenfaces, we may choose to
use the 90 eigenvectors with the largest variance and pick 70 of the remaining non-zero eigenvectors randomly.
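+
+A minimal sketch of this selection (a hypothetical helper, assuming `eigvecs` holds only the non-zero eigenfaces as columns, sorted by descending eigenvalue):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng()
+
+def random_eigvec_subset(eigvecs, n_fixed=90, n_random=70):
+    """Keep the n_fixed most variant eigenvectors and add n_random
+    drawn without replacement from the remaining non-zero ones."""
+    rest = np.arange(n_fixed, eigvecs.shape[1])
+    random_pick = rng.choice(rest, size=n_random, replace=False)
+    cols = np.concatenate([np.arange(n_fixed), random_pick])
+    return eigvecs[:, cols]
+```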
@@ -443,7 +443,7 @@ use the 90 eigenvectors with biggest variance and picking 70 of the rest non-zer
\end{figure}
In figure \ref{fig:random-e} we can see the effect of ensemble size when using the largest
-90 eigenvectors and 70 random eigenvectors. As can be seen from the graph, feature space randomisation is able to increase accuracy by approximately 2% for our data. However Thes improvement is dependent on the number of eigenvectors used and the number of random eigenvectors. For example, using a small fully random set of eigenvectors is detrimental to the performance.
+90 eigenvectors and 70 random eigenvectors. As can be seen from the graph, feature space randomisation is able to increase accuracy by approximately 2% for our data. However, this improvement is dependent on the number of eigenvectors used and the number of random eigenvectors. For example, using a small, fully random set of eigenvectors is detrimental to performance.
We noticed that an ensemble size of around 27 is the point where accuracy plateaus. We will use this number when performing an exhaustive search for the optimal randomness parameter.
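+
+A hypothetical sketch of such a search (the helper `evaluate_ensemble`, which trains a 27-member randomised ensemble and returns its test accuracy, is an assumption, not part of the report):
+
+```python
+# Grid search over the fixed/random eigenvector split at ensemble size 27
+best_acc, best_split = 0.0, None
+for n_fixed in range(0, 161, 10):        # assumed 160 eigenvectors in total
+    n_random = 160 - n_fixed             # remainder drawn at random
+    acc = evaluate_ensemble(27, n_fixed, n_random)   # hypothetical scorer
+    if acc > best_acc:
+        best_acc, best_split = acc, (n_fixed, n_random)
+```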
@@ -469,12 +469,6 @@ The optimal randomness after doing an exhaustive search as seen on figure \label
The red peaks on the 3D plot represent the proportions of randomised eigenvectors which achieve the optimal accuracy; these have been further plotted in figure \ref{opt-2d}.
-## Comparison
-
-Combining bagging and feature space randomization we are able to achieve higher test accuracy than the individual models.
-
-### Various Splits/Seeds
-
### Ensemble Confusion Matrix
\begin{figure}
@@ -485,5 +479,10 @@ Combining bagging and feature space randomization we are able to achieve higher
\end{center}
\end{figure}
-# References
+We can compute an ensemble confusion matrix before the committee machine combines the individual outputs, as shown in figure \ref{fig:ens-cm}. This confusion matrix aggregates the predictions of all the models in the ensemble. As can be seen from the figure, different models make different mistakes.
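+
+A minimal sketch of how such a matrix could be accumulated (illustrative only, assuming integer labels 0..n_classes-1 and the pre-committee predictions in an (n_models, n_samples) array):
+
+```python
+import numpy as np
+
+def ensemble_confusion(true_labels, ensemble_labels, n_classes):
+    """Sum one confusion matrix over every model's raw predictions."""
+    cm = np.zeros((n_classes, n_classes), dtype=int)
+    for preds in ensemble_labels:
+        for t, p in zip(true_labels, preds):
+            cm[t, p] += 1        # row: true face, column: predicted face
+    return cm
+```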
+## Comparison
+
+Combining bagging and feature space randomisation we are able to achieve higher test accuracy than the individual models, as the sketch below illustrates. Here is a comparison for various splits.
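+
+A hypothetical sketch of how the two strategies compose per ensemble member, reusing the helpers sketched earlier (`train_pca_lda` and `predict` stand in for the report's actual training and classification routines; mean-centring is omitted for brevity):
+
+```python
+models, projections = [], []
+for _ in range(27):                                   # plateau ensemble size
+    Xb, yb = bag_per_class(X_train, y_train)          # data randomisation
+    W = random_eigvec_subset(eigenfaces, 90, 70)      # feature randomisation
+    models.append(train_pca_lda(Xb @ W, yb))          # hypothetical trainer
+    projections.append(W)                             # member-specific basis
+
+votes = np.array([predict(m, X_test @ W) for m, W in zip(models, projections)])
+committee_labels = majority_vote(votes)               # committee output
+```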
+
+# References