author     Vasil Zlatanov <v@skozl.com>    2018-11-20 17:44:10 +0000
committer  Vasil Zlatanov <v@skozl.com>    2018-11-20 17:44:10 +0000
commit     09655a8cb2d92ec7cba319f43084b9c3f9cde380
tree       6a5d178ebc469b4e1346f015c1f87d885bdcc9d4 /report/paper.md
parent     ffc01248088d7b41d61bf83b04d6ab3552b79cef
Fixes in part 3
Diffstat (limited to 'report/paper.md')
-rwxr-xr-x  report/paper.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 7e0704b..f6d90b5 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -350,7 +350,7 @@ So far we have established a combined PCA-LDA model which has good recognition w
## Committee Machine Design and Fusion Rules
-As each model in the ensemble outputs its own predicted labels, we need to define a strategy for joining the predictions such that we obtain a combined response which is better than that of an individual models. For this project, we consider two committee machine designs.
+As each model in the ensemble outputs its own predicted labels, we need to define a strategy for joining the predictions such that we obtain a combined response which is better than that of the individual models. For this project, we consider two committee machine designs.
### Majority Voting
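As a rough sketch of a majority-vote fusion rule (not the project's actual implementation; the function name and array shapes are assumptions), the predicted labels of the ensemble members can be combined as follows:

```python
import numpy as np

def majority_vote(predictions):
    """Fuse per-model predictions by majority vote.

    predictions: (n_models, n_samples) array of predicted labels, one row
    per ensemble member. Returns one fused label per test sample; ties are
    broken in favour of the smallest label value.
    """
    predictions = np.asarray(predictions)
    fused = []
    for votes in predictions.T:                  # votes for one test sample
        labels, counts = np.unique(votes, return_counts=True)
        fused.append(labels[np.argmax(counts)])  # most frequent label wins
    return np.array(fused)
```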
@@ -372,7 +372,7 @@ In our testing we have elected to use a committee machine employing majority vot
The first strategy we may use for ensemble learning is randomisation of the data, while keeping the model fixed.
-Bagging is performed by generating each dataset for the ensembles by randomly picking with replacement. We chose to perform bagging independently for each face such that we can maintain the split training and testing split ratio used with and without bagging. The performance of ensemble classification via a majority voting committee machine for various ensemble sizes is evaluated in figure \ref{fig:bagging-e}. We find that for our dataset bagging tends to reach the same accuracy as an individual non-bagged model after an ensemble size of around 30 and achieves marginally better testing error, improving accuracy by approximately 1%.
+Bagging is performed by generating each dataset for the ensembles by randomly picking from the class training set with replacement. We chose to perform bagging independently for each face so that we can maintain the same training and testing split ratio used without bagging. The performance of ensemble classification via a majority voting committee machine for various ensemble sizes is evaluated in figure \ref{fig:bagging-e}. We find that for our dataset bagging tends to reach the same accuracy as an individual non-bagged model after an ensemble size of around 30, and achieves marginally lower testing error, improving accuracy by approximately 1%.
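A minimal NumPy sketch of this per-class bootstrap sampling is given below; `train_images` (one flattened image per row) and `train_labels` are hypothetical names, not the project's code:

```python
import numpy as np

def bag_per_class(train_images, train_labels, rng=np.random.default_rng()):
    """Draw one bootstrap replicate, sampling with replacement independently
    within each face class so the per-class split ratio is preserved."""
    bagged_idx = []
    for label in np.unique(train_labels):
        class_idx = np.flatnonzero(train_labels == label)
        # sample as many images as the class originally had, with replacement
        bagged_idx.append(rng.choice(class_idx, size=class_idx.size, replace=True))
    bagged_idx = np.concatenate(bagged_idx)
    return train_images[bagged_idx], train_labels[bagged_idx]
```

Each ensemble member is then trained on its own replicate before its predictions are fused by the committee machine.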
\begin{figure}
\begin{center}
@@ -393,13 +393,13 @@ use the 90 eigenvectors with biggest variance and picking 70 of the rest non-zer
\begin{figure}
\begin{center}
\includegraphics[width=23em]{fig/random-ensemble.pdf}
-\caption{Ensemble size effect with feature randomisation ($m_c=90$,$m_r=70$)}
+\caption{Ensemble size - feature randomisation ($m_c=90$,$m_r=70$)}
\label{fig:random-e}
\end{center}
\end{figure}
In figure \ref{fig:random-e} we can see the effect of ensemble size when using the biggest
-90 eigenvectors and 70 random eigenvectors. As can be seen from the graph, feature space randomisation is able to increase accuracy by approximately 2% for our data. However, this improvement is dependent on the number of eigenvectors used and the number of random eigenvectors. For example, using a small fully random set of eigenvectors is detrimental to the performance (seen on \ref{fig:vaskoplot3}).
+90 eigenvectors and 70 random eigenvectors. Feature space randomisation is able to increase accuracy by approximately 2% for our data. However, this improvement is dependent on the number of eigenvectors used and the number of random eigenvectors. For example, using a small fully random set of eigenvectors is detrimental to performance (seen in figure \ref{fig:vaskoplot3}).
We noticed that an ensemble size of around 27 is the point at which the accuracy (and correspondingly the error) plateaus. We will use this number when performing an exhaustive search for the optimal randomness parameter.
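One ensemble member's randomised feature space could be assembled as sketched below, assuming the PCA eigenvectors are stored as columns of `eigvecs` with eigenvalues `eigvals` (hypothetical names; $m_c=90$, $m_r=70$ as in figure \ref{fig:random-e}):

```python
import numpy as np

def random_subspace(eigvecs, eigvals, m_c=90, m_r=70, rng=np.random.default_rng()):
    """Keep the m_c largest-variance eigenvectors and add m_r eigenvectors
    drawn at random (without replacement) from the remaining non-zero ones."""
    order = np.argsort(eigvals)[::-1]   # indices sorted by decreasing variance
    constant = order[:m_c]              # the m_c highest-variance directions
    random_part = rng.choice(order[m_c:], size=m_r, replace=False)
    return eigvecs[:, np.concatenate([constant, random_part])]
```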
@@ -420,8 +420,8 @@ The optimal number of constant and random eigenvectors to use is therefore an in
\end{center}
\end{figure}
-The optimal randomness after doing an exhaustive search as seen on figure \label{fig:opti-rand}peaks at
-95 randomised eigenvectors out of 155 total eigenvectors, or 60 static and 95 random eigenvectors. The values of $M_{\textrm{lda}}$ in the figures is the maximum of 51.
+The optimal randomness after performing an exhaustive search, as seen in figure \ref{fig:opti-rand}, peaks at
+95 randomised eigenvectors out of 155 total eigenvectors, or 60 static and 95 random eigenvectors. The value of $M_{\textrm{lda}}$ in the figures is 51.
The red peaks on the 3D plot represent the proportions of randomised eigenvectors which achieve the optimal accuracy; these are further plotted in figure \ref{fig:opt-2d}. We found that for our data, the optimal ratio of random eigenvectors for a given $M$ is between $0.6$ and $0.9$.
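The exhaustive search itself can be sketched as a simple grid over the constant/random split for a fixed total $M$; `evaluate_ensemble` below is a hypothetical stand-in for training the 27-member PCA-LDA committee with the given split and returning its test accuracy:

```python
def search_random_split(evaluate_ensemble, m_total=155, step=5, ensemble_size=27):
    """Try each constant/random eigenvector split and keep the best one."""
    results = {}
    for m_r in range(0, m_total + 1, step):     # number of random eigenvectors
        m_c = m_total - m_r                     # remaining constant eigenvectors
        results[(m_c, m_r)] = evaluate_ensemble(m_c, m_r, ensemble_size)
    best_split = max(results, key=results.get)  # split with the highest accuracy
    return best_split, results
```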