Diffstat (limited to 'report')
 report/paper.md | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 94a0e7c..dc22600 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -467,7 +467,16 @@ The optimal number of constant and random eigenvectors to use is therefore an in
 The optimal randomness after doing an exhaustive search, as seen in figure \ref{fig:opti-rand}, peaks at 95 randomised eigenvectors out of 155 total eigenvectors, or 60 static and 95 random eigenvectors. The value of $M_{\textrm{lda}}$ in the figures is the maximum of 51.
 
-The red peaks on the 3d-plot represent the proportion of randomised eigenvectors which achieve the optimal accuracy, which have been further plotted in figure \label{opt-2d}
+The red peaks on the 3D plot represent the proportions of randomised eigenvectors that achieve the optimal accuracy; these are further plotted in figure \ref{fig:opt-2d}. We found that for our data, the optimal ratio of random eigenvectors for a given $M$ is between $0.6$ and $0.9$.
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=19em]{fig/nunzplot1.pdf}
+\caption{Optimal randomness ratio}
+\label{fig:opt-2d}
+\end{center}
+\end{figure}
+
 
 ### Ensemble Confusion Matrix
 
 \begin{figure}
@@ -479,10 +488,20 @@ The red peaks on the 3d-plot represent the proportion of randomised eigenvectors
 \end{center}
 \end{figure}
-We can compute an ensemble confusion matrix before the committee machines as shown in figure \ref{fig:ens-cm}. This confusion matrix combines the output of all the models in the ensemble. As can be seen from the figure, different models make different mistakes.
-
+We can compute an ensemble confusion matrix before the committee machine votes, as shown in figure \ref{fig:ens-cm}. This confusion matrix combines the outputs of all the models in the ensemble. As can be seen from the figure, the individual models in the ensemble usually make more mistakes than a single conventionally trained model.
 When the ensemble size is large enough, the errors are rectified by the committee machine, resulting in low error as observed in figure \ref{fig:random-e}.
 
 ## Comparison
 
-Combining bagging and feature space randomization we are able to achieve higher test accuracy than the individual models. Here is a comparison for various splits.
+Combining bagging and feature-space randomisation, we are able to consistently achieve higher test accuracy than the individual models. Table \ref{tab:compare} shows a comparison for several random seeds with $70/30$ train/test splits.
+
+\begin{table}[ht]
+\begin{tabular}{lrr} \hline
+Seed & Individual $(M=120)$ & Bag + Feature Ens. $(M=60+95)$ \\ \hline
+0 & 0.916 & 0.923 \\
+1 & 0.929 & 0.942 \\
+5 & 0.897 & 0.910 \\ \hline
+\end{tabular}
+\caption{Test accuracy of a single model and of the bagging + feature-space ensemble}
+\label{tab:compare}
+\end{table}
 # References
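The bagging + feature-space randomisation scheme compared above can be sketched as follows. This is a minimal illustration and not the paper's code: it substitutes scikit-learn's digits dataset for the face data, uses a 1-NN classifier per ensemble member, and the values of `n_static`, `n_random`, and `n_models` are illustrative assumptions rather than the paper's tuned settings.

```python
# Sketch: bagging + feature-space randomisation with a majority-vote
# committee machine. Dataset, classifier, and parameter values are
# illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Full eigenspace computed once from the training data.
pca = PCA().fit(X_train)
n_static, n_random, n_models = 10, 20, 15

preds = []
for _ in range(n_models):
    # Bagging: bootstrap resample of the training set.
    idx = rng.integers(0, len(X_train), len(X_train))
    # Feature-space randomisation: keep the top n_static eigenvectors,
    # draw n_random more at random from the remainder.
    rest = rng.choice(np.arange(n_static, pca.components_.shape[0]),
                      size=n_random, replace=False)
    comps = np.vstack([pca.components_[:n_static], pca.components_[rest]])
    # Project onto this model's randomised eigenspace.
    Z_train = (X_train[idx] - pca.mean_) @ comps.T
    Z_test = (X_test - pca.mean_) @ comps.T
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train[idx])
    preds.append(clf.predict(Z_test))

# Committee machine: majority vote across the ensemble's predictions.
votes = np.stack(preds)
committee = np.apply_along_axis(
    lambda v: np.bincount(v).argmax(), 0, votes)
acc = (committee == y_test).mean()
print(f"committee accuracy: {acc:.3f}")
```

Each member sees a perturbed training set and a perturbed feature space, so their errors are partly decorrelated, which is what lets the majority vote outperform the individual models.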