Diffstat (limited to 'report/paper.md')
-rw-r--r--  report/paper.md | 12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index e8d9709..5c538e9 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -48,7 +48,7 @@ this coursework, only leading to an increase in execution time.
## Hyperparameter tuning
Figure \ref{fig:km-tree-param} shows the effect of tree depth and number of trees
-for kmean 100 cluster centers.
+for K-means with 100 cluster centers.
\begin{figure}[H]
\begin{center}
@@ -59,7 +59,7 @@ for kmean 100 cluster centers.
\end{center}
\end{figure}
-Figure \ref{fig:kmeanrandom} shows randomness parameter for kmean 100.
+Figure \ref{fig:kmeanrandom} shows the effect of the randomness parameter for K-means with 100 cluster centers.
\begin{figure}[H]
\begin{center}
@@ -79,7 +79,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/2pixels_kmean.pdf}
-\caption{Kmean classification accuracy changing the type of weak learners}
+\caption{K-means classification accuracy when changing the type of weak learner}
\label{fig:2pt}
\end{center}
\end{figure}
@@ -100,7 +100,7 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{figure}[H]
\begin{center}
\includegraphics[width=18em]{fig/e100k256d5_cm.pdf}
-\caption{e100k256d5cm Kmean Confusion Matrix}
+\caption{e100k256d5cm K-means Confusion Matrix}
\label{fig:km_cm}
\end{center}
\end{figure}
@@ -109,14 +109,14 @@ more. This is due to the complexity added by the two-pixels test, since it adds
\begin{center}
\includegraphics[width=10em]{fig/success_km.pdf}
\includegraphics[width=10em]{fig/fail_km.pdf}
-\caption{Kmean: Success on the left; Failure on the right}
+\caption{K-means: success case on the left; failure case on the right}
\label{fig:km_succ}
\end{center}
\end{figure}
# RF codebook
-An alternative to codebook creation via *K-means* involves using an ensemble of totally random trees. We code each decriptor according to which leaf of each tree in the ensemble it is sorted. This effectively performs and unsupervised transformation of our dataset to a high-dimensional sparse representation. The dimension of the vocubulary size is determined by the number of leaves in each random tree and the ensemble size.
+An alternative to codebook creation via *K-means* involves using an ensemble of totally random trees. We code each descriptor according to the leaf of each tree in the ensemble to which it is sorted. This effectively performs an unsupervised transformation of our dataset into a high-dimensional sparse representation. The vocabulary size is determined by the number of leaves in each random tree and the ensemble size.
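A minimal sketch of this leaf-index coding (pure Python; the function names are illustrative, and it assumes full binary trees of fixed depth with descriptor values scaled to [0, 1]):

```python
import random

def build_tree(depth, dim):
    """Grow one totally random tree: every split picks a random
    feature and a random threshold, ignoring all labels."""
    if depth == 0:
        return None  # leaf
    return {
        "feature": random.randrange(dim),
        "threshold": random.random(),  # descriptors assumed scaled to [0, 1]
        "left": build_tree(depth - 1, dim),
        "right": build_tree(depth - 1, dim),
    }

def leaf_index(tree, x):
    """Route descriptor x down the tree; return the index of the leaf
    it lands in (trees are full, so leaves are numbered 0..2^d - 1)."""
    idx = 0
    node = tree
    while node is not None:
        go_right = x[node["feature"]] > node["threshold"]
        idx = 2 * idx + int(go_right)
        node = node["right"] if go_right else node["left"]
    return idx

def encode(forest, x, n_leaves):
    """Code a descriptor as a sparse binary vector: exactly one
    active leaf per tree, concatenated over the ensemble."""
    code = [0] * (len(forest) * n_leaves)
    for t, tree in enumerate(forest):
        code[t * n_leaves + leaf_index(tree, x)] = 1
    return code

random.seed(0)
dim, depth, n_trees = 8, 3, 4
forest = [build_tree(depth, dim) for _ in range(n_trees)]
descriptor = [random.random() for _ in range(dim)]
code = encode(forest, descriptor, 2 ** depth)
# vocabulary size = n_trees * 2**depth = 32; exactly n_trees bits are set
```

In practice an off-the-shelf implementation of this idea, such as scikit-learn's `RandomTreesEmbedding`, provides the same unsupervised sparse transform.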
\begin{figure}[H]
\begin{center}