author    Vasil Zlatanov <v@skozl.com>  2018-12-13 13:17:11 +0000
committer Vasil Zlatanov <v@skozl.com>  2018-12-13 13:17:11 +0000
commit    7229b1be92ad7adf681235c5e48032172e461853 (patch)
tree      bebda82a26c979da0bbdf4ec209d696f418d52ad
parent    34ef39354a48146fff99d9fcbb1882ae50f9a627 (diff)
Fix inconsistencies
report/paper.md (-rwxr-xr-x) | 35
1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index a961be0..9255920 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,4 +1,3 @@
-
# Formulation of the Addressed Machine Learning Problem
## Problem Definition
@@ -55,7 +54,7 @@ Identification accuracies at top1, top5 and top10 are respectively 47%, 67% and
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/baseline.pdf}
-\caption{Recognition accuracy of baseline Nearest Neighbor @rank k}
+\caption{Top k identification accuracy of baseline Nearest Neighbor}
\label{fig:baselineacc}
\end{center}
\end{figure}
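The baseline evaluation described above can be sketched as follows; this is a minimal sketch assuming NumPy feature arrays, with all names (`query_feats`, `gallery_labels`, ...) illustrative rather than taken from the repository accompanying the paper:

```python
import numpy as np

def top_k_accuracy(query_feats, query_labels, gallery_feats, gallery_labels, k):
    """Fraction of queries whose true identity appears among the k nearest
    gallery neighbours under the squared Euclidean distance."""
    hits = 0
    for q, label in zip(query_feats, query_labels):
        dists = np.sum((gallery_feats - q) ** 2, axis=1)  # squared Euclidean
        ranklist = np.argsort(dists)[:k]                  # top k gallery indices
        hits += label in gallery_labels[ranklist]
    return hits / len(query_labels)
```

A full re-identification protocol would additionally exclude gallery images of the same identity taken by the same camera as the query; that filtering is omitted here for brevity.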
@@ -67,7 +66,7 @@ identification is shown in red.
\begin{figure}
\begin{center}
\includegraphics[width=22em]{fig/eucranklist.png}
-\caption{Ranklist @rank10 generated for 5 query images}
+\caption{Top 10 ranklist generated for 5 query images}
\label{fig:eucrank}
\end{center}
\end{figure}
@@ -111,11 +110,17 @@ We find that for the query and gallery set clustering does not seem to improve i
## Comment on Mahalanobis Distance as a Metric
We were not able to achieve significant improvements using Mahalanobis for
-original distance ranking compared to square euclidiaen metrics. Results can
-be observed using the `-m|--mahalanobis` when running evalution with the
-repository complimenting this paper.
+original distance ranking compared to squared Euclidean metrics.
+
+The Mahalanobis distance metric was used to create the ranklist as an alternative to Euclidean distance.
+When performing Mahalanobis with the covariance matrix estimated from the training set, the reported accuracy is reduced to
**18%**.
+
+We also attempted to apply the same Mahalanobis metric to a reduced PCA feature set. This allowed for significant execution
time improvements due to the greatly reduced computation requirements of the smaller feature space, but nevertheless demonstrated no
improvement over a Euclidean metric.
-**COMMENT ON VARIANCE AND MAHALANOBIS RESULTS**
+These results are likely due to the **extremely** low covariance of features in the training set. This is evident when looking at the covariance matrix of the training data, and is also visible in figure \ref{fig:subspace}. This is likely the result of the feature transformations performed by the ResNet-50 convolutional model from which the features were extracted.
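The Mahalanobis ranking experiment can be sketched as below; a minimal sketch assuming NumPy arrays of extracted features, with the function name and arguments chosen for illustration:

```python
import numpy as np

def mahalanobis_ranklist(query, gallery, train_feats, k=10):
    """Rank gallery images by squared Mahalanobis distance to the query,
    estimating the covariance matrix from the training features."""
    cov = np.cov(train_feats, rowvar=False)
    inv_cov = np.linalg.pinv(cov)   # pseudo-inverse guards against near-singular covariance
    diff = gallery - query          # shape (n_gallery, n_features)
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared Mahalanobis per row
    return np.argsort(d2)[:k]
```

With a near-diagonal, low-variance covariance matrix the resulting ranking differs little from the Euclidean one, consistent with the results reported above.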
\begin{figure}
\begin{center}
@@ -126,10 +131,10 @@ repository complimenting this paper.
\end{center}
\end{figure}
-## k-reciprocal Reranking Formulation
+## k-reciprocal Re-ranking Formulation
The approach addressed to improve the identification performance is based on
-k-reciprocal reranking. The following section summarizes the idea behind
+k-reciprocal re-ranking. The following section summarizes the idea behind
the method illustrated in reference @rerank-paper.
We define $N(p,k)$ as the top k elements of the ranklist generated through NN,
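The construction of the k-reciprocal set $R(p,k)$ in @rerank-paper keeps only those elements of $N(p,k)$ that also rank $p$ among their own top k. A minimal sketch, assuming a precomputed pairwise distance matrix `dist` over all images (each element appears first in its own ranklist):

```python
import numpy as np

def k_reciprocal_neighbors(dist, p, k):
    """R(p,k): elements of N(p,k) whose own top-k ranklist contains p."""
    N_p = np.argsort(dist[p])[:k]   # N(p,k): top k nearest neighbours of p
    return np.array([g for g in N_p
                     if p in np.argsort(dist[g])[:k]])
```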
@@ -197,15 +202,15 @@ training are close to the ones for the local maximum of gallery and query.
\begin{center}
\includegraphics[width=12em]{fig/lambda_acc.pdf}
\includegraphics[width=12em]{fig/lambda_acc_tr.pdf}
-\caption{Top 1 Identification Accuracy with Rerank varying lambda(gallery-query left, train right) K1=9, K2=3}
+\caption{Top 1 Identification Accuracy with Re-rank, varying lambda (gallery-query left, train right), K1=9, K2=3}
\label{fig:lambda}
\end{center}
\end{figure}
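The role of lambda can be made explicit: in @rerank-paper the final distance is a convex combination of the Jaccard distance computed from the k-reciprocal sets and the original distance, so $\lambda = 0$ relies on re-ranking alone while $\lambda = 1$ recovers the baseline metric. A one-line sketch (both distance arrays assumed precomputed):

```python
def final_distance(d_orig, d_jaccard, lam):
    """d*(p,g) = (1 - lam) * d_J(p,g) + lam * d(p,g)"""
    return (1 - lam) * d_jaccard + lam * d_orig
```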
-## k-reciprocal Reranking Evaluation
+## k-reciprocal Re-ranking Evaluation
-Reranking achieves better results than the other baseline methods analyzed both as $top k$
+Re-ranking achieves better results than the baseline methods analyzed, in terms of both $top k$
accuracy and mean average precision.
It is also necessary to estimate how precise the generated ranklist is.
For this reason an additional evaluation metric is introduced: mean average precision (mAP). See reference @mAP.
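The per-query computation behind mAP can be sketched as follows; a minimal sketch in which AP averages the precision at each rank where a correct match occurs, and mAP is the mean of AP over all queries (names illustrative):

```python
import numpy as np

def average_precision(ranked_labels, true_label):
    """AP for a single query, given the gallery labels in ranklist order."""
    matches = np.asarray(ranked_labels) == true_label
    if not matches.any():
        return 0.0
    # precision@i at every rank, then averaged over the ranks of correct matches
    precision_at = np.cumsum(matches) / (np.arange(len(matches)) + 1)
    return precision_at[matches].mean()
```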
@@ -216,20 +221,20 @@ has improved for the fifth query. The mAP improves from 47% to 61.7%.
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/ranklist.png}
-\caption{Ranklist (improved method) @rank10 generated for 5 query images}
+\caption{Top 10 Ranklist (improved method) generated for 5 query images}
\label{fig:ranklist2}
\end{center}
\end{figure}
Figure \ref{fig:compare} shows a comparison between $top k$ identification accuracies
-obtained with the two methods. It is noticeable that the k-reciprocal reranking method significantly
+obtained with the two methods. It is noticeable that the k-reciprocal re-ranking method significantly
improves the results even for $top1$, boosting identification accuracy from 47% to 56.5%.
The difference between the $top k$ accuracies of the two methods gets smaller as we increase k.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/comparison.pdf}
-\caption{Comparison of recognition accuracy @rank k (KL=0.3,K1=9,K2=3)}
+\caption{Top k identification accuracy comparison (KL=0.3, K1=9, K2=3)}
\label{fig:compare}
\end{center}
\end{figure}