author     nunzip <np.scarh@gmail.com>  2018-12-10 16:33:11 +0000
committer  nunzip <np.scarh@gmail.com>  2018-12-10 16:33:11 +0000
commit     62a695d43a2b64295ec718826cc7a33e39e983e2 (patch)
tree       c8a6429de3db4ee9875a5b49da4c882e8170b529
parent     b9bc3e045e1244183b76682a5f4be2c3e693d517 (diff)
parent     8874aec6c05402f05b2b01b8b907dd8f8468719d (diff)
Merge branch 'master' of git.skozl.com:e4-pattern
-rwxr-xr-x  evaluate.py              3
-rwxr-xr-x  report2/metadata.yaml    7
-rwxr-xr-x  report2/paper.md        44
3 files changed, 27 insertions, 27 deletions
diff --git a/evaluate.py b/evaluate.py
index 7808c2e..3b420db 100755
--- a/evaluate.py
+++ b/evaluate.py
@@ -159,7 +159,8 @@ def test_model(gallery_data, probe_data, gallery_label, probe_label, gallery_cam
max_level_precision[i][j] = np.max(precision[i][np.where(recall[i]>=(j/10))])
#print(mAP[i])
for i in range(probe_label.shape[0]):
- mAP[i] = sum(max_level_precision[i])/11
+ #mAP[i] = sum(max_level_precision[i])/11
+ mAP[i] = sum(precision[i])/args.neighbors
print('mAP:',np.mean(mAP))
return target_pred
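
The hunk above replaces the 11-point interpolated average precision with the mean precision over the `args.neighbors` retrieved matches. Below is a minimal, self-contained sketch of both variants for a single probe; the helper names (`ap_11_point`, `ap_mean_precision`, `relevant`) are illustrative, and everything outside the hunk is an assumption rather than the repository's code.

```python
# Sketch of the two AP variants touched by the hunk above.
# `relevant` marks, for one probe, which of its k ranked gallery matches are correct.
import numpy as np

def ap_11_point(relevant):
    """11-point interpolated AP (the commented-out variant)."""
    hits = np.cumsum(relevant)
    ranks = np.arange(1, len(relevant) + 1)
    precision = hits / ranks
    recall = hits / max(hits[-1], 1)
    # max precision at recall levels 0.0, 0.1, ..., 1.0
    levels = [precision[recall >= r / 10].max() if (recall >= r / 10).any() else 0.0
              for r in range(11)]
    return sum(levels) / 11

def ap_mean_precision(relevant):
    """Mean precision over the k retrieved neighbours (the new variant)."""
    hits = np.cumsum(relevant)
    ranks = np.arange(1, len(relevant) + 1)
    return np.mean(hits / ranks)

# Example: correct matches at ranks 1 and 3 out of k = 5 neighbours.
rel = np.array([1, 0, 1, 0, 0])
print(ap_11_point(rel), ap_mean_precision(rel))
```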
diff --git a/report2/metadata.yaml b/report2/metadata.yaml
index 467efb6..f35d6aa 100755
--- a/report2/metadata.yaml
+++ b/report2/metadata.yaml
@@ -7,6 +7,11 @@ numbersections: yes
lang: en
babel-lang: english
abstract: |
-
+ This report analyses distance metric learning techniques with regard to identification
+ accuracy on the CUHK03 dataset. The baseline method used for identification is
+ Nearest Neighbors based on Euclidean distance. The improved approach we propose
+ utilises a Jaccard-based metric to rearrange the NN ranklist based on reciprocal
+ neighbours. While this approach is more complex and introduces new hyperparameters,
+ significant accuracy improvements are observed: approximately 10% more Top-1 identifications, and good improvements in Top-$N$ accuracy for low $N$.
...
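
The abstract describes reordering the Euclidean NN ranklist with a Jaccard distance computed over reciprocal neighbours. The following is a minimal sketch of that idea; the function names, the `k1` parameter, and the brute-force neighbour search are illustrative assumptions and do not reproduce the repository's actual implementation (same-camera filtering is omitted for brevity).

```python
# Sketch: Jaccard distance between k-reciprocal neighbour sets, used to reorder the ranklist.
import numpy as np
from scipy.spatial.distance import cdist

def k_reciprocal(dist, i, k):
    """Indices j that are in i's k nearest neighbours and have i in theirs."""
    knn_i = np.argsort(dist[i])[:k + 1]          # +1 because i is its own 0-distance NN
    return {j for j in knn_i if i in np.argsort(dist[j])[:k + 1]}

def jaccard_rerank(query_feats, gallery_feats, k1=20):
    feats = np.vstack([query_feats, gallery_feats])
    dist = cdist(feats, feats)                   # pairwise Euclidean distances
    n_q = len(query_feats)
    sets = [k_reciprocal(dist, i, k1) for i in range(len(feats))]
    ranklists = []
    for q in range(n_q):
        jac = np.array([1 - len(sets[q] & sets[n_q + g]) /
                        max(len(sets[q] | sets[n_q + g]), 1)
                        for g in range(len(gallery_feats))])
        ranklists.append(np.argsort(jac))        # gallery indices, most similar first
    return ranklists
```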
diff --git a/report2/paper.md b/report2/paper.md
index 0ceeedd..e4c3ec0 100755
--- a/report2/paper.md
+++ b/report2/paper.md
@@ -1,17 +1,13 @@
-# Summary
-In this report we analysed how distance metrics learning affects classification
-accuracy for the dataset CUHK03. The baseline method used for classification is
-Nearest Neighbors based on Euclidean distance. The improved approach we propose
-mixes Jaccardian and Mahalanobis metrics to obtain a ranklist that takes into
-account also the reciprocal neighbors. This approach is computationally more
-complex, since the matrices representing distances are effectively calculated
-twice. However it is possible to observe a significant accuracy improvement of
-around 10% for the $@rank1$ case. Accuracy improves overall, especially for
-$@rankn$ cases with low n.
-
# Formulation of the Addressed Machine Learning Problem
-## CUHK03
+## Problem Definition
+
+The problem to solve is to create a ranklist for each image of the query set
+by finding its nearest neighbor(s) within a gallery set. However, gallery images
+with the same label and taken from the same camera as the query image should
+not be considered when forming the ranklist.
+
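
A minimal sketch of this construction, assuming NumPy arrays of features, labels and camera IDs; the function and variable names are illustrative and not taken from the repository.

```python
# Sketch: per-query ranklist with same-label, same-camera gallery entries excluded.
import numpy as np
from scipy.spatial.distance import cdist

def make_ranklists(query_feats, query_labels, query_cams,
                   gallery_feats, gallery_labels, gallery_cams, k=10):
    dist = cdist(query_feats, gallery_feats)          # Euclidean by default
    ranklists = []
    for i in range(len(query_feats)):
        # drop gallery images with the same identity seen by the same camera
        valid = ~((gallery_labels == query_labels[i]) &
                  (gallery_cams == query_cams[i]))
        order = np.argsort(dist[i][valid])
        ranklists.append(np.flatnonzero(valid)[order][:k])
    return ranklists
```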
+## Dataset - CUHK03
The dataset CUHK03 contains 14096 pictures of people captured from two
different cameras. The feature vectors used come from passing the
@@ -23,13 +19,6 @@ on a training set (train_idx, adequately split between test, train and
validation keeping the same number of identities). This prevents overfitting
the algorithm to the specific data associated with query_idx and gallery_idx.
-## Probelm to solve
-
-The problem to solve is to create a ranklist for each image of the query set
-by finding the nearest neighbor(s) within a gallery set. However gallery images
-with the same label and taken from the same camera as the query image should
-not be considered when forming the ranklist.
-
## Nearest Neighbor ranklist
Nearest Neighbor aims to find the gallery image whose features are the closest to
@@ -46,7 +35,7 @@ EXPLAIN KNN BRIEFLY
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/baseline.pdf}
-\caption{Recognition accuracy of baseline Nearest Neighbor @rank k}
+\caption{Top K Accuracy for Nearest Neighbour classification}
\label{fig:baselineacc}
\end{center}
\end{figure}
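
The baseline figure reports Top-K accuracy: a probe counts as correctly identified at rank K if at least one of its first K ranked gallery images shares its label. A minimal sketch of that computation, with illustrative names, assuming the ranklists hold gallery indices:

```python
# Sketch: Top-K (CMC-style) accuracy from per-probe ranklists.
import numpy as np

def top_k_accuracy(ranklists, query_labels, gallery_labels, k):
    hits = [(gallery_labels[r[:k]] == q).any()
            for r, q in zip(ranklists, query_labels)]
    return np.mean(hits)

# Example: accuracy curve for K = 1..10
# curve = [top_k_accuracy(ranklists, query_labels, gallery_labels, k)
#          for k in range(1, 11)]
```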
@@ -54,26 +43,31 @@ EXPLAIN KNN BRIEFLY
\begin{figure}
\begin{center}
\includegraphics[width=22em]{fig/eucranklist.png}
-\caption{Ranklist @rank10 generated for 5 query images}
+\caption{Top 10 ranklist for 5 probes}
\label{fig:eucrank}
\end{center}
\end{figure}
-
# Suggested Improvement
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/ranklist.png}
-\caption{Ranklist (improved method) @rank10 generated for 5 query images}
+\caption{Top 10 ranklist (improved method) for 5 probes}
\label{fig:ranklist2}
\end{center}
\end{figure}
+
+TODO:
+~~~
+s/kNN/NN/
+~~~
+
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/comparison.pdf}
-\caption{Comparison of recognition accuracy @rank k}
+\caption{Top K Accuracy comparison}
\label{fig:baselineacc}
\end{center}
\end{figure}
@@ -81,7 +75,7 @@ EXPLAIN KNN BRIEFLY
\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/pqvals.pdf}
-\caption{Recognition accuracy varying K1 and K2}
+\caption{Top 1 Accuracy when varying k1 and k2}
\label{fig:pqvals}
\end{center}
\end{figure}