-rwxr-xr-x  evaluate.py                 |   3
-rw-r--r--  report2/fig/baseline.pdf    | bin  0 -> 11539 bytes
-rw-r--r--  report2/fig/eucranklist.png | bin  0 -> 3880590 bytes
-rwxr-xr-x  report2/paper.md            |  51
4 files changed, 50 insertions, 4 deletions
diff --git a/evaluate.py b/evaluate.py
index ebfc34e..7ce586b 100755
--- a/evaluate.py
+++ b/evaluate.py
@@ -86,7 +86,8 @@ def test_model(gallery_data, probe_data, gallery_label, probe_label, gallery_cam
     else:
         if args.mahalanobis:
             # metric = 'jaccard' is also valid
-            distances = cdist(probe_data, gallery_data, 'jaccard')
+            cov_inv = np.linalg.inv(np.cov(gallery_data.T)).T
+            distances = cdist(probe_data, gallery_data, 'mahalanobis', VI=cov_inv)
         else:
             distances = cdist(probe_data, gallery_data, 'euclidean')
diff --git a/report2/fig/baseline.pdf b/report2/fig/baseline.pdf
new file mode 100644
index 0000000..e6a4794
--- /dev/null
+++ b/report2/fig/baseline.pdf
Binary files differ
diff --git a/report2/fig/eucranklist.png b/report2/fig/eucranklist.png
new file mode 100644
index 0000000..9daaa9c
--- /dev/null
+++ b/report2/fig/eucranklist.png
Binary files differ
diff --git a/report2/paper.md b/report2/paper.md
index f70f77a..14f2f98 100755
--- a/report2/paper.md
+++ b/report2/paper.md
@@ -1,12 +1,57 @@
 # Summary
 
+In this report we analysed how distance metric learning affects classification
+accuracy on the CUHK03 dataset. The baseline method used for classification is
+Nearest Neighbor based on Euclidean distance. The improved approach we propose
+mixes the Jaccard and Mahalanobis metrics to obtain a ranklist that also takes
+reciprocal neighbors into account. This approach is computationally more
+expensive, since the distance matrices are effectively calculated twice.
+However, it yields a significant accuracy improvement of around 10% for the
+$@rank1$ case. Accuracy improves overall, especially for $@rankn$ cases with
+low n.
+
+# Formulation of the Addressed Machine Learning Problem
+
+## CUHK03
+
+The CUHK03 dataset contains 14096 pictures of people captured by two
+different cameras. The feature vectors used are obtained by passing the
+rescaled images through ResNet50. Each feature vector contains 2048
+features that we use for classification. The pictures represent 1467 different
+people, each of whom appears between 9 and 10 times. The separation into
+train_idx, query_idx and gallery_idx allows us to perform training and
+validation on a training set (train_idx, adequately split between train, test
+and validation while keeping the same number of identities). This prevents
+overfitting the algorithm to the specific data associated with query_idx and
+gallery_idx.
+
+## Problem to solve
+
+The problem to solve is to create a ranklist for each image of the query set
+by finding its nearest neighbor(s) within a gallery set. However, gallery
+images with the same label and taken from the same camera as the query image
+should not be considered when forming the ranklist.
+
+## Nearest Neighbor ranklist
+
+Nearest Neighbor aims to find the gallery image whose features are closest to
+those of a query image, predicting the class of the query image to be the same
+as that of its nearest neighbor(s). The distance between images can be
+calculated with different distance metrics; one of the most commonly used is
+the Euclidean distance, $d=\sqrt{\sum_i (x_i-y_i)^{2}}$.
+
+EXPLAIN KNN BRIEFLY
+
-# Baseline Formulation
 # Baseline Evaluation
 
-# Formulation of Suggested Improvement
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/baseline.pdf}
+\caption{Recognition accuracy of baseline Nearest Neighbor @rank k}
+\label{fig:baselineacc}
+\end{center}
+\end{figure}
 
-# Suggested Improvement Evaluation
+# Suggested Improvement
 
 # Conclusion
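The evaluate.py hunk above swaps the metric passed to `cdist` from 'jaccard' to 'mahalanobis', feeding the inverse of the gallery feature covariance as `VI`. As a rough, self-contained sketch of how that metric fits into the rank-k evaluation described in paper.md (array names follow evaluate.py; the data loading, the label/camera arrays being NumPy arrays, and the exact protocol are assumptions, not part of this commit):

```python
import numpy as np
from scipy.spatial.distance import cdist

def rank_accuracy(probe_data, gallery_data, probe_label, gallery_label,
                  probe_cam, gallery_cam, ranks=(1, 5, 10), mahalanobis=False):
    if mahalanobis:
        # Same construction as the evaluate.py hunk above: the inverse of the
        # gallery feature covariance is passed to cdist as VI.
        cov_inv = np.linalg.inv(np.cov(gallery_data.T)).T
        distances = cdist(probe_data, gallery_data, 'mahalanobis', VI=cov_inv)
    else:
        distances = cdist(probe_data, gallery_data, 'euclidean')

    hits = np.zeros(len(ranks))
    for i, query_dist in enumerate(distances):
        # Per the protocol in paper.md: gallery images sharing both the label
        # and the camera of the query are excluded from the ranklist.
        keep = ~((gallery_label == probe_label[i]) & (gallery_cam == probe_cam[i]))
        ranklist = gallery_label[keep][np.argsort(query_dist[keep])]
        for j, k in enumerate(ranks):
            if probe_label[i] in ranklist[:k]:
                hits[j] += 1
    return hits / len(probe_data)
```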
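The summary in paper.md mentions mixing Jaccard and Mahalanobis metrics over reciprocal neighbors, but that re-ranking code is not part of this commit. Purely as an illustration of the idea, a hedged sketch (the helper names, the fixed k and the blending comment are hypothetical):

```python
import numpy as np

def k_reciprocal_sets(distances, k=10):
    """For each sample i of a square distance matrix, keep the neighbours j
    such that i and j appear in each other's k nearest neighbours."""
    # Column 0 of argsort is the sample itself (distance 0), so skip it.
    knn = np.argsort(distances, axis=1)[:, 1:k + 1]
    return [{j for j in knn[i] if i in knn[j]} for i in range(len(distances))]

def jaccard_distance(set_a, set_b):
    union = len(set_a | set_b)
    return 1.0 if union == 0 else 1.0 - len(set_a & set_b) / union

# A final distance could then blend the two metrics, e.g.
# d_final = lam * d_mahalanobis + (1 - lam) * d_jaccard
# where d_mahalanobis is the matrix computed as in the sketch above.
```

Building the neighbor sets requires a second pass over the distance matrix, which is consistent with the summary's remark that the distance matrices are effectively calculated twice.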