Diffstat (limited to 'report2')
-rwxr-xr-x  report2/paper.md  134
1 files changed, 116 insertions, 18 deletions
diff --git a/report2/paper.md b/report2/paper.md
index bd559d7..2e0bb0a 100755
--- a/report2/paper.md
+++ b/report2/paper.md
@@ -1,17 +1,44 @@
-# Formulation of the Addresssed Machine Learning Problem
-
-## Probelm Definition
-
-The person re-identification problem presented in this paper requires mtatching pedestrian images from disjoint camera's by pedestrian detectors. This problem is challenging, as identities captured in photsos are subject to various lighting, pose, blur, background and oclusion from various camera views. This report considers features extracted from the CUHK03 dataset, following a 50 layer Residual network (Resnet50),. This paper considers distance metrics techniques which can be used to perform person re-identification across **disjoint* cameras, using these features.
-
-## Dataset - CUHK03
+# Summary
+
+In this report we analysed how distance metric learning affects classification
+accuracy on the CUHK03 dataset. The baseline method used for classification is
+Nearest Neighbor based on Euclidean distance. The improved approach we propose
+mixes the Jaccard and Mahalanobis metrics to obtain a ranklist that also takes
+reciprocal neighbors into account. This approach is computationally more
+complex, since the matrices representing distances are effectively calculated
+twice. However, it is possible to observe a significant accuracy improvement of
+around 10% for the @rank1 case. Accuracy improves overall, especially for
+@rank n cases with low n.
+
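The re-ranking approach summarised above can be illustrated with a toy sketch of how a Jaccard distance over neighbor sets can be blended with an original distance via a weight (here playing the role of KL = 0.3). The function names and the blending convention are assumptions for illustration, not the paper's implementation:

```python
def jaccard_distance(neighbors_a, neighbors_b):
    """Jaccard distance between two k-nearest-neighbor sets."""
    a, b = set(neighbors_a), set(neighbors_b)
    return 1.0 - len(a & b) / len(a | b)

def blended_distance(d_orig, nn_query, nn_gallery, lam=0.3):
    """Blend an original pairwise distance with the Jaccard distance
    between the two images' neighbor sets (lam plays the role of KL)."""
    return lam * d_orig + (1.0 - lam) * jaccard_distance(nn_query, nn_gallery)
```

With lam = 0.3 the Jaccard term dominates, so reciprocal-neighbor structure outweighs the raw distance, which is consistent with the accuracy gains reported at low rank.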
+The person re-identification problem presented in this paper requires matching
+pedestrian images captured by pedestrian detectors across disjoint cameras. This
+problem is challenging, as identities captured in photos are subject to varying
+lighting, pose, blur, background and occlusion across camera views. This
+report considers features extracted from the CUHK03 dataset with a 50-layer
+Residual network (ResNet50). This paper considers distance metric techniques which
+can be used to perform person re-identification across **disjoint** cameras, using
+these features.
+
+## CUHK03
The dataset CUHK03 contains 14096 pictures of people captured from two
different cameras. The feature vectors used, extracted from a trained ResNet50
model, contain 2048 features that are used for identification.
-The pictures represent 1467 different
-identities, each of which appears 9 to 10 times. Data is seperated in train, query and gallery sets with `train_idx`, `query_idx` and `gallery_idx` respectively, where the training set has been used to develop the ResNet50 model used for feature extraction. This procedure has allowed the evaluation of distance metric learning techniques on the query and gallery sets, without an overfit feature set a the set, as it was explicitly trained on the training set.
+The pictures represent 1467 different identities, each of which appears 9 to 10
+times. Data is separated into train, query and gallery sets with `train_idx`,
+`query_idx` and `gallery_idx` respectively, where the training set has been used
+to develop the ResNet50 model used for feature extraction. This procedure allows
+distance metric learning techniques to be evaluated on the query and gallery
+sets without overfitting, as the feature extractor was explicitly trained only
+on the training set.
+
+## Problem to solve
+
+The problem to solve is to create a ranklist for each image of the query set
+by finding the nearest neighbor(s) within a gallery set. However, gallery images
+with the same label and taken from the same camera as the query image should
+not be considered when forming the ranklist.
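The ranklist construction with this exclusion rule can be sketched as follows. This is a minimal NumPy sketch; the function name and signature are illustrative, not the accompanying repository's API:

```python
import numpy as np

def ranklist(query_feat, query_label, query_cam,
             gallery_feats, gallery_labels, gallery_cams, k=10):
    """Rank gallery images by squared Euclidean distance to the query,
    excluding entries with the same label AND the same camera."""
    d = np.sum((gallery_feats - query_feat) ** 2, axis=1)
    # same identity seen by the same camera must not appear in the ranklist
    d[(gallery_labels == query_label) & (gallery_cams == query_cam)] = np.inf
    return np.argsort(d)[:k]
```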
## Nearest Neighbor ranklist
@@ -23,18 +50,21 @@ distance:
$$ \textrm{NN}(x) = \operatorname*{argmin}_{i\in[m]} \|x-x_i\|^2 $$
-*Square root when calculating euclidean distance is ommited as it does not affect ranking by distance*
+*The square root in the Euclidean distance is omitted, as it does not
+affect ranking by distance*
-Alternative distance metrics exist such as jaccardian and mahalanobis, which can be used as an alternative to euclidiean distance.
+Alternative distance metrics such as the Jaccard and Mahalanobis distances
+exist, and can be used in place of the Euclidean distance.
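As a quick sanity check of the note above: the square root is monotonic, so ranking by squared distance and by true Euclidean distance coincide. A toy NumPy demonstration (random data, not the CUHK03 features):

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 8))   # m = 50 gallery feature vectors
x = rng.normal(size=8)               # one query vector

sq_dist = np.sum((gallery - x) ** 2, axis=1)  # ||x - x_i||^2, sqrt omitted
nn = int(np.argmin(sq_dist))                  # NN(x) = argmin_i ||x - x_i||^2

# the monotonic sqrt leaves both the full ranking and the argmin unchanged
assert np.array_equal(np.argsort(sq_dist), np.argsort(np.sqrt(sq_dist)))
```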
# Baseline Evaluation
-To evaluate improvements brought by alternative distance learning metrics a baseline is established as trough nearest neighbour identification as previously described.
+To evaluate the improvements brought by alternative distance metrics, a baseline
+is established through Nearest Neighbor identification as previously described.
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/baseline.pdf}
-\caption{Top K Accuracy for Nearest Neighbour classification}
+\caption{Recognition accuracy of baseline Nearest Neighbor @rank k}
\label{fig:baselineacc}
\end{center}
\end{figure}
@@ -42,11 +72,12 @@ To evaluate improvements brought by alternative distance learning metrics a base
\begin{figure}
\begin{center}
\includegraphics[width=22em]{fig/eucranklist.png}
-\caption{Top 10 ranklist for 5 probes}
+\caption{Ranklist @rank10 generated for 5 query images}
\label{fig:eucrank}
\end{center}
\end{figure}
+
# Suggested Improvement
## kMean Clustering
@@ -57,7 +88,7 @@ To evaluate improvements brought by alternative distance learning metrics a base
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/ranklist.png}
-\caption{Top 10 ranklist (improved method) 5 probes}
+\caption{Ranklist (improved method) @rank10 generated for 5 query images}
\label{fig:ranklist2}
\end{center}
\end{figure}
@@ -65,7 +96,7 @@ To evaluate improvements brought by alternative distance learning metrics a base
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/comparison.pdf}
-\caption{Top K Accurarcy}
+\caption{Comparison of recognition accuracy @rank k (KL=0.3,K1=9,K2=3)}
\label{fig:baselineacc}
\end{center}
\end{figure}
@@ -73,14 +104,81 @@ To evaluate improvements brought by alternative distance learning metrics a base
\begin{figure}
\begin{center}
\includegraphics[width=17em]{fig/pqvals.pdf}
-\caption{Top 1 Accuracy when k1 and k2}
+\caption{Identification accuracy varying K1 and K2}
\label{fig:pqvals}
\end{center}
\end{figure}
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/cdist.pdf}
+\caption{First two features of gallery(o) and query(x) feature data}
+\label{fig:subspace}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/clusteracc.pdf}
+\caption{Top k identification accuracy for cluster count}
+\label{fig:clustk}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/jaccard.pdf}
+\caption{Explained Jaccard}
+\label{fig:jaccard}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/kmeanacc.pdf}
+\caption{Top 1 Identification accuracy varying kmeans cluster size}
+\label{fig:kmeans}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/lambda_acc.pdf}
+\caption{Top 1 Identification Accuracy with Rerank (varying lambda)}
+\label{fig:lambdagal}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/lambda_acc_tr.pdf}
+\caption{Top 1 Identification Accuracy with Rerank (varying lambda on train data)}
+\label{fig:lambdatr}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/mahalanobis.pdf}
+\caption{Explained Mahalanobis}
+\label{fig:mahalanobis}
+\end{center}
+\end{figure}
+
+\begin{figure}
+\begin{center}
+\includegraphics[width=17em]{fig/trainpqvals.pdf}
+\caption{Identification accuracy varying K1 and K2(train)}
+\label{fig:pqtrain}
+\end{center}
+\end{figure}
+
# Comment on Mahalanobis Distance as a metric
-We were not able to achieve significant improvements using mahalanobis for original distance ranking compared to square euclidiaen metrics. Results can be observed using the `-m|--mahalanobis` when running evalution with the repository complimenting this paper.
+We were not able to achieve significant improvements using the Mahalanobis
+distance for the original distance ranking, compared to the squared Euclidean
+metric. Results can be observed by passing the `-m|--mahalanobis` flag when
+running the evaluation with the repository complementing this paper.
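For reference, the squared Mahalanobis distance between x and y under an inverse covariance matrix M is (x - y)^T M (x - y); with M equal to the identity it reduces to the squared Euclidean distance used by the baseline, which may explain the similar results. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def mahalanobis_sq(x, y, inv_cov):
    """Squared Mahalanobis distance (x - y)^T inv_cov (x - y)."""
    d = x - y
    return float(d @ inv_cov @ d)

# with the identity matrix this is exactly the squared Euclidean distance
assert mahalanobis_sq(np.array([1.0, 2.0]), np.array([3.0, 1.0]), np.eye(2)) == 5.0
```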
# Conclusion