author     nunzip <np.scarh@gmail.com>  2018-12-13 15:41:05 +0000
committer  nunzip <np.scarh@gmail.com>  2018-12-13 15:41:05 +0000
commit     99deea0b69ccd1d7ad8ef889e6da052beda37fef (patch)
tree       a351048911e5a18b8a0ca4b2457f13d78199b90b
parent     593e701b22513809816359f14a5045528cc50bef (diff)
Minor layout adjustments
-rwxr-xr-x  report/paper.md  22
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 79d4183..79612c3 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -16,7 +16,7 @@ The dataset CUHK03 contains 14096 pictures of people captured from two
different cameras. The feature vectors used, extracted from a trained ResNet50
model, contain 2048 features that are used for identification.
-The pictures represent 1467 different identities, each of which appears 9 to 10
+The pictures represent 1467 different identities, each of which appears 7 to 10
times. Data is separated into train, query and gallery sets with `train_idx`,
`query_idx` and `gallery_idx` respectively, where the training set has been used
to develop the ResNet50 model used for feature extraction. This procedure has
@@ -41,9 +41,6 @@ distance:
$$ \textrm{NN}(x) = \operatorname*{argmin}_{i\in[m]} \|x-x_i\| $$
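As a minimal sketch of this baseline (assuming `query` and `gallery` are NumPy arrays of the extracted 2048-dimensional features; the names and use of SciPy are illustrative assumptions, not the report's actual code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def nearest_neighbour(query, gallery):
    # For each query vector x, return the index of the closest gallery
    # vector, realising NN(x) = argmin_i ||x - x_i||.
    distances = cdist(query, gallery, 'euclidean')  # (n_query, n_gallery)
    return np.argmin(distances, axis=1)
```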
-Alternative distance metrics exist such as jaccardian and mahalanobis, which can
-be used as an alternative to euclidiean distance.
-
# Baseline Evaluation
To evaluate improvements brought by alternative distance learning metrics, a baseline
@@ -105,21 +102,22 @@ We find that for the query and gallery set clustering does not seem to improve i
\end{center}
\end{figure}
-# Suggested Improvement
-
## Mahalanobis Distance
We were not able to achieve significant improvements using the Mahalanobis metric
for the original distance ranking compared to the squared Euclidean metric.
-The mahalanobis distance metric was used to create the ranklist as an alternative to euclidean distance.
-When performing mahalanobis with the training set as the covariance matrix, reported accuracy is reduced to **38%** .
+The mahalanobis distance metric was used to create the ranklist as an alternative to euclidean distance:
+
+$$ d_M(p,g_i) = (p-g_i)^TM(p-g_i). $$
+
+When performing Mahalanobis ranking with the covariance matrix $M$ generated from the training set, reported accuracy is reduced to **38%**.
We also attempted to apply the same Mahalanobis metric to a PCA-reduced feature set. This allowed for significant execution-time
improvements due to the greatly reduced computation requirements of the smaller feature space, but nevertheless demonstrated no
improvement over the Euclidean metric.
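A minimal sketch of this ranking, assuming `train`, `query` and `gallery` are NumPy feature matrices (names and library calls are illustrative, not the report's code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mahalanobis_ranklist(query, gallery, train):
    # Estimate the covariance of the training features; the matrix M in
    # d_M(p, g_i) = (p - g_i)^T M (p - g_i) is taken as its pseudo-inverse.
    M = np.linalg.pinv(np.cov(train, rowvar=False))
    # cdist returns the square root of the quadratic form, so square it
    # to recover d_M, then rank the gallery by increasing distance.
    distances = cdist(query, gallery, 'mahalanobis', VI=M) ** 2
    return np.argsort(distances, axis=1)
```

The same function can be applied after projecting all three matrices onto a PCA basis to obtain the reduced-feature variant discussed above.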
-These results are likely due to the **extremely** low covariance of features in the training set. This is evident when looking at the Covariance matrix of the training data, and is also visible in figure \ref{fig:subspace}. This is likely the result of the feature transformations performed the the ResNet-50 convolution model the features were extracted from.
+These results are likely due to the **extremely** low covariance of features in the training set. This is evident when looking at the covariance matrix of the training data, and is also visible in figure \ref{fig:subspace}. This is likely the result of the feature transformations performed by the ResNet-50 convolutional model from which the features were extracted.
\begin{figure}
\begin{center}
@@ -130,6 +128,8 @@ These results are likely due to the **extremely** low covariance of features in
\end{center}
\end{figure}
+# Suggested Improvement
+
## $k$-reciprocal Re-ranking Formulation
The approach adopted to improve identification performance is based on
@@ -167,11 +167,11 @@ e\textsuperscript{\textit{-d(p,g\textsubscript{i})}}, & \text{if}\ \textit{g\tex
Through this transformation it is possible to reformulate the distance obtained
through the Jaccard metric as:
-$$ d_J(p,g_i)=1-\frac{\sum\limits_{j=1}^N min(V_{p,g_j},V_{g_i,g_j})}{\sum\limits_{j=1}^N max(V_{p,g_j},V_{g_i,g_j})} $$
+$$ d_J(p,g_i)=1-\frac{\sum\limits_{j=1}^N \min(V_{p,g_j},V_{g_i,g_j})}{\sum\limits_{j=1}^N \max(V_{p,g_j},V_{g_i,g_j})}. $$
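As an illustrative sketch of this metric (assuming the encodings $V$ have already been built as defined above; `V_p` and `V_gallery` are hypothetical NumPy arrays, not names from the report):

```python
import numpy as np

def jaccard_distance(V_p, V_gallery):
    # V_p: (N,) encoding of the probe p; V_gallery: (n_gallery, N)
    # encodings of the gallery images g_i. Implements
    # d_J(p, g_i) = 1 - sum_j min(V_p[j], V_gi[j]) / sum_j max(V_p[j], V_gi[j]).
    numerator = np.minimum(V_p, V_gallery).sum(axis=1)
    denominator = np.maximum(V_p, V_gallery).sum(axis=1)
    return 1.0 - numerator / denominator
```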
It is then possible to perform a local query expansion using the g\textsubscript{i} neighbors of $p$,
defined as:
-$$ V_p=\frac{1}{|N(p,k_2)|}\sum\limits_{g_i\in N(p,k_2)}V_{g_i} $$.
+$$ V_p=\frac{1}{|N(p,k_2)|}\sum\limits_{g_i\in N(p,k_2)}V_{g_i}. $$
We refer to $k_2$ since we limit the size of the neighborhood to $k_2$ to prevent
noise from more distant neighbors. The dimension $k$ of the $R^*$ set will instead
be defined as $k_1$: $R^*(g_i,k_1)$.
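A minimal sketch of this expansion step (assuming `V` stacks the encodings row-wise and `ranks` holds each sample's neighbors sorted by original distance; both names are illustrative assumptions):

```python
import numpy as np

def local_query_expansion(V, ranks, k2):
    # Replace each V_p with the mean of the encodings of its k2 nearest
    # neighbors N(p, k2), as in the equation above.
    return V[ranks[:, :k2]].mean(axis=1)
```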