author     Vasil Zlatanov <v@skozl.com>  2018-12-13 14:37:24 +0000
committer  Vasil Zlatanov <v@skozl.com>  2018-12-13 14:37:24 +0000
commit     c574a2da39d3c4f7fb474f75bd1660fd4380dd1c (patch)
tree       fe3c34dcba2b585429e5c67cf97dd615eebaf538
parent     7229b1be92ad7adf681235c5e48032172e461853 (diff)
Surround numbers in $
-rwxr-xr-x  report/paper.md  39
1 file changed, 19 insertions(+), 20 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 9255920..d3e426f 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -73,9 +73,9 @@ identification is shown in red.
Magnitude normalization of the feature vectors does not improve
accuracy results of the baseline, as can be seen in figure \ref{fig:baselineacc}.
-This is due to the fact that the feature vectors appear scaled, releative to their
+This is due to the fact that the feature vectors appear to be scaled, relative to their
significance, for optimal distance classification, and as such normalising loses this
-scaling by importance which has previously been introduced to the features.
+scaling by importance which may have previously been introduced by the network.
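
For illustration, a minimal sketch of the magnitude normalization that was tested, assuming `numpy` and a row-per-image feature matrix (the names are ours, not the report code's):

```python
import numpy as np

def l2_normalize(features):
    """Scale each row (feature vector) to unit L2 norm.

    This discards per-dimension magnitude information, which is why it
    can hurt distance-based ranking if the network has already scaled
    dimensions by importance.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, 1e-12)  # guard against zero rows
```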
## kMeans Clustering
@@ -107,14 +107,13 @@ We find that for the query and gallery set clustering does not seem to improve i
# Suggested Improvement
-## Comment on Mahalnobis Distance as a metric
+## Mahalanobis Distance
We were not able to achieve significant improvements using the Mahalanobis metric for
original distance ranking compared to the squared Euclidean metric.
The Mahalanobis distance metric was used to create the ranklist as an alternative to Euclidean distance.
-When performing mahalanobis with the training set as the covariance matrix, reported accuracy is reduced to
-**18%** .
+When performing Mahalanobis with the training set as the covariance matrix, reported accuracy is reduced to **38%**.
We also attempted to apply the same Mahalanobis metric on a PCA-reduced featureset. This allowed for significant execution
time improvements, due to the greatly reduced computation requirements of the smaller featurespace, but nevertheless demonstrated no
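
A minimal sketch of the Mahalanobis ranking described above, assuming `numpy` arrays `train`, `query` and `gallery` (illustrative names, not the report's code):

```python
import numpy as np

def mahalanobis_ranklist(query, gallery, train):
    """Rank gallery images by Mahalanobis distance to one query vector,
    with the covariance matrix estimated on the training set."""
    cov = np.cov(train, rowvar=False)      # feature covariance from training data
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse guards against singularity
    diff = gallery - query                 # broadcast: one row per gallery image
    d = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis distances
    return np.argsort(d)                   # gallery indices, nearest first
```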
@@ -131,14 +130,14 @@ These results are likely due to the **extremely** low covariance of features in
\end{center}
\end{figure}
-## k-reciprocal Re-ranking Formulation
+## $k$-reciprocal Re-ranking Formulation
The approach adopted to improve the identification performance is based on
-k-reciprocal re-ranking. The following section summarizes the idea behind
+$k$-reciprocal re-ranking. The following section summarizes the idea behind
the method illustrated in reference @rerank-paper.
-We define $N(p,k)$ as the top k elements of the ranklist generated through NN,
-where p is a query image. The k reciprocal ranklist, $R(p,k)$ is defined as the
+We define $N(p,k)$ as the top $k$ elements of the ranklist generated through NN,
+where $p$ is a query image. The $k$-reciprocal ranklist, $R(p,k)$, is defined as the
intersection $R(p,k)=\{g_i|(g_i \in N(p,k))\land(p \in N(g_i,k))\}$. By adding
the $\frac{1}{2}k$ reciprocal nearest neighbors of each element in the ranklist
$R(p,k)$, it is possible to form a more reliable set
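
A simplified sketch of how $N(p,k)$, $R(p,k)$ and the expanded set can be built from a precomputed distance matrix; the membership condition from @rerank-paper is reduced here to a plain union:

```python
import numpy as np

def N(dist, p, k):
    """Top-k nearest neighbours of image p from a full distance matrix."""
    return set(np.argsort(dist[p])[:k])

def R(dist, p, k):
    """k-reciprocal neighbours: g in N(p,k) such that p in N(g,k)."""
    return {g for g in N(dist, p, k) if p in N(dist, g, k)}

def R_star(dist, p, k):
    """Expanded set: add the k/2 reciprocal neighbours of each element
    of R(p,k). A simplified sketch: the overlap test used in the paper
    is replaced by a plain union."""
    r = R(dist, p, k)
    expanded = set(r)
    for q in r:
        expanded |= R(dist, q, int(round(k / 2)))
    return expanded
```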
@@ -147,13 +146,13 @@ the problem of query and gallery images being affected by factors such
as position, illumination and foreign objects. $R^*(p,k)$ is used to
recalculate the distance between query and gallery images.
-Jaccard metric of the k-reciprocal sets is used to calculate the distance
-between p and $g_i$ as: $$d_J(p,g_i)=1-\frac{|R^*(p,k)\cap R^*(g_i,k)|}{|R^*(p,k)\cup R^*(g_i,k)|}$$.
+The Jaccard metric of the $k$-reciprocal sets is used to calculate the distance
+between $p$ and $g_i$ as: $$d_J(p,g_i)=1-\frac{|R^*(p,k)\cap R^*(g_i,k)|}{|R^*(p,k)\cup R^*(g_i,k)|}.$$
-However, since the neighbors of the query p are close to $g_i$ as well,
+However, since the neighbors of the query $p$ are close to $g_i$ as well,
they would be more likely to be identified as true positives. This implies
the need for a more discriminative method, which is achieved by
-encoding the k-reciprocal neighbors into an N-dimensional vector as a function
+encoding the $k$-reciprocal neighbors into an $N$-dimensional vector as a function
of the original distance (in our case squared Euclidean, $d(p,g_i) = \|p-g_i\|^2$)
through the Gaussian kernel:
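
The kernel expression itself falls outside this hunk's context; below is a sketch of the encoding it describes, with the Jaccard distance computed on the encoded vectors via the usual fuzzy-set min/max sums (a simplified illustration, not the reference implementation):

```python
import numpy as np

def encode(dist, p, r_star_p, n_images):
    """Encode R*(p,k) as an N-dimensional vector: the Gaussian kernel of
    the original distance for members of the set, zero elsewhere."""
    v = np.zeros(n_images)
    for g in r_star_p:
        v[g] = np.exp(-dist[p, g])
    return v

def jaccard_distance(v_p, v_g):
    """Fuzzy-set Jaccard distance between two encoded vectors:
    1 - sum(min)/sum(max), matching the set formula for binary vectors."""
    denom = np.maximum(v_p, v_g).sum()
    return 1.0 - np.minimum(v_p, v_g).sum() / denom
```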
@@ -182,7 +181,7 @@ improved ranklist: $d^*(p,g_i)=(1-\lambda)d_J(p,g_i)+\lambda d(p,g_i)$.
The aim is to learn optimal values for $k_1,k_2$ and $\lambda$ on the training set that improve top $1$ identification accuracy.
This is done through a simple multi-direction search algorithm followed by exhaustive search to estimate
-$k_{1_{opt}}$ and $k_{2_{opt}}$ for eleven values of $\lambda$ from zero(only Jaccard distance) to one(only original distance)
+$k_{1_{opt}}$ and $k_{2_{opt}}$ for eleven values of $\lambda$ from zero (only Jaccard distance) to one (only original distance)
in steps of 0.1. The results obtained through this approach suggest: $k_{1_{opt}}=9, k_{2_{opt}}=3, 0.1\leq\lambda_{opt}\leq 0.3$.
It is possible to verify that the optimization of $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda$
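
A sketch of the exhaustive part of this search; `rerank_distance` and `top1_accuracy` are hypothetical stand-ins for the re-ranking pipeline and the scoring routine, and the $k_1$, $k_2$ ranges are illustrative:

```python
import numpy as np

def search(rerank_distance, top1_accuracy):
    """Exhaustive search for k1, k2 and lambda on the training set.

    rerank_distance(k1, k2, lam) and top1_accuracy(dist) are hypothetical
    callables standing in for the re-ranking pipeline and the scoring
    routine described in the text."""
    best = (0.0, None, None, None)
    for lam in np.linspace(0.0, 1.0, 11):  # 0 = Jaccard only ... 1 = original only
        for k1 in range(3, 21):            # illustrative ranges, not the report's
            for k2 in range(1, k1):
                acc = top1_accuracy(rerank_distance(k1, k2, lam))
                if acc > best[0]:
                    best = (acc, k1, k2, lam)
    return best  # (accuracy, k1_opt, k2_opt, lambda_opt)
```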
@@ -208,9 +207,9 @@ training are close to the ones for the local maximum of gallery and query.
\end{figure}
-## k-reciprocal Re-ranking Evaluation
+## $k$-reciprocal Re-ranking Evaluation
-Re-ranking achieves better results than the other baseline methods analyzed both as $top k$
+Re-ranking achieves better results than the other baseline methods analyzed, in terms of both top $k$
accuracy and mean average precision.
It is also necessary to estimate how precise the generated ranklist is.
For this reason an additional evaluation metric is introduced: mAP (see reference @mAP).
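
A sketch of the mAP computation, following the standard definition (boolean relevance per ranked gallery entry; not any specific library's API):

```python
import numpy as np

def average_precision(relevant):
    """AP for one query: `relevant` is a boolean array over the ranked
    gallery, True where the gallery identity matches the query."""
    relevant = np.asarray(relevant, dtype=bool)
    if not relevant.any():
        return 0.0
    hits = np.cumsum(relevant)                    # running count of true matches
    ranks = np.flatnonzero(relevant) + 1          # 1-indexed rank of each match
    return float((hits[relevant] / ranks).mean()) # mean precision at each hit

def mean_average_precision(relevance_lists):
    """mAP: mean of the per-query average precisions."""
    return float(np.mean([average_precision(r) for r in relevance_lists]))
```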
@@ -226,10 +225,10 @@ has improved for the fifth query. The mAP improves from 47% to 61.7%.
\end{center}
\end{figure}
-Figure \ref{fig:compare} shows a comparison between $top k$ identification accuracies
-obtained with the two methods. It is noticeable that the k-reciprocal re-ranking method significantly
-improves the results even for $top1$, boosting identification accuracy from 47% to 56.5%.
-The difference between the $top k$ accuracies of the two methods gets smaller as we increase k.
+Figure \ref{fig:compare} shows a comparison between top $k$ identification accuracies
+obtained with the two methods. It is noticeable that the $k$-reciprocal re-ranking method significantly
+improves the results even for top $1$, boosting identification accuracy from 47% to 56.5%.
+The difference between the top $k$ accuracies of the two methods gets smaller as we increase $k$.
\begin{figure}
\begin{center}