author    Vasil Zlatanov <v@skozl.com>  2018-12-14 09:57:32 +0000
committer Vasil Zlatanov <v@skozl.com>  2018-12-14 09:57:32 +0000
commit    62a5dbc67a072029691c8ed4e920fd8be057ce2e (patch)
tree      7da8880f1fa5a805a179656b72d341cd99ac72bc
parent    8e8ee911c37f89c1029be5389efebf3e42bf38af (diff)
Polish report
-rw-r--r--  report/paper.md  44
1 file changed, 19 insertions(+), 25 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 8502d9d..46dd82e 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -10,9 +10,7 @@ features extracted from the CUHK03 dataset, following a 50 layer Residual networ
(ResNet50). Different distance metric techniques can be used to
perform person re-identification across *disjoint* cameras.
-Features extracted from Neural Networks such as ResNet-50 are already highly processed
-and represent **features** extracted from the raw data. We therefore expect it to be extremely
-hard to further optimise the feature vectors for data separation, but we may be able benefit
+Features extracted from neural networks such as ResNet-50 are already highly processed. We therefore expect it to be extremely hard to further optimise the feature vectors for data separation, but we may be able to benefit
from alternative neighbour matching algorithms that take into account the
relative positions of the nearest neighbours with respect to the probe and to each other.
@@ -27,8 +25,7 @@ times. Data is separated in train, query and gallery sets with `train_idx`,
`query_idx` and `gallery_idx` respectively, where the training set has been used
to develop the ResNet50 model used for feature extraction. This procedure has
allowed the evaluation of distance metric learning techniques on the query and
-gallery sets, with the knowledge that we are not comparing over-fitted features,
-as they were extracted based on the model derived from the training set.
+gallery sets, knowing that we are not comparing features produced by a network over-fitted on them, since they were extracted with the model derived from the training set alone.
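A minimal sketch of how this split is applied (file names and the storage format are assumptions; the report only specifies the three index sets):

```python
import numpy as np

# Hypothetical file names; the report only specifies the index sets.
features = np.load('features.npy')        # (n_images, d) ResNet-50 features
train_idx = np.load('train_idx.npy')
query_idx = np.load('query_idx.npy')
gallery_idx = np.load('gallery_idx.npy')

train_feats = features[train_idx]      # used only to train the ResNet-50
query_feats = features[query_idx]      # probe images
gallery_feats = features[gallery_idx]  # search pool

# Distance-metric learning is evaluated on query/gallery only, so the
# network was never fitted on the images it is scored against.
```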
## Nearest Neighbor rank-list
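As a point of reference, the baseline rank-list is a plain nearest-neighbour sort under Euclidean distance; a minimal sketch (array names are illustrative):

```python
import numpy as np

def baseline_ranklist(query_feats, gallery_feats):
    """Rank gallery images for every query, closest first."""
    # Pairwise squared Euclidean distances via the expansion
    # ||q - g||^2 = ||q||^2 - 2 q.g + ||g||^2
    d2 = ((query_feats ** 2).sum(1)[:, None]
          - 2.0 * query_feats @ gallery_feats.T
          + (gallery_feats ** 2).sum(1)[None, :])
    return np.argsort(d2, axis=1)
```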
@@ -188,7 +185,7 @@ $$ d_J(p,g_i)=1-\frac{\sum\limits_{j=1}^N min(V_{p,g_j},V_{g_i,g_j})}{\sum\limits_{j=1}^N max(V_{p,g_j},V_{g_i,g_j})} $$
It is then possible to perform a local query expansion using the $g_i$ neighbors of the probe $p$,
defined as:
$$ V_p=\frac{1}{|N(p,k_2)|}\sum\limits_{g_i\in N(p,k_2)}V_{g_i}. $$
-We refer to $k_2$ since we limit the size of the nighbors to prevent noise
+We refer to $k_2$ since it allows limiting the size of the neighborhood to prevent noise
from the $k_2$ neighbors. The dimension $k$ of the $R^*$ set will instead
be defined as $k_1$: $R^*(g_i,k_1)$.
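For concreteness, a condensed Python sketch of these definitions (an illustration under our own naming, not the exact implementation behind the results: the incremental half-set expansion of $R^*$ from the original re-ranking paper is omitted for brevity). `dist` is the pairwise distance matrix over the combined query and gallery features:

```python
import numpy as np

def k_reciprocal_sets(dist, k1):
    """R*(i, k1): indices j that are k1-nearest to i and vice versa."""
    ranks = np.argsort(dist, axis=1)
    R = []
    for i in range(dist.shape[0]):
        forward = ranks[i, :k1 + 1]  # k1 nearest neighbours (self included)
        R.append(np.array([j for j in forward if i in ranks[j, :k1 + 1]]))
    return R

def jaccard_distance(dist, k1=9, k2=3):
    """Jaccard distance over Gaussian-weighted k-reciprocal encodings."""
    n = dist.shape[0]
    R = k_reciprocal_sets(dist, k1)
    V = np.zeros((n, n))
    for i in range(n):
        w = np.exp(-dist[i, R[i]])   # Gaussian weighting of the reciprocal set
        V[i, R[i]] = w / w.sum()
    # Local query expansion: V_p = mean of V over the k2 nearest neighbours
    ranks = np.argsort(dist, axis=1)
    V = np.stack([V[ranks[i, :k2]].mean(axis=0) for i in range(n)])
    # d_J(p, g_i) = 1 - sum_j min(V_p, V_gi) / sum_j max(V_p, V_gi)
    mins = np.minimum(V[:, None, :], V[None, :, :]).sum(axis=2)
    maxs = np.maximum(V[:, None, :], V[None, :, :]).sum(axis=2)
    return 1.0 - mins / maxs
```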
@@ -209,7 +206,7 @@ training are close to the ones for the local maximum of gallery and query.
\begin{center}
\includegraphics[width=12em]{fig/pqvals.pdf}
\includegraphics[width=12em]{fig/trainpqvals.pdf}
-\caption{Identification accuracy varying K1 and K2 (gallery-query left, train right) KL=0.3}
+\caption{Identification accuracy varying $k_1$ and $k_2$ (gallery-query left, train right), $K_L=0.3$}
\label{fig:pqvals}
\end{center}
\end{figure}
@@ -218,7 +215,7 @@ training are close to the ones for the local maximum of gallery and query.
\begin{center}
\includegraphics[width=12em]{fig/lambda_acc.pdf}
\includegraphics[width=12em]{fig/lambda_acc_tr.pdf}
-\caption{Top 1 Identification Accuracy with Re-rank varying lambda(gallery-query left, train right) K1=9, K2=3}
+\caption{Top 1 Identification Accuracy with Re-rank varying $\lambda$ (gallery-query left, train right), $k_1=9$, $k_2=3$}
\label{fig:lambda}
\end{center}
\end{figure}
@@ -236,7 +233,7 @@ has improved for the fifth query. The mAP improves from 47.2% to 61.7%.
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/ranklist.png}
-\caption{Top 10 Rank-list (improved method) generated for 5 query images}
+\caption{Top 10 Rank-list (with re-rank) generated for 5 query images}
\label{fig:ranklist2}
\end{center}
\end{figure}
@@ -249,21 +246,17 @@ The difference between the top $k$ accuracies of the two methods gets smaller as
\begin{figure}
\begin{center}
\includegraphics[width=20em]{fig/comparison.pdf}
-\caption{Top K (@rank) Identification accuracy (KL=0.3,K1=9,K2=3)}
+\caption{Top K (@rank) Identification accuracy ($K_L=0.3$, $k_1=9$, $k_2=3$)}
\label{fig:compare}
\end{center}
\end{figure}
-The improved results due to $k$-reciprocal re-ranking may be explained by considering that re-ranks based on second order neighbours,
-that is, the neighbours of the neighbours. For neighbours which display identifiable features, such as a backpack or binder that is
-not visible in the query but visible in a close neighbour, the reranking algorithm is able to infer **that the strong relationship based on this newly introduced
-feature such as a backpack or folder by the neighbour, uniquely identify other identities in the gallery with the same feature, and moving them up the rankinlist
-as a result despite the identifiable feature being hidden in the query**. An example of this can be seen in figure \ref{fig:rerank}.
+Figure \ref{fig:rerank} shows a successful re-ranking performed on the query and gallery sets. The $k$-reciprocal ranklist of the second nearest neighbour contains the target identity, and when the final distance is calculated as a function of the original and Jaccard distances, it becomes the closest neighbour. This happens for three instances of the identity, achieving 100% mAP despite the image being visually hard to classify due to the presence of another person in the photos and a completely different pose.
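Continuing the sketches above, the final rank-list can be produced as below (assuming `d_original` and `d_jaccard` are the pairwise matrices from the earlier snippets, and following the usual convention in which $\lambda$ weights the original distance; $\lambda=0.3$, written $K_L$ in the captions above):

```python
lam = 0.3  # K_L in the figure captions above

# Final distance mixes the original metric with the Jaccard distance;
# the rank-list is then a plain nearest-neighbour sort of this matrix.
d_final = lam * d_original + (1.0 - lam) * d_jaccard
ranklist = np.argsort(d_final, axis=1)
```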
\begin{figure}
\begin{center}
\includegraphics[width=24em]{fig/rerank.pdf}
-\caption{Rerank example}
+\caption{Re-rank example}
\label{fig:rerank}
\end{center}
\end{figure}
@@ -271,24 +264,25 @@ as a result despite the identifiable feature being hidden in the query**. An example of this can be seen in figure \ref{fig:rerank}.
# Conclusions
-Overall the reranking method gives a significant improvement to both top $k$ accuracy and mean average precision.
-The cost of this operation is an increase in computation time due to the change in complexity from
-the baseline, summarized in table \ref{tab:complexity}.
\begin{table}[]
\centering
\begin{tabular}{ll}
\hline
-\textbf{Process} & \textbf{Complexity} \\ \hline
-Rerank - Distances calculation & \textit{O(N\textsuperscript{2})} \\
-Rerank - Ranking & \textit{O(N\textsuperscript{2}logN)} \\
-Baseline - Distances calculation & \textit{O(N\textsuperscript{2})} \\
-Baseline - Ranking & \textit{O(N\textsuperscript{2})} \\
+\textbf{Process} & \textbf{Complexity} \\ \hline
+Euclidean Distance Calculation & \textit{O(N\textsuperscript{2})} \\
+Mahalanobis Distance Calculation & \textit{O(N\textsuperscript{3})} (\textit{O(N\textsuperscript{2.708})} with fast matrix inversion) \\
+Re-ranking with Jaccard & \textit{O(N\textsuperscript{2}logN)} \\ \hline
\end{tabular}
-\caption{Complexity evaluation}
+\caption{Complexity of the distance calculation and ranking steps}
\label{tab:complexity}
\end{table}
+
+Overall the re-ranking method gives a significant improvement in both top $k$ accuracy and mean average precision.
+The cost is increased computation time: sorting the $N$ rank-lists of $N$ elements each adds an
+$O(N^2 \log N)$ term over the baseline, as summarized in table \ref{tab:complexity}.
+
# References
+