author    nunzip <np.scarh@gmail.com>  2018-12-13 23:21:00 +0000
committer nunzip <np.scarh@gmail.com>  2018-12-13 23:21:00 +0000
commit    2685022dbcaeeafdbb0025a8737edbaa5eeab425 (patch)
tree      cb1cf5f437efd1c384e2c9340c5d41625145c6f1 /report
parent    2feec00db780914924e751ce0824b5701f1ad744 (diff)
Fix repetitions and grammar mistakes
Diffstat (limited to 'report')
-rwxr-xr-x  report/metadata.yaml |  8
-rwxr-xr-x  report/paper.md      | 39
2 files changed, 34 insertions, 13 deletions
diff --git a/report/metadata.yaml b/report/metadata.yaml
index e4d4470..74732a6 100755
--- a/report/metadata.yaml
+++ b/report/metadata.yaml
@@ -13,9 +13,11 @@ nocite: |
abstract: |
This report analyses distance metric learning techniques with regard to
identification accuracy on the CUHK03 dataset. The baseline method used for
- identification is Eucdidian based Nearest Neighbors based on Euclidean distance.
+ identification is Nearest Neighbors based on Euclidean distance.
The improved approach evaluated utilises Jaccard distance metrics to rearrange the NN
- ranklist based on reciprocal neighbours. While this approach is more complex and introduced new hyperparameter, significant accuracy improvements are observed -
- approximately 10% increased Top-1 identifications, and good improvements for Top-$N$ accuracy with low $N$.
+ ranklist based on reciprocal neighbours. While this approach is more complex and introduces new hyperparameters,
+ significant accuracy improvements are observed:
+ approximately 10% higher Top-1 identification accuracy, and clear improvements for Top-$N$
+ accuracy with low $N$.
...
diff --git a/report/paper.md b/report/paper.md
index 73c49de..83cde10 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -5,9 +5,9 @@
The person re-identification problem presented in this paper requires matching
pedestrian images captured by pedestrian detectors across disjoint cameras. This problem is
challenging, as identities captured in photos are subject to variations in lighting, pose,
-blur, background and oclusion from various camera views. This report considers
+blur, background and occlusion from various camera views. This report considers
features extracted from the CUHK03 dataset using a 50-layer Residual Network
-(Resnet50). This paper considers distance metrics techniques which can be used to
+(ResNet50). This paper considers distance metric techniques which can be used to
perform person re-identification across *disjoint* cameras, using these features.
Features extracted from Neural Networks such as ResNet-50 are already highly processed
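
As a point of reference for the Euclidean baseline named in the abstract, the sketch below ranks gallery features by Euclidean distance for each query. It is a minimal illustration only; the array names and shapes are hypothetical, not taken from the repository code.

```python
import numpy as np

def rank_gallery(query_feats, gallery_feats):
    """For each query, return gallery indices sorted by Euclidean distance."""
    # Pairwise differences, shape (n_query, n_gallery, n_dims)
    diff = query_feats[:, None, :] - gallery_feats[None, :, :]
    dists = np.linalg.norm(diff, axis=2)   # (n_query, n_gallery)
    return np.argsort(dists, axis=1)       # nearest neighbours first
```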
@@ -72,9 +72,9 @@ Magnitude normalization is a common technique, used to equalize feature importan
Applying magnitude normalization (scaling feature vectors to unit length) had a negative
effect on re-identification. Furthermore, standardization by removing the feature mean and deviation
also had a negative effect on performance, as seen in figure \ref{fig:baselineacc}. This may
-be due to the fact that we are removing feature scaling that was introduced by the Neural network,
+be due to the fact that we are removing feature scaling that was introduced by the Neural Network,
such that some of the features are more significant than others. By standardizing our
-features at this point, we remove such scaling and may be losing using useful metrics.
+features at this point, we remove such scaling and may be losing useful information.
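
The two preprocessing variants discussed here are easy to reproduce; a minimal NumPy sketch follows, where `features` is a placeholder (samples x dimensions) array rather than the actual extracted features.

```python
import numpy as np

features = np.random.rand(1467, 2048)  # placeholder feature matrix

# Magnitude normalization: scale every feature vector to unit length.
normalized = features / np.linalg.norm(features, axis=1, keepdims=True)

# Standardization: remove the per-feature mean and divide by the deviation.
# This discards the relative feature scaling produced by the network.
standardized = (features - features.mean(axis=0)) / features.std(axis=0)
```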
## kMeans Clustering
@@ -88,14 +88,14 @@ This method did not bring any major improvement to the baseline, as it can be se
figure \ref{fig:baselineacc}. It is noticeable how the number of clusters affects
performance, with better identification accuracy for cluster counts away from
the local minimum achieved at 60 clusters (figure \ref{fig:kmeans}). This trend can likely
-be explained by the number of distance comparison's performed.
+be explained by the number of distance comparisons performed.
We would expect clustering with $k=1$ and $k=\textrm{label count}$ to have the same performance
-the baseline approach without clustering, as we are performing the same number of comparisons.
+as the baseline approach without clustering, as we are performing the same number of comparisons.
-Clustering is a great method of reducing computation time. Assuming 39 clusters of 39 neighbours
-we would be performing only 78 distance computation for a gallery size of 1487, instead of the
-original 1487. This however comes at the cost of ignoring neighbours from other clusters which may
+Clustering is a great method of reducing computation time. Assuming 38 clusters of 38 neighbours
+we would be performing only 76 distance computations for a gallery size of 1467, instead of the
+original 1467. This however comes at the cost of ignoring neighbours from other clusters which may
be closer. Since clusters do not necessarily have the same number of datapoints inside them
(sizes are uneven), we find that the lowest average number of comparisons happens at around 60 clusters,
which also appears to be the worst-performing number of clusters.
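
A two-stage search of the kind described above can be sketched as follows, assuming evenly sized clusters; the data arrays and the choice of scikit-learn's `KMeans` are illustrative, not the project's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

gallery = np.random.rand(1467, 2048)   # placeholder gallery features
query = np.random.rand(2048)           # placeholder query feature

kmeans = KMeans(n_clusters=38, random_state=0).fit(gallery)

# Stage 1: 38 comparisons against the cluster centroids.
nearest = np.argmin(np.linalg.norm(kmeans.cluster_centers_ - query, axis=1))

# Stage 2: compare only against members of the chosen cluster
# (roughly 38 more comparisons if the clusters were perfectly even).
members = np.where(kmeans.labels_ == nearest)[0]
best = members[np.argmin(np.linalg.norm(gallery[members] - query, axis=1))]
```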
@@ -268,7 +268,26 @@ as a result despite the identifiable feature being hidden in the query. An examp
\end{figure}
-# Conclusion
+# Conclusions
+
+Overall, the reranking method gives a significant improvement to both Top-$k$ accuracy and mean average precision.
+The cost of this operation is an increase in computation time, due to the change in complexity relative to
+the baseline, as summarized in table \ref{tab:complexity}.
+
+\begin{table}[h]
+\centering
+\begin{tabular}{ll}
+\hline
+\textbf{Process} & \textbf{Complexity} \\ \hline
+Rerank - Distance calculation & \textit{O(N\textsuperscript{2})} \\
+Rerank - Ranking & \textit{O(N\textsuperscript{2} log N)} \\
+Baseline - Distance calculation & \textit{O(N\textsuperscript{2})} \\
+Baseline - Ranking & \textit{O(N\textsuperscript{2})} \\ \hline
+\end{tabular}
+\caption{Complexity evaluation}
+\label{tab:complexity}
+\end{table}
+
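To make the complexity figures above concrete, here is a heavily simplified sketch of the reciprocal-neighbour idea: each sample keeps only neighbours that also rank it among their own top $k$, and ranklists are then compared with a Jaccard distance over these sets. This is an illustrative reduction of the method, not the report's implementation, and the function names are hypothetical.

```python
import numpy as np

def k_reciprocal_sets(dists, k):
    """k-reciprocal neighbour set of each sample, from an NxN distance matrix."""
    knn = np.argsort(dists, axis=1)[:, :k]   # the O(N^2 log N) ranking step
    return [{j for j in knn[i] if i in knn[j]} for i in range(len(dists))]

def jaccard_dist(a, b):
    """Jaccard distance between two reciprocal-neighbour sets."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 1.0
```

The final ranklist would then be rebuilt from a blend of this Jaccard distance and the original Euclidean distance; the blending weight and the neighbourhood size $k$ are the kind of extra hyperparameters the abstract refers to.
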
# References