author     nunzip <np.scarh@gmail.com>    2018-12-13 17:11:45 +0000
committer  nunzip <np.scarh@gmail.com>    2018-12-13 17:11:45 +0000
commit     2e9341383e5450b5546fac241316ecdcb0fe358b (patch)
tree       2db8e298ac7c6c5d8c39bdecc9df556d705b282b /report
parent     fff843eaac512db5a62b4476e76783505007c8a7 (diff)
parent     a015ffd16b2834e522ad3d3b0e2b9c6160d65044 (diff)
Merge branch 'master' of git.skozl.com:e4-pattern
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  17
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index a20e943..89e260c 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -113,16 +113,16 @@ improve identification accuracy, and consider it an additional baseline.
## Mahalanobis Distance
-We were not able to achieve significant improvements using mahalanobis for
+We were not able to achieve significant improvements using the Mahalanobis distance for
original distance ranking compared to the squared Euclidean metric.
-The mahalanobis distance metric was used to create the ranklist as an alternative to euclidean distance:
+The Mahalanobis distance metric was used to create the rank-list as an alternative to the Euclidean distance:
$$ d_M(p,g_i) = (p-g_i)^TM(p-g_i). $$
-When performing mahalanobis with the covariance matrix $M$ generated from the training set, reported accuracy is reduced to **38%** .
+When performing Mahalanobis ranking with the covariance matrix $M$ generated from the training set, the reported accuracy is reduced to **38%**.
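
A minimal sketch of this ranking, assuming NumPy feature arrays `train` and `gallery` (one row per image) and a single `query` vector; taking $M$ as the inverse of the training-set covariance is the conventional Mahalanobis form and an assumption here, not a detail taken from our pipeline:

```python
import numpy as np

def mahalanobis_ranklist(query, gallery, train):
    # M derived from the training set; the inverse covariance is the
    # standard Mahalanobis choice (assumed here)
    M = np.linalg.inv(np.cov(train, rowvar=False))
    diff = gallery - query                  # (n_gallery, d)
    # d_M(p, g_i) = (p - g_i)^T M (p - g_i), one value per gallery image
    d = np.einsum('ij,jk,ik->i', diff, M, diff)
    return np.argsort(d)                    # ascending: best match first
```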
-We also attempted to perform the same mahalanobis metric on a reduced PCA featureset. This allowed for significant execution
+We also attempted to apply the same Mahalanobis metric on a PCA-reduced featureset. This allowed for significant execution
time improvements due to the greatly reduced computation requirements of the smaller featurespace, but nevertheless demonstrated no
improvement over the Euclidean metric.
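
The PCA-reduced variant only changes the feature space the distances are computed in; a sketch assuming scikit-learn, with an illustrative component count rather than the one used in our experiments:

```python
from sklearn.decomposition import PCA

# Project every feature set into a smaller space before estimating M;
# inverting a 64x64 covariance (64 components is an assumed value) is
# far cheaper than inverting one in the full feature space.
pca = PCA(n_components=64).fit(train)
ranklist = mahalanobis_ranklist(pca.transform(query[None, :])[0],
                                pca.transform(gallery),
                                pca.transform(train))
```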
@@ -140,7 +140,7 @@ transformations performed by the ResNet-50 convolution model the features were
\end{center}
\end{figure}
-While we did not use mahalanobis as a primary distance metric, it is possible to use the Mahalanobis metric, together with the next investigated solution involving $k$-reciprocal re-ranking.
+While we did not use Mahalanobis as the primary distance metric, it is possible to combine it with the next investigated solution involving $k$-reciprocal re-ranking.
# Suggested Improvement
@@ -193,12 +193,13 @@ be defined as $k_1$: $R^*(g_i,k_1)$.
The distances obtained are then combined into a final distance $d^*(p,g_i)$ that is used to produce the
improved rank-list: $d^*(p,g_i)=(1-\lambda)d_J(p,g_i)+\lambda d(p,g_i)$.
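
As a sketch, assuming precomputed query-by-gallery matrices `d_orig` (original metric) and `d_jacc` (Jaccard distance from the re-ranking step), the mixing step is a single weighted sum:

```python
import numpy as np

def mixed_distance(d_orig, d_jacc, lam):
    # d*(p, g_i) = (1 - lambda) * d_J(p, g_i) + lambda * d(p, g_i)
    return (1.0 - lam) * d_jacc + lam * d_orig

# rank gallery images per query by the combined distance
ranklists = np.argsort(mixed_distance(d_orig, d_jacc, lam=0.2), axis=1)
```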
+## Optimisation
The aim is to learn optimal values for $k_1$, $k_2$ and $\lambda$ on the training set that improve top-1 identification accuracy.
This is done through a simple multi-direction search algorithm followed by an exhaustive search to estimate
$k_{1_{opt}}$ and $k_{2_{opt}}$ for eleven values of $\lambda$ from zero (only Jaccard distance) to one (only original distance)
in steps of 0.1. The results obtained through this approach suggest: $k_{1_{opt}}=9, k_{2_{opt}}=3, 0.1\leq\lambda_{opt}\leq 0.3$.
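
A sketch of the exhaustive stage, where `rerank_top1` is a placeholder (assumed, not part of our code) that re-ranks the training split with a given $(k_1, k_2, \lambda)$ and returns top-1 accuracy, and the search ranges are illustrative:

```python
import itertools
import numpy as np

best_acc, best_params = 0.0, None
for lam in np.round(np.linspace(0.0, 1.0, 11), 1):   # 0.0, 0.1, ..., 1.0
    for k1, k2 in itertools.product(range(2, 21), range(1, 11)):
        acc = rerank_top1(k1, k2, lam)                # assumed helper
        if acc > best_acc:
            best_acc, best_params = acc, (k1, k2, lam)
```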
-It is possible to verify that the optimization of $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda$
+It is possible to verify that the optimisation of $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda$
has been successful. Figures \ref{fig:pqvals} and \ref{fig:lambda} show that the optimal values obtained from
training are close to those at the local maximum for the gallery and query sets.
@@ -220,7 +221,6 @@ training are close to the ones for the local maximum of gallery and query.
\end{center}
\end{figure}
-
## $k$-reciprocal Re-ranking Evaluation
Re-ranking achieves better results than the other baseline methods analysed, both as top $k$
@@ -252,6 +252,9 @@ The difference between the top $k$ accuracies of the two methods gets smaller as
\end{center}
\end{figure}
+The improved results due to $k$-reciprocal re-ranking can be explained by considering...
+
+
# Conclusion
# References