From 0b2c847e17c9cb322bead25edeaa4e55d681eab9 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov <v@skozl.com>
Date: Thu, 13 Dec 2018 16:56:36 +0000
Subject: Capital Mahalanobis

---
 report/paper.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index a20e943..0aa514a 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -113,16 +113,16 @@ improve identification accuracy, and consider it an additional baseline.
 
 ## Mahalanobis Distance
 
-We were not able to achieve significant improvements using mahalanobis for 
+We were not able to achieve significant improvements using Mahalanobis for 
 original distance ranking compared to the squared Euclidean metric. 
 
-The mahalanobis distance metric was used to create the ranklist as an alternative to euclidean distance:
+The Mahalanobis distance metric was used to create the rank-list as an alternative to the Euclidean distance:
 
 $$ d_M(p,g_i) = (p-g_i)^TM(p-g_i). $$
 
-When performing mahalanobis with the covariance matrix $M$ generated from the training set, reported accuracy is reduced to **38%** .
+When performing Mahalanobis ranking with the covariance matrix $M$ generated from the training set, the reported accuracy is reduced to **38%**.
 
-We also attempted to perform the same mahalanobis metric on a reduced PCA featureset. This allowed for significant execution 
+We also attempted to apply the same Mahalanobis metric to a reduced PCA feature set. This allowed for significant execution 
 time improvements due to the greatly reduced computation requirements of the smaller feature space, but nevertheless demonstrated no
 improvements over a Euclidean metric.
 
@@ -140,7 +140,7 @@ transformations performed by the ResNet-50 convolution model the features were
 \end{center}
 \end{figure}
 
-While we did not use mahalanobis as a primary distance metric, it is possible to use the Mahalanobis metric, together with the next investigated solution involving $k$-reciprocal re-ranking.
+While we did not use Mahalanobis as the primary distance metric, it can be combined with the next investigated solution involving $k$-reciprocal re-ranking.
 
 # Suggested Improvement
 
-- 
cgit v1.2.3-70-g09d2
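
The Mahalanobis ranking described in the patch above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name is hypothetical, and $M$ is taken here as the inverse of the training-set covariance (the classical Mahalanobis form; the paper's wording "covariance matrix $M$" is ambiguous on this point).

```python
import numpy as np

def mahalanobis_ranklist(probe, gallery, train_features):
    """Rank gallery entries by Mahalanobis distance to the probe:
    d_M(p, g_i) = (p - g_i)^T M (p - g_i),
    with M estimated from the training set (here: inverse covariance,
    an assumption -- the classical Mahalanobis metric)."""
    # Covariance of the training features; a small ridge keeps the
    # matrix invertible when it is close to singular.
    cov = np.cov(train_features, rowvar=False)
    M = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    diffs = gallery - probe                    # shape (n_gallery, d)
    # Quadratic form per gallery entry: sum_{d,e} diffs[n,d] M[d,e] diffs[n,e]
    dists = np.einsum('nd,de,ne->n', diffs, M, diffs)
    return np.argsort(dists)                   # best match first
```

Running the same function on PCA-projected features, as the patch describes, only changes the dimensionality `d` of the inputs; the ranking logic is identical.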


From a015ffd16b2834e522ad3d3b0e2b9c6160d65044 Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov <v@skozl.com>
Date: Thu, 13 Dec 2018 17:07:18 +0000
Subject: Add optimisation and begin desc in eval

---
 report/paper.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index 0aa514a..89e260c 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -193,12 +193,13 @@ be defined as $k_1$: $R^*(g_i,k_1)$.
 The distances obtained are then mixed, obtaining a final distance $d^*(p,g_i)$ that is used to obtain the
 improved rank-list: $d^*(p,g_i)=(1-\lambda)d_J(p,g_i)+\lambda d(p,g_i)$.
 
+## Optimisation
 The aim is to learn optimal values of $k_1$, $k_2$ and $\lambda$ on the training set that improve top-1 identification accuracy.
 This is done through a simple multi-direction search algorithm followed by exhaustive search to estimate 
 $k_{1_{opt}}$ and $k_{2_{opt}}$ for eleven values of $\lambda$ from zero (only Jaccard distance) to one (only original distance)
 in steps of 0.1. The results obtained through this approach suggest: $k_{1_{opt}}=9, k_{2_{opt}}=3, 0.1\leq\lambda_{opt}\leq 0.3$.
 
-It is possible to verify that the optimization of $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda$
+It is possible to verify that the optimisation of $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda$
 has been successful. Figures \ref{fig:pqvals} and \ref{fig:lambda} show that the optimal values obtained from 
 training are close to the ones for the local maximum of gallery and query.
 
@@ -220,7 +221,6 @@ training are close to the ones for the local maximum of gallery and query.
 \end{center}
 \end{figure}
 
-
 ## $k$-reciprocal Re-ranking Evaluation 
 
 Re-ranking achieves better results than the other baseline methods analyzed both as top $k$
@@ -252,6 +252,9 @@ The difference between the top $k$ accuracies of the two methods gets smaller as
 \end{center}
 \end{figure}
 
+The improved results due to $k$-reciprocal re-ranking can be explained by considering...
+
+
 # Conclusion
 
 # References
-- 
cgit v1.2.3-70-g09d2
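
The distance mixing and the $\lambda$ sweep added in the patch above can be sketched as follows. This is an editorial illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, and the exhaustive search shown covers only the eleven $\lambda$ values (the multi-direction search over $k_1$, $k_2$ is omitted).

```python
import numpy as np

def mixed_distance(d_jaccard, d_original, lam):
    """Final re-ranking distance:
    d*(p, g_i) = (1 - lambda) * d_J(p, g_i) + lambda * d(p, g_i)."""
    return (1.0 - lam) * d_jaccard + lam * d_original

def sweep_lambda(d_jaccard, d_original, gallery_labels, query_labels):
    """Exhaustive search over the eleven values lambda = 0.0, 0.1, ..., 1.0
    (0: only Jaccard distance, 1: only original distance), returning the
    value with the best top-1 identification accuracy.
    d_jaccard and d_original are (n_query, n_gallery) distance matrices."""
    best_lam, best_acc = 0.0, -1.0
    for lam in np.linspace(0.0, 1.0, 11):
        d_star = mixed_distance(d_jaccard, d_original, lam)
        # Top-1 prediction: gallery label of the nearest entry per query.
        top1 = gallery_labels[np.argmin(d_star, axis=1)]
        acc = float(np.mean(top1 == query_labels))
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam, best_acc
```

In practice this sweep would be run on the training split for each candidate $(k_1, k_2)$ pair, matching the section's description of selecting $k_{1_{opt}}$, $k_{2_{opt}}$ and $\lambda_{opt}$ before evaluating on query and gallery.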