path: root/report2/paper.md
author	nunzip <np.scarh@gmail.com>	2018-12-12 19:02:42 +0000
committer	nunzip <np.scarh@gmail.com>	2018-12-12 19:02:42 +0000
commit	4a287d8af1bf67c96b2116a4614272769c69cc43 (patch)
tree	0a8c219ac5df1f4b14b6408fad61215fce6d33ae /report2/paper.md
parent	d8b633d900cacb2582e54aa3b9c772a5b95b2e87 (diff)
Rewrite some paper
Diffstat (limited to 'report2/paper.md')
-rwxr-xr-x	report2/paper.md	12
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/report2/paper.md b/report2/paper.md
index 7099df8..6358445 100755
--- a/report2/paper.md
+++ b/report2/paper.md
@@ -115,7 +115,7 @@ original distance ranking compared to squared Euclidean metrics. Results can
be observed using the `-m|--mahalanobis` flag when running evaluation with the
repository complementing this paper.
-COMMENT ON VARIANCE AND MAHALANOBIS RESULTS
+**COMMENT ON VARIANCE AND MAHALANOBIS RESULTS**
\begin{figure}
\begin{center}
@@ -166,15 +166,17 @@ through the Jaccard metric as:
$$ d_J(p,g_i)=1-\frac{\sum\limits_{j=1}^N \min(V_{p,g_j},V_{g_i,g_j})}{\sum\limits_{j=1}^N \max(V_{p,g_j},V_{g_i,g_j})} $$
It is then possible to perform a local query expansion using the $g_i$ neighbors of $p$,
-defined as $V_p=\frac{1}{|N(p,k_2)|}\sum\limits_{g_i\in N(p,k_2)}V_{g_i}$. We refer to $k_2$ since
-we limit the size of the nighbors to prevent noise from the $k_2$ neighbors. The dimension k of the *$R^*$*
-set will instead be defined as $k_1$:$R^*(g_i,k_1)$.
+defined as:
+$$ V_p=\frac{1}{|N(p,k_2)|}\sum\limits_{g_i\in N(p,k_2)}V_{g_i} $$
+We write $k_2$ because we limit the size of this neighborhood to $k_2$ elements
+to prevent noise from more distant neighbors. The dimension $k$ of the $R^*$ set
+will instead be denoted $k_1$: $R^*(g_i,k_1)$.
The distances obtained are then combined into a final distance $d^*(p,g_i)$ that is used to produce the
improved ranklist: $d^*(p,g_i)=(1-\lambda)d_J(p,g_i)+\lambda d(p,g_i)$.
The aim is to learn optimal values of $k_1,k_2$ and $\lambda$ on the training set that improve top-1 identification accuracy.
-This is done through a simple **GRADIENT DESCENT** algorithm followed by exhaustive search to estimate
+This is done through a simple multi-direction search algorithm followed by exhaustive search to estimate
$k_{1_{opt}}$ and $k_{2_{opt}}$ for eleven values of $\lambda$ from zero (only Jaccard distance) to one (only original distance)
in steps of 0.1. The results obtained through this approach suggest: $k_{1_{opt}}=9, k_{2_{opt}}=3, 0.1\leq\lambda_{opt}\leq 0.3$.
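
The distance combination described in the hunk above can be sketched in plain Python. This is a minimal illustration under the equations as written, not the repository's actual implementation; the function names are hypothetical:

```python
def jaccard_distance(V_p, V_gi):
    """Jaccard distance between two k-reciprocal feature vectors:
    d_J(p, g_i) = 1 - sum_j min(V_p[j], V_gi[j]) / sum_j max(V_p[j], V_gi[j])."""
    num = sum(min(a, b) for a, b in zip(V_p, V_gi))
    den = sum(max(a, b) for a, b in zip(V_p, V_gi))
    return 1.0 - num / den

def local_query_expansion(neighbor_vectors):
    """Local query expansion: V_p becomes the element-wise mean of the
    vectors of the k2 nearest neighbors N(p, k2)."""
    k2 = len(neighbor_vectors)
    return [sum(col) / k2 for col in zip(*neighbor_vectors)]

def final_distance(d_jaccard, d_original, lam):
    """Mixed distance: d*(p,g_i) = (1 - lambda)*d_J(p,g_i) + lambda*d(p,g_i)."""
    return (1.0 - lam) * d_jaccard + lam * d_original
```

At $\lambda=0$ only the Jaccard distance contributes; at $\lambda=1$ the original distance is recovered, matching the endpoints of the search described above.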
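
The exhaustive part of the search over $\lambda$ can be sketched as follows; `eval_top1` is a hypothetical callback that returns top-1 accuracy on the training set for a given $\lambda$, with $k_1$ and $k_2$ already fixed by the multi-direction search:

```python
def search_lambda(eval_top1):
    """Exhaustive search over eleven lambda values from 0.0 (only Jaccard
    distance) to 1.0 (only original distance) in steps of 0.1, keeping the
    value that maximizes top-1 identification accuracy."""
    lambdas = [round(0.1 * i, 1) for i in range(11)]
    best = max(lambdas, key=eval_top1)
    return best, eval_top1(best)
```

With only eleven candidates per $(k_1, k_2)$ pair, exhaustive search is cheap, which is presumably why it is preferred here over continuing the multi-direction search in $\lambda$.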