```

usage: evaluate.py [-h] [-t] [-c] [-k] [-m] [-e] [-r] [-a RERANKA]
                   [-b RERANKB] [-l RERANKL] [-n NEIGHBORS] [-v] [-s SHOWRANK]
                   [-1] [-2] [-M MULTRANK] [-C] [--data DATA] [-K KMEAN] [-A]
                   [-P PCA]

optional arguments:
  -h, --help            show this help message and exit
  -t, --train           Use train data instead of query and gallery
  -c, --conf_mat        Show visual confusion matrix
  -k, --kmean_alt       Perform clustering with generalized labels (not actual
                        kmean)
  -m, --mahalanobis     Perform Mahalanobis Distance metric
  -e, --euclidean       Use standard euclidean distance
  -r, --rerank          Use k-reciprocal reranking
  -a RERANKA, --reranka RERANKA
                        Parameter k1 for rerank
  -b RERANKB, --rerankb RERANKB
                        Parameter k2 for rerank
  -l RERANKL, --rerankl RERANKL
                        Parameter lambda for rerank
  -n NEIGHBORS, --neighbors NEIGHBORS
                        Use customized ranklist size NEIGHBORS
  -v, --verbose         Use verbose output
  -s SHOWRANK, --showrank SHOWRANK
                        Save ranklist pics id in a txt file for first SHOWRANK
                        queries
  -1, --normalise       Normalise features
  -2, --standardise     Standardise features
  -M MULTRANK, --multrank MULTRANK
                        Run for different ranklist sizes equal to MULTRANK
  -C, --comparison      Compare baseline and improved metric
  --data DATA           Folder containing data
  -K KMEAN, --kmean KMEAN
                        Perform Kmean clustering, KMEAN number of clusters
  -A, --mAP             Display Mean Average Precision
  -P PCA, --PCA PCA     Perform pca with PCA eigenvectors

```

EXAMPLES for `evaluate.py`:

EXAMPLE 1: Run euclidean distance with a top-n ranklist of size 10

`evaluate.py -e -n 10` 

or simply

`evaluate.py -n 10`
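
Under the hood, this mode amounts to a nearest-neighbour sort of the gallery by Euclidean distance. A minimal NumPy sketch of the idea (function and array names are illustrative, not the ones used in `evaluate.py`):

```python
import numpy as np

def euclidean_ranklist(query_feats, gallery_feats, n=10):
    """Return, for each query, the indices of the n closest gallery features."""
    # Pairwise squared Euclidean distances via |q - g|^2 = |q|^2 + |g|^2 - 2 q.g
    q_sq = np.sum(query_feats ** 2, axis=1, keepdims=True)  # shape (Q, 1)
    g_sq = np.sum(gallery_feats ** 2, axis=1)                # shape (G,)
    dists = q_sq + g_sq - 2.0 * query_feats @ gallery_feats.T
    # Sort each row by distance and keep the n closest gallery indices
    return np.argsort(dists, axis=1)[:, :n]
```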

EXAMPLE 2: Run euclidean distance for the first 10 values of top n and graph them

`evaluate.py -M 10`

EXAMPLE 3: Run a comparison between the baseline and reranked metrics for the first 5 values of top n and graph them

`evaluate.py -M 5 -C`

EXAMPLE 4: Run kmeans clustering with 10 clusters

`evaluate.py -K 10`
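
The `-K` option groups the features into the requested number of clusters. A rough sketch of that kind of clustering, using scikit-learn as a stand-in (the script itself may implement it differently):

```python
from sklearn.cluster import KMeans

def cluster_features(features, n_clusters=10):
    """Assign each feature vector to one of n_clusters kmeans clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(features)  # one cluster label per feature vector
```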

EXAMPLE 5: Run with the Mahalanobis distance, using PCA with the top 100 eigenvectors to speed up the calculation

`evaluate.py -m -P 100`
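
The speed-up comes from projecting the features onto the leading 100 eigenvectors before computing the expensive Mahalanobis distances. A hedged sketch of that projection step with scikit-learn (not necessarily how `evaluate.py` does it):

```python
from sklearn.decomposition import PCA

def project_features(train_feats, query_feats, gallery_feats, n_components=100):
    """Fit PCA on the training features and project query/gallery onto the top eigenvectors."""
    pca = PCA(n_components=n_components)
    pca.fit(train_feats)
    return pca.transform(query_feats), pca.transform(gallery_feats)
```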

EXAMPLE 6: Run rerank for customized values of RERANKA, RERANKB and RERANKL

`evaluate.py -r -a 11 -b 3 -l 0.3`
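
For context, in the usual k-reciprocal re-ranking formulation (Zhong et al.), k1 sizes the k-reciprocal neighbourhood, k2 sizes the local query expansion, and lambda weights the original distance against the Jaccard distance. A sketch of the final blending step only; the matrix and function names are illustrative:

```python
import numpy as np

def blend_distances(original_dist, jaccard_dist, lambda_value=0.3):
    """Final re-ranked distance: a weighted mix of the original and Jaccard distance matrices."""
    # lambda_value near 1 trusts the original metric; near 0 trusts the Jaccard term.
    return (lambda_value * np.asarray(original_dist)
            + (1.0 - lambda_value) * np.asarray(jaccard_dist))
```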

EXAMPLE 7: Run on the training set with euclidean distance and normalise the feature vectors. Draw the confusion matrix at the end.

`evaluate.py -t -1 -c`

EXAMPLE 8: Run euclidean distance standardising the feature data for the first 10 values of top n and graph them.

`evaluate.py -2 -M 10`
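
For clarity, standardising (`-2`) rescales each feature dimension to zero mean and unit variance, whereas normalising (`-1`) rescales each feature vector to unit length. A minimal NumPy sketch of both (helper names are illustrative):

```python
import numpy as np

def standardise(feats, eps=1e-12):
    """Zero mean, unit variance per feature dimension (the -2 option)."""
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + eps)

def normalise(feats, eps=1e-12):
    """Unit L2 norm per feature vector (the -1 option)."""
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)
```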

EXAMPLE 9: Run rerank with a top-10 ranklist and save the names of the images that compose the ranklist for the first 5 queries to query.txt and ranklist.txt.

`evaluate.py -r -s 5 -n 10`

EXAMPLE 10: Display mAP. It is advisable to use a high n to obtain accurate results.

`evaluate.py -A -n 5000`
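
Mean Average Precision averages, over all queries, the precision measured at every rank where a correct match appears, which is why a large `-n` (a near-complete ranklist) gives a more accurate figure. A minimal sketch, assuming one boolean relevance vector per query ordered by rank:

```python
import numpy as np

def mean_average_precision(relevance_lists):
    """relevance_lists: iterable of boolean arrays, one per query, ordered by rank."""
    aps = []
    for rel in relevance_lists:
        rel = np.asarray(rel, dtype=bool)
        if not rel.any():
            continue  # skip queries with no correct match in the ranklist
        hits = np.cumsum(rel)                                 # correct matches seen so far
        precision_at_hits = hits[rel] / (np.nonzero(rel)[0] + 1)
        aps.append(precision_at_hits.mean())
    return float(np.mean(aps))
```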

EXAMPLE 11: Run euclidean distance specifying a different data folder location

for data in the same folder as evaluate.py:

`evaluate.py --data ./` 
		
or for data in another folder:

`evaluate.py --data ./foo/bar/`

EXAMPLES for `opt.py`:

EXAMPLE 1: Optimize top-1 accuracy over k1, k2 and lambda, speeding up the process with PCA (top 50 eigenvectors)

`opt.py -P 50`

EXAMPLE 2: Optimize mAP over k1, k2 and lambda, speeding up the process with PCA (top 50 eigenvectors)

`opt.py -P 50 -A`
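
`opt.py` searches over the three re-ranking parameters for the best value of the chosen metric (top-1 accuracy by default, mAP with `-A`). A naive grid search over k1, k2 and lambda would look roughly like the sketch below; the `score` callable and the value ranges are placeholders, not what `opt.py` actually uses:

```python
import itertools

def grid_search(score, k1_values=range(5, 21, 5), k2_values=range(2, 7),
                lambda_values=(0.1, 0.3, 0.5, 0.7)):
    """Return (best_score, k1, k2, lambda) for the triple that maximises score(k1, k2, lam)."""
    best = None
    for k1, k2, lam in itertools.product(k1_values, k2_values, lambda_values):
        value = score(k1, k2, lam)
        if best is None or value > best[0]:
            best = (value, k1, k2, lam)
    return best
```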