author    Vasil Zlatanov <v@skozl.com> 2019-03-20 22:02:43 +0000
committer Vasil Zlatanov <v@skozl.com> 2019-03-20 22:02:43 +0000
commit    d8f9a4863c5a2002495fd698cc1d9517254f28c4 (patch)
tree      987071a5bd521bfd654b361a1017911ba499188f
parent    99313240a9407d553bc71336e72166d4bd6f4c6b (diff)
very good progress
-rw-r--r--  report/paper.md | 113
1 file changed, 90 insertions, 23 deletions
diff --git a/report/paper.md b/report/paper.md
index 723318a..87c22ab 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -1,55 +1,122 @@
-# Probelm Definion
+# Problem Definition
-This coursework's goal is to develop an image representation for measuring similarity between patches from the `HPatches` dataset. The `HPatches` dataset contains pacthes sampled from image sequences, where each sequence contains images of the same scenes. Patches are separeted into `i_X` patches which ahve undergone illumintation changes and `v_X` patches which have undergone viewpoint changes. For each image sequence there is a reference image with corresponding reference patches, and two more flie `eX.png` and `hX.png` containing corresponding pacthes from the images in the sequence with altered illumintation or viewpoint. Corresponding patches are extracted by adding geometric noise, easy `e_X` have a small amount of jitter while `h_X` patches have more[@patches]. The patches as processed by our networks are monochrome 32 by 32 images.
+This coursework's goal is to develop an image representation of patches from the `HPatches` dataset for the purpose of matching and retrieval, where images with the same features (and classes) are nearby in the reduced Euclidean space, while dissimilar ones are far away. The dataset contains patches sampled from image sequences, where each sequence contains images of the same scene. For each image sequence there is a reference image, and two more files `eX.png` and `hX.png` containing corresponding patches from the images in the sequence with altered illumination (`i_X`) or viewpoint (`v_X`). Furthermore, corresponding patches are extracted with geometric noise: easy `e_X` patches have a small amount of jitter, while `h_X` patches have a larger amount [@patches]. The patches as processed by our networks are monochrome 32 by 32 images.
-## Tasks
-
-The task is to train a network, which given a patch is able to produce a descriptor vector with a dimension of 128. The descriptors are evaluated based on there performance across three tasks:
+The goal is to train a network which, given a patch, is able to produce a descriptor vector with a dimension of 128. The descriptors are evaluated based on their performance across three tasks:
* Retrieval: Use a given image's descriptor to find similar images in a large gallery
* Matching: Use a given image's descriptor to find similar images in a small gallery with difficult distractors
-* Verificaiton: Given two images, use the descriptors to determine their similarity
+* Verification: Given two images, use the descriptors to determine their similarity
# Baseline Model
The baseline model provided in the given IPython notebook approaches the problem by using two networks for the task.
-## Shallow U-Net
+## TPU Training
-A shallow version of the U-Net network is used to denoise the noisy patches. The shallow U-Net network has the same output size as the input size, , is fed a noisy image and has loss computed as the euclidean distance with a clean reference patch. This effectively teaches the U-Net autoencoder to perform a denoising operation on the input images.
+Training with Google's proprietary TPUs is used for this project as it can be done with a few modifications to the baseline. Specifically, it is necessary to rewrite the `UpSampling2D` layer using a `Lambda` layer, replace `keras` with `tensorflow.keras` and replace the sequence generators with those in the `tensorflow` library. Furthermore, TPUs do not support the placeholder op necessary for the implementation of the baseline Siamese triplet network, but they are compatible with the online batch (hard) triplet loss implemented and presented in this paper.
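+
+As an illustration, a minimal sketch of such a replacement, assuming a fixed 2x nearest-neighbour upsampling, channels-last tensors and a TF 1.x-era `tensorflow.keras` (the exact baseline layer names may differ):
+
+```python
+import tensorflow as tf
+from tensorflow.keras.layers import Lambda  # tensorflow.keras rather than plain keras
+
+def nn_upsample_2x(x):
+    # Nearest-neighbour resize to twice the spatial size, standing in for
+    # UpSampling2D, which the TPU compiler could not handle.
+    s = tf.shape(x)
+    return tf.image.resize_nearest_neighbor(x, tf.stack([s[1] * 2, s[2] * 2]))
+
+upsample = Lambda(nn_upsample_2x)
+```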
-Efficient training can performed with TPU acceleartion, a batch size of 4096 and the Adam optimizer with learning rate of 0.001 and is shown on figure \ref{fig:denoise}. Training and validation was performed with **all** available data.
+Direct comparison of performance with GPUs is made difficult by the fact that Google's TPUs show the biggest performance increase for large batch sizes, which do not fit in GPU memory. Furthermore, TPUs incur an extra model compilation time and copying of the weights. Nevertheless, approximate epoch training times for the denoise network are presented in table \ref{tpu-time}.
-The network is able to achieve a mean average error of 5.3 after 19 epochs. With gradient descent we observed a loss of 5.5 after the same number of epochs. We do not observe evidence of overfitting with the shallow net, something which may be expected with a such a shallow network. An example of denoising as performed by the network is visible in figure \ref{fig:den3}.
+\begin{table}[h]
+\begin{center}
+\begin{tabular}{lrrr} \hline
+Batch Size & CPU & GPU & TPU \\ \hline
+64   & 85s & 41s & 43s \\
+512  & 84s & 40s & 19s \\
+1024 & 83s & 40s & 10s \\ \hline
+\end{tabular}
+\caption{Epoch Training Time on Accelerators}
+\label{tpu-time}
+\end{center}
+\end{table}
-Quick experimentation with a deeper version of U-Net shows it is possible to achieve validation loss of below 5.0 after training for 10 epochs, and a equivalent to the shallow loss of 5.3 is achievable aftery only 3 epochs.
+### Shallow U-Net
-## L2 Net
+A shallow version of the U-Net network is used to denoise the noisy patches. The shallow U-Net has the same output size as its input size, is fed a noisy image, and has its loss computed as the Euclidean distance from a clean reference patch. This effectively teaches the U-Net autoencoder to perform a denoising operation on the input images.
+Efficient training can be performed with TPU acceleration, a batch size of 4096 and the Adam optimizer with a learning rate of 0.001, and is shown in figure \ref{fig:denoise}. Training and validation were performed with **all** available data.
+The network is able to achieve a mean average error of 5.3 after 19 epochs. With gradient descent we observed a loss of 5.5 after the same number of epochs. There is no observed evidence of overfitting, something which may be expected with such a shallow network. An example of denoising as performed by the network is visible in figure \ref{fig:den3}.
+Quick experimentation with a deeper version of U-Net shows it is possible to achieve a validation loss of below 5.0 after training for 10 epochs, and a loss equivalent to the shallow network's 5.3 is achievable after only 3 epochs.
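+
+A minimal sketch of this training configuration, assuming the mean absolute error is used as the per-pixel distance and that `denoise_model`, `noisy_train`/`clean_train` and their validation counterparts stand in for the baseline's own model and arrays:
+
+```python
+from tensorflow.keras.optimizers import Adam
+
+# denoise_model / noisy_train / clean_train are placeholders for the baseline's variables.
+# Batch size of 4096 and learning rate of 0.001, as described above.
+denoise_model.compile(optimizer=Adam(lr=0.001), loss='mean_absolute_error')
+denoise_model.fit(noisy_train, clean_train,
+                  batch_size=4096, epochs=19,
+                  validation_data=(noisy_val, clean_val))
+```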
-The network used to output the 128 dimension descritors is a L2-network with triplet loss as defined in CVPR 17 [@l2net]. L2-Net was specifically for descriptor output of patches and is a very suitable choice for this task. L2-Net is robust architecture which has been developed with the HPatches dataset.
+### L2 Net
-Training of the L2-Net can be done on the noisy images, but it is beneficial to use the denoise images from the U-Net to improve performance. Training the L2-Net with denoised yields training curves shown in \label{fig:descriptor}
+The network used to output the 128-dimension descriptors is an L2-Net with triplet loss as defined in CVPR 17 [@l2net]. L2-Net was designed specifically for descriptor output of patches and is a very suitable choice for this task. Training of the L2-Net can be done on the noisy images, but experimentation showed it is beneficial to use denoised images. Training the L2-Net with denoised patches yields the training curves shown in figure \ref{fig:descriptor}. L2-Net is a robust architecture which has been developed with the HPatches dataset and may be hard to improve upon. The architecture is visualised in figure \ref{fig:l2arch}.
### Triplet Loss
-The loss used to train the siamese L2 Network:
-$ \mathcal{L] = \textrm{max}(d(a,p) - d(a,n) + \textrm{margin}, 0)$
+The loss used for the Siamese L2-Net is:
+
+\begin{equation}\label{eq:loss_trip}
+ \loss{tri}(\theta) = \frac{1}{N} \sum\limits_{\substack{a,p,n \\ y_a = y_p \neq y_n}} \left[ D_{a,p} - D_{a,n} + \alpha \right]_+.
+\end{equation}
-There is an intrinsic problem that occurs when loss approaches 0, training becomes more difficult as we are throwing away loss data which prevents the network from progerssing significantly past that point. Solutions may involve increase the margin $\alpha$ or addopting a non linear loss which is able to avoid the loss truncation.
+where $N$ is the number of triplets in the batch, $D_{a,p}$ and $D_{a,n}$ are the anchor-positive and anchor-negative descriptor distances, and $\alpha$ is the margin.
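+
+A sketch of equation \ref{eq:loss_trip} for a batch of anchor, positive and negative descriptors (whether the plain or squared Euclidean distance is used, and the margin value, are assumptions here):
+
+```python
+import tensorflow.keras.backend as K
+
+def triplet_loss(d_a, d_p, d_n, alpha=1.0):
+    # Squared Euclidean anchor-positive and anchor-negative distances,
+    # hinged at the margin alpha and averaged over the batch.
+    d_ap = K.sum(K.square(d_a - d_p), axis=1)
+    d_an = K.sum(K.square(d_a - d_n), axis=1)
+    return K.mean(K.maximum(d_ap - d_an + alpha, 0.0))
+```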
+
+There is an intrinsic problem that occurs when the loss approaches 0: training becomes more difficult as we are throwing away loss information, which prevents the network from progressing significantly past that point. Solutions may involve increasing the margin $\alpha$ or adopting a non-linear loss which avoids the loss truncation, similar to the L2-Net paper.
# Performance & Evaluation
-Training speed was found to be greatly improvable by utilising Google's dedicated TPU and increasing batch size. With the increase in batch size, it becomes beneficial to increase learning rate. Particularly we found an increase of batch size to 4096 to allow an increase in learning rate of a factor of 10 over the baseline which offered around 10 training time speedup, together with faster convergence of the loss for the denosie U-Net.
+Training speed was found to be greatly improvable by utilising Google's dedicated TPU and increasing the batch size. With the increase in batch size, it becomes beneficial to increase the learning rate. In particular, we found that increasing the batch size to 4096 allowed an increase in learning rate by a factor of 10 over the baseline, which offered around a 10x training time speedup, together with faster convergence of the loss for the denoise U-Net.
+
+Evaluating the baseline against the three metrics gives: Retrieval - 81.3%, Matching - 31.7%, Verification - 54.4%.
+
+# Hard Triplet Mining
+
+Training the baseline for a sufficient number of epochs results in loss convergence, that is, the validation (and training) loss stops improving. The baseline generator creates random valid triplets which are fed to the Siamese network. This results in the network being given too many *easy* triplets, which for a well trained baseline result in a low loss but do not incentivise the model to improve.
+
+One way to boost performance is to run the dataset of patches through a single descriptor model, record the descriptors and generate a subset of hard triplets with which the baseline can be trained. This results in a higher immediate loss and allows the network to optimise the loss for those hard triplets. This can be done once or multiple times. It can also be done on every epoch, or ideally on every batch in the form of *online mining*.
+
+## Online Mining
+
+Online mining is introduced by Google in their FaceNet paper [@facenet] and further evaluated by the Szkocka Research Group in their HardNet paper [@hardnet], which utilises the padded L2-Net network as implemented in the baseline.
+
+Instead of using three parallel Siamese models, online batch mining as evaluated in this paper uses a single model which is given a batch of $n$ patches and produces $n$ descriptors. The loss function calculates the pairwise Euclidean distance between the descriptors, creating an $n \times n$ matrix. We mask this matrix with the labels of the patches (based on the sequence they were extracted from) to find all valid positive and negative distances.
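+
+A sketch of the distance matrix and label masks described above (the helper names are ours, not the baseline's):
+
+```python
+import tensorflow as tf
+
+def pairwise_distances(descriptors):
+    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, computed for all pairs at once.
+    dot = tf.matmul(descriptors, descriptors, transpose_b=True)
+    sq_norms = tf.linalg.diag_part(dot)
+    d2 = tf.expand_dims(sq_norms, 1) - 2.0 * dot + tf.expand_dims(sq_norms, 0)
+    return tf.maximum(d2, 0.0)  # clamp small negatives caused by rounding
+
+def positive_negative_masks(labels):
+    # Positives share a sequence label but are distinct patches; negatives differ in label.
+    same = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
+    not_self = tf.logical_not(tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool))
+    return tf.logical_and(same, not_self), tf.logical_not(same)
+```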
+
+We then have a few options for determining our loss. We define batch losses as introduced by Alexander Hermans et al. [@defense], who applied the batch all and batch hard losses to person re-identification. The **batch all** triplet loss is computed as the average over all valid triplets in the distance matrix. This is in a way equivalent to the loss found by the Siamese network in the baseline, but is able to compute the loss as the average of a vastly larger number of triplets.
-We evaluate the baseline accross the retrieval, matching and verification tasks:
+Perhaps the more interesting approach is the **batch hard** strategy of computing the loss using the hardest triplets for each anchor. We define the anchors as each and every input patch which can form a valid triplet, i.e. has a positive with the same label. For every anchor we find the largest positive distance and the smallest negative distance in the batch. The loss for the batch is the mean of the hardest triplets for each anchor:
+\begin{align}\label{eq:loss_bh}
+  \loss{BH}(\theta; X) = \overbrace{\sum\limits_{i=1}^{S} \sum\limits_{a=1}^{K}}^{\textnormal{all anchors}}
+  \Big[& \overbrace{\max\limits_{p=1 \dots K} \hspace*{-5pt} D\left(\nnfn(x^i_a), \nnfn(x^i_p)\right)}^{\textnormal{hardest positive}} \\
+  & - \hspace*{-5pt} \underbrace{\min\limits_{\substack{j=1 \dots S \\ n=1 \dots K \\ j \neq i}} \hspace*{-5pt} D\left(\nnfn(x^i_a), \nnfn(x^j_n)\right)}_{\textnormal{hardest negative}} + \alpha \Big]_+,\nonumber
+\end{align}
-# Planned Work
+where $S$ is the number of sequences in the batch, $K$ is the number of images in each sequence, $D$ is the distance function, such as the Euclidean or squared Euclidean distance, $\alpha$ is the margin and $x^i_j$ is the $j$-th patch of the $i$-th sequence in the batch $X$.
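+
+A sketch of the batch hard loss of equation \ref{eq:loss_bh}, reusing the distance and mask helpers from the previous sketch (the margin value is an assumption):
+
+```python
+import tensorflow as tf
+
+def batch_hard_loss(labels, descriptors, alpha=1.0):
+    d = pairwise_distances(descriptors)
+    pos_mask, neg_mask = positive_negative_masks(labels)
+    # Hardest (furthest) positive per anchor; invalid entries are zeroed out.
+    hardest_pos = tf.reduce_max(tf.where(pos_mask, d, tf.zeros_like(d)), axis=1)
+    # Hardest (closest) negative per anchor; invalid entries pushed beyond the maximum.
+    big = tf.reduce_max(d) + 1.0
+    hardest_neg = tf.reduce_min(tf.where(neg_mask, d, big * tf.ones_like(d)), axis=1)
+    return tf.reduce_mean(tf.maximum(hardest_pos - hardest_neg + alpha, 0.0))
+```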
+## Symmetric Batch Formation
+Batch loss presents a series of problems when applied to the HPatches dataset. Implementations of batch triplet loss often use randomly sampled batches. For a dataset like MNIST, which has only 10 classes, this is not a problem as it is very unlikely to have no valid triplets. In the HPatches dataset, image sequences tend to have over 1000 patches, meaning that the probability of having no valid triplets in a random batch is very high. In these situations the loss is meaningless and hence random batch sampling is unfeasible. The first test of batch loss failed due to this and required an alternative solution.
+
+We therefore implemented batches of size $SK$, formed with $S$ patch sequences containing $K \geq 2$ patches each. There are therefore $(K-1)$ possible positives for each anchor and $(S-1)K$ negatives for each anchor-positive pair, with a guaranteed total number of $SK^2(K-1)(S-1)$ triplets. This has the added benefit of allowing the positive and negative distance masks to be precomputed based on $S$ and $K$, as the patches are ordered. It should be noted that the difficulty of the batch relies heavily on both $SK$ and $K$: the larger $K$ is, the harder the furthest anchor-positive pair is likely to be, and the bigger the batch size $SK$, the more likely it is to find a close negative.
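+
+A sketch of such a batch generator, where `patches_by_seq` (a mapping from sequence id to its array of patches) and the values of $S$ and $K$ are assumptions:
+
+```python
+import numpy as np
+
+def symmetric_batch(patches_by_seq, S=16, K=4):
+    # Sample S sequences, then K patches from each, so every anchor is
+    # guaranteed K-1 positives and (S-1)K negatives within the batch.
+    seq_ids = np.random.choice(list(patches_by_seq), size=S, replace=False)
+    batch, labels = [], []
+    for label, sid in enumerate(seq_ids):
+        idx = np.random.choice(len(patches_by_seq[sid]), size=K, replace=False)
+        batch.append(patches_by_seq[sid][idx])
+        labels.extend([label] * K)
+    return np.concatenate(batch), np.array(labels)
+```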
+
+## Collapse
+
+Even after the implementation of symmetric batch formation, we were unable to train the descriptor model without the loss becoming stuck at the margin, that is $\loss{BH} = \alpha$. Upon observation of the descriptors it was evident that the descriptor model was producing descriptors with all dimensions approaching zero, regardless of the input. A naive solution is to apply $L2$ normalisation to the descriptors, but that does not avoid collapse as the network learns to output the same descriptor every time.
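+
+As a sketch, the naive normalisation mentioned above amounts to appending a single layer to the descriptor model (the layer placement is ours):
+
+```python
+from tensorflow.keras.layers import Lambda
+import tensorflow.keras.backend as K
+
+# Constrain descriptors to the unit hypersphere; this alone does not prevent collapse.
+l2_normalise = Lambda(lambda d: K.l2_normalize(d, axis=1))
+```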
+
+Avoiding the collapse of all the descriptors to a single point proved not to be an easy task. Upon experimentation, we find that descriptors tend to collapse for large batch sizes.
+
+## Progressive Hard Batch Mining
+
+
+## Soft Margin Loss
+
+* Softplus function: replacing the hinge $\left[\cdot\right]_+$ with the smooth softplus $\ln(1 + e^x)$ avoids the loss truncation at the margin described earlier; a sketch follows.
+
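+A sketch of a soft margin batch hard loss, reusing the helpers from the batch hard sketch above (this follows the soft margin formulation of [@defense]; its use here is a proposal, not a reported result):
+
+```python
+import tensorflow as tf
+
+def batch_hard_soft_margin_loss(labels, descriptors):
+    # Replace the hinge [x + alpha]_+ with the smooth softplus ln(1 + e^x),
+    # which decays towards zero instead of truncating the loss at the margin.
+    d = pairwise_distances(descriptors)
+    pos_mask, neg_mask = positive_negative_masks(labels)
+    hardest_pos = tf.reduce_max(tf.where(pos_mask, d, tf.zeros_like(d)), axis=1)
+    big = tf.reduce_max(d) + 1.0
+    hardest_neg = tf.reduce_min(tf.where(neg_mask, d, big * tf.ones_like(d)), axis=1)
+    return tf.reduce_mean(tf.nn.softplus(hardest_pos - hardest_neg))
+```
+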
+# Visualisation
+
+Descriptors can be visualised with t-SNE.
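+
+A minimal sketch using scikit-learn's t-SNE implementation (the library choice and the `descriptors`/`labels` arrays are assumptions):
+
+```python
+from sklearn.manifold import TSNE
+import matplotlib.pyplot as plt
+
+# Project the 128-D descriptors to 2-D and colour the points by sequence label.
+embedded = TSNE(n_components=2).fit_transform(descriptors)
+plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=2, cmap='tab20')
+plt.savefig('fig/tsne.png')
+```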
# Appendix
-![U-Net Training with TPU](fig/denoise.pdf){\label{fig:denoise}}
-![L2-Net](fig/descriptor.pdf){\label{fig:descriptor}
-![Denoise example - 20th epoch](fig/denoised.png){\label{fig:den3}
+![U-Net Training with TPU\label{fig:denoise}](fig/denoise.pdf){width=20em height=15em}
+
+![L2-Net\label{fig:descriptor}](fig/descriptor.pdf){width=20em height=15em}
+
+![Denoise example - 20th epoch\label{fig:den3}](fig/denoised.png){width=20em height=15em}
+
+![L2-Net Architecture\label{fig:l2arch} \[@l2net\]](fig/l2net.png){width=15em height=15em}
+
+\newpage
+
+# References