diff --git a/report/paper.md b/report/paper.md
index 3b05cf4..723318a 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -39,6 +39,14 @@ There is an intrinsic problem that occurs when loss approaches 0, training becom
# Performance & Evaluation
+Training speed was greatly improved by utilising Google's dedicated TPU and increasing the batch size. A larger batch size also makes it beneficial to raise the learning rate: in particular, increasing the batch size to 4096 allowed the learning rate to be raised by a factor of 10 over the baseline, yielding roughly a 10× training-time speedup together with faster convergence of the loss for the denoise U-Net.
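+A common heuristic for pairing batch size and learning rate (a sketch of the linear scaling rule, not something we verified beyond the single data point above) is
+
+$$\eta_{\text{new}} \approx \frac{B_{\text{new}}}{B_{\text{base}}} \, \eta_{\text{base}},$$
+
+i.e. a batch size increased by a factor $k$ suggests a learning rate increased by roughly the same factor.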
+
+We evaluate the baseline across the retrieval, matching and verification tasks:
+
+
+# Planned Work
+
+
# Appendix
![U-Net Training with TPU](fig/denoise.pdf){#fig:denoise}