author    Vasil Zlatanov <v@skozl.com>  2019-02-22 13:28:14 +0000
committer Vasil Zlatanov <v@skozl.com>  2019-02-22 13:28:14 +0000
commit    99313240a9407d553bc71336e72166d4bd6f4c6b (patch)
tree      d1cfe78f77be03644b1b8816c4fa913bf0b85d41 /report/paper.md
parent    63bc26cc20ca8c74078d253ec4cd0658143955fe (diff)
Add performance
Diffstat (limited to 'report/paper.md')
-rw-r--r--  report/paper.md  |  8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/report/paper.md b/report/paper.md
index 3b05cf4..723318a 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -39,6 +39,14 @@ There is an intrinsic problem that occurs when loss approaches 0, training becom
# Performance & Evaluation
+Training speed was found to improve greatly when utilising Google's dedicated TPUs and increasing the batch size. With a larger batch size it also becomes beneficial to raise the learning rate: in particular, increasing the batch size to 4096 allowed the learning rate to be raised by a factor of 10 over the baseline, which offered roughly a 10x speedup in training time together with faster convergence of the loss for the denoise U-Net.
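+
+A minimal sketch of this batch-size/learning-rate scaling, assuming the TensorFlow Keras TPU API; `build_unet` and `train_pairs` are hypothetical placeholders rather than our actual training code:
+
+```python
+import tensorflow as tf
+
+BASE_LR = 1e-3            # baseline learning rate (illustrative value)
+BATCH = 4096              # enlarged batch size used on the TPU
+LR = BASE_LR * 10         # learning rate raised by a factor of 10
+
+# Connect to the TPU and build a distribution strategy.
+resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
+tf.config.experimental_connect_to_cluster(resolver)
+tf.tpu.experimental.initialize_tpu_system(resolver)
+strategy = tf.distribute.TPUStrategy(resolver)
+
+with strategy.scope():    # variables are replicated across TPU cores
+    model = build_unet()  # hypothetical U-Net constructor
+    model.compile(optimizer=tf.keras.optimizers.Adam(LR), loss="mse")
+
+# `train_pairs` is a placeholder tf.data pipeline of noisy/clean pairs;
+# TPUs require a fixed batch shape, hence drop_remainder=True.
+model.fit(train_pairs.batch(BATCH, drop_remainder=True), epochs=10)
+```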
+
+We evaluate the baseline across the retrieval, matching and verification tasks:
+
+
+# Planned Work
+
+
# Appendix
![U-Net Training with TPU](fig/denoise.pdf){#fig:denoise}