This is a paper by Ruud J.G. van Sloun, Oren Solomon, Matthew Bruce, Zin Z. Khaing, Hessel Wijkstra, Yonina C. Eldar, and Massimo Mischi, published on arXiv on 20 April 2018 (arXiv:1804.07661).
In this work, the authors present a deep learning method for super-resolution ultrasound localization microscopy (ULM). Their architecture is based on U-Net.
I also tried to implement their network. The dataset is generated using four different PSFs: near region, center, far region, and an asymmetric Gaussian function.
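To show the idea, here is a minimal sketch of how such training pairs could be simulated: sparse point scatterers are placed on a fine 256×256 grid and blurred with one of the PSFs to produce the 128×128 input patch. The Gaussian widths, noise level, and number of scatterers are my own illustrative assumptions, not values from the paper or from my experiments.

```python
# Sketch of simulating (low-res input, high-res target) pairs under assumed PSFs.
import numpy as np
from scipy.ndimage import gaussian_filter

HR_SIZE = 256  # super-resolved grid; the input patch is HR_SIZE // 2 = 128

# Hypothetical (sigma_y, sigma_x) widths in high-resolution pixels for the four PSFs
PSF_SIGMAS = {
    "near":       (2.0, 2.0),
    "center":     (3.0, 3.0),
    "far":        (4.0, 4.0),
    "asymmetric": (2.0, 5.0),   # asymmetric Gaussian: wider in one direction
}

def make_pair(n_bubbles=20, psf="near", rng=None):
    """Return one (128x128 input patch, 256x256 sparse target) training pair."""
    rng = np.random.default_rng() if rng is None else rng

    # Sparse microbubble locations on the fine grid form the target image
    target = np.zeros((HR_SIZE, HR_SIZE), dtype=np.float32)
    ys = rng.integers(0, HR_SIZE, n_bubbles)
    xs = rng.integers(0, HR_SIZE, n_bubbles)
    target[ys, xs] = rng.uniform(0.5, 1.0, n_bubbles)

    # Blur the point sources with the chosen PSF, decimate by 2, add noise
    blurred = gaussian_filter(target, sigma=PSF_SIGMAS[psf])
    lowres = blurred[::2, ::2] + 0.01 * rng.standard_normal((HR_SIZE // 2, HR_SIZE // 2))
    return lowres.astype(np.float32), target

x, y = make_pair(psf="asymmetric")
print(x.shape, y.shape)   # (128, 128) (256, 256)
```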




The network structure is based on U-Net. It takes 128×128 patches and converts them to 256×256 super-resolved images. I used the Adam optimizer and the following objective function:
\[ \min_{\theta} \| f(x|\theta) - y \|_{2}^{2} + \lambda \| f(x|\theta) \|_{1} \]
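To make the setup concrete, below is a compact PyTorch sketch of a U-Net-style encoder-decoder that maps 128×128 patches to 256×256 outputs and is trained with the l2 data term plus the λ-weighted l1 penalty from the objective above. The channel counts, depth, learning rate, and upsampling scheme are my own assumptions, not necessarily those of the paper or of my implementation.

```python
# Minimal sketch: small U-Net-like model with one extra upsampling level (2x),
# trained with ||f(x|theta) - y||_2^2 + lambda * ||f(x|theta)||_1.
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """128x128 input -> 256x256 super-resolved output (assumed channel counts)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.bottom = block(64, 128)
        self.dec2, self.dec1 = block(128 + 64, 64), block(64 + 32, 32)
        self.up_final = block(32, 16)          # extra level for the 2x upsampling
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # 128x128
        e2 = self.enc2(F.max_pool2d(e1, 2))                 # 64x64
        b  = self.bottom(F.max_pool2d(e2, 2))               # 32x32
        d2 = self.dec2(torch.cat([F.interpolate(b,  scale_factor=2), e2], 1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], 1))
        u  = self.up_final(F.interpolate(d1, scale_factor=2))   # 256x256
        return F.relu(self.out(u))

model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0   # the lambda weight of the l1 term, varied in the experiments below

def loss_fn(pred, target):
    # Squared l2 data-fidelity term plus l1 sparsity penalty on the output
    return F.mse_loss(pred, target, reduction="sum") + lam * pred.abs().sum()

x = torch.randn(4, 1, 128, 128)        # batch of low-resolution patches
y = torch.zeros(4, 1, 256, 256)        # sparse high-resolution targets
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The l1 term encourages sparse output maps, which is why varying λ changes the results in the experiments listed below.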
Below, I plot validation and training error curves together with some example outputs of the network.
- \(\lambda = 10\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the near-region PSF is used.




- \(\lambda = 1\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the near-region PSF is used.




- \(\lambda = 100\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the near-region PSF is used.




- \(\lambda = 100\), \(N_{\text{test}} = 60\), \(N_{\text{training}} = 60\); all PSFs are used.




- \(\lambda = 1\), \(N_{\text{test}} = 60\), \(N_{\text{training}} = 60\); all PSFs are used.



