Super-resolution Ultrasound Localization Microscopy through Deep Learning

This is a paper by Ruud J.G. van Sloun, Oren Solomon, Matthew Bruce, Zin Z. Khaing, Hessel Wijkstra, Yonina C. Eldar, and Massimo Mischi, published on 20 April 2018 on arXiv: arXiv:1804.07661.

In this work, the authors present a deep learning method for super-resolution ultrasound localization microscopy (ULM). Their architecture is based on U-Net.

I also tried to implement their network. The dataset is generated using four different PSFs: Near region, Center, Far region, and an asymmetric Gaussian function. A sketch of such a data-generation step is given after the PSF figures below.

[Figures: PSF Near, PSF Center, PSF Far, PSF Gaussian]
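
As a rough illustration (not the exact code behind these results), a single training pair could be simulated as below: sparse point scatterers on a 256×256 grid as the target, blurred with an asymmetric Gaussian PSF and decimated to a 128×128 input. The function name `make_sample`, the parameter values, and the idea of emulating the Near/Center/Far PSFs by varying `psf_sigma` are all my own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_sample(hr_size=256, lr_size=128, n_scatterers=20,
                psf_sigma=(2.0, 4.0), rng=None):
    """Simulate one training pair: a sparse high-resolution target y
    and a diffraction-limited low-resolution input x.

    psf_sigma: (axial, lateral) std of the Gaussian PSF in HR pixels;
    an unequal pair mimics the elongated, asymmetric ultrasound PSF.
    Different (Near/Center/Far) PSFs can be emulated by changing it.
    """
    rng = rng or np.random.default_rng()

    # Sparse ground truth: a few point scatterers on the 256x256 grid.
    y = np.zeros((hr_size, hr_size), dtype=np.float32)
    rows = rng.integers(0, hr_size, n_scatterers)
    cols = rng.integers(0, hr_size, n_scatterers)
    y[rows, cols] = rng.uniform(0.5, 1.0, n_scatterers)

    # Blur with the Gaussian PSF, then decimate to the 128x128 input grid.
    blurred = gaussian_filter(y, sigma=psf_sigma)
    x = blurred[::hr_size // lr_size, ::hr_size // lr_size]

    # Add a little measurement noise to the input.
    x = x + 0.01 * rng.standard_normal(x.shape).astype(np.float32)
    return x.astype(np.float32), y


# Example: a small training set with one PSF setting.
pairs = [make_sample() for _ in range(15)]
```
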

The network structure is based on U-Net. It takes 128×128 patches and converts them to 256×256 super-resolved images. I used the Adam optimizer and the following objective function.

\[ \min_{\theta} \left\| f(x \mid \theta) - y \right\|_{2}^{2} + \lambda \left\| f(x \mid \theta) \right\|_{1} \]
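
To make the pipeline concrete, here is a minimal PyTorch sketch of this kind of setup, not the paper's exact U-Net configuration: a toy one-level U-Net with a final 2× upsampling (so 1×128×128 in, 1×256×256 out), the L2-plus-sparsity loss above, and one Adam training step. All names (`SmallUNet`, `ulm_loss`, `train_step`) and hyperparameters are my own placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Toy U-Net-style network: one down/up level plus a final 2x
    upsampling, so a 1x128x128 patch becomes a 1x256x256 image."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.dec1 = conv_block(64 + 32, 32)
        self.out_conv = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                           # 128x128
        e2 = self.enc2(F.max_pool2d(e1, 2))         # 64x64
        d1 = F.interpolate(e2, scale_factor=2)      # back to 128x128
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        u = F.interpolate(d1, scale_factor=2)       # 256x256
        return F.relu(self.head(self.out_conv(u)))  # non-negative output

def ulm_loss(pred, target, lam=100.0):
    """L2 data fidelity plus an l1 term that promotes a sparse
    super-resolved output, matching the objective above."""
    return F.mse_loss(pred, target, reduction="sum") + lam * pred.abs().sum()

model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y, lam=100.0):
    """One Adam update on a batch (x: Bx1x128x128, y: Bx1x256x256)."""
    optimizer.zero_grad()
    loss = ulm_loss(model(x), y, lam)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A larger \(\lambda\) weights the l1 term more heavily and so pushes the network toward sparser outputs; that trade-off is what the experiments below vary.
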

Below, I plot some training and validation error curves along with example outputs of the network.

  • \(\lambda = 10\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the Near PSF is used. [Input and output images]
  • \(\lambda = 1\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the Near PSF is used. [Input and output images]
  • \(\lambda = 100\), \(N_{\text{test}} = 15\), \(N_{\text{training}} = 15\); only the Near PSF is used. [Input and output images]
  • \(\lambda = 100\), \(N_{\text{test}} = 60\), \(N_{\text{training}} = 60\); all PSFs are used. [Input and output images]
  • \(\lambda = 1\), \(N_{\text{test}} = 60\), \(N_{\text{training}} = 60\); all PSFs are used. [Input and output images]
