ULM Through DL 13

In this set of experiments, we changed the pixel size and the upsampling scheme.

Simulation Scheme: There are 128 sensors, evenly spaced. The width of each sensor is 0.27 mm and the empty space between adjacent sensors is 0.03 mm. We assume that there is no attenuation. The image has 50 mm depth (in the z direction) and 38 mm width (in the x direction). Bubble density is chosen as 260 bubbles per cm². The transducer frequency is 5 MHz. There is a single plane wave. Pixel sizes are \( \frac{\lambda}{8} = 0.0385\,mm \) in the x direction and \( \frac{\lambda}{10} = 0.0308\,mm \) in the z direction. The total number of pixels is 990 in the x direction and 1624 in the z direction.
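For reference, the pixel sizes above follow directly from the wavelength. A minimal sketch of that arithmetic, assuming the usual speed of sound c = 1540 m/s (not stated in the post):

```python
# Grid parameters implied by the simulation scheme above.
# c = 1540 m/s is an assumption (the common Field II default).
c = 1540.0                    # speed of sound [m/s], assumed
f0 = 5e6                      # transducer frequency [Hz]
wavelength = c / f0 * 1e3     # wavelength in mm -> 0.308 mm

dx = wavelength / 8           # x pixel size -> 0.0385 mm
dz = wavelength / 10          # z pixel size -> 0.0308 mm

width_mm, depth_mm = 38.0, 50.0
nx = width_mm / dx            # ~987; the post reports 990, presumably after padding/rounding
nz = depth_mm / dz            # ~1623; the post reports 1624
print(wavelength, dx, dz, nx, nz)
```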

Dataset Generation: First, I generated 9 images using Field2 on the high-resolution grid (1624×990) with the parameters above; the ground truth images, which live on the high-resolution grid, are obtained directly from Field2 at this step. Then, to obtain the low-resolution input images (1624×495), I applied skimage.transform.resize. We then generated 4374 patches from the far-field region. Input patches are 128×64 pixels and ground truth patches are 128×128 pixels; the patches overlap and are extracted in raster-scan order. Lastly, one sixth of the patches are randomly chosen as the validation set and the remaining patches as the training set.
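A minimal sketch of this pipeline, assuming the Field2 frames are available as 1624×990 NumPy arrays. The exact patch stride and the far-field depth cutoff are not stated above, so the values below are placeholders:

```python
import numpy as np
from skimage.transform import resize

def make_patches(hr_img, patch_z=128, patch_x=128, stride=32):
    """Extract overlapping (input, ground-truth) patch pairs in raster-scan order.

    stride=32 is an assumed overlap; the post does not state the stride, and it
    also restricts patches to the far-field region (depth cutoff omitted here).
    """
    # Downsample by 2 in x only: 1624x990 -> 1624x495
    lr_img = resize(hr_img, (hr_img.shape[0], hr_img.shape[1] // 2),
                    anti_aliasing=True)
    pairs = []
    for z0 in range(0, hr_img.shape[0] - patch_z + 1, stride):
        for x0 in range(0, hr_img.shape[1] - patch_x + 1, stride):
            gt = hr_img[z0:z0 + patch_z, x0:x0 + patch_x]                   # 128x128
            inp = lr_img[z0:z0 + patch_z, x0 // 2:x0 // 2 + patch_x // 2]   # 128x64
            pairs.append((inp, gt))
    return pairs

# Random 1/6 validation split, the rest for training.
rng = np.random.default_rng(0)
pairs = make_patches(np.random.rand(1624, 990))
idx = rng.permutation(len(pairs))
n_val = len(pairs) // 6
val = [pairs[i] for i in idx[:n_val]]
train = [pairs[i] for i in idx[n_val:]]
```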

Training Process: Training is done using patches. Let's define the following:

x: a patch from the ground truth image ( \( 128\,pixels \times 128\,pixels \) ) ( \( 3.94\,mm \times 4.8\,mm \) )

y: a patch from the low-resolution Field2 simulation image ( \( 128\,pixels \times 64\,pixels \) ) ( \( 3.94\,mm \times 4.8\,mm \) )

z: output of the network

f: blur kernel ( \( * \) below denotes 2-D convolution)

\[ w = z * f \]

\[ v = x*f \]

Then the training loss can be expressed as follows:

\[ loss = MSEloss(w-v) + \lambda \times L1loss(z)\]
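A hedged PyTorch sketch of this loss, assuming f is a Gaussian kernel (the summary table below reports a "sigma of f" for each experiment):

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, ksize=9):
    """Normalized 2-D Gaussian blur kernel f; ksize=9 is an assumed size."""
    ax = torch.arange(ksize) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, ksize, ksize)

def training_loss(z, x, f, lam):
    # z: network output, x: ground-truth patch, both (batch, 1, 128, 128)
    w = F.conv2d(z, f, padding=f.shape[-1] // 2)    # w = z * f
    v = F.conv2d(x, f, padding=f.shape[-1] // 2)    # v = x * f
    return F.mse_loss(w, v) + lam * z.abs().mean()  # MSEloss(w-v) + lambda * L1loss(z)

# Example with the experiment 1 settings (sigma = 1, lambda = 0.005):
f = gaussian_kernel(sigma=1.0)
loss = training_loss(torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128), f, 0.005)
```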

Network Structure: Our network is based on U-Net, with batch normalization and dropout layers.
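The exact architecture is not detailed in the post, so the following is only a rough sketch of a small U-Net with batch normalization and dropout. The channel counts, depth, dropout rate, and the front-end interpolation that maps the 128×64 input onto the 128×128 output grid are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(c_in, c_out):
    # Conv -> BatchNorm -> ReLU -> Dropout -> Conv -> BatchNorm -> ReLU
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True), nn.Dropout2d(0.2),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.bottom = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, y):                       # y: (N, 1, 128, 64)
        # Upsample by 2 in x only, so the network works on the output grid.
        y = F.interpolate(y, scale_factor=(1, 2), mode='bilinear',
                          align_corners=False)  # -> (N, 1, 128, 128)
        e1 = self.enc1(y)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottom(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                     # z: (N, 1, 128, 128)

z = SmallUNet()(torch.rand(2, 1, 128, 64))      # -> torch.Size([2, 1, 128, 128])
```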

Summary:

|      | sigma of f | \( \lambda \) | learning rate | number of epochs | Q1 | Q2 | Q1 per bubble | Q2 per bubble | % gap between training and validation |
|------|------------|---------------|---------------|------------------|-----|-----|---------------|---------------|----------------------------------------|
| exp1 | 1 | 0.005 | 2e-5 | 500 | 5.832042 | 0.180823 | 0.115017 | 0.003539 | 34.7 |
| exp2 | 2 | 0.01 | 2e-5 | 500 | 16.240247 | 0.515657 | 0.320883 | 0.010153 | 3.1 |

Experiment 1: learning rate = 2e-5, sigma of kernel f = 1, regularizer parameter \( \lambda \) = 0.005

Percentage gap between training and validation error ( \( \frac{\text{validation error} - \text{training error}}{\text{training error}} \times 100 \) ) = 34.7

Average quantitative metric1 is 5.832042

Average metric1 per bubble is 0.115017

Average quantitative metric2 is 0.180823

Average metric2 per bubble is 0.003539

Worst Case: Average Metric1 per bubble: 0.603911, Average Metric2 per bubble: 0.038877

Best Case: Average Metric1 per bubble: 0.051972, Average Metric2 per bubble: 0.000223

Experiment 2: learning rate = 2e-5, sigma of kernel f = 2, regularizer parameter \( \lambda \) = 0.01

Percentage gap between training and validation error ( \( \frac{\text{validation error} - \text{training error}}{\text{training error}} \times 100 \) ) = 3.1

Average quantitative metric1 is 16.240247

Average metric1 per bubble is 0.320883

Average quantitative metric2 is 0.515657

Average metric2 per bubble is 0.010153

Worst Case: Average Metric1 per bubble: 0.630101, Average Metric2 per bubble: 0.031205

Best Case: Average Metric1 per bubble: 0.248063, Average Metric2 per bubble: 0.005163
