ULM Through DL 6

I have run several transfer learning experiments. In the first phase of training, I repeated the process described in the previous post, “ULM Through DL 5”. In the second phase of training, I used patches coming from the Field II simulation.

Simulation Scheme: There are 128 evenly spaced sensors. Each sensor is 0.27 mm wide, with 0.03 mm of empty space between neighboring sensors. We assume there is no attenuation. The image is 50 mm deep (in the z direction) and 38 mm wide (in the x direction). The bubble density is 260 bubbles per cm\(^2\). The transducer frequency is 5 MHz, and a single plane wave is transmitted. The pixel sizes are \( \frac{\lambda}{8} = 0.0385\,mm \) in the x direction and \( \frac{\lambda}{20} = 0.0154\,mm \) in the z direction. The total number of pixels is 990 in the x direction and 3247 in the z direction.
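As a sanity check, the pixel sizes above follow directly from the wavelength. A minimal sketch (the speed of sound, 1540 m/s, is my assumption; the post does not state it):

```python
# Grid bookkeeping for the Field II simulation described above.
c = 1540.0             # speed of sound [m/s] (assumed, typical soft tissue)
f0 = 5e6               # transducer frequency [Hz]
lam = c / f0 * 1e3     # wavelength in mm, ~0.308 mm

dx = lam / 8           # pixel size in x: 0.0385 mm
dz = lam / 20          # pixel size in z: 0.0154 mm

nz = round(50.0 / dz)  # pixels covering the 50 mm depth: 3247
print(dx, dz, nz)
```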

Dataset Generation: First, I generated 8 images with Field II on the high resolution grid (3247×990) using the parameters above. The ground truth images, which are on the high resolution grid, are obtained at this step directly from Field II. Then, to obtain the low resolution input images (1624×495), I first applied an anti-aliasing filter (a Gaussian filter with sigma = 1) and then downsampled by a factor of 2. From each image I generated 686 low resolution input patches and their corresponding 686 high resolution ground truth patches. The input patches are 64×64 and the ground truth patches are 128×128. The patches overlap: each patch shares half of its extent with the previous one, and patches are extracted in raster scan order. Lastly, the set of patches coming from one simulation image is chosen as the validation set, and the seven sets of patches coming from the remaining simulation images form the training set.
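The downsampling and patch-extraction steps above can be sketched as follows (NumPy/SciPy; the function names are mine, not from the post):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_lowres(img_hr, sigma=1.0):
    """Anti-alias with a Gaussian filter (sigma=1), then downsample by 2."""
    return gaussian_filter(img_hr, sigma)[::2, ::2]

def extract_patches(img, patch, stride):
    """Overlapping patches (stride = patch/2 gives 50% overlap),
    collected in raster scan order."""
    patches = []
    for i in range(0, img.shape[0] - patch + 1, stride):
        for j in range(0, img.shape[1] - patch + 1, stride):
            patches.append(img[i:i + patch, j:j + patch])
    return patches

# One simulated high resolution image (3247 x 990) yields a
# 1624 x 495 low resolution image and 686 patch pairs.
hr = np.zeros((3247, 990))
lr = make_lowres(hr)
inputs  = extract_patches(lr, patch=64, stride=32)
targets = extract_patches(hr, patch=128, stride=64)
```

With these sizes both loops produce 49 × 14 = 686 patches per image, matching the count in the text.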

Training process: Training is done using patches. Let's define the following:

x: a patch from the ground truth image ( \( 128\,pixels \times 128\,pixels \) ) ( \(1.92mm \times 4.8mm\) )

y: a patch from the low resolution Field II input image ( \( 64\,pixels \times 64\,pixels \) ) ( \(1.92mm \times 4.8mm\) )

z: output of the network

f: Gaussian blur kernel

\[ w = z * f \]

\[ v = x*f \]

Then the training loss can be expressed as follows:

\[ \mathrm{loss} = \mathrm{MSE}(w, v) + \lambda \cdot \mathrm{L1}(z) \]
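The loss above compares the blurred network output against the blurred ground truth and adds an L1 sparsity penalty on the output. A minimal sketch in NumPy/SciPy (the sigma and λ values correspond to the experiments below; the function name is mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def training_loss(z, x, sigma=1.0, lam=0.01):
    """MSE between the blurred output w = z*f and blurred ground
    truth v = x*f, plus an L1 sparsity penalty on the output z."""
    w = gaussian_filter(z, sigma)   # w = z * f
    v = gaussian_filter(x, sigma)   # v = x * f
    mse = np.mean((w - v) ** 2)
    l1 = np.mean(np.abs(z))
    return mse + lam * l1
```

When the output equals the ground truth the MSE term vanishes and only the regularizer remains, which is what drives the network toward sparse (bubble-like) outputs.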

Network Structure: Our network is based on U-Net, with batch normalization and dropout layers.
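A minimal PyTorch sketch of such a U-Net, mapping a 64×64 input patch to a 128×128 output via one extra upsampling stage. The channel counts, depth, and dropout rate here are my assumptions, not the post's actual architecture:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # Two Conv -> BatchNorm -> ReLU stages followed by dropout,
    # matching the batch-norm + dropout design mentioned in the post.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Dropout2d(0.1),
    )

class SmallUNet(nn.Module):
    """Toy U-Net: 64x64 low-res patch in, 128x128 high-res patch out."""
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.mid  = block(32, 64)
        self.up2  = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1  = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.up0  = nn.ConvTranspose2d(16, 16, 2, stride=2)  # 64 -> 128
        self.out  = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # 16ch, 64x64
        e2 = self.enc2(self.pool(e1))                      # 32ch, 32x32
        m  = self.mid(self.pool(e2))                       # 64ch, 16x16
        d2 = self.dec2(torch.cat([self.up2(m), e2], 1))    # 32ch, 32x32
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1))   # 16ch, 64x64
        return self.out(self.up0(d1))                      # 1ch, 128x128
```

The final transposed convolution performs the 2× super-resolution step, so the skip connections operate entirely on the input grid.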

Experiment 1: learning rate=1e-5, sigma of kernel f=1, regularizer parameter=0.01

Worst Case:

Best Case:

Experiment 2: learning rate=1e-5, sigma of kernel f=1, regularizer parameter=0.005

Worst Case:

Best Case:

Experiment 3: learning rate=1e-5, sigma of kernel f=1.5, regularizer parameter=0.01

Worst Case:

Best Case:

Experiment 4: learning rate=1e-5, sigma of kernel f=1.5, regularizer parameter=0.005

Worst Case:

Best Case:

Experiment 5: learning rate=1e-5, sigma of kernel f=2, regularizer parameter=0.01

Worst Case:

Best Case:

Experiment 6: learning rate=1e-5, sigma of kernel f=2, regularizer parameter=0.005

Worst Case:

Best Case:
