This work is a continuation of the previous blog post, ULM Through DL 7, with two differences. First, the patches now come from the far region: patch centers start at 34 mm in the z direction. Second, the amount of overlap is larger: previously, patches were shifted by half the patch size in the x and z directions, whereas in this set of experiments they are shifted by a quarter of the patch size.

**Note:** The best and worst cases are chosen by comparing the average bubble error within a patch.

**Quantitative Metrics:** To compare the success of the different experiments, we use the following:

**Quantitative Metric1** = \[ L1Loss(z \* f_0 − x \* f_0) \]

**Quantitative Metric2** = \[ MSELoss(z \* f_0 − x \* f_0) \]

where f_0 is a Gaussian kernel with sigma = 1.

**Note:** Both f and f_0 are in pixel coordinates.
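The two metrics above can be sketched in NumPy as follows. This is a minimal illustration, not the actual training code: the reduction for each loss is not specified above, so a sum reduction is assumed for the L1 metric and a mean reduction for the MSE metric, and the function names are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=3):
    """Isotropic 2-D Gaussian kernel f_0 in pixel coordinates, normalized to sum 1."""
    ax = np.arange(-radius, radius + 1)
    xx, zz = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + zz**2) / (2 * sigma**2))
    return k / k.sum()

def conv2d_same(img, kernel):
    """'Same'-size 2-D convolution via zero padding (kernel assumed symmetric)."""
    r = kernel.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel)
    return out

def metrics(z, x, sigma=1.0):
    """Metric1 (L1, sum reduction assumed) and Metric2 (MSE, mean reduction
    assumed) between the two maps blurred with the Gaussian kernel f_0."""
    f0 = gaussian_kernel(sigma)
    diff = conv2d_same(z, f0) - conv2d_same(x, f0)
    return np.abs(diff).sum(), np.mean(diff**2)
```

Identical localization maps give zero for both metrics; shifting a bubble by even one pixel makes both strictly positive, which is why the Gaussian blur is needed to grade near misses.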

**Experiment 1:** learning rate = 1e-5, sigma of kernel f = 1, regularizer parameter = 0.01

**Average quantitative metric1** is 12.737782

**Average metric1 per bubble** is 0.501236

**Average quantitative metric2** is 0.556178

**Average metric2 per bubble** is 0.021680

Worst Case: Center Location (x,z) = 29.61 mm and 48.80 mm, Average Metric1 per bubble: 0.9929, Average Metric2 per bubble: 0.0697

Best Case: Center Location (x,z) = 14.82 mm and 45.35 mm, Average Metric1 per bubble: 0.259, Average Metric2 per bubble: 0.005795

**Experiment 2:** learning rate = 1e-5, sigma of kernel f = 1, regularizer parameter = 0.005

**Average quantitative metric1** is 13.120187

**Average metric1 per bubble** is 0.517213

**Average quantitative metric2** is 0.516576

**Average metric2 per bubble** is 0.020148

Worst Case: Center Location (x,z) = 29.61 mm and 48.80 mm, Average Metric1 per bubble: 1.025, Average Metric2 per bubble: 0.0679

Best Case: Center Location (x,z) = 4.97 mm and 47.82 mm, Average Metric1 per bubble: 0.275, Average Metric2 per bubble: 0.0051

**Experiment 3:** learning rate = 1e-5, sigma of kernel f = 1.5, regularizer parameter = 0.01

**Average quantitative metric1** is 13.126150

**Average metric1 per bubble** is 0.517840

**Average quantitative metric2** is 0.544524

**Average metric2 per bubble** is 0.021331

Worst Case: Center Location (x,z) = 29.61 mm and 48.80 mm, Average Metric1 per bubble: 0.970241, Average Metric2 per bubble: 0.064166

Best Case: Center Location (x,z) = 12.36 mm and 45.35 mm, Average Metric1 per bubble: 0.364038, Average Metric2 per bubble: 0.009693

**Experiment 4:** learning rate = 1e-5, sigma of kernel f = 1.5, regularizer parameter = 0.005

**Average quantitative metric1** is 13.454322

**Average metric1 per bubble** is 0.531489

**Average quantitative metric2** is 0.492222

**Average metric2 per bubble** is 0.019310

Worst Case: Center Location (x,z) = 29.61 mm and 48.80 mm, Average Metric1 per bubble: 0.969855, Average Metric2 per bubble: 0.055510

Best Case: Center Location (x,z) = 13.59 mm and 45.85 mm, Average Metric1 per bubble: 0.349155, Average Metric2 per bubble: 0.007754

**Experiment 5:** learning rate = 1e-5, sigma of kernel f = 2, regularizer parameter = 0.01

**Average quantitative metric1** is 14.191951

**Average metric1 per bubble** is 0.559143

**Average quantitative metric2** is 0.614085

**Average metric2 per bubble** is 0.024005

Worst Case: Center Location (x,z) = 32.07 mm and 48.80 mm, Average Metric1 per bubble: 1.0193, Average Metric2 per bubble: 0.0624

Best Case: Center Location (x,z) = 3.73 mm and 44.86 mm, Average Metric1 per bubble: 0.431, Average Metric2 per bubble: 0.0128

**Experiment 6:** learning rate = 1e-5, sigma of kernel f = 2, regularizer parameter = 0.005

**Average quantitative metric1** is 13.774952

**Average metric1 per bubble** is 0.543580

**Average quantitative metric2** is 0.536625

**Average metric2 per bubble** is 0.021043

Worst Case: Center Location (x,z) = 33.30 mm and 48.80 mm, Average Metric1 per bubble: 0.947143, Average Metric2 per bubble: 0.052615

Best Case: Center Location (x,z) = 12.36 mm and 45.35 mm, Average Metric1 per bubble: 0.3772, Average Metric2 per bubble: 0.0095

Nice results. It looks like the agreement between the testing and validation errors improves with a larger sigma of f and a smaller L1 penalty.

1) Can you also report a second quantitative metric, L1(x \* f_0 − z \* f_0)?

2) Can you also report the quantitative metric for each reconstructed image that you display? This will help us judge whether the quantitative metric agrees with our subjective judgment.

3) Define the distance between centers of bubble i and bubble j in the ground truth as

d(i,j) = \sqrt{\Delta(i,j)_x^2 + \Delta(i,j)_z^2}

where \Delta(i,j)_x and \Delta(i,j)_z are in pixel units. (Pixel units are better scaled than physical units for the non-isotropic shape of the PSF obtained with a single plane wave.)

It seems that the difference between the "best case" and the "worst case" is not necessarily the total number of bubbles, but rather

(i) the minimum distance \min_{i,j} d(i,j) between centers of bubble i and bubble j, or perhaps

(ii) the number of bubbles N_{close} that are closer than a certain threshold d_min, that is, bubbles satisfying d(i,j) < d_min.
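Both candidate statistics can be computed directly from the ground-truth bubble centers. Below is a minimal NumPy sketch; note that the definition of N_{close} above is ambiguous between counting pairs and counting bubbles, so counting bubbles with at least one neighbor closer than d_min is an assumption here.

```python
import numpy as np

def pairwise_distances(centers):
    """d(i,j) = sqrt(Delta_x^2 + Delta_z^2) for all bubble pairs, in pixel units."""
    c = np.asarray(centers, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff**2).sum(axis=-1))

def min_pair_distance(centers):
    """Minimum distance over distinct bubble pairs: min_{i,j} d(i,j)."""
    d = pairwise_distances(centers)
    iu = np.triu_indices(len(centers), k=1)  # upper triangle, i < j
    return d[iu].min()

def n_close(centers, d_min):
    """Bubbles with at least one neighbor closer than d_min (one reading of N_close)."""
    d = pairwise_distances(centers)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return int((d.min(axis=1) < d_min).sum())
```

For example, with centers at (0,0), (0,3), and (10,10) pixels, the minimum pair distance is 3, and with d_min = 4 the first two bubbles count toward N_close while the third does not.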

Is this also your impression? Can you test these hypotheses by showing scatter plots:

(a) of the quantitative MSE metric for images in the validation set vs. \min_{i,j} d(i,j)

(b) of the quantitative MSE metric vs. N_{close} in the validation set. You will need to pick some value for d_min to determine N_{close}. You can try a few values.

(c) Repeat (a) and (b) for the L1 quantitative metric.
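The requested scatter plots could be produced along these lines. This is a sketch with placeholder random data; `min_dist`, `mse_metric`, and `n_close_counts` would come from the validation-set images, and the output filename is arbitrary.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the figure can be saved without a display
import matplotlib.pyplot as plt

# Placeholder per-image values; replace with the actual validation-set results.
rng = np.random.default_rng(0)
min_dist = rng.uniform(1.0, 10.0, size=50)       # min_{i,j} d(i,j) per image [px]
mse_metric = 0.1 / min_dist + rng.normal(0, 0.002, size=50)  # placeholder metric
n_close_counts = rng.integers(0, 8, size=50)     # N_close per image for some d_min

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(min_dist, mse_metric)
ax1.set_xlabel(r"$\min_{i,j}\, d(i,j)$ [px]")
ax1.set_ylabel("MSE metric")
ax2.scatter(n_close_counts, mse_metric)
ax2.set_xlabel(r"$N_{close}$")
ax2.set_ylabel("MSE metric")
fig.tight_layout()
fig.savefig("metric_scatter.png")
```

Repeating the same two panels with the L1 metric on the y-axis covers item (c), and sweeping a few d_min values just means recomputing `n_close_counts`.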

4) It appears that the training is still not done after 140 epochs, and both the training and validation losses are still decreasing. Can you try training longer, say for another 100 epochs?