After showing, as in Eldar et al., that deep neural networks can successfully deconvolve microbubble images, we would like to move to a more realistic case. Until now we had been simulating the training data in our own way; we now want to move to the actual Field II simulator.
We changed only the data-generation step: the network and the training strategy stay the same, and we modified only the way we obtain the blurred PSF.
Dataset Generation: We assume that our target scene spans 50 mm in depth (z direction) and 38 mm in width (x direction). Number of pixels: 3246 x 990. The bubble density is chosen as 260 bubbles per cm^2.
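The geometry above fixes the number of bubbles per image: (5 cm x 3.8 cm) x 260 bubbles/cm^2 = 4940 bubbles. A minimal sketch of placing them on the high-resolution grid (assuming uniform random placement, which the report does not state explicitly):

```python
import numpy as np

# Scene geometry from the report: 50 mm depth (z) x 38 mm width (x),
# on a high-resolution grid of 3246 x 990 pixels.
DEPTH_MM, WIDTH_MM = 50.0, 38.0
NZ, NX = 3246, 990
DENSITY_PER_CM2 = 260  # bubbles per cm^2

def sample_bubbles(seed=None):
    """Place bubbles uniformly at random on the high-resolution grid.

    Bubble count follows from density x area:
    (5 cm x 3.8 cm) * 260 / cm^2 = 4940 bubbles.
    Uniform placement is an assumption for illustration.
    """
    rng = np.random.default_rng(seed)
    area_cm2 = (DEPTH_MM / 10.0) * (WIDTH_MM / 10.0)
    n_bubbles = int(round(DENSITY_PER_CM2 * area_cm2))
    z = rng.integers(0, NZ, size=n_bubbles)
    x = rng.integers(0, NX, size=n_bubbles)
    ground_truth = np.zeros((NZ, NX), dtype=np.float32)
    ground_truth[z, x] = 1.0  # binary ground-truth map of bubble locations
    return ground_truth, n_bubbles

gt, n = sample_bubbles(seed=0)
```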
Number of images generated for the training set: 7
Number of images generated for the test set: 1
Number of patches generated from each image: 686
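Patch extraction can be sketched as a sliding window over each generated image. The patch size and stride below are illustrative assumptions; the report does not state the values that yield 686 patches per image:

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=48):
    """Cut an image into overlapping square patches (sliding window).

    patch_size and stride are placeholder values, not the settings
    actually used to produce 686 patches per image.
    """
    h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size,
                                 left:left + patch_size])
    return np.stack(patches)

patches = extract_patches(np.zeros((3246, 990), dtype=np.float32))
```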
Image Generation: We used 3 different PSFs from Field II (far, center, close). Each PSF is used to blur one strip of the image, which consists of 3 strips. The generated image is on the high-resolution grid (3246 x 990), so we obtain the ground truth first and then the low-resolution image. To obtain the low-resolution image, we first apply an anti-aliasing filter (a Gaussian filter with sigma = 1) and then downsample the image by a factor of 2.
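The steps above (strip-wise blurring, anti-aliasing, downsampling) can be sketched as follows. The equal-height split into three strips is an assumption, and the PSFs are placeholders rather than actual Field II output:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def make_low_res(ground_truth, psfs):
    """Blur each depth strip with its own PSF, then anti-alias and
    downsample by 2.

    psfs: list of three 2-D PSF arrays (far, center, close); in the
    actual pipeline these come from Field II simulations.
    """
    nz, _ = ground_truth.shape
    blurred = np.zeros_like(ground_truth)
    # Assumption: the image splits into equal-height strips, one per PSF.
    edges = np.linspace(0, nz, len(psfs) + 1).astype(int)
    for psf, top, bot in zip(psfs, edges[:-1], edges[1:]):
        blurred[top:bot] = fftconvolve(ground_truth[top:bot], psf,
                                       mode="same")
    smoothed = gaussian_filter(blurred, sigma=1.0)  # anti-aliasing filter
    return smoothed[::2, ::2]                       # downsample by factor 2

# Toy usage: one bubble, three identical placeholder PSFs.
gt = np.zeros((300, 100), dtype=np.float32)
gt[150, 50] = 1.0
psf = np.ones((5, 5), dtype=np.float32) / 25.0
low = make_low_res(gt, [psf, psf, psf])
```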
Here are some results for this case (lambda = 0.001 and sigma of the kernel f = 1):
Now, we are working on using Field II simulations directly, via transfer learning. Our training strategy is to first train the network on the batches generated as described above, and then continue training on patches taken directly from the Field II simulation.
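The two-stage strategy can be sketched with a toy SGD loop: pretrain on the synthetically blurred patches, then continue from the same weights on Field II patches, typically with a smaller learning rate. The linear model and all data below are stand-ins for the actual deconvolution CNN and datasets:

```python
import numpy as np

def train_stage(w, data, lr=0.05, epochs=50):
    """Plain SGD on squared error; a stand-in for one training stage
    of the deconvolution network (the real model is a CNN)."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w @ x - y) * x
            w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])  # toy target, for illustration only

def make_data(n):
    xs = rng.normal(size=(n, 3))
    return [(x, true_w @ x) for x in xs]

synthetic_patches = make_data(200)  # stage 1: strip-wise simulated data
field2_patches = make_data(50)      # stage 2: patches from Field II

w = np.zeros(3)
w = train_stage(w, synthetic_patches)           # pretrain
w = train_stage(w, field2_patches, lr=0.01)     # fine-tune, smaller lr
```

The key point is that stage 2 starts from the stage-1 weights rather than from scratch, so the Field II data only has to correct the mismatch between the two simulation pipelines.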