The following works can be found in the learning-based ultrasound imaging context:

- Shear Wave Imaging
- CNN for MV (Minimum Variance) Beamforming Approximation
- Reinforcement Learning Application
- Despeckling (GAN and CNN)
- Compressed Sensing (SLA and Plane Wave Imaging)
- Learning Transmit Pattern
- Robust PCA (Unfolding)
- Multi-Line Acquisition to Single-Line Acquisition
- Switch-Net

**SHEAR-net: An End-to-End Deep Learning Approach for Single Push Ultrasound Shear Wave Elasticity Imaging** (**arXiv:1902.04845**) (SHEAR-net: 2D CNN + 3D CNN + RNN; collection of B-mode images to elasticity map)

**Abstract**—Ultrasound Shear Wave Elastography (USWE) with conventional B-mode imaging demonstrates better performance in lesion segmentation and classification problems. In this article, we propose SHEAR-net, an end-to-end deep neural network, to reconstruct USWE images from tracked tissue displacement data at different time instants induced by a single acoustic radiation force (ARF) with 100% or 50% of the energy in conventional use. The SHEAR-net consists of a localizer called the S-net to first localize the lesion location and then uses recurrent layers to extract temporal correlations from wave patterns using different time frames, and finally, with an estimator, it reconstructs the shear modulus image from the concatenated outputs of S-net and recurrent layers. The network is trained with 800 simulation and a limited number of CIRS tissue mimicking phantom data and is optimized using a multi-task learning loss function where the tasks are: inclusion localization and modulus estimation. The efficacy of the proposed SHEAR-net is extensively evaluated both qualitatively and quantitatively on 125 test set of motion data obtained from simulation and CIRS phantoms. We show that the proposed approach consistently outperforms the current state-of-the-art method and achieves overall 4–5 dB improvement in PSNR and SNR. In addition, an average gain of 0.15 in DSC and SSIM values indicate that the SHEAR-net has a better inclusion coverage area and structural similarity of the two approaches. The proposed real-time deep learning based technique can accurately estimate shear modulus for a minimum tissue displacement of 0.5 µm and image multiple inclusions with a single push ARF.

**Introduction** “The DNN-based SWE (Shear Wave Elastography) image reconstruction algorithm can be an alternative to existing conventional algorithms” “S-net is a combination of 3-D CNN (?), Convolutional LSTM (?), and 2-D CNN” “Dynamic training of the S-net, recurrent layers and modulus estimation block with multi-task learning (MTL) loss function.”

**End-to-End Learning Based Ultrasound Reconstruction** (**arXiv:1904.04696**) (raw data to image domain; MV beamformer)

**Abstract**. Ultrasound imaging is caught between the quest for the highest image quality, and the necessity for clinical usability. Our contribution is two-fold: First, we propose a novel fully convolutional neural network for ultrasound reconstruction. Second, a custom loss function tailored to the modality is employed for end-to-end training of the network. We demonstrate that training a network to map time-delayed raw data to a minimum variance ground truth offers performance increases in a clinical environment. In doing so, a path is explored towards improved clinically viable ultrasound reconstruction. The proposed method displays both promising image reconstruction quality and acquisition frequency when integrated for live ultrasound scanning. A clinical evaluation is conducted to verify the diagnostic usefulness of the proposed method in a clinical setting.

**Introduction** … “The implicit aim of all these approaches is to extract more information about the tissue being scanned.” … “no DNN proposed to generate clinically viable full contrast ultrasound images”

**Contribution** “End-to-end learning-based data-dependent ultrasound reconstruction method for real-time applications”

**Learning Beamforming** “Time-delayed raw data as a feature input and MV beamformed data as regression target”

**Evaluation** “MV beamforming can be considered as the gold standard in terms of image quality; however, it suffers from impractical runtimes”
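As a reference point for the regression target, a minimal minimum-variance (Capon) beamformer for a single imaging point can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation; the array shapes, the diagonal-loading factor, and the snapshot averaging are assumptions:

```python
import numpy as np

def das_sample(x):
    """Delay-and-sum: uniform weights over time-delayed channel data."""
    return x.mean()

def mv_sample(x, delta=1e-3):
    """Minimum-variance (Capon) output for one pixel.

    x : (n_channels, n_snapshots) array of time-delayed RF data.
    """
    n = x.shape[0]
    a = np.ones(n)                                # steering vector (post-delay)
    R = x @ x.T / x.shape[1]                      # sample covariance
    R = R + delta * np.trace(R) / n * np.eye(n)   # diagonal loading for stability
    Rinv_a = np.linalg.solve(R, a)
    w = Rinv_a / (a @ Rinv_a)                     # distortionless constraint a.T w = 1
    return w @ x.mean(axis=1)
```

The per-pixel covariance inversion is what makes MV computationally expensive, and hence a natural regression target for a fast network approximation.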

**Experimental Evaluation** The resulting acquisition frequencies are 21 Hz, 0.14 Hz, and 17 Hz for DAS, MV, and our reconstruction, respectively.

**Clinical Evaluation**

**Straight to the point: reinforcement learning for user guidance in ultrasound** (**arXiv:1903.00586**) (RL application for user guidance in ultrasound)

**Abstract**. Point of care ultrasound (POCUS) consists in the use of ultrasound imaging in critical or emergency situations to support clinical decisions by healthcare professionals and first responders. In this setting it is essential to be able to provide means to obtain diagnostic data to potentially inexperienced users who did not receive an extensive medical training. Interpretation and acquisition of ultrasound images is not trivial. First, the user needs to find a suitable sound window which can be used to get a clear image, and then he needs to correctly interpret it to perform a diagnosis. Although many recent approaches focus on developing smart ultrasound devices that add interpretation capabilities to existing systems, our goal in this paper is to present a reinforcement learning (RL) strategy which is capable to guide novice users to the correct sonic window and enable them to obtain clinically relevant pictures of the anatomy of interest. We apply our approach to cardiac images acquired from the parasternal long axis (PLAx) view of the left ventricle of the heart.

**Introduction** “we show how to use deep learning and in particular deep reinforcement learning to create a system to guide inexperienced users towards the acquisition of clinically relevant images of the heart in ultrasound”, “our deep reinforcement learning model predicts a motion instruction that is promptly displayed to the user”

**Related Work** “… apply reinforcement learning to a guidance problem whose goal is to provide instructions to users …”

**Method**

Let \(\theta\) parameterize the transmission event and \(\alpha\) parameterize the reconstruction. Assume we discretize \(\theta\) and \(\alpha\); we can then define a discrete set of actions as in the table above. The reward strategy can be based on a quantitative metric such as SSIM. We can use intervals instead of bins: for instance, \(0.9 \leq SSIM \leq 1\) is the correct interval, and the other intervals can be defined in a similar manner. Subsequently, we can apply a reward strategy similar to the one above.
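The discretization and interval-based reward sketched above can be written out as follows; the specific bins and reward values are hypothetical placeholders, not taken from the paper:

```python
import itertools

# Hypothetical discrete bins for the transmit parameter theta and the
# reconstruction parameter alpha (placeholder names, not from the paper)
THETA_BINS = ["focus_20mm", "focus_40mm", "plane_0deg", "plane_10deg"]
ALPHA_BINS = ["das", "mv", "learned"]

# The discrete action set: every (theta, alpha) pair
ACTIONS = list(itertools.product(THETA_BINS, ALPHA_BINS))

def reward_from_ssim(ssim):
    """Interval-based reward: +1 in the 'correct' interval 0.9 <= SSIM <= 1,
    graded penalties in the lower intervals (illustrative values)."""
    if 0.9 <= ssim <= 1.0:
        return 1.0
    if 0.7 <= ssim < 0.9:
        return 0.0
    return -1.0
```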

**Towards CT-quality Ultrasound Imaging using Deep Learning** (**arXiv:1710.06304**) (raw data to computationally heavy despeckling algorithm, and raw data to CT images)

**Abstract** The cost-effectiveness and practical harmlessness of ultrasound imaging have made it one of the most widespread tools for medical diagnosis. Unfortunately, the beam-forming based image formation produces granular speckle noise, blurring, shading and other artifacts. To overcome these effects, the ultimate goal would be to reconstruct the tissue acoustic properties by solving a full wave propagation inverse problem. In this work, we make a step towards this goal, using Multi-Resolution Convolutional Neural Networks (CNN). As a result, we are able to reconstruct CT-quality images from the reflected ultrasound radio-frequency (RF) data obtained by simulation from real CT scans of a human body. We also show that CNN is able to imitate existing computationally heavy despeckling methods, thereby saving orders of magnitude in computations and making them amenable to real-time applications.

**Introduction** “… image despeckling algorithms has been enhanced by particularly effective methods based on homomorphic de-noising [1], non-local means (NLM) [2], and BM3D filtering” “Firstly, we propose an end-to-end CNN architecture for real-time despeckling of ultrasound images. Secondly, we show that our network successfully approximates the state-of-the-art de-speckling algorithms … Finally, aiming to go beyond the quality of despeckling, we propose a method for training CNN for the purpose of converting regular US scans into corresponding ‘CT-like’ images”

**Data Sources** “generating such data by means of standard simulation software packages (such as, e.g., k-Wave or Field-II) was found to be infeasible. As a result, the in silico data were simulated using the fast simulation scheme of [9]” “The distributions of tissue density and acoustic velocities have been deduced from the corresponding Hounsfield units of abdominal X-ray CT scans”

**CNN-based Approximation** “We propose training a CNN over the dataset of pairs of IQ images and resulted conventional despeckled images to achieve a fast approximation of the despeckling”

**SCATGAN for Reconstruction of Ultrasound Scatterers Using Generative Adversarial Networks** (**arXiv:1902.00469**) (GAN to reconstruct the scatterer map)

**Abstract** Computational simulation of ultrasound (US) echography is essential for training sonographers. Realistic simulation of US interaction with microscopic tissue structures is often modeled by a tissue representation in the form of point scatterers, convolved with a spatially varying point spread function. This yields a realistic US B-mode speckle texture, given that a scatterer representation for a particular tissue type is readily available. This is often not the case and scatterers are nontrivial to determine. In this work we propose to estimate scatterer maps from sample US B-mode images of a tissue, by formulating this inverse mapping problem as image translation, where we learn the mapping with Generative Adversarial Networks using a US simulation software for training. We demonstrate robust reconstruction results, invariant to US viewing and imaging settings such as imaging direction and center frequency. Our method is shown to generalize beyond the trained imaging settings, demonstrated on in vivo US data. Our inference runs orders of magnitude faster than optimization-based techniques, enabling future extensions for reconstructing 3D B-mode volumes with only linear computational complexity.

**Introduction** “This paper focuses on extracting such scatterer maps from given image examples” … “We propose a novel pipeline by formulating the inverse problem of US simulation as an image translation task”

**Method** “However, this process (Field II) proved insufficient for the learning algorithm to generalize well to in vivo images. Based on this preliminary observation, we eventually used a set of in-vivo CT images, out of which I(x) is randomly drawn.”

**Experiments and Results** “The ultimate goal of a scatterer map is a realistic B-mode image to be simulated from it.”
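The forward model described in the abstract (point scatterers convolved with a point spread function, then envelope detection and log compression) can be sketched with a toy spatially invariant PSF. All sizes, the scatterer density, and the normalized center frequency below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import fftconvolve, hilbert

rng = np.random.default_rng(0)

# Sparse random scatterer map on a 128 x 128 grid (5% occupancy, illustrative)
scat = rng.standard_normal((128, 128)) * (rng.random((128, 128)) < 0.05)

# Toy PSF: axially modulated Gaussian; real PSFs vary with depth and settings
ax = np.arange(-8, 9)
f0 = 0.5  # normalized center frequency (assumption)
psf = np.outer(np.exp(-ax**2 / 18.0) * np.cos(2 * np.pi * f0 * ax),
               np.exp(-ax**2 / 32.0))

rf = fftconvolve(scat, psf, mode="same")        # simulated RF image
env = np.abs(hilbert(rf, axis=0))               # envelope detection per A-line
bmode = 20 * np.log10(env / env.max() + 1e-6)   # log-compressed B-mode (dB)
```

SCATGAN learns the inverse of this map, from B-mode speckle texture back to the scatterer map, which is why a fast forward simulator is valuable for generating training pairs.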

**Universal Deep Beamformer for Variable Rate Ultrasound Imaging** (**arXiv:1901.01706**) (compressed sensing for SLA: both lower ADC rate and fewer sensors)

**Abstract**—Ultrasound (US) imaging is based on the time-reversal principle, in which individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented as a delay-and-sum (DAS) beamformer, the image quality quickly degrades as the number of measurement channels decreases. To address this problem, various types of adaptive beamforming techniques have been proposed using predefined models of the signals. Unfortunately, the performance of these adaptive beamforming approaches degrade when the underlying model is not sufficiently accurate. Here, we demonstrate for the first time that a single universal deep beamformer trained using a purely data-driven way can generate significantly improved images over widely varying aperture and channel subsampling patterns. In particular, we design an end-to-end deep learning framework that can directly process sub-sampled RF data acquired at different subsampling rate and detector configuration to generate high quality ultrasound images using a single beamformer. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods.

**Theory**

**Method**

**Deep Learning-based Universal Beamformer for Ultrasound Imaging** (**arXiv:1904.02843**) (compressed sensing for SLA and plane wave)

**Abstract**. In ultrasound (US) imaging, individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented using a hardware- or software-based delay-and-sum (DAS) beamformer, the performance of DAS decreases rapidly in situations where data acquisition is not ideal. Herein, for the first time, we demonstrate that a single data-driven beamformer designed as a deep neural network can directly process sub-sampled RF data acquired at different sampling rates to generate high quality US images. In particular, the proposed deep beamformer is evaluated for two distinct acquisition schemes: focused ultrasound imaging and plane-wave imaging. Experimental results showed that the proposed deep beamformer exhibit significant performance gain for both focused and planar imaging schemes, in terms of contrast-to-noise ratio and structural similarity.

**Introduction** “While these recent deep neural network approaches provide impressive reconstruction performance, the current design is not universal in the sense that the designed neural network cannot completely replace a DAS beamformer, since they are designed and trained for specific acquisition scenario.”, “Thanks to the expressiveness of neural networks, our novel deep beamformer can learn the mapping to images from various sub-sampled RF measurements, and exhibits superior image quality for all sub-sampling rates”

**RF sub-sampling scheme** “For unfocused planar wave imaging, we generated six sets of sub-sampled RF data at different down-sampling rates. In particular, we used two subsampling schemes: variable down-sampling of RF-channel data pattern across the depth to reduce high data-rate and power requirements, and uniform sub-sampling of PWs angles to accelerate acquisition speed”
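The two quoted schemes can be sketched on a hypothetical fully sampled plane-wave RF cube; all dimensions, block sizes, and rates below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully sampled RF cube: (plane-wave angles, channels, depth samples)
rf = rng.standard_normal((31, 64, 1024))

# Scheme 1: uniform sub-sampling of plane-wave angles (keep every 4th angle)
rf_pw = rf[::4]                                  # -> (8, 64, 1024)

# Scheme 2: variable down-sampling of channel data across depth --
# keep a different random subset of 16 channels in each depth block
block, keep = 256, 16
mask = np.zeros((64, 1024), dtype=bool)
for start in range(0, 1024, block):
    chans = rng.choice(64, size=keep, replace=False)
    mask[chans, start:start + block] = True
rf_ch = np.where(mask[None], rf, 0.0)            # zero out unmeasured samples
```

Scheme 1 trades acquisition time for angular coverage; scheme 2 keeps the acquisition but reduces the per-depth data rate, matching the quoted power and data-rate motivation.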

**Learning beamforming in ultrasound imaging** (**arXiv:1812.08043**) (learning the transmit beam)

**Abstract** Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. In this paper, we demonstrate that replacing the traditional ultrasound processing pipeline with a data-driven, learnable counterpart leads to significant improvement in image quality. Moreover, we demonstrate that greater improvement can be achieved through a learning-based design of the transmitted beam patterns simultaneously with learning an image reconstruction pipeline. We evaluate our method on an in-vivo first-harmonic cardiac ultrasound dataset acquired from volunteers and demonstrate the significance of the learned pipeline and transmit beam patterns on the image quality when compared to standard transmit and receive beamformers used in high frame-rate US imaging. We believe that the presented methodology provides a fundamentally different perspective on the classical problem of ultrasound beam pattern design.

**Contributions** … “we propose to learn the parameters of the forward model, specifically, the transmitted patterns. We propose to jointly learn the end-to-end transmit (Tx) and receive (Rx) beamformers optimized for the task of high frame rate ultrasound imaging” … “We also propose a new beam forming layer inspired by (Jaderberg et al., 2015), that implements beam forming as a differentiable geometric transformation between pre-beamformed Rx signal and the beam formed one”

**Learning optimal transmit patterns** “The idea of simultaneously training a signal reconstruction process and some parameters of the signal acquisition forward model has been previously corroborated in computational imaging, including compressed tomography (Menashe and Bronstein, 2014), phase-coded aperture extended depth-of-field and range image sensing (Haim et al., 2018). In all the mentioned cases, a significant improvement in performance was observed both in simulation and in real systems.”

**Data Acquisition** “We refer to this baseline acquisition scenario as single-line acquisition (SLA) and consider it to be the ground truth in all reduced transmission experiments”

**Different initializations** multi-line acquisition (MLA), multi-line transmission (MLT), random initialization (the connection to plane wave imaging is not clear)

**Learned beam patterns** “The wider the initialized beams are (higher MLA rates), the greater is the increase in the directivity” (Parallel plane waves are advantageous when we consider the transmit pattern over multiple events, not just one. If we use only one case — one transmit and one receive event — then directivity is advantageous. The missing piece is to consider multiple events.)

**Deep Unfolded Robust PCA with Application to Clutter Suppression in Ultrasound** (**arXiv:1811.08252**) (robust PCA formulation and deep unfolding)

**Abstract**—Contrast enhanced ultrasound is a radiation-free imaging modality which uses encapsulated gas microbubbles for improved visualization of the vascular bed deep within the tissue. It has recently been used to enable imaging with unprecedented subwavelength spatial resolution by relying on super-resolution techniques. A typical preprocessing step in super-resolution ultrasound is to separate the microbubble signal from the cluttering tissue signal. This step has a crucial impact on the final image quality. Here, we propose a new approach to clutter removal based on robust principal component analysis (PCA) and deep learning. We begin by modeling the acquired contrast enhanced ultrasound signal as a combination of low rank and sparse components. This model is used in robust PCA and was previously suggested in the context of ultrasound Doppler processing and dynamic magnetic resonance imaging. We then illustrate that an iterative algorithm based on this model exhibits improved separation of microbubble signal from the tissue signal over commonly practiced methods. Next, we apply the concept of deep unfolding to suggest a deep network architecture tailored to our clutter filtering problem which exhibits improved convergence speed and accuracy with respect to its iterative counterpart. We compare the performance of the suggested deep network on both simulations and in-vivo rat brain scans, with a commonly practiced deep-network architecture and the fast iterative shrinkage algorithm, and show that our architecture exhibits better image quality and contrast.

**Introduction** … “A major challenge in ultrasonic vascular imaging such as CEUS is to suppress clutter signals stemming from stationary and slowly moving tissue as they introduce significant artifacts in blood flow imaging” … “Over the past few decades several approaches have been suggested for clutter removal.” (summary of tissue signal suppression) “A major limitation of SVD-based filtering is the requirement to determine a threshold” “sum of a low-rank matrix (tissue) and a sparse outlier signal (UCAs)” “Second, we utilize recent ideas from the field of deep learning (unfolding)” “In contrast, our approach does not require a-priori knowledge of the rank”

**Problem Formulation** \(D = L + S + N\), where L is the low-rank tissue signal, S the sparse microbubble signal, and N the noise.
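The low-rank + sparse split is typically attacked by alternating two proximal steps: singular value thresholding for the low-rank part and element-wise soft thresholding for the sparse part. A minimal sketch of one such simplified iteration follows (the regularization weights are illustrative; roughly speaking, the unfolded network replaces these fixed thresholds and mixing operators with learned ones):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm (low-rank part)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    """Element-wise soft thresholding: prox of the l1 norm (sparse part)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_step(D, S, lam_l=1.0, lam_s=0.1):
    """One simplified alternating update for D ~ L + S."""
    L = svt(D - S, lam_l)
    S = soft(D - L, lam_s)
    return L, S
```

After each full step the residual D - L - S is entrywise bounded by the sparse threshold, which is what makes the iteration a natural template for unrolling into network layers.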

**Unfolding the iterative algorithm** “An iterative algorithm can be considered as a recurrent neural network”

**Training CORONA** “In practice, \(\hat{S}_i\) and \(\hat{L}_i\) can either be obtained from simulations or by decomposing \(D_i\) using iterative algorithms such as FISTA” “Instead of training the network over entire scans, we divide the US movie used for training into 3D patches (axial coordinate, lateral coordinate and frame number).”
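The quoted patching step can be sketched as follows; the patch and stride sizes are illustrative assumptions, not the paper's values:

```python
import numpy as np

def extract_patches_3d(movie, patch=(32, 32, 8), stride=(16, 16, 4)):
    """Split a US movie (axial, lateral, frame) into overlapping 3D patches."""
    pa, pl, pf = patch
    sa, sl, sf = stride
    A, Lx, F = movie.shape
    patches = []
    for i in range(0, A - pa + 1, sa):
        for j in range(0, Lx - pl + 1, sl):
            for k in range(0, F - pf + 1, sf):
                patches.append(movie[i:i + pa, j:j + pl, k:k + pf])
    return np.stack(patches)
```

Patch-wise training multiplies the number of training examples per scan, which is consistent with the paper's remark about achieving good performance from relatively small training sets.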

The best we can achieve is the result of Algorithm 1. Can we do better? If we can model motion blur, we can train with better simulation data. They train CORONA instead of solving Algorithm 1, for faster convergence.

**Experiments** “To compare with CORONA, we implemented ResNet using complex convolutions for the tissue clutter suppression task” “The network does not recover the tissue signal, as CORONA, but only the UCA signal.”

**Results** “First, we attribute the improved performance over the commonly practiced SVD filtering, wall filtering and FISTA to two main reasons. The first, is the fact that for application on in-vivo data, the networks are trained based on both in-vivo data and simulated data” “thus help avoid over-fitting and achieve good performance even when the training sets are relatively low.”

**A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound** (**arXiv:1810.00322**) (U-Net, raw data to speed-of-sound inversion)

**Abstract**—Objective: Ultrasound elastography is gaining traction as an accessible and useful diagnostic tool for such things as cancer detection and differentiation and thyroid disease diagnostics. Unfortunately, state of the art shear wave imaging techniques, essential to promote this goal, are limited to high end ultrasound hardware due to high power requirements; are extremely sensitive to patient and sonographer motion; and generally suffer from low frame rates. Motivated by research and theory showing that pressure wave velocity carries similar diagnostic abilities to shear wave imaging, we present an alternative approach using single sided pressure-wave velocity measurements from a conventional ultrasound probe, enabling elasticity based diagnostics using portable and low cost devices. Methods: In this paper, we present a single sided sound speed inversion solution using a fully convolutional deep neural network. We use simulations for training, allowing the generation of large volumes of ground truth data. Results: We show that it is possible to invert for longitudinal sound speed in soft tissue at super real time frame rates. Our method shows exceptional results on simulated data, and highly encouraging initial results on real data. Conclusion: Sound speed inversion on channel data has significant potential, made possible in real time with deep learning technologies. Significance: High end ultrasound devices remain inaccessible in many locations. Utilizing pressure sound speed and deep learning technologies, brings the same quality diagnostic abilities to low power devices at real time frame rates. Potential frame rates also enable dynamic functional imaging, impossible with shear wave imaging.

**High frame-rate cardiac ultrasound imaging with deep learning** (**arXiv:1808.07823**) (U-Net for MLA to SLA)

**Abstract**. Cardiac ultrasound imaging requires a high frame rate in order to capture rapid motion. This can be achieved by multi-line acquisition (MLA), where several narrow-focused received lines are obtained from each wide-focused transmitted line. This shortens the acquisition time at the expense of introducing block artifacts. In this paper, we propose a data-driven learning-based approach to improve the MLA image quality. We train an end-to-end convolutional neural network on pairs of real ultrasound cardiac data, acquired through MLA and the corresponding single-line acquisition (SLA). The network achieves a significant improvement in image quality for both 5- and 7-line MLA resulting in a decorrelation measure similar to that of SLA while having the frame rate of MLA.

**Introduction** “The main idea behind MLA is to transmit a weakly focused beam that provides a sufficiently wide coverage for a high number of received lines”

**Method** “Our network comprises both the interpolation and the apodization stages that are trained jointly”

**SWITCHNET: A NEURAL NETWORK MODEL FOR FORWARD AND INVERSE SCATTERING PROBLEMS** (**arXiv:1810.09675**) (new NN structure for channel data to image domain for acoustic inverse problems)

**Abstract**. We propose a novel neural network architecture, SwitchNet, for solving the wave equation based inverse scattering problems via providing maps between the scatterers and the scattered field (and vice versa). The main difficulty of using a neural network for this problem is that a scatterer has a global impact on the scattered wave field, rendering typical convolutional neural network with local connections inapplicable. While it is possible to deal with such a problem using a fully connected network, the number of parameters grows quadratically with the size of the input and output data. By leveraging the inherent low-rank structure of the scattering problems and introducing a novel switching layer with sparse connections, the SwitchNet architecture uses much fewer parameters and facilitates the training process. Numerical experiments show promising accuracy in learning the forward and inverse maps between the scatterers and the scattered wave field.

**Introduction** … “an efficient map η → d provides an alternative to expensive partial differential equation (PDE) solvers for the Helmholtz equation; an efficient map d → η is more valuable as it allows us to solve the inverse problem of determining the scatterers from the scattering field, without going through the usual iterative process” … “the connectivity in the NN has to be wired in a non-local fashion, rendering typical NN with local connectivity insufficient. This leads to the development of the proposed SwitchNet. The key idea is the inclusion of a novel low-complexity switch layer that sends information between all pairs of sites effectively, following the ideas from butterfly factorizations”

**Discussion** “For these maps, local information at the input has a global impact at the output” … “Based on certain low-rank property that arises in the linearized operators, we are able to replace a fully connected NN with the sparse SwitchNet, thus reducing complexity dramatically” … “the proposed SwitchNet connects the input layer with the output layer globally”
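The parameter saving that motivates SwitchNet's low-rank structure can be illustrated with a simple factored layer. The sizes and rank below are illustrative; the actual switch layer also includes the butterfly-style routing between sites, which is not shown here:

```python
import numpy as np

n, m, r = 1024, 1024, 32          # layer sizes and assumed numerical rank

dense_params = n * m              # fully connected layer: grows quadratically
lowrank_params = r * (n + m)      # W ~ U @ V.T: grows linearly in n + m

rng = np.random.default_rng(0)
U = rng.standard_normal((n, r)) / np.sqrt(r)
V = rng.standard_normal((m, r)) / np.sqrt(m)
x = rng.standard_normal(m)

y = U @ (V.T @ x)                 # apply in O(r(n+m)) instead of O(nm)
```

For these sizes the factored layer uses 16x fewer parameters than the dense one while realizing a global (non-local) input-output connection, which is the property the paper needs for scattering maps.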