# Iterative reconstruction

*Figure: Example showing differences between filtered backprojection (right half) and an iterative reconstruction method (left half).*

Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step.[1] Recent research has shown that extremely fast computation and massive parallelism are possible for iterative reconstruction, which makes it practical for commercialization.[2]

## Basic concepts

The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to solve the inverse problem exactly in a direct way. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows a better reconstruction to be obtained at the cost of a higher computation time.

In computed tomography, this approach was the one first used by Hounsfield. There is a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares the computed projections with the original projection data, and updates the image based upon the difference between them.
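The loop just described can be sketched in a few lines. The following is a toy illustration only: a small random matrix stands in for a real CT forward projector, and all names and sizes are invented for the example.

```python
import numpy as np

# Toy setup: a hypothetical linear forward projector A stands in for
# the CT system; sizes and values are invented for illustration.
rng = np.random.default_rng(0)
n, m = 16, 32                   # image pixels, projection measurements
A = rng.random((m, n))          # system matrix: image -> projections
x_true = rng.random(n)          # unknown image
b = A @ x_true                  # noise-free measured projections

# Iterative loop: assume an image, compute its projections, compare
# with the measurements, and update from the difference
# (a Landweber/SIRT-style additive update).
x = np.zeros(n)                                 # initial estimate
step = 1.0 / np.linalg.norm(A, ord=2) ** 2      # safe step size
for _ in range(5000):
    residual = b - A @ x          # measured minus computed projections
    x += step * (A.T @ residual)  # back-project the residual
```

With each pass, the computed projections of the estimate move closer to the measured data; real scanners use far more sophisticated forward models and update rules, but the structure is the same.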

There are typically five components to iterative image reconstruction algorithms:[3]

1. An object model that expresses the unknown continuous-space function ${\displaystyle f(r)}$ that is to be reconstructed in terms of a finite series with unknown coefficients that must be estimated from the data.
2. A system model that relates the unknown object to the "ideal" measurements that would be recorded in the absence of measurement noise. Often this is a linear model of the form ${\displaystyle \mathbf {A} x+\epsilon }$, where ${\displaystyle \epsilon }$ represents the noise.
3. A statistical model that describes how the noisy measurements vary around their ideal values. Often Gaussian noise or Poisson statistics are assumed. Because Poisson statistics are closer to reality, they are the more widely used.
4. A cost function that is to be minimized to estimate the image coefficient vector. Often this cost function includes some form of regularization. Sometimes the regularization is based on Markov random fields.
5. An algorithm, usually iterative, for minimizing the cost function, including some initial estimate of the image and some stopping criterion for terminating the iterations.
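As a concrete, deliberately tiny illustration of how the five components fit together, the sketch below sets up a penalized weighted least-squares problem in the spirit of reference [3]; the matrix, noise level, weights, and penalty strength are all assumptions made for the example.

```python
import numpy as np

# Tiny, hypothetical instance mapping the five components above to code.
rng = np.random.default_rng(1)
n, m = 16, 24
A = rng.random((m, n))                    # 2. system model (assumed matrix)
x_true = rng.random(n)
noise_var = 0.01
y = A @ x_true + rng.normal(0.0, np.sqrt(noise_var), m)  # noisy data

W = np.eye(m) / noise_var                 # 3. statistical model: Gaussian weights
beta = 0.1                                # regularization strength (assumed)
D = np.diff(np.eye(n), axis=0)            # finite-difference roughness operator

def cost(x):
    # 4. cost function: weighted data fit plus quadratic roughness penalty
    r = A @ x - y
    return r @ W @ r + beta * np.sum((D @ x) ** 2)

def grad(x):
    return 2 * A.T @ W @ (A @ x - y) + 2 * beta * D.T @ D @ x

# 5. iterative minimization: gradient descent from a zero initial image
x = np.zeros(n)                           # 1. object model: coefficient vector
step = 0.5 / (np.linalg.norm(A, ord=2) ** 2 / noise_var + 4 * beta)
for _ in range(2000):
    x -= step * grad(x)
```

Each numbered comment marks where one of the five components enters; in practice the object model is a basis expansion rather than a raw pixel vector, and far better minimizers than plain gradient descent are used.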

The advantages of the iterative approach include improved insensitivity to noise and capability of reconstructing an optimal image in the case of incomplete data. The method has been applied in emission tomography modalities like SPECT and PET, where there is significant attenuation along ray paths and noise statistics are relatively poor.

Statistical, likelihood-based approaches: Statistical, likelihood-based iterative expectation-maximization algorithms[4][5] are now the preferred method of reconstruction. Such algorithms compute estimates of the likely distribution of annihilation events that led to the measured data, based on statistical principles, and often provide better noise profiles and resistance to the streak artifacts common with FBP. Since the density of the radioactive tracer is a function in a function space, and therefore extremely high-dimensional, methods that regularize the maximum-likelihood solution, turning it into a penalized or maximum a-posteriori estimate, can have significant advantages for low counts. Examples such as Ulf Grenander's sieve estimator,[6][7] Bayes penalty methods,[8][9] or I. J. Good's roughness method[10][11] may yield superior performance to expectation-maximization-based methods that involve a Poisson likelihood function only.
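For a Poisson emission model, the classic expectation-maximization update (MLEM) takes a simple multiplicative form. The sketch below runs it on a toy random system matrix; the sizes and names are illustrative and not taken from any cited implementation.

```python
import numpy as np

# Sketch of the MLEM update for a Poisson emission model on a toy,
# hypothetical system matrix.
rng = np.random.default_rng(2)
n, m = 16, 32
A = rng.random((m, n))              # probabilistic detection matrix
x_true = 10.0 * rng.random(n)       # unknown tracer activity
y = rng.poisson(A @ x_true)         # Poisson-distributed counts

x = np.ones(n)                      # strictly positive initial estimate
sens = A.sum(axis=0)                # sensitivity image A^T 1
for _ in range(200):
    expected = A @ x                # forward projection of current estimate
    ratio = y / np.maximum(expected, 1e-12)  # measured / expected counts
    x *= (A.T @ ratio) / sens       # multiplicative MLEM update
```

The multiplicative form keeps the estimate nonnegative automatically, and each iteration is guaranteed not to decrease the Poisson likelihood; penalized variants modify this update with a prior term.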

As another example, iterative reconstruction is considered superior when a large set of projections is not available, when the projections are not distributed uniformly in angle, or when the projections are sparse or missing at certain orientations. These scenarios may occur in intraoperative CT, in cardiac CT, or when metal artifacts[12][13] require the exclusion of some portions of the projection data.

In Magnetic Resonance Imaging, iterative reconstruction can be used to reconstruct images from data acquired with multiple receive coils and with sampling patterns different from the conventional Cartesian grid,[14] and it allows the use of improved regularization techniques (e.g. total variation)[15] or extended modeling of physical processes[16] to improve the reconstruction. For example, iterative algorithms make it possible to reconstruct images from data acquired in a very short time, as required for real-time MRI.[17]
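To make the idea concrete, here is a minimal one-dimensional sketch of total-variation-regularized reconstruction from undersampled Fourier ("k-space") data. The sampling pattern, penalty weight, and smoothing constant are assumptions chosen for the example, not parameters from the cited works.

```python
import numpy as np

# 1-D toy: recover a piecewise-constant signal from undersampled
# Fourier samples using a smoothed total-variation penalty.
rng = np.random.default_rng(3)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                           # piecewise-constant object

F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
keep = rng.choice(n, size=24, replace=False)  # undersampled k-space lines
Fu = F[keep, :]                               # sampling operator
y = Fu @ x_true                               # measured k-space data

lam, eps = 0.05, 1e-2                         # TV weight, smoothing (assumed)

def grad(x):
    data = np.real(Fu.conj().T @ (Fu @ x - y))  # data-consistency gradient
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)             # smoothed sign of the jumps
    g = np.zeros(n)
    g[:-1] -= w                               # negative divergence of w
    g[1:] += w
    return data + lam * g

x0 = np.real(Fu.conj().T @ y)                 # zero-filled starting image
x = x0.copy()
for _ in range(1000):
    x -= 0.2 * grad(x)                        # small fixed gradient step
```

The total-variation term rewards piecewise-constant solutions, which is why such penalties suppress the incoherent aliasing that a plain zero-filled reconstruction exhibits; practical MRI solvers use 2D/3D operators, coil sensitivities, and faster minimizers.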

In Cryo Electron Tomography, where a limited number of projections is acquired due to hardware limitations and to avoid damaging the biological specimen, iterative reconstruction can be used along with compressive sensing techniques or regularization functions (e.g. the Huber function) to improve the reconstruction for better interpretation.[18]

Here is an example that illustrates the benefits of iterative image reconstruction for cardiac MRI.[19]

*A single frame from a real-time MRI movie of a human heart: (a) direct reconstruction, (b) iterative (nonlinear inverse) reconstruction.[17]*

## References

1. ^ Herman, G. T., Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd edition, Springer, 2009
2. ^ Wang, Xiao; Sabne, Amit; Kisner, Sherman; Raghunathan, Anand; Bouman, Charles; Midkiff, Samuel (2016-01-01). "High Performance Model Based Image Reconstruction". Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. PPoPP '16. New York, NY, USA: ACM: 2:1–2:12. doi:10.1145/2851141.2851163. ISBN 9781450340922.
3. ^ Fessler J A (1994). "Penalized weighted least-squares image reconstruction for positron emission tomography". IEEE Trans. on Medical Imaging. 13 (2): 290–300. doi:10.1109/42.293921.
4. ^ Lange, Kenneth; Carson, Richard (1984). "EM reconstruction algorithms for emission and transmission tomography". Journal of Computer Assisted Tomography. 8 (2): 306–316. PMID 6608535.
5. ^ Vardi, Y.; L. A. Shepp; L. Kaufman (1985). "A statistical model for positron emission tomography". Journal of the American Statistical Association. 80 (389): 8–37. doi:10.1080/01621459.1985.10477119.
6. ^ Snyder, Donald L.; Miller, Michael I. (1985). "On the Use of the Method of Sieves for Positron Emission Tomography". IEEE Transactions on Nuclear Science. NS-32 (5): 3864–3872. doi:10.1109/TNS.1985.4334521.
7. ^ Snyder, D.L.; Miller, M.I.; Thomas, L.J.; Politte, D.G. (1987). "Noise and edge artifacts in maximum-likelihood reconstructions for emission tomography". IEEE Trans. on Medical Imaging. 6 (3): 228–238. doi:10.1109/tmi.1987.4307831.
8. ^ Geman, Stuart; McClure, Donald E. (1985). "Bayesian image analysis: An application to single photon emission tomography". Proceedings of the American Statistical Association, Statistical Computing Section: 12–18.
9. ^ Green, Peter J. (1990). "Bayesian Reconstructions for Emission Tomography Data Using a Modified EM Algorithm". IEEE Transactions on Medical Imaging. 9 (1): 84–93.
10. ^ Miller, Michael I.; Snyder, Donald L. (1987). "The role of likelihood and entropy in incomplete data problems: Applications to estimating point-process intensities and Toeplitz constrained covariance estimates". Proceedings of the IEEE. 75 (7). doi:10.1109/PROC.1987.13825.
11. ^ Miller, Michael I.; Roysam, Badrinath (April 1991). "Bayesian image reconstruction for emission tomography incorporating Good's roughness prior on massively parallel processors". Proceedings of the National Academy of Sciences, USA. 88: 3223–3227.
12. ^ Wang, G.; Snyder, D.L.; O'Sullivan, J.A.; Vannier, M.W. (1996). "Iterative deblurring for CT metal artifact reduction". IEEE Trans. on Medical Imaging. 15 (5): 657–664. doi:10.1109/42.538943.
13. ^ Boas FE, Fleischmann D (2011). "Evaluation of two iterative techniques for reducing metal artifacts in computed tomography". Radiology. 259: 894–902. doi:10.1148/radiol.11101782.
14. ^ Pruessmann K. P., Weiger M., Börnert P., Boesiger P. (2001). "Advances in sensitivity encoding with arbitrary k-space trajectories". Magnetic Resonance in Medicine. 46: 638–651. doi:10.1002/mrm.1241.
15. ^ Block K. T., Uecker M., Frahm J. (2007). "Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint". Magnetic Resonance in Medicine. 57: 1086–1098. doi:10.1002/mrm.21236.
16. ^ Fessler J (2010). "Model-based Image Reconstruction for MRI". Signal Processing Magazine, IEEE. 27: 81–89. doi:10.1109/msp.2010.936726.
17. ^ a b Uecker M, Zhang S, Voit D, Karaus A, Merboldt KD, Frahm J (2010a). "Real-time MRI at a resolution of 20 ms.". NMR Biomed. 23: 986–994. doi:10.1002/nbm.1585.
18. ^ Albarqouni, Shadi; Lasser, Tobias; Alkhaldi, Weaam; Al-Amoudi, Ashraf; Navab, Nassir (2015-01-01). Gao, Fei; Shi, Kuangyu; Li, Shuo, eds. Gradient Projection for Regularized Cryo-Electron Tomographic Reconstruction. Lecture Notes in Computational Vision and Biomechanics. Springer International Publishing. pp. 43–51. doi:10.1007/978-3-319-18431-9_5. ISBN 978-3-319-18430-2.
19. ^ Uyanik, I.; Lindner, P.; Shah, D.; Tsekos, N.; Pavlidis, I. (2013). "Applying a Level Set Method for Resolving Physiologic Motions in Free-Breathing and Non-gated Cardiac MRI". FIMH 2013.
