# Parareal

Parareal is a parallel algorithm from numerical analysis used to solve initial value problems.[1] It was introduced in 2001 by Lions, Maday and Turinici. Since then, it has become one of the most widely studied parallel-in-time integration methods.

## Parallel-in-time integration methods

In contrast to, for example, Runge-Kutta or multi-step methods, some of the computations in Parareal can be performed in parallel; Parareal is therefore one example of a parallel-in-time integration method. While historically most efforts to parallelize the numerical solution of partial differential equations focused on the spatial discretization, in view of the challenges from exascale computing, parallel methods for temporal discretization have been identified as a possible way to increase concurrency in numerical software.[2] Because Parareal computes the numerical solution for multiple time steps in parallel, it is categorized as a parallel-across-the-steps method.[3] This is in contrast to approaches using parallelism across the method, such as parallel Runge-Kutta or extrapolation methods, where independent stages can be computed in parallel, and to parallel-across-the-system methods such as waveform relaxation.[4][5]

## History

Parareal can be derived both as a multigrid method in time and as multiple shooting along the time axis.[6] Both ideas, multigrid in time as well as adopting multiple shooting for time integration, go back to the 1980s and 1990s.[7][8] Parareal is a widely studied method and has been used and modified for a range of different applications.[9] Ideas to parallelize the solution of initial value problems go back even further: the first paper proposing a parallel-in-time integration method appeared in 1964.[10]

## Algorithm

Visualization of the Parareal algorithm. The coarse propagator here is labelled ${\displaystyle {\bar {\varphi }}}$ whereas the fine propagator is labelled ${\displaystyle \varphi }$.

Parareal solves an initial value problem of the form

${\displaystyle {\dot {y}}(t)=f(y(t),t),\quad y(t_{0})=y_{0}\quad {\text{with}}\quad t_{0}\leq t\leq T.}$

Here, the right hand side ${\displaystyle f}$ can correspond to the spatial discretization of a partial differential equation in a method of lines approach.

Parareal now requires a decomposition of the time interval ${\displaystyle [t_{0},T]}$ into ${\displaystyle P}$ so-called time slices ${\displaystyle [t_{j},t_{j+1}]}$ such that

${\displaystyle [t_{0},T]=[t_{0},t_{1}]\cup [t_{1},t_{2}]\cup \ldots \cup [t_{P-1},t_{P}].}$

Each time slice is assigned to one processing unit when parallelizing the algorithm, so that ${\displaystyle P}$ is equal to the number of processing units used for Parareal: in an MPI based code for example, this would be the number of processes, while in an OpenMP based code, ${\displaystyle P}$ would be equal to the number of threads.

Parareal is based on the iterative application of two methods for integration of ordinary differential equations. One, commonly labelled ${\displaystyle {\mathcal {F}}}$, should be of high accuracy and computational cost while the other, typically labelled ${\displaystyle {\mathcal {G}}}$, must be computationally cheap but can be much less accurate. Typically, some form of Runge-Kutta method is chosen for both coarse and fine integrator, where ${\displaystyle {\mathcal {G}}}$ might be of lower order and use a larger time step than ${\displaystyle {\mathcal {F}}}$. If the initial value problem stems from the discretization of a PDE, ${\displaystyle {\mathcal {G}}}$ can also use a coarser spatial discretization, but this can negatively impact convergence unless high order interpolation is used.[11] The result of numerical integration with one of these methods over a time slice ${\displaystyle [t_{j},t_{j+1}]}$ for some starting value ${\displaystyle y_{j}}$ given at ${\displaystyle t_{j}}$ is then written as

${\displaystyle y={\mathcal {F}}(y_{j},t_{j},t_{j+1})}$ or ${\displaystyle y={\mathcal {G}}(y_{j},t_{j},t_{j+1})}$.

Serial time integration with the fine method would then correspond to a step-by-step computation of

${\displaystyle y_{j+1}={\mathcal {F}}(y_{j},t_{j},t_{j+1}),\quad j=0,\ldots ,P-1.}$

Parareal instead uses the following iteration

${\displaystyle y_{j+1}^{k+1}={\mathcal {G}}(y_{j}^{k+1},t_{j},t_{j+1})+{\mathcal {F}}(y_{j}^{k},t_{j},t_{j+1})-{\mathcal {G}}(y_{j}^{k},t_{j},t_{j+1}),\quad j=0,\ldots ,P-1,\quad k=0,\ldots ,K-1,}$

where ${\displaystyle k}$ is the iteration counter. As the iteration converges and ${\displaystyle y_{j}^{k+1}-y_{j}^{k}\to 0}$, the terms from the coarse method cancel out and Parareal reproduces the solution that is obtained by the serial execution of the fine method only. It can be shown that Parareal converges after a maximum of ${\displaystyle P}$ iterations.[6] For Parareal to provide speedup, however, it has to converge in a number of iterations significantly smaller than the number of time slices, that is ${\displaystyle K\ll P}$.

In the Parareal iteration, the computationally expensive evaluation of ${\displaystyle {\mathcal {F}}(y_{j}^{k},t_{j},t_{j+1})}$ can be performed in parallel on ${\displaystyle P}$ processing units. By contrast, the dependency of ${\displaystyle y_{j+1}^{k+1}}$ on ${\displaystyle {\mathcal {G}}(y_{j}^{k+1},t_{j},t_{j+1})}$ means that the coarse correction has to be computed in serial order.
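The iteration above can be sketched in a few lines of Python. This is a minimal serial mock-up for illustration: the list comprehension over the fine solves is the part that a real implementation would distribute across ${\displaystyle P}$ processing units, and the explicit Euler propagators in the example are an arbitrary choice, not prescribed by the algorithm.

```python
import numpy as np

def parareal(coarse, fine, y0, t0, T, P, K):
    """Run K Parareal iterations on y' = f(y, t) over [t0, T] split into
    P equal time slices. coarse(y, ta, tb) and fine(y, ta, tb) integrate
    from ta to tb starting from the value y. Returns the list of slice
    boundary values [y_0, ..., y_P]."""
    t = np.linspace(t0, T, P + 1)
    # Initial guess: one serial sweep with the cheap coarse method.
    y = [y0]
    for j in range(P):
        y.append(coarse(y[j], t[j], t[j + 1]))
    for k in range(K):
        # Independent fine solves: these run in parallel in a real code.
        F = [fine(y[j], t[j], t[j + 1]) for j in range(P)]
        G_old = [coarse(y[j], t[j], t[j + 1]) for j in range(P)]
        # Serial coarse sweep with the Parareal correction G + F - G.
        y_new = [y0]
        for j in range(P):
            y_new.append(coarse(y_new[j], t[j], t[j + 1]) + F[j] - G_old[j])
        y = y_new
    return y

# Example: y' = -y with explicit Euler as both propagators; the fine
# method simply takes many more steps per slice than the coarse one.
def euler(y, ta, tb, n):
    dt = (tb - ta) / n
    for _ in range(n):
        y = y + dt * (-y)
    return y

coarse = lambda y, ta, tb: euler(y, ta, tb, 1)
fine = lambda y, ta, tb: euler(y, ta, tb, 100)
```

Running this with ${\displaystyle K=P}$ reproduces the serial fine solution exactly, consistent with the convergence statement above; the point of the method is that far fewer iterations already give an acceptable answer.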

### Speedup

Under some assumptions, a simple theoretical model for the speedup of Parareal can be derived.[12] Although in applications these assumptions can be too restrictive, the model is still useful to illustrate the trade-offs involved in obtaining speedup with Parareal.

First, assume that every time slice ${\displaystyle [t_{j},t_{j+1}]}$ consists of exactly ${\displaystyle N_{f}}$ steps of the fine integrator and of ${\displaystyle N_{c}}$ steps of the coarse integrator. This includes in particular the assumption that all time slices are of identical length and that both coarse and fine integrator use a constant step size over the full simulation. Second, denote by ${\displaystyle \tau _{f}}$ and ${\displaystyle \tau _{c}}$ the computing time required for a single step of the fine and coarse methods, respectively, and assume that both are constant. This is typically not exactly true when an implicit method is used, because then runtimes vary depending on the number of iterations required by the iterative solver.

Under these two assumptions, the runtime for the fine method integrating over ${\displaystyle P}$ time slices can be modelled as

${\displaystyle c_{\text{fine}}=PN_{f}\tau _{f}.}$

The runtime of Parareal using ${\displaystyle P}$ processing units and performing ${\displaystyle K}$ iterations is

${\displaystyle c_{\text{parareal}}=(K+1)PN_{c}\tau _{c}+KN_{f}\tau _{f}.}$

Speedup of Parareal then is

${\displaystyle S_{p}={\frac {c_{\text{fine}}}{c_{\text{parareal}}}}={\frac {1}{(K+1){\frac {N_{c}}{N_{f}}}{\frac {\tau _{c}}{\tau _{f}}}+{\frac {K}{P}}}}\leq \min \left\{{\frac {N_{f}\tau _{f}}{N_{c}\tau _{c}}},{\frac {P}{K}}\right\}.}$

These two bounds illustrate the trade-off that has to be made in choosing the coarse method: on the one hand, it has to be cheap and/or use a much larger time step to make the first bound as large as possible; on the other hand, the number of iterations ${\displaystyle K}$ has to be kept low to keep the second bound large. In particular, Parareal's parallel efficiency is bounded by

${\displaystyle E_{p}={\frac {S_{p}}{P}}\leq {\frac {1}{K}},}$

that is by the inverse of the number of required iterations.
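The speedup model is straightforward to evaluate numerically. The following sketch plugs arbitrary illustrative parameter values (not taken from any benchmark) into the formulas above and checks the result against both bounds:

```python
def parareal_speedup(P, K, Nf, Nc, tau_f, tau_c):
    """Modelled speedup S_p = c_fine / c_parareal from the runtime model:
    c_fine = P * N_f * tau_f and
    c_parareal = (K + 1) * P * N_c * tau_c + K * N_f * tau_f."""
    c_fine = P * Nf * tau_f
    c_parareal = (K + 1) * P * Nc * tau_c + K * Nf * tau_f
    return c_fine / c_parareal

# P = 16 slices, K = 2 iterations, fine method with 100x the steps of
# the coarse one, equal cost per step:
s = parareal_speedup(P=16, K=2, Nf=100, Nc=1, tau_f=1.0, tau_c=1.0)
# Bound: min(N_f*tau_f / (N_c*tau_c), P/K) = min(100, 8) = 8;
# the modelled speedup 1600/248 stays below it.
bound = min(100 * 1.0 / (1 * 1.0), 16 / 2)
```

With these numbers the second bound ${\displaystyle P/K}$ is the binding one, reflecting that even a very cheap coarse method cannot buy more speedup than the ratio of slices to iterations.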

### Instability for imaginary eigenvalues

The vanilla version of Parareal has issues for problems with purely imaginary eigenvalues.[6] It typically converges only in the very last iterations, that is, as ${\displaystyle k}$ approaches ${\displaystyle P}$, in which case the speedup ${\displaystyle S_{p}}$ is necessarily smaller than one. Either the number of iterations is small and Parareal is unstable, or, if ${\displaystyle k}$ is large enough to make Parareal stable, no speedup is possible. This also means that Parareal is typically unstable for hyperbolic equations.[13] Even though the formal analysis by Gander and Vandewalle covers only linear problems with constant coefficients, the problem also arises when Parareal is applied to the nonlinear Navier–Stokes equations when the viscosity coefficient becomes too small and the Reynolds number too large.[14] Different approaches exist to stabilise Parareal,[15][16][17] one being Krylov-subspace enhanced Parareal.

## Variants

There are multiple algorithms that are directly based on or at least inspired by the original Parareal algorithm.

### Krylov-subspace enhanced Parareal

Early on it was recognised that for linear problems, information generated by the fine method ${\displaystyle {\mathcal {F}}_{\delta t}}$ can be used to improve the accuracy of the coarse method ${\displaystyle {\mathcal {G}}_{\Delta t}}$.[16] Originally, the idea was formulated for the parallel implicit time-integrator PITA,[18] a method closely related to Parareal but with small differences in how the correction is done. In every iteration ${\displaystyle k}$, the result ${\displaystyle {\mathcal {F}}_{\delta t}(y_{j}^{k})}$ is computed for the values ${\displaystyle y_{j}^{k}\in \mathbb {R} ^{d}}$ for ${\displaystyle j=0,\ldots ,P-1}$. Based on this information, the subspace

${\displaystyle S_{k}:=\operatorname {span} \left\{y_{j}^{k'}:0\leq k'\leq k,\;j=0,\ldots ,P-1\right\}}$

is defined and updated after every Parareal iteration.[19] Denote as ${\displaystyle P_{k}}$ the orthogonal projection from ${\displaystyle \mathbb {R} ^{d}}$ to ${\displaystyle S_{k}}$. Then, replace the coarse method with the improved integrator ${\displaystyle {\mathcal {K}}_{\Delta t}(y)={\mathcal {F}}_{\delta t}(P_{k}y)+{\mathcal {G}}_{\Delta t}((I-P_{k})y)}$.

As the number of iterations increases, the space ${\displaystyle S_{k}}$ will grow and the modified propagator ${\displaystyle {\mathcal {K}}_{\Delta t}}$ will become more accurate. This will lead to faster convergence. This version of Parareal can also stably integrate linear hyperbolic partial differential equations.[20] An extension to nonlinear problems based on the reduced basis method exists as well.[17]
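For a linear problem, the improved integrator ${\displaystyle {\mathcal {K}}_{\Delta t}}$ can be sketched as follows. The function names and the QR-based construction of the projector are illustrative assumptions, not the notation or implementation of the cited papers; the propagators are assumed to be linear maps supplied as callables.

```python
import numpy as np

def projector(S):
    """Orthogonal projector onto the span of the columns of S,
    built from a reduced QR factorization."""
    Q, _ = np.linalg.qr(S)
    return Q @ Q.T

def enhanced_coarse(y, S, fine, coarse):
    """K(y) = F(P_k y) + G((I - P_k) y): propagate the component of y
    already captured in the collected subspace S with the accurate fine
    method, and only the remainder with the cheap coarse method.
    Assumes fine and coarse are linear."""
    Pk = projector(S)
    return fine(Pk @ y) + coarse(y - Pk @ y)
```

If ${\displaystyle y}$ already lies in the subspace, the remainder vanishes and ${\displaystyle {\mathcal {K}}_{\Delta t}(y)={\mathcal {F}}_{\delta t}(y)}$, which is why the modified propagator becomes increasingly accurate as the subspace grows.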

### Hybrid Parareal spectral deferred corrections

A method with improved parallel efficiency based on a combination of Parareal with spectral deferred corrections (SDC) [21] has been proposed by M. Minion.[22] It limits the choice for coarse and fine integrator to SDC, sacrificing flexibility for improved parallel efficiency. Instead of the limit of ${\displaystyle 1/K}$, the bound on parallel efficiency in the hybrid method becomes

${\displaystyle E_{p}\leq {\frac {K_{s}}{K_{p}}}}$

with ${\displaystyle K_{s}}$ being the number of iterations of the serial SDC base method and ${\displaystyle K_{p}}$ the typically greater number of iterations of the parallel hybrid method. The Parareal-SDC hybrid has been further improved by addition of a full approximation scheme as used in nonlinear multigrid. This led to the development of the parallel full approximation scheme in space and time (PFASST).[23] Performance of PFASST has been studied for PEPC, a Barnes-Hut tree code based particle solver developed at Juelich Supercomputing Centre. Simulations using all 262,144 cores on the IBM BlueGene/P system JUGENE showed that PFASST could produce additional speedup beyond saturation of the spatial tree parallelisation.[24]

### Multigrid reduction in time (MGRIT)

The multigrid reduction in time method (MGRIT) generalises the interpretation of Parareal as a multigrid-in-time algorithm to multiple levels using different smoothers.[25] It is a more general approach, but for a specific choice of parameters it is equivalent to Parareal. The XBraid library implementing MGRIT is being developed by Lawrence Livermore National Laboratory.

### ParaExp

ParaExp uses exponential integrators within Parareal.[26] While limited to linear problems, it can produce almost optimal parallel speedup.

## References

1. ^ Lions, Jacques-Louis; Maday, Yvon; Turinici, Gabriel (2001). "A "parareal" in time discretization of PDE's" (PDF). Comptes Rendus de l'Académie des Sciences, Série I. 332 (7): 661–668. Bibcode:2001CRASM.332..661L. doi:10.1016/S0764-4442(00)01793-6.
2. ^ Jack Dongarra; Jeffrey Hittinger; John Bell; Luis Chacon; Robert Falgout; Michael Heroux; Paul Hovland; Esmond Ng; Clayton Webster; Stefan Wild (March 2014). Applied Mathematics Research for Exascale Computing (PDF) (Report). US Department of Energy. Retrieved August 2015.
3. ^ Burrage, Kevin (1997). "Parallel methods for ODEs". Advances in Computational Mathematics. 7 (1–2): 1–31. doi:10.1023/A:1018997130884.
4. ^ Iserles, A.; Nørsett, S. P. (1990-10-01). "On the Theory of Parallel Runge—Kutta Methods". IMA Journal of Numerical Analysis. 10 (4): 463–488. doi:10.1093/imanum/10.4.463. ISSN 0272-4979.
5. ^ Ketcheson, David; Waheed, Umair bin (2014-06-13). "A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel". Communications in Applied Mathematics and Computational Science. 9 (2): 175–200. arXiv:1305.6165. doi:10.2140/camcos.2014.9.175. ISSN 2157-5452.
6. ^ a b c Gander, Martin J.; Vandewalle, Stefan (2007). "Analysis of the Parareal Time‐Parallel Time‐Integration Method". SIAM Journal on Scientific Computing. 29 (2): 556–578. CiteSeerX 10.1.1.154.6042. doi:10.1137/05064607X.
7. ^ Hackbusch, Wolfgang (1985). Parabolic multi-grid methods. Computing Methods in Applied Sciences and Engineering, VI. pp. 189–197. ISBN 9780444875976. Retrieved August 2015.
8. ^ Kiehl, Martin (1994). "Parallel multiple shooting for the solution of initial value problems". Parallel Computing. 20 (3): 275–295. doi:10.1016/S0167-8191(06)80013-X.
9. ^ Gander, Martin J. (2015). 50 years of Time Parallel Time Integration. Contributions in Mathematical and Computational Sciences. 9 (1 ed.). Springer International Publishing. doi:10.1007/978-3-319-23321-5. ISBN 978-3-319-23321-5.
10. ^ Nievergelt, Jürg (1964). "Parallel methods for integrating ordinary differential equations". Communications of the ACM. 7 (12): 731–733. doi:10.1145/355588.365137.
11. ^ Ruprecht, Daniel (2014-12-01). "Convergence of Parareal with spatial coarsening" (PDF). PAMM. 14 (1): 1031–1034. doi:10.1002/pamm.201410490. ISSN 1617-7061.
12. ^ Minion, Michael L. (2010). "A Hybrid Parareal Spectral Deferred Corrections Method". Communications in Applied Mathematics and Computational Science. 5 (2): 265–301. doi:10.2140/camcos.2010.5.265.
13. ^ Staff, Gunnar Andreas; Rønquist, Einar M. (2005-01-01). Barth, Timothy J.; Griebel, Michael; Keyes, David E.; Nieminen, Risto M.; Roose, Dirk; Schlick, Tamar; Kornhuber, Ralf; Hoppe, Ronald; Périaux, Jacques (eds.). Stability of the Parareal Algorithm. Lecture Notes in Computational Science and Engineering. Springer Berlin Heidelberg. pp. 449–456. doi:10.1007/3-540-26825-1_46. ISBN 9783540225232.
14. ^ Steiner, Johannes; Ruprecht, Daniel; Speck, Robert; Krause, Rolf (2015-01-01). Abdulle, Assyr; Deparis, Simone; Kressner, Daniel; Nobile, Fabio; Picasso, Marco (eds.). Convergence of Parareal for the Navier-Stokes Equations Depending on the Reynolds Number. Lecture Notes in Computational Science and Engineering. Springer International Publishing. pp. 195–202. CiteSeerX 10.1.1.764.6242. doi:10.1007/978-3-319-10705-9_19. ISBN 9783319107042.
15. ^ Dai, X.; Maday, Y. (2013-01-01). "Stable Parareal in Time Method for First- and Second-Order Hyperbolic Systems". SIAM Journal on Scientific Computing. 35 (1): A52–A78. doi:10.1137/110861002. ISSN 1064-8275.
16. ^ a b Farhat, Charbel; Cortial, Julien; Dastillung, Climène; Bavestrello, Henri (2006-07-30). "Time-parallel implicit integrators for the near-real-time prediction of linear structural dynamic responses". International Journal for Numerical Methods in Engineering. 67 (5): 697–724. Bibcode:2006IJNME..67..697F. doi:10.1002/nme.1653. ISSN 1097-0207.
17. ^ a b Chen, Feng; Hesthaven, Jan S.; Zhu, Xueyu (2014-01-01). Quarteroni, Alfio; Rozza, Gianluigi (eds.). On the Use of Reduced Basis Methods to Accelerate and Stabilize the Parareal Method. MS&A - Modeling, Simulation and Applications. Springer International Publishing. pp. 187–214. doi:10.1007/978-3-319-02090-7_7. ISBN 9783319020891.
18. ^ Farhat, Charbel; Chandesris, Marion (2003-11-07). "Time-decomposed parallel time-integrators: theory and feasibility studies for fluid, structure, and fluid–structure applications". International Journal for Numerical Methods in Engineering. 58 (9): 1397–1434. Bibcode:2003IJNME..58.1397F. doi:10.1002/nme.860. ISSN 1097-0207.
19. ^ Gander, M.; Petcu, M. (2008). "Analysis of a Krylov subspace enhanced parareal algorithm for linear problems". ESAIM: Proceedings. 25: 114–129. doi:10.1051/proc:082508.
20. ^ Ruprecht, D.; Krause, R. (2012-04-30). "Explicit parallel-in-time integration of a linear acoustic-advection system". Computers & Fluids. 59: 72–83. arXiv:1510.02237. doi:10.1016/j.compfluid.2012.02.015.
21. ^ Dutt, Alok; Greengard, Leslie; Rokhlin, Vladimir (2000-06-01). "Spectral Deferred Correction Methods for Ordinary Differential Equations". BIT Numerical Mathematics. 40 (2): 241–266. doi:10.1023/A:1022338906936. ISSN 0006-3835.
22. ^ Minion, Michael (2011-01-05). "A hybrid parareal spectral deferred corrections method". Communications in Applied Mathematics and Computational Science. 5 (2): 265–301. doi:10.2140/camcos.2010.5.265. ISSN 2157-5452.
23. ^ Emmett, Matthew; Minion, Michael (2012-03-28). "Toward an efficient parallel in time method for partial differential equations". Communications in Applied Mathematics and Computational Science. 7 (1): 105–132. doi:10.2140/camcos.2012.7.105. ISSN 2157-5452.
24. ^ Speck, R.; Ruprecht, D.; Krause, R.; Emmett, M.; Minion, M.; Winkel, M.; Gibbon, P. (2012-11-01). A massively space-time parallel N-body solver. High Performance Computing, Networking, Storage and Analysis (SC), 2012 International Conference for. pp. 1–11. doi:10.1109/SC.2012.6. ISBN 978-1-4673-0805-2.
25. ^ Falgout, R.; Friedhoff, S.; Kolev, T.; MacLachlan, S.; Schroder, J. (2014-01-01). "Parallel Time Integration with Multigrid". SIAM Journal on Scientific Computing. 36 (6): C635–C661. CiteSeerX 10.1.1.701.2603. doi:10.1137/130944230. ISSN 1064-8275.
26. ^ Gander, M.; Güttel, S. (2013-01-01). "PARAEXP: A Parallel Integrator for Linear Initial-Value Problems". SIAM Journal on Scientific Computing. 35 (2): C123–C142. CiteSeerX 10.1.1.800.5938. doi:10.1137/110856137. ISSN 1064-8275.