Appendix B — Order of accuracy

Here we give precise definitions of the order of accuracy of a numerical method, and sketch the proof of the link between the local error and the global error.

The notation here will be slightly different from (and annoyingly more pedantic than) that in other sections.

B.1 Notation

Assume that we are given a PDE with appropriate initial and boundary conditions. We denote the solution to the PDE, using initial data \(y_0(x)\) at \(t=0\), as

\[ y\left(x, t \vert y_0(x; t=0)\right) \, . \tag{B.1}\]

We will assume that the PDE is well posed, in the sense that a small perturbation in the initial conditions leads to a bounded change in the solution. That is, we assume that for all sufficiently small perturbations \(\delta\) there exists a constant \(K\), independent of time, such that

\[ \| y(x, t \vert y_0(x; t=0) + \delta) - y(x, t \vert y_0(x; t=0)) \| \le \| \delta \| e^{K t} \, . \tag{B.2}\]
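As a minimal sketch of what the bound (B.2) says, we can replace the PDE with a scalar ODE (our illustration, not part of the argument above): for \(y' = K y\) the exact flow is \(y(t) = y_0 e^{K t}\), so two solutions whose initial data differ by \(\delta\) separate by exactly \(|\delta| e^{K t}\), saturating the well-posedness bound.

```python
import math

# Illustration only: for the linear ODE y'(t) = K*y(t), the exact flow is
# y(t) = y0 * exp(K*t). Two solutions whose initial data differ by delta
# therefore stay within |delta| * exp(K*t) of each other -- the
# well-posedness bound (B.2), with equality in this special case.
K = 0.5
delta = 1e-3
t = 2.0

y_unperturbed = 1.0 * math.exp(K * t)
y_perturbed = (1.0 + delta) * math.exp(K * t)

difference = abs(y_perturbed - y_unperturbed)   # grows like |delta| e^{Kt}
bound = abs(delta) * math.exp(K * t)            # the right-hand side of (B.2)
```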

Next we assume that we are constructing a numerical approximation \(\mathtt{y}(x, t)\) to the solution. This approximation may be defined only at a finite number of points (for example, in a finite difference method on a grid), or it might be constructed at finitely many time levels but be computable everywhere in space (as in a spectral or finite element method). We denote the numerical method’s update scheme by

\[ \mathtt{y}\left(x, t^{n+1} \vert \left\{ \mathtt{y}(x, t^n) \right\} \right) = \mathcal{F} \left( \left\{ \mathtt{y}(x, t^n) \right\} \right) \, . \tag{B.3}\]

Even an implicit method can (in principle) be written in this form, but it is simplest to think of this as an explicit method.
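To make the update map \(\mathcal{F}\) in (B.3) concrete, here is one possible explicit instance (our choice of example, not prescribed by the text): forward Euler in time with centred differences in space for the heat equation \(u_t = u_{xx}\) on a periodic grid. The names `F`, `dx`, `dt` are ours.

```python
import numpy as np

# A hypothetical concrete instance of the update map F in (B.3):
# forward Euler in time, centred second differences in space, for the
# heat equation u_t = u_xx on a periodic grid.
N = 64
dx = 2 * np.pi / N
dt = 0.4 * dx**2          # inside the usual explicit stability limit dt <= dx^2 / 2

def F(u):
    """One explicit update step: y^{n+1} = F(y^n)."""
    laplacian = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * laplacian

x = dx * np.arange(N)
u = np.sin(x)             # initial data y_0(x) sampled on the grid
u_next = F(u)             # the numerical solution at t^{n+1}
```

An implicit method would instead define \(\mathcal{F}\) through a solve (schematically, `u_next = solve(I - dt * L, u)`), which is why it is simplest to picture \(\mathcal{F}\) as an explicit map.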

B.2 Local truncation error

The easiest analysis of the accuracy of a method looks at the error introduced over a single step, assuming that the input data is exact. This is the Local Truncation Error \(\epsilon_{n+1}\), given by

\[ \epsilon_{n+1} = \left\| y(x, t^{n+1} \vert y_0(x; t=0) ) - \mathtt{y}\left(x, t^{n+1} \vert \left\{ y(x, t^n \vert y_0(x; t=0)) \right\} \right) \right\| \, . \tag{B.4}\]

When computing the local truncation error it is usual to use a series expansion, such as a Taylor series expansion, in terms of small quantities controllable by the numerical method. Typical examples would be the grid spacings \(\Delta t, \, \Delta x\) in finite difference methods, and the inverse of the number of degrees of freedom \(N^{-1}\) in spectral or finite element methods. We then assume that only the first term in the expansion survives, giving (for example)

\[ \epsilon_{n+1} \propto \Delta t^{p+1} \, . \tag{B.5}\]

The local order of accuracy is then \(p+1\). We typically assume that \(p\) is independent of time (and hence independent of \(n\)), and write the local truncation error as \(\epsilon\) by maximising over all steps.
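The scaling (B.5) can be checked numerically. As a sketch (our illustration, using a scalar ODE rather than a PDE), take forward Euler applied to \(y' = -y\), \(y(0) = 1\): one step from exact data gives a local truncation error \(|e^{-\Delta t} - (1 - \Delta t)| \approx \Delta t^2 / 2\), so the local order \(p + 1\) should come out close to 2.

```python
import math

# Sketch (our illustration): measure the local order of accuracy of
# forward Euler for y' = -y, y(0) = 1. One step from *exact* input data,
# as in the definition (B.4) of the local truncation error.
def local_error(dt):
    exact = math.exp(-dt)      # y(dt), evolved exactly from y_0 = 1
    numerical = 1.0 - dt       # one forward Euler step from y_0 = 1
    return abs(exact - numerical)

# Halving dt should divide the error by about 2^(p+1) = 4.
e1, e2 = local_error(1e-2), local_error(5e-3)
order = math.log2(e1 / e2)     # observed p + 1, close to 2
```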

B.3 Global truncation error

Local truncation error is the easiest thing to analyse, but not what we actually want. We want to relate the error at the end of the simulation, after many steps have been taken, to the parameters (such as the grid spacing). This is the Global Truncation Error

\[ \mathcal{E}_{T} = \left\| y(x, T \vert y_0(x; t=0)) - \mathtt{y}\left(x, T \vert \left\{ \mathtt{y}(x, T - \Delta t) \right\} \right) \right\| \, . \tag{B.6}\]

This cannot be directly computed using the local truncation error, as the numerical solution at time \(T\) is not computed from the exact solution at the previous timestep, but from the approximate numerical solution at that point. However, we can add and subtract the exact solution with different initial conditions, at the time \(T - \Delta t\). Thus

\[ \begin{aligned} \mathcal{E}_{T} &= \left\| y(x, T \vert y_0(x; t=0)) - y(x, T \vert \mathtt{y}(x; T - \Delta t)) + \right. \\ &\quad \left. y(x, T \vert \mathtt{y}(x; T - \Delta t)) - \mathtt{y}\left(x, T \vert \left\{ \mathtt{y}(x, T - \Delta t) \right\} \right) \right\| \\ &\le \left\| y(x, T \vert y_0(x; t=0)) - y(x, T \vert \mathtt{y}(x; T - \Delta t)) \right\| + \\ &\quad \left\| y(x, T \vert \mathtt{y}(x; T - \Delta t)) - \mathtt{y}\left(x, T \vert \left\{ \mathtt{y}(x, T - \Delta t) \right\} \right) \right\| \\ &\le \left\| y(x, T \vert y_0(x; t=0)) - y(x, T \vert \mathtt{y}(x; T - \Delta t)) \right\| + \epsilon \\ &= \left\| y(x, T \vert y(x; t=T - \Delta t)) - y(x, T \vert \mathtt{y}(x; T - \Delta t)) \right\| + \epsilon \\ &\le \mathcal{E}_{T-\Delta t} e^{K \, \Delta t} + \epsilon \, . \end{aligned} \tag{B.7}\]

Here we have used the triangle inequality, then the definition of the local truncation error (applied with the numerical data as the starting point), then the fact that the exact solution at time \(T\) can equally be regarded as evolved from its own value at \(T - \Delta t\), and finally the definition of well-posedness.

Using this recursion relation, combined with the fact that the global truncation error after a single step is precisely the local truncation error, we find after \(N = T / \Delta t\) steps that

\[ \mathcal{E}_T \le \epsilon \sum_{j=0}^{N-1} e^{K j \Delta t} = \epsilon \, \frac{e^{K T} - 1}{e^{K \Delta t} - 1} \le \frac{e^{K T} - 1}{K} \, \frac{\epsilon}{\Delta t} \propto \frac{\epsilon}{\Delta t} \, . \tag{B.8}\]

This result says that if the local truncation error has order of accuracy \(p+1\), then the global truncation error has order of accuracy \(p\). Loosely, this can be understood as follows: reaching the finite time \(T\) requires \(\sim \Delta t^{-1}\) timesteps, and each of those steps introduces an error \(\sim \epsilon\).
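The drop from local order \(p+1\) to global order \(p\) can also be observed numerically. Continuing the forward Euler sketch (our illustration): running \(y' = -y\) from \(t = 0\) to \(t = 1\) and measuring the error at the final time should give order \(p = 1\), one power of \(\Delta t\) lower than the local order measured over a single step.

```python
import math

# Sketch (our illustration): measure the *global* order of accuracy of
# forward Euler for y' = -y, y(0) = 1, integrated over [0, 1]. The error
# at the final time should scale like dt^p with p = 1.
def global_error(dt):
    steps = round(1.0 / dt)
    y = 1.0
    for _ in range(steps):
        y = y - dt * y               # one forward Euler step
    return abs(y - math.exp(-1.0))   # compare with the exact solution at T = 1

# Halving dt should divide the error by about 2^p = 2.
e1, e2 = global_error(1e-2), global_error(5e-3)
order = math.log2(e1 / e2)           # observed global order p, close to 1
```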