$$ \newcommand{\dt}{\Delta t} \newcommand{\udt}[1]{u^{({#1})}(T)} \newcommand{\Edt}[1]{E^{({#1})}} \newcommand{\uone}[1]{u_{1}^{({#1})}} $$

This is the third in a series of posts on testing scientific software. For this to make sense, you'll need to have skimmed the motivation and background. The first in the series assumed we only cared about the answer, and whether we'd implemented the expected algorithm wasn't important. The second assumed we cared about the convergence rate of the algorithm, not the specific answer, nor the precise algorithm itself.

In the previous cases we've focused on the behaviour of the algorithm: whether it will give the correct answer in the limit, or whether it converges as expected. This is really what you want to do: you're trying to do science, to get an answer, and so implementing the precise algorithm should be secondary. If you are trying to implement a precise algorithm, it should be because of its (expected) behaviour, and so you should be testing for that!

However, let's put that aside and see if we can work out how to test whether we've implemented exactly the algorithm we want: Euler's method. Checking convergence alone is not enough: the Backwards Euler method has identical convergence behaviour, as do whole families of other methods. We need a check that characterizes the method uniquely.

The local truncation error $\Edt{\dt}$ would be exactly such a check. This is the error produced by a single step from exact data, eg

$$ \begin{equation} \Edt{\dt} = u_1 - u(\dt). \end{equation} $$

This is enough as Euler's method is independent of the data, so each individual step performs the same operations.

For Euler's method we have

$$ \begin{equation} u_{n+1} = u_n + \dt f(t_n, u_n) \end{equation} $$

and so

$$ \begin{equation} \Edt{\dt} = \left| u_0 + \dt f(0, u_0) - u(\dt) \right| = \left| \frac{\dt^2}{2} \left. u''\right|_{t=0} \right| + {\cal O}(\dt^3). \end{equation} $$

This is all well and good, but we don't know the exact solution (in principle) at any point other than $t=0$, so cannot compute $u(\dt)$, so cannot compute $\Edt{\dt}$. We only know $\uone{\dt}$ for whichever values of $\dt$ we wish to compute.

We can use repeated Richardson extrapolation to get the solution $u(\dt)$ to sufficient accuracy, however. On the assumption that the algorithm is first order (we can use the previous techniques to check this), we can use Richardson extrapolation to repeatedly remove the highest order error terms. We can thus find the local truncation errors.

In [1]:
from math import sin, cos, log, ceil
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16

We will again need the code implementing Euler's method the full phugoid model notebook.

In [2]:
# model parameters:
g = 9.8      # gravity in m s^{-2}
v_t = 30.0   # trim velocity in m s^{-1}   
C_D = 1/40.  # drag coefficient --- or D/L if C_L=1
C_L = 1.0    # for convenience, use C_L = 1

### set initial conditions ###
v0 = v_t     # start at the trim velocity (or add a delta)
theta0 = 0.0 # initial angle of trajectory
x0 = 0.0     # horizotal position is arbitrary
y0 = 1000.0  # initial altitude
In [3]:
def f(u):
    """Returns the right-hand side of the phugoid system of equations.
    
    Parameters
    ----------
    u : array of float
        array containing the solution at time n.
        
    Returns
    -------
    dudt : array of float
        array containing the RHS given u.
    """
    
    v = u[0]
    theta = u[1]
    x = u[2]
    y = u[3]
    return numpy.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2,
                      -g*cos(theta)/v + g/v_t**2*v,
                      v*cos(theta),
                      v*sin(theta)])
In [4]:
def euler_step(u, f, dt):
    """Returns the solution at the next time-step using Euler's method.
    
    Parameters
    ----------
    u : array of float
        solution at the previous time-step.
    f : function
        function to compute the right hand-side of the system of equation.
    dt : float
        time-increment.
    
    Returns
    -------
    u_n_plus_1 : array of float
        approximate solution at the next time step.
    """
    
    return u + dt * f(u)

This time we will need lots of solutions in order to measure anything. We will construct ten local truncation errors. For each, we take a single step and store the result for $v$. Then, for each single step result, we use eight other calculations using our algorithm. Each will take multiple, smaller, steps to get a more accurate result for $v$ at the same time. We then use repeated Richardson extrapolation to find, to very high accuracy, the "true" result for $v$ at this time. Finally, we can compare against the original single step results to find the local truncation errors.

This only works if the algorithm is converging at the expected rate, as we checked in the second post. It does not rely on the algorithm being exactly Euler's method (thankfully, or the argument would be circular!), but does need the convergence rate to be known.

In [5]:
T_values = numpy.array([0.001*2**(i) for i in range(10)])
lte_values = numpy.zeros_like(T_values)
for j, T in enumerate(T_values):
    dt_values = numpy.array([T*2**(i-8) for i in range(8)])
    v_values = numpy.zeros_like(dt_values)
    for i, dt in enumerate(dt_values):
        N = int(T/dt)+1
        t = numpy.linspace(0.0, T, N)
        u = numpy.empty((N, 4))
        u[0] = numpy.array([v0, theta0, x0, y0])
        for n in range(N-1):
            u[n+1] = euler_step(u[n], f, dt)
        v_values[i] = u[-1,0]
    v_next = v_values
    for s in range(1, len(v_values-1)):
        v_next = (2**s*v_next[1:]-v_next[0:-1])/(2**s-1)
    lte_values[j] = abs(v_values[0]-v_next)

This gives us a set of local truncation errors at given timesteps:

In [6]:
for dt, lte in zip(T_values, lte_values):
    print("For dt={} the local truncation error is {}.".format(dt, lte))
For dt=0.001 the local truncation error is 1.99954897084e-09.
For dt=0.002 the local truncation error is 8.05573563412e-09.
For dt=0.004 the local truncation error is 3.26825286834e-08.
For dt=0.008 the local truncation error is 1.34407251551e-07.
For dt=0.016 the local truncation error is 5.67045063349e-07.
For dt=0.032 the local truncation error is 2.50349015474e-06.
For dt=0.064 the local truncation error is 1.18961298661e-05.
For dt=0.128 the local truncation error is 6.26360622817e-05.
For dt=0.256 the local truncation error is 0.000370836831557.
For dt=0.512 the local truncation error is 0.00244291651282.

We now have many values for the local truncation error. We can thus compute the convergence rate of the local truncation error itself (which should be two), and check that it is close enough to the expected value using the same techniques as in the second post in the series:

In [7]:
s_m = numpy.zeros(2)
for i in range(2):
    s_m[i] = log(abs((lte_values[2+i]-lte_values[1+i])/
                     (lte_values[1+i]-lte_values[0+i]))) / log(2.0)
    print("Measured convergence rate (base dt {}) is {:.6g} (error is {:.4g}).".format(
            T_values[i], s_m[i], abs(s_m[i]-2)))
print("Convergence error has reduced by factor {:.4g}.".format(
        abs(s_m[0]-2)/abs(s_m[1]-2)))
Measured convergence rate (base dt 0.001) is 2.02375 (error is 0.02375).
Measured convergence rate (base dt 0.002) is 2.04637 (error is 0.04637).
Convergence error has reduced by factor 0.5121.

So the error has gone down considerably, and certainly $0.51 > 1/3$, so the convergence rate of the local truncation error is close enough to 2.

However, that alone isn't enough to determine that this really is Euler's method: as noted above, the convergence rate of the local truncation error isn't the key point: the key point is that we can predict its actual value as

$$ \begin{equation} \Edt{\dt} = \frac{\dt^2}{2} \left| \left. u''\right|_{t=0} \right| + {\cal O}(\dt^3) = \frac{\dt^2}{2} \left| \left( \left. \frac{\partial f}{\partial t} \right|_{t=0} + f(0, u_0) \left. \frac{\partial f}{\partial u} \right|_{t=0, u=u_0} \right) \right|. \end{equation} $$

For the specific problem considered here we have

$$ \begin{equation} u = \begin{pmatrix} v \\ \theta \\ x \\ y \end{pmatrix}, \quad f = \begin{pmatrix} -g\sin \theta - \frac{C_D}{C_L} \frac{g}{v_t^2} v^2 \\ -\frac{g}{v}\cos \theta + \frac{g}{v_t^2} v \\ v \cos \theta \\ v \sin \theta \end{pmatrix}. \end{equation} $$

We note that $f$ does not explicitly depend on $t$ (so $\partial f / \partial t \equiv 0$), and that the values of the parameters $g, C_D, C_L$ and $v_t$ are given above, along with the initial data $u_0 = (v_0, \theta_0, x_0, y_0)$.

So, let's find what the local truncation error should be.

In [8]:
import sympy
sympy.init_printing()
v, theta, x, y, g, CD, CL, vt, dt = sympy.symbols('v, theta, x, y, g, C_D, C_L, v_t, {\Delta}t')
u = sympy.Matrix([v, theta, x, y])
f = sympy.Matrix([-g*sympy.sin(theta)-CD/CL*g/vt**2*v**2, 
                  -g/v*sympy.cos(theta)+g/vt**2*v, 
                  v*sympy.cos(theta), 
                  v*sympy.sin(theta)])
dfdu = f.jacobian(u)
lte=dt**2/2*dfdu*f
In [9]:
lte_0=lte.subs([(g,9.8),(vt,30.0),(CD,1.0/40.0),(CL,1.0),(v,30.0),(theta,0.0),(x,0.0),(y,1000.0)])
lte_0
Out[9]:
$$\left[\begin{matrix}0.00200083333333333 {\Delta}t^{2}\\- 0.00266777777777778 {\Delta}t^{2}\\- 0.1225 {\Delta}t^{2}\\0\end{matrix}\right]$$

So let us check the local truncation error values, which are computed for v:

In [10]:
lte_exact = float(lte_0[0]/dt**2)
lte_values/T_values**2
Out[10]:
array([ 0.00199955,  0.00201393,  0.00204266,  0.00210011,  0.00221502,
        0.00244481,  0.00290433,  0.003823  ,  0.00565852,  0.00931899])

These are indeed converging towards $0.002 \dt^2$ as they should. To check this quantitatively, we use that our model is

$$ \begin{equation} \Edt{\dt} = \alpha \dt^2 + {\cal O}(\dt^3), \end{equation} $$

with the exact value $\alpha_e \simeq 0.002$. So we can use our usual Richardson extrapolation methods applied to $\Edt{\dt}/\dt^2$, to get a measured value for $\alpha$ with an error interval:

$$ \begin{equation} \alpha_m = \frac{8\Edt{\dt} - \Edt{2\dt} \pm \left| \Edt{\dt} - \Edt{2\dt} \right|}{4\dt^2}. \end{equation} $$

In [11]:
for i in range(len(lte_values)-1):
    Edt = lte_values[i]
    E2dt = lte_values[i+1]
    dt = T_values[i]
    err1 = abs(Edt - E2dt)
    a_lo = (8.0*Edt - E2dt - err1)/(4.0*dt**2)
    a_hi = (8.0*Edt - E2dt + err1)/(4.0*dt**2)
    print("Base dt={:.4g}: the measured alpha is in [{:.5g}, {:.5g}]".format(
            dt, a_lo, a_hi))
    print("Does this contain the exact value? {}".format(
            a_lo <= lte_exact <= a_hi))
Base dt=0.001: the measured alpha is in [0.00047112, 0.0034992]
Does this contain the exact value? True
Base dt=0.002: the measured alpha is in [0.00044604, 0.0035244]
Does this contain the exact value? True
Base dt=0.004: the measured alpha is in [0.00039575, 0.0035747]
Does this contain the exact value? True
Base dt=0.008: the measured alpha is in [0.00029522, 0.0036752]
Does this contain the exact value? True
Base dt=0.016: the measured alpha is in [9.4165e-05, 0.0038763]
Does this contain the exact value? True
Base dt=0.032: the measured alpha is in [-0.00030782, 0.0042784]
Does this contain the exact value? True
Base dt=0.064: the measured alpha is in [-0.0011113, 0.0050826]
Does this contain the exact value? True
Base dt=0.128: the measured alpha is in [-0.0027153, 0.0066903]
Does this contain the exact value? True
Base dt=0.256: the measured alpha is in [-0.0059063, 0.0099024]
Does this contain the exact value? True

So, to the limits that we can measure the local truncation error, we have implemented Euler's method.


You can download this notebook.


Comments

comments powered by Disqus