david/ssp-lmm-vss
Interesting questions:
Can we save computation by varying the step size? If \(h_{FE}\) is constant? If \(h_{FE}\) varies?
Can we prove an upper bound on the step size in terms of \(h_{FE}\) when the step size varies?
Linear multistep methods
We consider the numerical solution of the initial value problem
\[\begin{align*} \label{ivp} u'(t) & = f(u) & u(t_0) = u_0,\end{align*}\]
by an explicit linear multistep method (LMM)
\[\begin{align*} \label{lmm} u_n = \sum_{j=0}^{k-1} \left( \alpha_j u_{n-k+j} + h_n \beta_j f(u_{n-k+j}) \right).\end{align*}\]
Here \(u_n\) is an approximation to the solution \(u(t_n)\), where \(t_0 < t_1 < \cdots\) and \(h_n = t_n - t_{n-1}\). If we assume a uniform step size \(h_n = h\), then by replacing \(u_n\) with the exact solution \(u(t_n)\) in \eqref{lmm} and expanding terms in Taylor series about \(t_{n-k}\), we obtain the order conditions for order \(p\):
\[\begin{align*} \sum_{j=0}^{k-1} \alpha_j & = 1 \\ \sum_{j=0}^{k-1} \left(j^m \alpha_j + m j^{m-1} \beta_j\right) & = k^m & 1 \le m \le p.\end{align*}\]
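As a quick sanity check (not part of the notes' derivation), the following sketch evaluates the order-condition residuals numerically; the two-step Adams–Bashforth method is used as a test case, and the function name is our own.

```python
# Numerically verify the uniform-step order conditions for the 2-step
# Adams-Bashforth method: u_n = u_{n-1} + h*(3/2 f(u_{n-1}) - 1/2 f(u_{n-2})),
# i.e. alpha = [0, 1], beta = [-1/2, 3/2], k = 2, p = 2.

def order_condition_residuals(alpha, beta, p):
    """Residuals of the uniform-step order conditions for orders 1..p.

    Note: Python's 0**0 == 1 matches the convention used in the m*j**(m-1)
    term at j = 0, m = 1.
    """
    k = len(alpha)
    res = [sum(alpha) - 1.0]  # consistency: sum of alpha_j = 1
    for m in range(1, p + 1):
        lhs = sum(j**m * a + m * j**(m - 1) * b
                  for j, (a, b) in enumerate(zip(alpha, beta)))
        res.append(lhs - k**m)
    return res

residuals = order_condition_residuals([0.0, 1.0], [-0.5, 1.5], p=2)
assert all(abs(r) < 1e-12 for r in residuals)  # AB2 has order 2
```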
Variable step-size
Now we allow the step size \(h_n=t_n-t_{n-1}\) to vary and let \(\omega_j = h_{n-k+j}/h_n\). We also define \(\Omega_j = \sum_{i=1}^j \omega_i\), with the convention \(\Omega_0 = 0\); note that \(\omega_k = 1\), and that \(\omega_j\) and \(\Omega_j\) depend on \(n\), but this dependence is omitted when there is no danger of confusion. Then the order conditions become:
\[\begin{align*} \label{oc-vss} \sum_{j=0}^{k-1} \alpha_j & = 1 \\ \sum_{j=0}^{k-1} \left(\Omega_j^m \alpha_j + m \Omega_j^{m-1} \beta_j\right) & = \Omega_k^m & 1 \le m \le p.\end{align*}\]
Strong stability preservation
We are interested in initial value problems whose solutions satisfy a monotonicity condition
\[\begin{align*} \label{monotonicity} \|u(t+h)\| & \le \|u(t)\| & \text{for any } h \ge 0,\end{align*}\]
where \(\|\cdot\|\) represents any convex functional (for instance, a norm). We assume that \(f\) satisfies the (stronger) forward Euler condition
\[\begin{align*} \label{FE-condition} \|u + h f(u)\| & \le \|u\| & \text{for any } 0\le h \le h_{FE}(u)\end{align*}\]
where the maximum step size \(h_{FE}(u)\) may depend on \(u\) (though we will sometimes omit this dependence in the notation for brevity). We note that \eqref{FE-condition} implies \eqref{monotonicity}.
We are interested in numerical methods that satisfy the discrete monotonicity property
\[\begin{align*} \label{discrete-monotonicity} \|u_n\| \le \max(\|u_{n-k}\|,\|u_{n-k+1}\|,\dots,\|u_{n-1}\|).\end{align*}\]
Methods that satisfy \eqref{discrete-monotonicity} whenever \(f\) satisfies \eqref{FE-condition} are known as strong stability preserving (SSP) methods. The solution given by an SSP method satisfies \eqref{discrete-monotonicity} whenever the step size satisfies
\[\begin{align*} \label{SSP-step-size} 0 \le h \le {\mathcal C}\min_{0\le j\le k-1} h_{FE}(u_{n-k+j})\end{align*}\]
where the constant \({\mathcal C}\) depends only on the discretization method and is referred to as the SSP coefficient of the method. We remark that the step size condition \eqref{SSP-step-size} is unusual in that it depends on the solution value at several previous steps.
The fact that \(h_{FE}\) depends on the solution \(u\) is typically glossed over in the literature, but it is quite important for the efficient, stable solution of nonlinear problems. For instance, in the solution of nonlinear hyperbolic conservation laws, \(h_{FE}\) is inversely proportional to the maximum characteristic velocity. In order to ensure the monotonicity condition \eqref{discrete-monotonicity} and to minimize computational cost, it is necessary to adjust the step size when \(h_{FE}\) changes.
The SSP coefficient of an explicit LMM is
\[\begin{align*} {\mathcal C}& = \max \left\{ r\in{\mathbb R^+} : \beta_j \ge 0, \quad \alpha_j-r\beta_j \ge 0, \quad 0 \le j \le k-1 \right\}.\end{align*}\]
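For non-negative coefficients this maximization reduces to a minimum of ratios; a minimal sketch (the function name and the convention of returning \(0\) when a coefficient is negative are our own):

```python
# SSP coefficient C = max{ r : beta_j >= 0, alpha_j - r*beta_j >= 0 for all j }.
# For nonnegative alpha, beta this is C = min over j with beta_j > 0 of
# alpha_j / beta_j (and +inf if every beta_j is zero).

def ssp_coefficient(alpha, beta):
    if any(b < 0 for b in beta) or any(a < 0 for a in alpha):
        return 0.0  # a negative coefficient rules out any positive r
    ratios = [a / b for a, b in zip(alpha, beta) if b > 0]
    return min(ratios) if ratios else float('inf')

# Optimal 3-step, 2nd-order SSP LMM (see below in these notes):
# alpha = [1/4, 0, 3/4], beta = [0, 0, 3/2]  ->  C = (3/4)/(3/2) = 1/2
C = ssp_coefficient([0.25, 0.0, 0.75], [0.0, 0.0, 1.5])
assert abs(C - 0.5) < 1e-12
```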
Let \(\delta_j = \alpha_j - r \beta_j\). Then the order conditions become
\[\begin{align*} \label{oc-vss-delta} \sum_{j=0}^{k-1} \left(\delta_j + r\beta_j\right) & = 1 \\ \sum_{j=0}^{k-1} \left(\Omega_j^m (\delta_j + r\beta_j) + m \Omega_j^{m-1} \beta_j\right) & = \Omega_k^m & 1 \le m \le p.\end{align*}\]
Given a set of step-size ratios \(\omega_j\), we can find the optimal \(k\)-step, order-\(p\) SSP LMM by solving a sequence of linear programming problems, in which we seek (for a given \(r\)) non-negative \(\delta_j, \beta_j\) satisfying \eqref{oc-vss-delta}; the optimal method corresponds to the largest feasible \(r\).
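A sketch of this bisection-plus-LP search, assuming SciPy's `linprog` is available; the function names and the initial bracket \([0,2]\) for the bisection are our own choices.

```python
# For a fixed r, existence of nonnegative delta_j, beta_j satisfying the
# order conditions is a linear feasibility problem; we bisect on r for the
# largest feasible value.
import numpy as np
from scipy.optimize import linprog

def feasible(r, Omega, p):
    """Is there delta, beta >= 0 satisfying the order conditions for this r?"""
    k = len(Omega) - 1            # Omega = [Omega_0, ..., Omega_k]
    A = np.zeros((p + 1, 2 * k))  # unknowns: delta_0..delta_{k-1}, beta_0..beta_{k-1}
    b = np.zeros(p + 1)
    A[0, :k] = 1.0                # consistency condition
    A[0, k:] = r
    b[0] = 1.0
    for m in range(1, p + 1):     # order conditions, m = 1..p (0**0 == 1)
        for j in range(k):
            A[m, j] = Omega[j]**m
            A[m, k + j] = r * Omega[j]**m + m * Omega[j]**(m - 1)
        b[m] = Omega[k]**m
    res = linprog(np.zeros(2 * k), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * (2 * k), method='highs')
    return res.status == 0        # 0 = feasible point found

def optimal_ssp_coefficient(Omega, p, tol=1e-6):
    lo, hi = 0.0, 2.0             # assumes r = 0 feasible, r = 2 infeasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid, Omega, p) else (lo, mid)
    return lo

# Uniform steps, k = 3, p = 2: Omega_j = j; the optimal C is 1/2.
C = optimal_ssp_coefficient([0.0, 1.0, 2.0, 3.0], p=2)
assert abs(C - 0.5) < 1e-4
```

Feasibility is monotone in \(r\) (shrinking \(r\) only enlarges the feasible set), which is what justifies the bisection.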
Methods
Given the order conditions up to order \(p\), it can be shown that the optimal SSP \(k\)-step method (in both the constant and variable step-size cases) has \(p\) (or \(p+1\)?) non-zero coefficients \(\delta_j, \beta_j\); we assume \(p\) for now. The set of non-zero coefficients, together with the \(p+1\) order conditions, then uniquely determines the method, since there are \(p+1\) unknowns (counting \(r\)). We can therefore define a variable step-size method by keeping the same set of non-zero coefficients.
3-step, 2nd order method
The optimal 3-step, 2nd-order SSP LMM is \[u_n = \frac{3}{4} \left( u_{n-1} + 2h f(u_{n-1})\right) + \frac{1}{4} u_{n-3}.\] Here we extend this method to handle variable step sizes. The non-zero coefficients are \(\delta_0, \beta_2\). The equations for order two read
\[\begin{align*} \delta_0 + r \beta_2 & = 1 \\ r\Omega_2 \beta_2 + \beta_2 & = \Omega_3 \\ r \Omega_2^2 \beta_2 + 2 \Omega_2 \beta_2 & = \Omega_3^2.\end{align*}\]
The unique solution is
\[\begin{align*} r & = \frac{\Omega_2 -1}{\Omega_2} \\ \beta_2 & = \frac{\Omega_2 + 1}{\Omega_2} \\ \delta_0 & = \frac{1}{\Omega_2^2}.\end{align*}\]
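A numerical check (with arbitrarily chosen ratios \(\omega_1, \omega_2\)) that this solution satisfies the three order conditions:

```python
# Check that r = (O2-1)/O2, beta_2 = (O2+1)/O2, delta_0 = 1/O2**2 satisfy
# the variable step-size order conditions for the 3-step, 2nd-order method.
w1, w2 = 0.8, 1.3          # arbitrary omega_1, omega_2 (omega_3 = 1)
O2 = w1 + w2               # Omega_2
O3 = O2 + 1.0              # Omega_3 = Omega_2 + omega_3

r = (O2 - 1.0) / O2
b2 = (O2 + 1.0) / O2
d0 = 1.0 / O2**2

assert abs(d0 + r * b2 - 1.0) < 1e-12                     # consistency
assert abs(r * O2 * b2 + b2 - O3) < 1e-12                 # order 1
assert abs(r * O2**2 * b2 + 2.0 * O2 * b2 - O3**2) < 1e-12  # order 2
```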
For any choice of step sizes, we have \(\delta_0,\beta_2 >0\). However, if \(\omega_1 + \omega_2 < 1\), then \(r<0\). This means that the step size cannot grow too quickly; we must choose \(h_n < h_{n-2} + h_{n-1}\) in order to have a positive step size restriction.
In fact, the analysis becomes more complicated because the SSP coefficient (and therefore the allowable size of \(h_n\)) depends on the method coefficients, while the method coefficients depend on the choice of \(h_n\). Specifically, to ensure \eqref{discrete-monotonicity}, we must have
\[\begin{align*} h_n & \le {\mathcal C}h_{FE}\\ & = \frac{\omega_1 + \omega_2 - 1}{\omega_1 + \omega_2} h_{FE}\\ & = \frac{h_{n-2} + h_{n-1} - h_n}{h_{n-2} + h_{n-1}} h_{FE}\end{align*}\]
Solving this inequality for \(h_n\) gives \[h_n \le \frac{h_{n-2}+h_{n-1}}{h_{n-2}+h_{n-1} + h_{FE}} h_{FE}.\] We now consider \(h_{FE}\) as fixed and suppose that the maximum allowable step size is taken at each step, i.e. we have the iteration \[\tau_n = \frac{\tau_{n-2} + \tau_{n-1}}{\tau_{n-2} + \tau_{n-1} + 1}\] where \(\tau_j = h_j/h_{FE}\). Observe that \(\tau=1/2\) is a fixed point and it is attractive (proof?).
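The attractivity can at least be checked numerically; the sketch below iterates the recurrence from two different starting values (evidence, not a proof).

```python
# Iterate tau_n = (tau_{n-2} + tau_{n-1}) / (tau_{n-2} + tau_{n-1} + 1)
# and watch it approach the fixed point tau = 1/2.
def iterate_tau(t_prev2, t_prev1, n_steps=200):
    for _ in range(n_steps):
        s = t_prev2 + t_prev1
        t_prev2, t_prev1 = t_prev1, s / (s + 1.0)
    return t_prev1

assert abs(iterate_tau(0.9, 0.1) - 0.5) < 1e-10
assert abs(iterate_tau(0.01, 0.01) - 0.5) < 1e-10
```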
What about variable \(h_{FE}\)?
Should I formulate the method instead using \(h_{FE}(u^n)\) to simplify the analysis?
Generalize to \(k\)-step, order \(p\) methods. SSP coefficient: \(\frac{k-2}{k-1}\). See SSP book page 115 (still just two non-negative coefficients).
\(k\)-step, 2nd order method
The optimal \(k\)-step, 2nd order method is...
We again extend to variable step size. The order conditions (with \(\delta_0, \beta_{k-1}\) non-zero) read
\[\begin{align*} \delta_0 + r \beta_{k-1} & = 1 \\ r\Omega_{k-1} \beta_{k-1} + \beta_{k-1} & = \Omega_k \\ r \Omega_{k-1}^2 \beta_{k-1} + 2 \Omega_{k-1} \beta_{k-1} & = \Omega_k^2.\end{align*}\]
The unique solution is
\[\begin{align*} r & = \frac{\Omega_{k-1} - 1}{\Omega_{k-1}} \\ \beta_{k-1} & = \frac{\Omega_{k-1} + 1}{\Omega_{k-1}} \\ \delta_0 & = \frac{1}{\Omega_{k-1}^2}.\end{align*}\]
Again, for any choice of step sizes we have \(\delta_0,\beta_{k-1} >0\), and as long as \(\Omega_{k-1}>1\) we have \(r>0\). This again means that the step size cannot grow too quickly, but the restriction is rather loose for \(k\) larger than 3, since typically \(\Omega_{k-1}\approx k-1\).
Following an analysis similar to that above, we see that the step size will tend toward \(\frac{k-2}{k-1} h_{FE}\).
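Numerical evidence for this limit, using the general \(k\)-step analogue of the earlier recurrence, \(\tau_n = S/(S+1)\) with \(S = \tau_{n-k+1} + \cdots + \tau_{n-1}\):

```python
# Iterate the k-step recurrence tau_n = S/(S+1), S = sum of the previous
# k-1 scaled steps, and compare the limit with (k-2)/(k-1).
# (Numerical evidence only, not a proof.)
def limit_tau(k, n_steps=500):
    taus = [0.1] * (k - 1)          # arbitrary positive starting steps
    for _ in range(n_steps):
        s = sum(taus)
        taus = taus[1:] + [s / (s + 1.0)]
    return taus[-1]

for k in (3, 4, 5, 8):
    assert abs(limit_tau(k) - (k - 2.0) / (k - 1.0)) < 1e-8
```

Note that \(\tau = 0\) is also a fixed point of this map, but it is repelling, so positive starting steps converge to \((k-2)/(k-1)\).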
4-step, 3rd-order method
Again, our starting point is the non-zero coefficients \(\delta,\beta\) of the optimal SSP method with constant step size. They are: \(\delta_0, \beta_0, \beta_3\). Solving the order conditions up to order 3 with these coefficients yields:
\[\begin{align*} r & = \frac{\Omega_{3} - 2}{\Omega_{3}} \\ \delta_0 & = \frac{4\Omega_4 - \Omega_3^2}{\Omega_{3}^3} \\ \beta_{0} & = \frac{\Omega_{4}}{\Omega_{3}^2} \\ \beta_{3} & = \frac{\Omega_{4}^2}{\Omega_{3}^2}.\end{align*}\]
For uniform steps (\(\Omega_3=3\), \(\Omega_4=4\)) this gives \(r=1/3\), \(\delta_0=7/27\), \(\beta_0=4/9\), \(\beta_3=16/9\), recovering the known optimal constant step-size method.
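A numerical check that the coefficients \(r=(\Omega_3-2)/\Omega_3\), \(\delta_0=(4\Omega_4-\Omega_3^2)/\Omega_3^3\), \(\beta_0=\Omega_4/\Omega_3^2\), \(\beta_3=\Omega_4^2/\Omega_3^2\) satisfy the order conditions up to order 3, for arbitrarily chosen step-size ratios:

```python
# Verify the 4-step, 3rd-order solution against the variable step-size
# order conditions (k = 4; nonzero coefficients delta_0, beta_0, beta_3;
# Omega_0 = 0 and omega_4 = 1).
w1, w2, w3 = 1.1, 0.9, 1.2          # arbitrary step-size ratios
O3 = w1 + w2 + w3                   # Omega_3
O4 = O3 + 1.0                       # Omega_4

r = (O3 - 2.0) / O3
d0 = (4.0 * O4 - O3**2) / O3**3
b0 = O4 / O3**2
b3 = (O4 / O3)**2

assert abs(d0 + r * (b0 + b3) - 1.0) < 1e-10                # consistency
assert abs(b0 + (r * O3 + 1.0) * b3 - O4) < 1e-10           # order 1
assert abs((r * O3**2 + 2.0 * O3) * b3 - O4**2) < 1e-10     # order 2
assert abs((r * O3**3 + 3.0 * O3**2) * b3 - O4**3) < 1e-10  # order 3
```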
General approach
Determine non-zero coefficients in optimal fixed-step-size method
Solve variable-step-size order conditions
Work out (nonlinear) SSP step-size restriction
Notes
Include implicit methods?
Has anyone dealt with zero-stability of SSP LMMs? Obviously (?) \({\mathcal C}>0\) implies zero-stability.
Variable step-size stability, convergence (see Hairer/Wanner/Norsett)