david/More absolute monotonicity conjectures

There are a number of fascinating patterns in the results on optimal threshold factors from my thesis. Recall that \(R_{s,k,p}\) denotes the optimal threshold factor for explicit general linear methods with \(s\) stages, \(k\) steps, and order at least \(p\). Conjectures 4.5.1 and 4.5.3 point out the curious behavior of \(R_{s,k,p}\) as \(k\) increases with \(s\) and \(p\) fixed:

  • For even \(p\), \(R_{s,k,p}\) seems to be a strictly increasing but bounded sequence.
  • For odd \(p\), \(R_{s,k,p}\) seems to attain its maximum at some finite number of steps and remain constant thereafter. Let \(k_0(s,p)\) denote \[\min\{k : R_{s,k,p} = \max_k R_{s,k,p}\}\] i.e., the smallest number of steps at which the maximum value is achieved. For \(k>k_0\), the optimal method is always equivalent to the corresponding optimal method for \(k=k_0\) (in other words, the additional steps are not used).
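As a concrete illustration of the definition of \(k_0\), here is a minimal Python sketch; the numbers are made up purely for illustration and do not come from the actual tables of \(R_{s,k,p}\):

```python
# Hypothetical values of R_{s,k,p} for k = 1, 2, ..., len(R), with s and p
# fixed; the sequence increases and then plateaus, as conjectured for odd p.
R = [0.50, 0.61, 0.67, 0.67, 0.67]

# k_0 is the smallest k at which the maximum over k is attained.
k0 = 1 + R.index(max(R))
print(k0)  # the maximum 0.67 first occurs at k = 3
```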

This seems to be true for implicit methods as well; see Lenferink.


Conjecture 4.5.2 is truly remarkable, and it can be made even stronger (based on the available evidence). Let \(C^I_{s,k,p}\) denote the optimal SSP coefficient for implicit general linear methods. Then it seems that, for any odd value of \(p\), \[C^I_{1,k,p-1} = R_{2,k,p} \ \ \ \forall \ \ \ k\ge k_0(2,p)\]

Furthermore, it seems that \(k_0(2,p)=((p+1)/2)^2\). What is \(k_0(s,p)\) for other values of \(s\)? For \(s=1\), we have the following values: \[k_0(1,1) = 1\cdot 1\] \[k_0(1,3) = 2\cdot 3\] \[k_0(1,5) = 4\cdot 4\] \[k_0(1,7) = 5\cdot 6\] \[k_0(1,9) = 7\cdot 7\]
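If the conjecture \(k_0(2,p)=((p+1)/2)^2\) is correct, the first few values are easy to tabulate (a quick sanity check; integer division is safe here since \(p\) is odd):

```python
# Tabulate the conjectured formula k_0(2,p) = ((p+1)/2)^2 for small odd p.
conjectured_k0 = {p: ((p + 1) // 2) ** 2 for p in (1, 3, 5, 7, 9)}
print(conjectured_k0)  # {1: 1, 3: 4, 5: 9, 7: 16, 9: 25}
```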

Looking at which coefficients are non-zero in the \(k=k_0\) case for increasing (odd) \(p\) also reveals interesting patterns and natural conjectures.


Most of these conjectures are violated by numerical results for large enough values of \(s,k,p\), but this is presumably due to numerical error. It would be useful to compute exact values with Mathematica, if possible.

In order to determine exact values of \(R_{s,k,p}\) and corresponding optimal coefficients, the following approach usually works.

  1. Use numerical optimization to determine which coefficients of the optimal method are non-zero.
  2. Eliminate the columns corresponding to the zero coefficients and solve the resulting (fully determined) system of equations exactly.
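The two steps above can be sketched as follows. The linear system and the "numerically optimized" solution are toy stand-ins (the real problem involves the order conditions and absolute-monotonicity constraints), but the pattern — detect the support numerically, then solve the reduced system in exact rational arithmetic — is the same:

```python
from fractions import Fraction

# Toy system A x = b playing the role of the conditions satisfied by the
# optimal coefficients (hypothetical numbers, for illustration only).
A = [[2.0, 1.0, 4.0],
     [0.0, 3.0, 8.0]]
b = [2.0, 2.0]

# Step 1: pretend a numerical optimizer returned this solution; the tiny
# middle entry suggests that coefficient is exactly zero.
x_num = [0.4999999, 3.2e-13, 0.1250001]
support = [j for j, xj in enumerate(x_num) if abs(xj) > 1e-8]

# Step 2: eliminate the zero columns; the reduced system is square
# (fully determined), so we can solve it exactly over the rationals.
A_red = [[Fraction(row[j]).limit_denominator() for j in support] for row in A]
b_red = [Fraction(bi).limit_denominator() for bi in b]

def solve_exact(M, rhs):
    """Gaussian elimination with back substitution over Fractions.

    Assumes the square system is nonsingular (as it should be if the
    support was identified correctly).
    """
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][-1] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x_exact = solve_exact(A_red, b_red)
print(x_exact)  # [Fraction(1, 2), Fraction(1, 4)]
```

In practice the exact solve could equally be done in Mathematica, as suggested above; the point is only that the numerical step supplies the support, after which the exact step is routine.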