david/11-3-2012

Global optimality of stability regions

In our recent paper on optimal stability polynomials we make an “ad hoc assumption”: that for any \(0\le h< H_{opt}\) (where \(H_{opt}\) denotes the optimal step size) there exists a feasible stability polynomial. I don’t yet see a way to prove this in general, but it seems provable for an important subset of cases, provided we can solve the associated convex semi-infinite programming (SIP) problems.

Specifically, suppose that we wish to solve

\[\max h \quad \text{such that} \quad |R(h\lambda)|\le 1 \quad \forall \lambda \in \Lambda,\] where \(\Lambda\subset \mathbb{C}\) is a closed curve passing through the origin whose interior is a convex set.

Let \(R_{opt}\) denote the optimal polynomial and \(S_{opt}\) the corresponding stability region. Then, since it can be shown that the complement of the stability region (i.e. \(\mathbb{C}\setminus S_{opt}\)) is simply connected, it follows that \(h\lambda\in S_{opt}\) for all \(\lambda\in\Lambda\) and \(0\le h\le H_{opt}\). In other words, the bisection hypothesis holds for this problem.

However, we don’t actually solve the problem stated above; we solve a relaxation of it based on discretizing \(\Lambda\). So the question is: can we guarantee that the bisection hypothesis holds for a fine enough discretization of \(\Lambda\), and how fine does it need to be?
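As a toy illustration of the discretized relaxation (a sketch only: the polynomial is held fixed rather than optimized over, and both the choice of \(R\) — the classical RK4 stability polynomial — and of \(\Lambda\) — the circle of radius 1 centered at \(-1\) — are hypothetical examples, not anything from the paper):

```python
import numpy as np

# Stability polynomial of the classical four-stage RK4 method:
# R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24 (highest-degree coefficient first).
R_coeffs = [1/24, 1/6, 1/2, 1, 1]

def feasible(h, lambdas):
    """Discretized constraint: |R(h*lambda_i)| <= 1 at every sample point."""
    return bool(np.all(np.abs(np.polyval(R_coeffs, h * lambdas)) <= 1.0))

def max_stable_h(lambdas, lo=0.0, hi=10.0, tol=1e-10):
    """Bisect for the largest h satisfying the sampled constraints.
    (Bisection implicitly assumes feasibility is monotone in h, i.e. the
    stability region is starlike along the sampled rays -- which is exactly
    the kind of assumption discussed in this note.)"""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, lambdas):
            lo = mid
        else:
            hi = mid
    return lo

# Discretize Lambda: the circle of radius 1 centered at -1 (it passes
# through the origin and has convex interior), sampled at n points.
n = 200
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
Lambda_disc = -1 + np.exp(1j * theta)

H_disc = max_stable_h(Lambda_disc)
```

For this example the bisection should land near \(h\approx 1.39\), limited by the sampled real-axis point \(h\lambda=-2h\) hitting the end of RK4's real stability interval (about \([-2.785,0]\)).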

More thoughts on March 24: the key paper is Hettich’s SIREV survey on semi-infinite programming, in particular Section 7. Perhaps we should use an exchange algorithm?

Ellacott’s paper in Math. of Comp. (1976) presents some results in the direction of what we need.

Probably the simplest approach is as follows. Let \(H_{opt,\delta}\) denote the optimal step size when a discrete subset of \(\Lambda\) with density \(\delta\) is used in place of \(\Lambda\). Then just determine \(\epsilon\) such that \((H_{opt,\delta}-\epsilon)\lambda \in S_{opt,\delta}\) for all \(\lambda\in\Lambda\).

A more careful approach would be to require \[|R(h\lambda)| \le 1-\epsilon,\] where \(\epsilon\) is a bound on the change in \(|R(z)|\) between two neighboring points \(h\lambda_i\). Since \(R\) is a polynomial of degree \(s\), its Taylor expansion about any point \(z_0\) is finite and exact:

\[R(z) = R(z_0) + \sum_{j=1}^s \frac{(z-z_0)^j}{j!} R^{(j)}(z_0),\]

so the change between neighboring points can be bounded in terms of the derivatives of \(R\).
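A quick numerical sanity check of bounding the change in \(R\) near a point by its derivative terms (a sketch; the choice of \(R\) — the classical RK4 stability polynomial — and of the point \(z_0\) are hypothetical):

```python
import numpy as np

R_coeffs = [1/24, 1/6, 1/2, 1, 1]  # RK4 example polynomial, degree s = 4

def taylor_change_bound(coeffs, z0, delta):
    """Bound on |R(z) - R(z0)| for |z - z0| <= delta, from the Taylor
    expansion at z0 (exact and finite, since R is a polynomial)."""
    bound = 0.0
    deriv = np.array(coeffs, dtype=float)
    s = len(coeffs) - 1
    fact = 1.0
    for j in range(1, s + 1):
        deriv = np.polyder(deriv)      # coefficients of R^(j)
        fact *= j
        bound += delta**j / fact * abs(np.polyval(deriv, z0))
    return bound

z0, delta = -1.0 + 0.5j, 0.05
bound = taylor_change_bound(R_coeffs, z0, delta)

# Spot check: the actual change of R over a circle of radius delta
# around z0 never exceeds the derivative bound.
zs = z0 + delta * np.exp(1j * np.linspace(0, 2 * np.pi, 100))
actual = np.max(np.abs(np.polyval(R_coeffs, zs) - np.polyval(R_coeffs, z0)))
```

The bound is guaranteed by the triangle inequality applied term-by-term to the (exact) expansion, so `actual <= bound` always holds.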

Enough musing; let’s prove a theorem!

Starlike w.r.t.: Given a set \(S\subset\mathbb{C}\) and \(H>0\), we say \(S\) is starlike with respect to \(\{\lambda_i\}\) (up to \(H\)) if \(h\lambda_i\in S\) for all \(\lambda_i\) and all \(0<h<H\).

Clearly, if \(S_{opt}\) is starlike with respect to the desired spectrum \(\{\lambda_i\}\) then Assumption 1 holds (but this condition is far from necessary).
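This condition is easy to test numerically for a given polynomial. A sketch (with hypothetical choices: the classical RK4 stability polynomial and a small spectrum on the negative real and imaginary axes), checking \(h\lambda_i\in S\) on a grid of step sizes:

```python
import numpy as np

# Classical RK4 stability polynomial: 1 + z + z^2/2 + z^3/6 + z^4/24.
R_coeffs = [1/24, 1/6, 1/2, 1, 1]

def starlike_wrt(spectrum, H, n_h=2000):
    """Grid check that h*lambda_i stays in the stability region
    (|R(h*lambda_i)| <= 1) for every lambda_i and every 0 < h < H."""
    hs = np.linspace(0.0, H, n_h)
    for lam in spectrum:
        if np.any(np.abs(np.polyval(R_coeffs, hs * lam)) > 1 + 1e-12):
            return False
    return True

# Hypothetical spectrum: a few real and purely imaginary eigenvalues.
spectrum = np.array([-1.0, -0.5, 1j, -1j])
```

For this spectrum the check passes with \(H=2.5\) but fails with \(H=3\), since RK4's real stability boundary sits near \(-2.785\).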

Theorem. Let \(\Lambda \subset \mathbb{C}\) be a closed curve passing through the origin whose interior is a convex set. For each \(n\), let \(\{\lambda_i\}_n\) denote a set of \(n\) points \(\lambda_i\in\Lambda\) such that the density (maximum spacing) of \(\{\lambda_i\}_n\) tends to zero as \(n\to\infty\).

Given an ordered set of points \(\{\lambda_i\}\subset \mathbb{C}\), let \(\Gamma\) denote the curve obtained by connecting each successive pair of points with a straight line. Then there exists \(\delta>0\) such that the interior

Two false starts; back up

The basic idea is this: consider the problem \[\max h\] subject to \[|R(h\lambda_i)|\le 1-\epsilon \quad \forall i,\] and denote its solution by \(R_{opt,\epsilon,\lambda}\). Then since \[R(z) = R(z_0) + \sum_{j=1}^s \frac{(z-z_0)^j}{j!} R^{(j)}(z_0),\] we have the estimate \[|R(z)| \le |R(z_0)| + \sum_{j=1}^s \frac{|z-z_0|^j}{j!} |R^{(j)}(z_0)|\] for all \(z\) on the connect-the-dots curve, where \(z_0=h\lambda_i\) is the nearest sample point. Thus

\[|R(z)| \le 1-\epsilon + \sum_{j=1}^s \frac{\delta^j}{j!} |R^{(j)}(z_0)|,\] where \(\delta\) is the density of the samples. Now what we’d like to say is that, for a fixed \(\epsilon\), by taking \(\delta\) small enough, we can ensure that \(|R_{opt,\epsilon,\lambda}|\le 1\) over the closed “connect-the-dots” curve. It is enough to show that the sum in the estimate above tends to zero as \(\delta\to 0\), for every \(z_0=h\lambda_i\). But as we reduce \(\delta\) (by taking more points), we change the set \(\{\lambda_i\}\) as well as the polynomial \(R_{opt,\epsilon,\lambda}\). I think we can show that the \(z_0\) values are uniformly bounded, but we also need a uniform bound on the coefficients of \(R_{opt,\epsilon,\lambda}\). This might be available from results on convergence of the discretization approach to SIPs.
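A numerical sketch of this certificate for a fixed polynomial (hypothetical choices again: RK4 and the circle of radius 1 centered at \(-1\)). One wrinkle: the strict margin \(|R|\le 1-\epsilon\) can never hold at \(\lambda=0\) itself, since \(R(0)=1\) for any consistent method, so this sketch samples the curve away from the origin:

```python
import numpy as np

R_coeffs = [1/24, 1/6, 1/2, 1, 1]  # fixed example polynomial (RK4), degree s = 4
s = 4
h = 1.0                            # a step size to certify (not optimized here)

# Sample the circle of radius 1 centered at -1, leaving out a small arc
# around the origin, where no strict margin is possible.
n = 400
theta = np.linspace(0.3, 2 * np.pi - 0.3, n)
z = h * (-1 + np.exp(1j * theta))  # the sample points h*lambda_i

eps = 1.0 - np.max(np.abs(np.polyval(R_coeffs, z)))  # achieved margin
delta = np.max(np.abs(np.diff(z)))                   # density of the samples

# Certificate: |R| <= 1 - eps + sum_{j=1}^s delta^j/j! * max_i |R^(j)(z_i)|
# at every point within delta of some sample.
bound = 1.0 - eps
deriv, fact = np.array(R_coeffs, dtype=float), 1.0
for j in range(1, s + 1):
    deriv = np.polyder(deriv)
    fact *= j
    bound += delta**j / fact * np.max(np.abs(np.polyval(deriv, z)))

certified = bound <= 1.0
```

Here \(\epsilon\approx 0.04\) and the certificate bound comes out below 1, so \(|R|\le 1\) is guaranteed on the whole (origin-avoiding) connect-the-dots curve; a brute-force check along the chords between neighboring samples confirms it.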