Composite Concave Linear Programming


A Bit of Maths

The following short overview of the mathematical idiom of CCLP should give you a clear picture of how CCLP is related to Linear Programming and the Simplex Method.

We stress at the outset that the extension provided by CCLP is in the objective function of Linear Programming problems, rather than in the constraints. Thus, we are dealing here with optimization problems whose set of feasible solutions, call it X, is of the following familiar form:

X := {x in R^n : Ax = b, x >= 0} (1)
where A is a given m x n matrix, b is a given m-vector, and x is an n-vector of decision variables.

As suggested by the title, the objective functions of CCLP problems are of a composite structure. For the purposes of our discussion it is convenient and sufficient to represent them by the following simple format, which can be generalised in various ways:
f(x) := F(u(x),v(x)) , x in X (2)
where u and v are affine functions on X and F is a pseudoconcave function on a suitable subset W of R^2 containing the points w(x) := (u(x),v(x)), x in X.

For example, the following is a typical CCLP objective function:

f(x) := log(c^T x) + (d^T x)^{1/2} (3)
where c and d are some given n-vectors. To comply with the format given in (2) we can set:
u(x) := c^T x (4)
v(x) := d^T x (5)
F(w) := log(w_1) + (w_2)^{1/2} ,   w in W (6)
Observe that here and elsewhere in this discussion it is assumed that the objective is to minimize f over X.
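For readers who like to see numbers, here is a minimal sketch (in Python; the vectors c, d and the point x are made-up illustrative data, not taken from the text) of how the composite structure (3)-(6) is evaluated:

```python
# Evaluating the composite objective f(x) = F(u(x), v(x)) of (3)-(6).
# The data below are hypothetical; any x with c^T x > 0 and d^T x >= 0 works.
import numpy as np

c = np.array([2.0, 1.0, 3.0])   # a made-up n-vector c
d = np.array([1.0, 4.0, 2.0])   # a made-up n-vector d
x = np.array([1.0, 2.0, 0.5])   # a made-up point x

u = c @ x                        # u(x) := c^T x, as in (4)
v = d @ x                        # v(x) := d^T x, as in (5)
f = np.log(u) + np.sqrt(v)       # F(w) := log(w_1) + (w_2)^(1/2), as in (6)

print(f"u(x) = {u}, v(x) = {v}, f(x) = {f:.4f}")
```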

We also suggest that you don't panic if you have never heard of pseudoconcavity. It is a simple and useful generalization of concavity; thus any concave function is definitely pseudoconcave. However, you must have come across some pseudoconcave functions even if your OR career is short. For example, the real valued function defined by

g(x,y) := xy (7)
is not concave, but is pseudoconcave, on
{(x,y) in R^2 : x > 0, y > 0} (8)
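If you wish to see this with your own eyes, the following sketch (Python; a quick numerical check, not part of the original argument) confirms the non-concavity claim, and the pseudoconcavity argument is recorded in the comments:

```python
# g(x,y) = xy has the constant Hessian [[0, 1], [1, 0]], whose
# eigenvalues are -1 and +1; an indefinite Hessian means g is not concave.
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.eigvalsh(H))     # -> [-1.  1.], so g is not concave

# On the open set {x > 0, y > 0}, however, log(g(x,y)) = log(x) + log(y)
# is concave, so g is quasiconcave there; a differentiable quasiconcave
# function with a nonvanishing gradient (here (y, x) != 0) is pseudoconcave.
```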
In short, CCLP problems are optimization problems that admit the following representation:

Problem P:

z* := min {F(u(x),v(x)): Ax = b, x >= 0}

where u and v are affine functions on R^n and F is pseudoconcave on a suitable subset W of R^2.

Let grad_F(w(x)) denote the gradient of F with respect to w at w(x) = (u(x),v(x)). For example, for

F(u(x),v(x)) = u(x) - v^2(x) (9)
we have
grad_F(u(x),v(x)) = (1, -2v(x)) (10)
Don't be offended if we draw your attention to the fact that grad_F(u(x),v(x)) denotes the gradient of F with respect to w = (u,v), not with respect to x. Some referees of OR journals don't get this message the first time!
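If you ever want to double-check such a gradient mechanically, here is a small sketch (Python; the point w is made up) comparing (10) with a central finite-difference approximation:

```python
# Check the gradient formula (10) for F(u,v) = u - v^2 numerically.
import numpy as np

def F(w):
    u, v = w
    return u - v**2

def grad_F(w):
    u, v = w
    return np.array([1.0, -2.0 * v])     # (1, -2v), as in (10)

w = np.array([3.0, 1.5])                  # a made-up value of (u(x), v(x))
h = 1e-6
numeric = np.array([(F(w + h * e) - F(w - h * e)) / (2 * h)
                    for e in np.eye(2)])
print(grad_F(w), numeric)                 # both approximately [1, -3]
```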

In any case, CCLP tackles this problem by linearizing F with respect to w, namely via the following parametric problem:

Problem P(p): p in R^2

z*(p) := min {pw(x) : Ax = b, x >= 0}

where pw(x) denotes the inner product of vectors p and w(x) := (u(x),v(x)), namely

pw(x) := p_1 u(x) + p_2 v(x) (11)
Purists should note that the parameter p is regarded as a row vector.

The point to observe is that for each given value of p the parametric problem is a standard linear programming problem, and thus can be solved rather easily by conventional LP methods. You might have guessed already that c-programming solves Problem P via its parametric problem, namely via Problem P(p).
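To make this concrete, here is a minimal sketch (Python with scipy; the data A, b, c, d and the parameter p are all made up) of solving Problem P(p) for one fixed p, taking u(x) := c^T x and v(x) := d^T x so that the LP cost vector is simply p_1 c + p_2 d:

```python
# Solve Problem P(p) for one fixed p with an off-the-shelf LP solver.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],            # made-up constraint data
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 1.0, 3.0])             # u(x) := c^T x
d = np.array([1.0, 4.0, 2.0])             # v(x) := d^T x
p = np.array([0.5, 1.0])                  # one fixed value of the parameter

cost = p[0] * c + p[1] * d                # objective of P(p): p_1 u(x) + p_2 v(x)
res = linprog(cost, A_eq=A, b_eq=b, bounds=(0, None))
print(res.x, res.fun)                     # an optimal x and the value z*(p)
```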

Thus, the following should not come as a surprise:

The Fundamental Theorem of CCLP:

If Problem P has an optimal solution, say x*, then there exists a p in R^2, say p*, such that any optimal solution to Problem P(p*) is also optimal for Problem P.

In fact,

p* := grad_F(w(x*))

is such a p.

Furthermore, x* must be optimal with respect to Problem P(p*).

What emerges then is that all we have to do to solve Problem P is solve Problem P(p) for p = grad_F(w(x*)) where x* can be any optimal solution for Problem P. There is good news and bad news with regard to this naive solution strategy.

Bad News:

The strategy outlined above has a major public relations problem: to apply it you need to have at hand an optimal solution to Problem P, the very problem that you wish to solve in the first place!

Furthermore, even if by pure luck you happen to somehow guess the desired value of p, you'll not be able to confirm this fact on the spot. What a pity!

The implication is that in general you have to solve the parametric problem for all p in some (potentially large) subset P of R^2.

Good News:

It is not difficult to solve the parametric problem for all values of p in P using conventional parametric linear programming techniques, even if P is a large set. Hence, in practice it is sufficient to make use of the first (existence) part of the Fundamental Theorem.

The following is then a typical CCLP problem:
z* := min {u(x)v(x): Ax = b, x >= 0} (12)
where u and v are strictly positive affine functions on X.

And this is another typical example:

z* := min {u(x) - e^{v(x)}: Ax = b, x >= 0} (13)
where u and v are arbitrary affine functions on X.

And here is another typical example:

z* := min {log(u(x))/v(x): Ax = b, x >= 0} (14)
where u and v are strictly positive affine functions on X.
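For concreteness, the gradients that the Fundamental Theorem asks us to evaluate in these three examples are a routine calculus exercise (written out here for convenience; they do not appear in the original formulation):

```latex
% Gradients of F with respect to w = (u, v) for examples (12)-(14):
\begin{align*}
F(u,v) &= uv            & \nabla F(u,v) &= (v,\ u) \\
F(u,v) &= u - e^{v}     & \nabla F(u,v) &= (1,\ -e^{v}) \\
F(u,v) &= \log(u)/v     & \nabla F(u,v) &= \Bigl(\frac{1}{uv},\ -\frac{\log(u)}{v^{2}}\Bigr)
\end{align*}
```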

To fully appreciate why it is so easy to solve such problems with conventional linear programming tools, note that the parametric problems of CCLP for all these cases can be rewritten as follows:

z*(p) := min {p_1 u(x) + p_2 v(x): Ax = b, x >= 0} (15)
Although it involves two parameters, namely p_1 and p_2, it is equivalent (divide the objective by p_1, which is positive in the examples above, and set t := p_2/p_1) to the following "single" parameter formulation:
z*(t) := min {u(x) + tv(x): Ax = b, x >= 0} (16)
And since u and v are affine, the problem in fact takes the following very familiar form:
z*(t) := min {(c^T + td^T)x: Ax = b, x >= 0} (17)
where t is varied over some interval of the real line.
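And here is a naive end-to-end sketch (Python with scipy; all data are made up, and a crude grid sweep over t stands in for a genuine parametric post-optimality analysis of the simplex method) of this single-parameter strategy, applied to the objective of (3):

```python
# Sweep the parameter t of (17), solve the LP for each t, and keep the
# candidate that minimizes the true composite objective of (3).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],            # made-up problem data
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 1.0, 3.0])
d = np.array([1.0, 4.0, 2.0])

def f(x):                                  # f(x) = log(c^T x) + (d^T x)^(1/2)
    return np.log(c @ x) + np.sqrt(d @ x)

best_x, best_f = None, np.inf
for t in np.linspace(0.0, 5.0, 101):       # crude grid over the interval of t
    res = linprog(c + t * d, A_eq=A, b_eq=b, bounds=(0, None))
    if res.success and f(res.x) < best_f:
        best_x, best_f = res.x, f(res.x)

print(best_x, best_f)                      # best candidate found on the grid
```

A proper implementation would, as the textbooks cited below explain, let the simplex method identify the breakpoints of t at which the optimal basis changes, so that only finitely many bases need to be examined.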

The mechanics of solving this parametric problem appear in introductory OR textbooks in chapters dealing with sensitivity analysis techniques of the simplex method (e.g. Hillier and Lieberman [1990, p. 98, pp. 187-190], Schrage [1991, pp. 41-45]).

So here is the bottom line:

Using familiar sensitivity analysis techniques of the simplex method, CCLP tackles, and successfully solves, large-scale optimization problems possessing highly nonlinear objective functions.