We stress at the outset that the extension provided by CCLP is in the objective function of Linear Programming problems, rather than in the constraints. Thus, we are dealing here with optimization problems whose set of feasible solutions, call it X, is of the following familiar form:
    X := {x in R^n : Ax = b, x >= 0}    (1)

The objective function, on the other hand, is a composition of a function F with two real-valued functions u and v of x:

    f(x) := F(u(x), v(x)) ,  x in X    (2)
For example, the following is a typical CCLP objective function:
    f(x) := log(c^T x) + (d^T x)^{1/2}    (3)

in which case

    u(x) := c^T x    (4)
    v(x) := d^T x    (5)
    F(w) := log(w_1) + (w_2)^{1/2} ,  w in W    (6)
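To make the composite structure concrete, here is a tiny numerical sketch of (3)-(6); the vectors c and d and the point x are invented purely for illustration:

    import numpy as np

    # Invented data, for illustration only.
    c = np.array([2.0, 1.0, 3.0])
    d = np.array([1.0, 4.0, 0.5])
    x = np.array([1.0, 2.0, 1.0])        # some point x

    u = c @ x                            # u(x) := c^T x, as in (4)
    v = d @ x                            # v(x) := d^T x, as in (5)
    F = np.log(u) + np.sqrt(v)           # F(w) := log(w_1) + w_2^{1/2}, as in (6)

    f = np.log(c @ x) + np.sqrt(d @ x)   # f(x) computed directly from (3)
    assert np.isclose(F, f)              # the two routes agree: f(x) = F(u(x), v(x))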
We also suggest that you don't panic if you have never heard of pseudoconcavity. It is a simple and useful generalization of concavity: every concave function is, in particular, pseudoconcave. Moreover, you have probably come across pseudoconcave functions even if your OR career is short. For example, the real-valued function defined by
    g(x,y) := xy    (7)

on the set

    {(x,y) in R^2 : x > 0, y > 0}    (8)

is pseudoconcave, even though it is not concave there.
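If you want to see this at work numerically, here is a crude random-sampling check (an illustration of ours, not a proof) of the pseudoconcavity condition for g on the open positive quadrant: whenever grad_g(a)·(b - a) <= 0, we must find g(b) <= g(a).

    import numpy as np

    rng = np.random.default_rng(0)

    def g(p):
        return p[0] * p[1]

    def grad_g(p):
        return np.array([p[1], p[0]])

    # Sample point pairs in the open positive quadrant and test the
    # pseudoconcavity implication: grad_g(a).(b - a) <= 0  =>  g(b) <= g(a).
    for _ in range(100_000):
        a = rng.uniform(0.01, 10.0, size=2)
        b = rng.uniform(0.01, 10.0, size=2)
        if grad_g(a) @ (b - a) <= 0:
            assert g(b) <= g(a) + 1e-6, (a, b)
    print("no counterexample found")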
The generic CCLP problem can thus be stated as follows:

    Problem P:  z* := min {F(u(x), v(x)) : Ax = b, x >= 0}
where F is assumed to be differentiable and pseudoconcave on the relevant domain. Let grad_F(w(x)) denote the gradient of F with respect to w at w(x) := (u(x), v(x)). For example, for
    F(u(x), v(x)) = u(x) - v^2(x)    (9)

we have

    grad_F(u(x), v(x)) = (1, -2v(x))    (10)
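As a quick sanity check, (10) can be confirmed by central finite differences at an arbitrary point (u, v); this is a sketch of ours, not part of CCLP itself:

    import numpy as np

    def F(w):
        return w[0] - w[1] ** 2               # F(u, v) = u - v^2, as in (9)

    w = np.array([3.0, 2.0])                  # arbitrary point (u, v)
    analytic = np.array([1.0, -2.0 * w[1]])   # (1, -2v), as in (10)

    h = 1e-6
    numeric = np.array([
        (F(w + h * e) - F(w - h * e)) / (2 * h)   # central difference per coordinate
        for e in np.eye(2)
    ])
    assert np.allclose(analytic, numeric, atol=1e-6)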
In any case, CCLP tackles this problem by linearizing F with respect to w = (u,v), namely via the following parametric problem:
    Problem P(p), p in R^2:  z*(p) := min {p·w(x) : Ax = b, x >= 0}
where p·w(x) denotes the inner product of the vectors p and w(x) := (u(x), v(x)), namely

    p·w(x) := p_1 u(x) + p_2 v(x)    (11)
The point to observe is that for each given value of p the parametric problem is a standard linear programming problem, and thus can be solved rather easily by conventional LP methods. You might have guessed already that c-programming solves Problem P via its parametric problem, namely via Problem P(p).
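For instance, when u(x) = c^T x and v(x) = d^T x as in (4)-(5), the cost vector of Problem P(p) is simply p_1 c + p_2 d, and any LP solver will do. Here is a minimal sketch with scipy; the data, and the helper name solve_P_of_p, are invented for illustration:

    import numpy as np
    from scipy.optimize import linprog

    # Invented data, for illustration only.
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([4.0, 3.0])
    c = np.array([2.0, 1.0, 3.0])
    d = np.array([1.0, 4.0, 0.5])

    def solve_P_of_p(p):
        """Solve Problem P(p): min p_1 c^T x + p_2 d^T x  s.t.  Ax = b, x >= 0."""
        res = linprog(c=p[0] * c + p[1] * d, A_eq=A, b_eq=b,
                      bounds=[(0, None)] * A.shape[1], method="highs")
        return res.x, res.fun

    x_p, z_p = solve_P_of_p(np.array([1.0, 0.5]))   # one particular choice of p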
Thus, the following should not come as a surprise:
Fundamental Theorem: If Problem P has an optimal solution, say x*, then there exists a p in R^2, say p*, such that any optimal solution to Problem P(p*) is also optimal for Problem P. In fact, p* = grad_F(u(x*), v(x*)) is such a p. Furthermore, x* itself must be optimal with respect to Problem P(p*).
What emerges, then, is that all we have to do to solve Problem P is solve Problem P(p) for p = grad_F(w(x*)), where x* is any optimal solution of Problem P. There is good news and bad news regarding this naive solution strategy.
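Still, the theorem invites an obvious heuristic: guess p, solve P(p), re-evaluate the gradient at the resulting solution, and repeat. The following sketch, our own illustration using F(u,v) = u - v^2 from (9) and the invented data of the previous sketch, does just that. A fixed point of this loop satisfies the necessary condition of the theorem, but, as the bad news below makes plain, there is no guarantee it solves Problem P:

    import numpy as np
    from scipy.optimize import linprog

    # Same invented data as in the previous sketch.
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([4.0, 3.0])
    c = np.array([2.0, 1.0, 3.0])
    d = np.array([1.0, 4.0, 0.5])

    def grad_F(u, v):
        return np.array([1.0, -2.0 * v])      # gradient of F(u,v) = u - v^2, per (10)

    p = np.array([1.0, 0.0])                  # arbitrary starting guess for p
    for _ in range(20):
        res = linprog(c=p[0] * c + p[1] * d, A_eq=A, b_eq=b,
                      bounds=[(0, None)] * 3, method="highs")
        x = res.x
        p_new = grad_F(c @ x, d @ x)          # re-linearize at the new solution
        if np.allclose(p_new, p):
            break                             # x solves P(p) with p = grad_F(w(x))
        p = p_new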
Bad News: The strategy outlined above has a major public-relations problem: to apply it you need to have at hand an optimal solution to Problem P, the very problem that you wish to solve in the first place!
Furthermore, even if by pure luck you happen to somehow guess the desired value of p, you will not be able to confirm this fact on the spot. What a pity!
The implication is that, in general, you have to solve the parametric problem for all p in some (potentially large) subset P of R^2.
Good News:
It is not difficult to solve the parametric problem for all values of p in P using conventional parametric linear programming techniques, even if P is a large set. Hence, in practice it is sufficient to make use of the first (existence) part of the Fundamental Theorem. (A crude illustration of such a parameter scan appears after (17) below.)

The following is then a typical CCLP problem:

    z* := min {u(x)v(x) : Ax = b, x >= 0}    (12)
And this is another typical example:
    z* := min {u(x) - e^{v(x)} : Ax = b, x >= 0}    (13)
And here is a third:
    z* := min {log(u(x))/v(x) : Ax = b, x >= 0}    (14)
To fully appreciate why it is so easy to solve such problems with conventional linear programming tools, note that the parametric problems of CCLP for all these cases can be rewritten as follows:
    z*(p) := min {p_1 u(x) + p_2 v(x) : Ax = b, x >= 0}    (15)

which, so long as the first component of p is positive, can be normalized (set β := p_2/p_1) to the single-parameter problem

    z*(β) := min {u(x) + βv(x) : Ax = b, x >= 0}    (16)

and, recalling (4)-(5), when u and v are linear this is simply

    z*(β) := min {(c^T + βd^T)x : Ax = b, x >= 0}    (17)
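A full parametric (cost-row) analysis would identify exactly the finitely many breakpoints of β at which the optimal basis changes; as a crude stand-in, the following sketch of ours simply scans a grid of β values, solves the LP (17) for each, evaluates the true objective at the resulting solution, and keeps the best. It reuses the invented data above and, for concreteness, the objective (13), F(u,v) = u - e^v:

    import numpy as np
    from scipy.optimize import linprog

    # Same invented data as above.
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([4.0, 3.0])
    c = np.array([2.0, 1.0, 3.0])
    d = np.array([1.0, 4.0, 0.5])

    best = None
    for beta in np.linspace(-10.0, 10.0, 201):        # crude grid over beta
        res = linprog(c=c + beta * d, A_eq=A, b_eq=b,
                      bounds=[(0, None)] * 3, method="highs")
        value = (c @ res.x) - np.exp(d @ res.x)       # true objective (13): u - e^v
        if best is None or value < best[0]:
            best = (value, beta, res.x)

    z_star, beta_star, x_star = best
    print(f"best value {z_star:.4f} at beta {beta_star:.2f}, x = {x_star}")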
The mechanics of solving this parametric problem appear in introductory OR textbooks, in chapters dealing with sensitivity analysis techniques for the simplex method (e.g. Hillier and Lieberman [1990, p. 98, pp. 187-190], Schrage [1991, pp. 41-45]).
So here is the bottom line:
Using familiar sensitivity analysis techniques of the simplex method, CCLP tackles, and successfully solves, large-scale optimization problems possessing highly nonlinear objective functions.