Mathematical Optimization: Methods

Numerical analysis

If an optimization problem \( \min f(x), \; x \in D \) is formulated, it still needs to be solved. The exact process of solving it depends on the structure of the function \( f(x) \) and the search space \( D \). If both are nonlinear, the problem may not be solvable with current methods. In any case, solving optimization problems falls within the domain of numerical analysis, the branch of mathematics that deals with the construction of algorithms for solving mathematical problems.
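
As a hedged illustration, the following Python sketch formulates one such problem, a smooth \(f\) over a box-shaped \(D\), and hands it to SciPy's general-purpose solver. The objective, bounds, and starting point are made-up example data, not taken from the text above.

```python
# Minimal sketch: numerically solving min f(x), x in D, where D is the box [0, 3] x [0, 3].
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative objective; any smooth function of x could stand in here.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

x0 = np.array([2.0, 0.0])             # starting point for the iterative scheme
bounds = [(0.0, 3.0), (0.0, 3.0)]     # the search space D as box constraints
result = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
print(result.x, result.fun)           # approximate minimizer and optimal value
```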

The development of numerical algorithms is a challenging task that requires trade-offs regarding rounding errors, computational intensity, memory requirements, runtime, correctness, and reliability of the solution proposed by the algorithm. Typically, such an algorithm processes the problem data \(f\) and \(D\) according to a nonlinear and iterative scheme, which has significant potential for errors.
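
The sketch below shows the kind of nonlinear, iterative scheme meant here in its simplest form, plain gradient descent; the step size, tolerance, and iteration cap are illustrative choices and exactly the sort of trade-offs between runtime, accuracy, and reliability mentioned above.

```python
# Hypothetical sketch of an iterative scheme: plain gradient descent on a smooth f.
import numpy as np

def gradient_descent(grad_f, x0, step=0.1, tol=1e-8, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:   # stopping test, ultimately limited by rounding errors
            break
        x = x - step * g              # one iteration of the scheme
    return x

# Example: gradient of f(x) = ||x - (1, 2.5)||^2
x_star = gradient_descent(lambda x: 2 * (x - np.array([1.0, 2.5])), [0.0, 0.0])
print(x_star)
```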

Classes of optimization problems

Fortunately, for certain archetypal classes of functions \(f\) and search spaces \(D\), reliable solution algorithms have already been developed and implemented as computer programs. Consequently, an optimization problem is most conveniently solved when it can be reformulated as an instance of a class of optimization problems for which solution algorithms are available. These include the classes shown in the following overview.

Figure: Overview of some reliably solvable problems. LP = Linear Programming, QP = Quadratic Programming, QCQP = Quadratically Constrained Quadratic Programming, SOCP = Second-Order Cone Programming, SDP = Semidefinite Programming.
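
As a sketch of this workflow, the following example states a tiny LP in the modeling library CVXPY (an assumption for illustration; the library is not mentioned in the text) and lets it dispatch the problem to an installed LP solver.

```python
# Sketch: once a problem is recognized as an LP, an off-the-shelf solver handles it reliably.
import cvxpy as cp

x = cp.Variable(2)
objective = cp.Minimize(2 * x[0] + 3 * x[1])   # linear objective
constraints = [x[0] + x[1] >= 1, x >= 0]       # linear constraints describing D
problem = cp.Problem(objective, constraints)
problem.solve()                                # dispatches to an available LP solver
print(problem.value, x.value)                  # optimal value and minimizer
```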

Relationship between the classes

If the optimization problem

$$ \begin{align} \min_x ~~~&f(x) \\ \text{s.t.} ~~~&x \in D \end{align}$$

can be formulated as an LP, QP, QCQP, SOCP, or SDP, i.e., in terms of certain polynomials of at most second order, then it can be reliably solved with limited computational effort. Other problem classes exist, such as Geometric Programming [1, p. 160], log-log convex programming [2], or the Maxdet problem [3], but the aforementioned classes already cover a wide spectrum.
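
The same modeling style carries over to the other classes. As a further hedged example, here is a small convex QP with an illustrative positive-definite matrix \(P\); the data are again made up.

```python
# Sketch: a small convex QP (quadratic objective, linear constraints) in CVXPY.
import cvxpy as cp
import numpy as np

P = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite, so the QP is convex
q = np.array([1.0, -1.0])
x = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [cp.sum(x) == 1, x >= 0]
cp.Problem(objective, constraints).solve()
print(x.value)
```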

Additionally, dynamic programming, which deals with the optimization of sequences of decisions, is of great practical importance in operational optimization. For many of the above classes, the further constraint \( x \in \mathbb{Z}^n \) can also be added, i.e., \(x\) is required to be integral. Such conditions are sensible when \(x\) represents discrete, indivisible units such as logical yes-no decisions. However, the solution algorithms then become considerably less reliable, and no general guarantees can be given. The problem becomes non-convex and NP-hard, just like so-called black-box optimization for functions without known structure.
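
To make the dynamic-programming idea concrete, here is a self-contained sketch of a sequence of yes/no decisions solved by dynamic programming, a tiny 0/1 knapsack with made-up item data; it is only an illustration, not a method taken from the text.

```python
# Sketch: dynamic programming over a sequence of yes/no decisions (0/1 knapsack).
def knapsack(values, weights, capacity):
    # best[c] = best total value achievable with capacity c using the items seen so far
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # iterate backwards so each item is taken at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # -> 22
```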

Outlook

On the subordinate pages, you will find information about LP, QP, QCQP, SOCP, SDP, and their mixed-integer variants, as well as about dynamic programming and black-box optimization.

Sources

[1] Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge: Cambridge University Press.

[2] Agrawal, A., Diamond, S., & Boyd, S. (2019). Disciplined geometric programming. Optimization Letters, 13(5), 961–976.

[3] Vandenberghe, L., Boyd, S., & Wu, S. (1998). Determinant Maximization with Linear Matrix Inequality Constraints. SIAM Journal on Matrix Analysis and Applications, 19(2), 499–533.