Deep Learning for Two-Stage Robust Integer Optimization
& Neur2RO: Neural Two-Stage Robust Optimization
Justin Dumouchelle1
Esther Julien2
Jannis Kurtz3
Elias B. Khalil1

1University of Toronto, 2TU Delft, 3University of Amsterdam
justin.dumouchelle@mail.utoronto.ca

[Extended Paper]  
[ICLR Paper]  
[GitHub]  
[Video]  
[Poster]  


Overview of Neur2RO, a fast learning-based approximation to column-and-constraint generation (CCG) for solving two-stage robust optimization problems. In each iteration, a main problem (box (a)) is solved to find a good first-stage solution x for the set of scenarios identified so far (initially empty). Then, an adversarial problem (box (c)) is solved to find a new scenario under which x is no longer feasible in the main problem. If no such scenario exists, then x is optimal and CCG terminates. Otherwise, the adversarial scenario is added to the set of worst-case scenarios and we return to the main problem. For each of the main and adversarial problems, we show two versions: classical (CCG, boxes (a) and (c)) and learning-augmented (Neur2RO, dashed boxes (b) and (d)).
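The iteration described in the caption can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the real main and adversarial problems are mixed-integer programs, and here only the objective side is shown (feasibility cuts work analogously). The `candidates`, `scenarios`, and `cost` names are hypothetical stand-ins over finite sets.

```python
# Illustrative sketch of the column-and-constraint-generation (CCG) loop.
# In the actual method, the main problem (a) and adversarial problem (c)
# are MILPs; Neur2RO replaces their second-stage components with a
# neural-network surrogate (dashed boxes (b) and (d)).

def ccg(candidates, scenarios, cost, tol=1e-9, max_iters=100):
    """Generic CCG over finite candidate/scenario sets.

    cost(x, xi) -> objective of first-stage decision x under scenario xi.
    Returns (best first-stage decision, its worst-case cost).
    """
    S = [scenarios[0]]  # scenarios identified so far
    for _ in range(max_iters):
        # (a) main problem: best x for the scenarios found so far
        x = min(candidates, key=lambda c: max(cost(c, s) for s in S))
        ub_main = max(cost(x, s) for s in S)
        # (c) adversarial problem: worst scenario for this x
        xi = max(scenarios, key=lambda s: cost(x, s))
        if cost(x, xi) <= ub_main + tol:  # no violating scenario: x optimal
            return x, ub_main
        S.append(xi)  # add the adversarial scenario and iterate
    return x, ub_main

# Toy instance: decision x should hedge against the worst scenario s.
x_star, val = ccg(candidates=[0, 1, 2], scenarios=[0, 1, 2],
                  cost=lambda x, s: abs(x - s))
# x = 1 is the robust choice: its worst-case cost over s in {0, 1, 2} is 1.
```

On this toy instance, CCG starts from scenario 0, discovers scenario 2 as adversarial, and then certifies x = 1 as optimal in the next iteration.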


Abstract

Robust optimization is an established framework for modeling optimization problems with uncertain parameters. While static robust optimization is often criticized for being too conservative, two-stage (or adjustable) robust optimization (2RO) provides a less conservative alternative by allowing some decisions to be made after the uncertain parameters have been revealed. Unfortunately, in the case of integer decision variables, existing solution methods for 2RO typically fail to solve large-scale instances, limiting the applicability of this modeling paradigm to simple cases. We propose Neur2RO, a deep-learning-augmented instantiation of the column-and-constraint-generation (CCG) algorithm, which expands the applicability of the 2RO framework to large-scale instances with integer decisions in both stages. A custom-designed neural network is trained to estimate the optimal value and feasibility of the second-stage problem. The network can be incorporated into CCG, leading to more computationally tractable subproblems in each of its iterations. The resulting algorithm enjoys approximation guarantees which depend on the neural network's prediction error. In our experiments, Neur2RO produces high-quality solutions quickly, outperforming state-of-the-art methods on two-stage knapsack, capital budgeting, and facility location problems. Compared to existing methods, which often run for hours, Neur2RO finds better solutions in a few seconds or minutes.
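The key idea in the abstract is to replace the expensive exact second-stage solve with a learned estimator when evaluating first-stage decisions inside CCG. The following is a minimal, hypothetical illustration of that substitution; a nearest-sample lookup table stands in for the paper's custom neural network, and `exact_second_stage` is an invented closed-form placeholder for what would really be an integer program.

```python
# Hypothetical illustration of the surrogate idea: instead of solving the
# second-stage problem exactly for every (x, scenario) pair, query a model
# trained offline on sampled instances.

def exact_second_stage(x, xi):
    # Placeholder for an expensive second-stage integer program:
    # minimize a cost over the integer recourse decision y.
    return min((x - y) ** 2 + xi * y for y in range(5))

# "Train" a cheap surrogate by sampling (x, scenario) pairs offline.
samples = {(x, xi): exact_second_stage(x, xi)
           for x in range(5) for xi in (0, 1, 2)}

def surrogate(x, xi):
    # Nearest-sample lookup: a stand-in for a learned value predictor.
    key = min(samples, key=lambda k: (k[0] - x) ** 2 + (k[1] - xi) ** 2)
    return samples[key]

# Evaluate a candidate first-stage decision under its worst scenario,
# using the surrogate instead of the exact second-stage solve.
worst_est = max(surrogate(2, xi) for xi in (0, 1, 2))
```

Because the surrogate is cheap to evaluate, the CCG subproblems that embed it become more tractable; the paper's approximation guarantees then depend on how accurate the learned predictions are.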







[Journal Extension of the Paper]

J. Dumouchelle, E. Julien, J. Kurtz, E. B. Khalil
Deep Learning for Two-Stage Robust Integer Optimization
[Under Review] preprint: arXiv:2310.04345, 2024
(hosted on arXiv)





[ICLR Version of the Paper]

J. Dumouchelle, E. Julien, J. Kurtz, E. B. Khalil
Neur2RO: Neural Two-Stage Robust Optimization
In ICLR, 2024
(hosted on OpenReview)





Bibtex

@article{dumouchelle2024deeplearningtwostagerobust,
	title={Deep Learning for Two-Stage Robust Integer Optimization},
	author={Justin Dumouchelle and Esther Julien and Jannis Kurtz and Elias B. Khalil},
	journal={arXiv preprint arXiv:2310.04345},
	year={2024}
}
@inproceedings{dumouchelle2023neur2ro,
	title={Neur2{RO}: Neural Two-Stage Robust Optimization},
	author={Dumouchelle, Justin and Julien, Esther and Kurtz, Jannis and Khalil, Elias Boutros},
	booktitle={The Twelfth International Conference on Learning Representations},
	year={2024}
}