Volume 2017
http://edoc.hu-berlin.de/18452/632

A randomized method for handling a difficult function in a convex optimization problem, motivated by probabilistic programming
http://edoc.hu-berlin.de/18452/19084
2017-09-26
Fábián, Csaba I.; Szántai, Tamás
We propose a randomized gradient method for handling a convex function whose gradient computation is demanding. The method bears a resemblance to the stochastic approximation family, but in contrast to stochastic approximation, it builds a model problem. The approach requires that estimates of function values and gradients be provided at the iterates. We present a variance reduction Monte Carlo simulation procedure to provide such estimates in the case of certain probabilistic functions.
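The paper's specific simulation procedure for probabilistic functions is not reproduced in this abstract; as a generic illustration of variance-reduced Monte Carlo estimation of a function value, the sketch below uses antithetic variates, which reduce variance for monotone integrands by pairing each sample u with 1 - u. The integrand g and all names are illustrative, not from the paper.

```python
import random

def crude_mc(g, n, rng):
    # plain Monte Carlo estimate of E[g(U)], U ~ Uniform(0, 1)
    return sum(g(rng.random()) for _ in range(n)) / n

def antithetic_mc(g, n, rng):
    # antithetic variates: pair each sample u with 1 - u; for a
    # monotone g the pair members are negatively correlated,
    # which lowers the variance of the averaged estimate
    total = 0.0
    pairs = n // 2
    for _ in range(pairs):
        u = rng.random()
        total += g(u) + g(1.0 - u)
    return total / (2 * pairs)

rng = random.Random(42)
g = lambda u: u ** 2          # E[g(U)] = 1/3 exactly
est = antithetic_mc(g, 10_000, rng)
```

In a setting like the paper's, such an estimator would be evaluated at each iterate to supply the function-value and gradient estimates the model-building method consumes.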
Learning Enabled Optimization: Towards a Fusion of Statistical Learning and Stochastic Optimization
http://edoc.hu-berlin.de/18452/18756
2017-07-31
Sen, Suvrajeet; Deng, Yunxiao
Several emerging applications, such as "Analytics of Things" and "Integrative Analytics", call for a fusion of statistical learning (SL) and stochastic optimization (SO). The Learning Enabled Optimization paradigm fuses concepts from these disciplines in a manner which not only enriches both SL and SO, but also provides a framework which supports rapid model updates and optimization, together with a methodology for rapid model validation, assessment, and selection. Moreover, in many big data/big decisions applications these steps are repetitive, and possible only through a continuous cycle involving data analysis, optimization, and validation. This paper sets forth the foundation for such a framework by introducing several novel concepts such as statistical optimality, hypothesis tests for model fidelity, generalization error of stochastic optimization, and finally, a non-parametric methodology for model selection. These new concepts provide a formal framework for modeling, solving, validating, and reporting solutions for Learning Enabled Optimization (LEO). We illustrate the LEO framework by applying it to an inventory control model in which we use demand data available for ARIMA modeling in the statistical package "R". In addition, we also study a production-marketing coordination model based on combining a pedagogical production planning model with an advertising data set intended for sales prediction. Because the approach requires the solution of several stochastic programming instances, some using continuous random variables, we leverage stochastic decomposition (SD) for the fusion of regression and stochastic linear programming. In this sense, the novelty of this paper is its framework, rather than a specific new algorithm. Finally, we present an architecture of a software framework to bring about the fusion we envision.
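The learn-then-optimize cycle described above can be illustrated in miniature with the classical newsvendor model the series revisits: fit an empirical demand distribution from data (the "learning" step), then order the critical-fractile quantile (the "optimization" step). The demand history, price, and cost below are hypothetical, and the empirical-quantile rule is a stand-in for the paper's richer ARIMA/SD machinery.

```python
# hypothetical demand history (the paper fits ARIMA models to real
# data; an i.i.d. sample keeps the sketch self-contained)
demand = [102, 95, 110, 98, 105, 90, 108, 99, 104, 97]

price, cost = 5.0, 3.0                    # illustrative sell price / unit cost
critical_ratio = (price - cost) / price   # newsvendor critical fractile

# "learning" step: empirical demand distribution
sorted_d = sorted(demand)

# "optimization" step: order the critical-ratio quantile of demand
k = min(len(sorted_d) - 1, int(critical_ratio * len(sorted_d)))
order_qty = sorted_d[k]
```

Validation would close the loop: the LEO framework's hypothesis tests and generalization-error estimates assess whether the fitted demand model justifies the resulting decision.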
Quantitative Stability Analysis for Minimax Distributionally Robust Risk Optimization
http://edoc.hu-berlin.de/18452/9107
2017-04-19
Pichler, Alois; Xu, Huifu
Higle, Julie L.; Römisch, Werner; Sen, Suvrajeet
This paper considers distributionally robust formulations of a two-stage stochastic programming problem with the objective of minimizing a distortion risk of the minimal cost incurred at the second stage. We carry out stability analysis by looking into variations of the ambiguity set under the Wasserstein metric, the decision spaces at both stages, and the support set of the random variables. In the risk-neutral case, the stability result is presented with the variation of the ambiguity set measured by generic metrics of ζ-structure, which provides a unified framework for quantitative stability analysis under various metrics, including the total variation metric and the Kantorovich metric. When the ambiguity set is structured by a ζ-ball, we find that the Hausdorff distance between two ζ-balls is bounded by the distance of their centres and the difference of their radii. The findings allow us to strengthen some recent convergence results on distributionally robust optimization where the centre of the Wasserstein ball is constructed from the empirical probability distribution.
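The ball-comparison statement in the abstract can be written out as follows; this is a paraphrase of the abstract, not the paper's exact theorem, with B_ζ(P, r) denoting the ζ-ball of radius r centred at the probability measure P and d_H the Hausdorff distance induced by the ζ-metric:

```latex
d_H\bigl(B_\zeta(P_1, r_1),\, B_\zeta(P_2, r_2)\bigr)
  \;\le\; \zeta(P_1, P_2) + \lvert r_1 - r_2 \rvert .
```

Bounds of this form are what let convergence of the ball centres (e.g. empirical distributions converging under the Wasserstein metric) be transferred to convergence of the ambiguity sets themselves.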
Optimal scenario generation and reduction in stochastic programming
http://edoc.hu-berlin.de/18452/9106
2017-04-19
Henrion, René; Römisch, Werner
Higle, Julie L.; Römisch, Werner; Sen, Suvrajeet
Scenarios are indispensable ingredients for the numerical solution of stochastic optimization problems. Earlier approaches to optimal scenario generation and reduction are based on stability arguments involving distances of probability measures. In this paper we review those ideas and suggest making use of stability estimates based on distances containing minimal information, i.e., on data appearing in the optimization model only. For linear two-stage stochastic programs we show that the optimal scenario generation problem can be reformulated as a best-approximation problem for the expected recourse function and as a generalized semi-infinite program, respectively. The latter model turns out to be convex if either the right-hand sides or the costs are random. We also review the problems of optimal scenario reduction for two-stage models and of optimal scenario generation for chance-constrained programs. Finally, we consider scenario generation and reduction for the classical newsvendor problem.
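A common scenario-reduction heuristic in this literature is greedy forward selection with probability redistribution: repeatedly keep the scenario that most reduces the probability-weighted transport cost of the full set to the kept set, then move each deleted scenario's mass to its nearest kept neighbour. The sketch below is a generic one-dimensional version under an absolute-value cost, not the paper's exact algorithm, and all names are illustrative.

```python
def forward_select(scenarios, probs, k):
    """Greedy forward scenario selection (sketch).

    At each step, add the scenario that most reduces the
    probability-weighted distance of all scenarios to the kept set,
    then redistribute deleted probability mass to nearest neighbours.
    """
    kept = []
    remaining = list(range(len(scenarios)))
    for _ in range(k):
        best, best_cost = None, float("inf")
        for j in remaining:
            trial = kept + [j]
            # Kantorovich-type cost of representing all scenarios by `trial`
            cost = sum(
                probs[i] * min(abs(scenarios[i] - scenarios[t]) for t in trial)
                for i in range(len(scenarios))
            )
            if cost < best_cost:
                best, best_cost = j, cost
        kept.append(best)
        remaining.remove(best)
    # redistribute the deleted scenarios' probability to the nearest kept one
    new_probs = {j: probs[j] for j in kept}
    for i in remaining:
        nearest = min(kept, key=lambda j: abs(scenarios[i] - scenarios[j]))
        new_probs[nearest] += probs[i]
    return kept, new_probs

scenarios = [0.0, 0.1, 1.0, 1.1, 5.0]
probs = [0.2, 0.2, 0.2, 0.2, 0.2]
kept, new_probs = forward_select(scenarios, probs, 2)
```

On this toy data the procedure keeps one scenario from the tight cluster near 1 and the outlier at 5, and the kept probabilities still sum to one, which is the qualitative behaviour optimal reduction is meant to deliver.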