Dimitris Bertsimas is currently the Boeing Professor of Operations Research and the co-director of the Operations Research Center at the Massachusetts Institute of Technology. He received a BS in Electrical Engineering and Computer Science from the National Technical University of Athens, Greece, in 1985, an MS in Operations Research from MIT in 1987, and a PhD in Applied Mathematics and Operations Research from MIT in 1988. He has been on the MIT faculty since 1988. Since the 1990s, he has founded several successful companies in the areas of financial services, asset management, health care, publishing, analytics and aviation.
His research interests include analytics, optimization and their applications in a variety of industries. He has co-authored more than 170 scientific papers and four textbooks, including the book “The Analytics Edge” published in 2016. He is a former area editor of Operations Research in Financial Engineering and of Management Science in Optimization. He has supervised 57 doctoral students and is currently supervising 16 more.
He is a member of the US National Academy of Engineering and an INFORMS Fellow. He has received several research awards, including the Philip Morse Lectureship Award (2013), the William Pierskalla Award for best paper in health care (2013), the best paper award in Transportation Science (2013), the Farkas Prize (2008), the Erlang Prize (1996), the SIAM Prize in Optimization (1996), the Bodossaki Prize (1998) and the Presidential Young Investigator Award (1991–1996).
Machine Learning and Statistics Via A Modern Optimization Lens
The field of Statistics has historically been linked with Probability Theory. However, some of the central problems of classification, regression and estimation can naturally be written as optimization problems. While continuous optimization approaches have had a significant impact in Statistics, mixed integer optimization (MIO) has played a very limited role, largely because of the belief that MIO models are computationally intractable. The period 1991–2015 has witnessed a) algorithmic advances in MIO, which, coupled with hardware improvements, have resulted in an astonishing 450 billion factor speedup in solving MIO problems, and b) significant advances in our ability to model and solve very high dimensional robust and convex optimization models.
In this talk, we demonstrate that modern convex, robust and especially mixed integer optimization methods, when applied to a variety of classical Machine Learning (ML)/Statistics (S) problems, can lead to certifiably optimal solutions for large scale instances that often have significantly improved out-of-sample accuracy compared to heuristic methods used in ML/S. Specifically, we report results on:
1) The classical variable selection problem in regression, currently solved heuristically by Lasso.
2) A demonstration that robustness, and not sparsity, is the major reason for the success of Lasso, in contrast to widely held beliefs in ML/S.
3) A systematic approach to design linear and logistic regression models based on MIO.
4) Optimal trees for classification, currently solved heuristically by CART.
5) Robust classification, including robust logistic regression, robust optimal trees and robust support vector machines.
6) Sparse matrix estimation problems: Principal Component Analysis, Factor Analysis and Covariance matrix estimation.
In all cases, we demonstrate that optimal solutions to large scale instances (a) can be found in seconds, (b) can be certified to be optimal in minutes and (c) outperform classical approaches. Most importantly, this body of work suggests that linking ML/S to modern optimization will lead to significant advantages.
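To make point 1 concrete: best-subset selection asks for the size-k set of predictors minimizing the least-squares error, an exactly solvable combinatorial problem that Lasso only approximates via an L1 penalty. The sketch below is a minimal, brute-force illustration of the exact problem on a tiny instance (not the talk's MIO formulation, which scales to large p via solvers such as Gurobi); the function name and the synthetic data are illustrative assumptions.

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Exact best-subset selection by enumeration: return the size-k
    column subset of X minimizing the residual sum of squares.
    Feasible only for small p; MIO formulations handle large p."""
    n, p = X.shape
    best_rss, best_support = np.inf, None
    for support in itertools.combinations(range(p), k):
        Xs = X[:, support]
        # Ordinary least squares on the candidate support.
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        if rss < best_rss:
            best_rss, best_support = rss, support
    return best_support, best_rss

# Illustrative usage: recover a planted 2-sparse signal from noisy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = 3.0 * X[:, 1] - 2.0 * X[:, 5] + 0.01 * rng.standard_normal(100)
support, rss = best_subset(X, y, k=2)
print(support)
```

The enumeration above visits all C(p, k) supports, which is exactly the combinatorial explosion that motivated treating the problem as intractable; the talk's thesis is that modern MIO solves the same problem to certified optimality at scales where enumeration is hopeless.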