This work proposes a way to align statistical modeling with decision making. We provide a method that propagates the uncertainty in predictive modeling to the uncertainty in operational cost, where operational cost is the amount spent by the practitioner in solving the problem. The method allows us to explore the range of operational costs associated with the set of reasonable statistical models, providing practitioners with a useful way to understand uncertainty. To do this, the operational cost is cast as a regularization term in the learning algorithm's objective function, allowing either an optimistic or a pessimistic view of possible costs, depending on the regularization parameter. From another perspective, if we have prior knowledge about the operational cost, for instance that it should be low, this knowledge can restrict the hypothesis space and thereby help with generalization. We provide a theoretical generalization bound for this scenario. We also show that learning with operational costs is related to robust optimization.
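As a rough sketch of the core idea (not the paper's actual algorithm), one could fold a user-supplied operational-cost function into an ordinary regularized least-squares objective. The function name, the numerical gradient of the cost term, and all parameter values below are illustrative assumptions:

```python
import numpy as np

def fit_with_operational_cost(X, y, op_cost, lam=0.1, lr=0.01, n_iter=2000):
    """Fit a linear model by minimizing

        mean squared error  +  lam * op_cost(w)

    via gradient descent. A positive lam penalizes high operational
    cost (a pessimistic view); a negative lam would reward it
    (an optimistic view). op_cost is any scalar function of the
    weights; its gradient is approximated numerically here.
    """
    w = np.zeros(X.shape[1])
    eps = 1e-5
    for _ in range(n_iter):
        # gradient of the mean squared error term
        grad_loss = 2.0 * X.T @ (X @ w - y) / len(y)
        # central-difference gradient of the operational-cost term
        grad_cost = np.array([
            (op_cost(w + eps * e) - op_cost(w - eps * e)) / (2 * eps)
            for e in np.eye(len(w))
        ])
        w -= lr * (grad_loss + lam * grad_cost)
    return w
```

With `lam = 0` this reduces to ordinary least squares; increasing `lam` trades predictive fit for lower operational cost, which is one way to read the abstract's claim that cost knowledge restricts the hypothesis space.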