We consider the problem faced by an airline that flies both passengers and cargo over a network of locations on a fixed periodic schedule. Bookings for many classes of cargo shipments between origin-destination pairs in this network are made in advance, but the weight and volume of aircraft capacity available for cargo, as well as the exact weight and volume of each shipment, are not known at the time of booking. The problem is to control cargo accept/reject decisions so as to maximize expected profits while ensuring effective dispatch of accepted shipments through the network. This network stochastic dynamic control problem has very high computational complexity. We propose a computational method, based on linear programming and stochastic simulation, for learning approximate control policies, and discuss their structural properties. The proposed method is flexible and can utilize historical booking data as well as decisions generated by default control policies.
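To make the accept/reject control concrete, the following minimal sketch (not the paper's actual algorithm) illustrates one common form such a policy can take: a booking is accepted when its revenue covers the estimated displacement cost of the weight and volume capacity it is expected to consume, with the uncertainty in shipment size handled by averaging over simulated realizations. The bid prices stand in for approximate marginal values of capacity on the shipment's itinerary; all function and parameter names here are hypothetical.

```python
import random


def expected_displacement_cost(weight_samples, volume_samples,
                               weight_bid_price, volume_bid_price):
    """Monte Carlo estimate of the opportunity cost of accepting a
    shipment whose exact weight and volume are unknown at booking time.

    weight_samples, volume_samples: simulated realizations of the
        shipment's weight and volume (e.g. drawn from historical data).
    weight_bid_price, volume_bid_price: approximate marginal values of
        one unit of weight and volume capacity on the itinerary.
    """
    n = len(weight_samples)
    avg_weight = sum(weight_samples) / n
    avg_volume = sum(volume_samples) / n
    return avg_weight * weight_bid_price + avg_volume * volume_bid_price


def accept_booking(revenue, weight_samples, volume_samples,
                   weight_bid_price, volume_bid_price):
    """Accept the booking iff its revenue covers the estimated
    displacement cost of the capacity it is expected to consume."""
    cost = expected_displacement_cost(weight_samples, volume_samples,
                                      weight_bid_price, volume_bid_price)
    return revenue >= cost


# Illustrative use: shipment weight ~ Uniform(8, 12), volume ~ Uniform(1, 3).
rng = random.Random(0)
weights = [rng.uniform(8.0, 12.0) for _ in range(1000)]
volumes = [rng.uniform(1.0, 3.0) for _ in range(1000)]
decision = accept_booking(revenue=100.0,
                          weight_samples=weights, volume_samples=volumes,
                          weight_bid_price=5.0, volume_bid_price=10.0)
```

In the paper's setting, the bid prices themselves would come from the learned approximate value function rather than being fixed constants, and the simulation would also reflect uncertainty in the aircraft capacity available for cargo.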