Maple: simplifying SDN programming using algorithmic policies

  • Authors:
  • Andreas Voellmy; Junchang Wang; Y. Richard Yang; Bryan Ford; Paul Hudak

  • Affiliations:
  • Yale University, New Haven, CT, USA; Yale University and University of Science and Technology of China; Yale University, New Haven, CT, USA; Yale University, New Haven, CT, USA; Yale University, New Haven, CT, USA

  • Venue:
  • Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM
  • Year:
  • 2013

Abstract

Software-Defined Networking offers the appeal of a simple, centralized programming model for managing complex networks. However, challenges in managing low-level details, such as setting up and maintaining correct and efficient forwarding tables on distributed switches, often compromise this conceptual simplicity. In this paper, we present Maple, a system that simplifies SDN programming by (1) allowing a programmer to use a standard programming language to design an arbitrary, centralized algorithm, which we call an algorithmic policy, to decide the behaviors of an entire network, and (2) providing an abstraction in which the programmer-defined, centralized policy runs, conceptually, "afresh" on every packet entering the network, and hence is oblivious to the challenge of translating a high-level policy into sets of rules on distributed individual switches. To implement algorithmic policies efficiently, Maple includes not only a highly efficient multicore scheduler that scales to controllers with 40+ cores, but, more importantly, a novel tracing runtime optimizer that automatically records reusable policy decisions, offloads work to switches when possible, and keeps switch flow tables up to date by dynamically tracing the dependency of policy decisions on packet contents as well as the environment (system state). Evaluations using real HP switches show that the Maple optimizer reduces HTTP connection time by a factor of 100 at high load. During simulated benchmarking, the Maple scheduler, when not running the optimizer, achieves a throughput of over 20 million new flow requests per second on a single machine, with 95th-percentile latency under 10 ms.
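To make the abstract's central idea concrete, the sketch below illustrates (in Python, not Maple's actual API; all names are hypothetical) what an algorithmic policy and trace-based rule generation might look like: the policy is an ordinary function invoked, conceptually, on every packet, and a toy tracing layer records which header fields the policy actually read, so the decision can be cached as a switch match rule covering all packets that agree on those fields.

```python
# Conceptual sketch only: a hypothetical algorithmic policy plus a toy
# tracing layer, illustrating the idea of recording which header fields
# a policy decision depends on. This is NOT Maple's real interface.

class TracedPacket:
    """Wraps a packet's header fields and records every field the policy reads."""
    def __init__(self, headers):
        self._headers = headers
        self.reads = {}  # field name -> value observed by the policy

    def read(self, field):
        value = self._headers[field]
        self.reads[field] = value
        return value


def policy(pkt):
    """Hypothetical algorithmic policy: drop SSH traffic, forward the rest."""
    if pkt.read("tcp_dst") == 22:
        return "drop"
    return "forward:1"


def compile_rule(traced, action):
    """Turn the recorded reads into a flow-table match so that future packets
    matching the same relevant fields can be handled on the switch."""
    return {"match": dict(traced.reads), "action": action}


if __name__ == "__main__":
    pkt = TracedPacket({"tcp_dst": 22, "ip_src": "10.0.0.1"})
    action = policy(pkt)
    print(compile_rule(pkt, action))
    # Prints: {'match': {'tcp_dst': 22}, 'action': 'drop'}
    # Only tcp_dst appears in the match, because that is the only field
    # this particular decision depended on.
```

The point of the sketch is the dependency tracking: because the policy never inspected ip_src, the generated rule matches on tcp_dst alone, so a single cached rule covers many flows without re-invoking the controller, which is the kind of reuse the paper's tracing runtime optimizer is designed to exploit.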