Design guidelines for robust Internet protocols

  • Authors:
  • Tom Anderson; Scott Shenker; Ion Stoica; David Wetherall

  • Affiliations:
  • University of Washington; ICSI Center for Internet Research; University of California, Berkeley; University of Washington

  • Venue:
  • ACM SIGCOMM Computer Communication Review
  • Year:
  • 2003

Abstract

Robustness has long been a central design goal of the Internet. Much of the initial effort toward robustness focused on the "fail-stop" model, in which node failures are complete and easily detectable by other nodes. The Internet is quite robust against such failures, routinely surviving various catastrophes with only limited outages. This robustness is largely due to the widespread belief in a set of guidelines for critical design decisions, such as where to initiate recovery and how to maintain state.

However, the Internet remains extremely vulnerable to more arbitrary failures in which, through either error or malice, a node issues syntactically correct responses that are not semantically correct. Such failures, some as simple as misconfigured routing state, can seriously undermine the functioning of the Internet. With the Internet playing such a central role in the global telecommunications infrastructure, this level of vulnerability is no longer acceptable.

In this paper we argue that to make the Internet more robust to these kinds of arbitrary failures, we need to change the way we design network protocols. To this end, we propose a set of six design guidelines for improving network protocol design. These guidelines emerged from a study of past failures and of what could have been done to prevent each problem from occurring in the first place. The unifying theme behind the guidelines is that we need to design protocols more defensively, expecting malicious attack, misimplementation, and misconfiguration at every turn.
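
To make the "syntactically correct but not semantically correct" failure mode concrete, the following is a minimal sketch (not from the paper; the RouteUpdate type, its field names, and the MAX_PLAUSIBLE_PATH bound are illustrative assumptions) of a defensively written route-update handler that checks semantic plausibility in addition to syntax:

    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class RouteUpdate:
        # Hypothetical message type: an advertisement that parses
        # cleanly but may still carry semantically wrong content.
        prefix: str          # e.g. "192.0.2.0/24"
        as_path: list[int]   # AS numbers the route claims to have traversed

    MAX_PLAUSIBLE_PATH = 64  # assumed sanity bound, not a protocol constant

    def accept_update(update: RouteUpdate, local_asn: int) -> bool:
        # Syntactic check: is the prefix a well-formed network at all?
        try:
            net = ip_network(update.prefix)
        except ValueError:
            return False
        # Semantic checks: reject updates that parse but are implausible.
        if net.is_private or net.is_reserved:
            return False  # suspicious origin prefix
        if local_asn in update.as_path:
            return False  # loop: our own ASN appears in the path
        if len(update.as_path) > MAX_PLAUSIBLE_PATH:
            return False  # implausibly long path
        return True

A misconfigured peer announcing, say, a reserved prefix would pass a syntax-only parser but be rejected here, illustrating the defensive stance the paper's guidelines advocate.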