Robustness has long been a central design goal of the Internet. Much of the initial effort towards robustness focused on the "fail-stop" model, in which node failures are complete and easily detectable by other nodes. The Internet is quite robust against such failures, routinely surviving various catastrophes with only limited outages. This robustness is largely due to the widespread belief in a set of guidelines for critical design decisions, such as where to initiate recovery and how to maintain state.

However, the Internet remains extremely vulnerable to more arbitrary failures in which, through either error or malice, a node issues syntactically correct responses that are not semantically correct. Such failures, some as simple as misconfigured routing state, can seriously undermine the functioning of the Internet. With the Internet playing such a central role in the global telecommunications infrastructure, this level of vulnerability is no longer acceptable.

In this paper we argue that to make the Internet more robust to these kinds of arbitrary failures, we need to change the way we design network protocols. To this end, we propose a set of six design guidelines for network protocol design. These guidelines emerged from a study of past failures and an analysis of what could have been done to prevent each problem from occurring in the first place. The unifying theme behind the guidelines is that we need to design protocols more defensively, expecting malicious attack, misimplementation, and misconfiguration at every turn.
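To make the "defensive design" theme concrete, the following is a minimal, hypothetical sketch of a receiver that sanity-checks a syntactically valid routing advertisement before accepting it, instead of trusting the sender. The function name, the specific checks, and the AS numbers are illustrative assumptions, not drawn from the paper; a misconfigured or malicious peer can emit messages that parse correctly yet are semantically wrong, which is exactly the failure mode described above.

```python
# Hypothetical defensive receiver for a BGP-like route advertisement.
# A syntactically well-formed message is still rejected if it is
# semantically implausible (malformed prefix, non-global address space,
# a routing loop through our own AS, or an absurd AS path length).
import ipaddress

LOCAL_AS = 65001  # assumed local autonomous system number (illustrative)

def accept_route(prefix: str, as_path: list[int]) -> bool:
    """Return True only if the advertisement passes basic semantic checks."""
    try:
        net = ipaddress.ip_network(prefix)  # rejects malformed prefixes
    except ValueError:
        return False
    if not net.is_global:
        return False  # private or reserved space should not be announced
    if net.prefixlen > 24:
        return False  # implausibly specific announcement
    if LOCAL_AS in as_path:
        return False  # loop: our own AS already appears in the path
    if not 0 < len(as_path) <= 32:
        return False  # empty or absurdly long AS path
    return True

print(accept_route("8.8.8.0/24", [64512, 64513]))  # plausible announcement
print(accept_route("10.0.0.0/8", [64512]))         # private space, rejected
print(accept_route("8.8.8.0/24", [64512, 65001]))  # loop, rejected
```

Each check embodies the guideline of expecting misconfiguration at every turn: the message is presumed suspect until it survives independent validation, rather than presumed correct because it parsed.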