Content to be transported over an Information Centric Networking (ICN) infrastructure can be highly variable in size, from a few bytes to hundreds of gigabytes. It therefore needs to be segmented into smaller data units, typically called chunks, so that it can be handled by ICN nodes. A chunk is the basic data unit to which caching and security functions (e.g. encryption and signature) are applied. Considering the overhead and the number of cryptographic operations to be performed by nodes, a good choice for the chunk size would range from hundreds of KBs up to a few MBs. However, if the chunk size exceeds the Maximum Transmission Unit (MTU) of a link, chunks will be fragmented. We show that if there are more than 3-4 fragments per chunk, and congestion and reliability functions are executed on a chunk-by-chunk basis, the efficiency of the congestion control algorithm drastically decreases. On the other hand, a small chunk size would increase overhead and the rate of signature checks. The contribution of this paper is twofold: 1) we propose to segment content at two levels: at the first level the content is segmented into chunks, and at the second level the chunks are segmented into smaller data units handled by an ICN-specific Transport Protocol (ICTP), which performs reliability and congestion control functions; 2) we propose to adopt a receiver-driven transport protocol, in which the receiver adjusts the sending rate to control congestion; we describe an implementation of this protocol and evaluate its performance.
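As a rough illustration of the two ideas in the abstract, the sketch below models the two-level segmentation (content into chunks, chunks into MTU-sized transport segments) and a receiver-driven rate adjustment. The chunk size, header overhead, and AIMD-style update rule are illustrative assumptions, not values taken from the ICTP specification.

```python
# Illustrative sketch only: sizes and the window rule are assumptions,
# not the ICTP protocol as specified by the authors.

CHUNK_SIZE = 1 << 20        # 1 MB chunk: the unit of caching and signing
MTU = 1500                  # link MTU, in bytes
ICTP_HEADER = 40            # assumed per-segment header overhead
SEGMENT_PAYLOAD = MTU - ICTP_HEADER

def segment_content(content: bytes):
    """First-level split into chunks, second-level split of each chunk
    into transport segments that fit within the link MTU.
    Returns a list of chunks, each a list of segment payloads."""
    chunks = [content[i:i + CHUNK_SIZE]
              for i in range(0, len(content), CHUNK_SIZE)]
    return [[chunk[j:j + SEGMENT_PAYLOAD]
             for j in range(0, len(chunk), SEGMENT_PAYLOAD)]
            for chunk in chunks]

def receiver_window_update(cwnd: float, loss: bool) -> float:
    """Receiver-driven AIMD-style adjustment (hypothetical rule): the
    receiver, not the sender, adapts the rate at which it requests
    segments -- halve the request window on loss, otherwise grow by one."""
    return max(1.0, cwnd / 2) if loss else cwnd + 1.0
```

Because segmentation into transport segments happens below the chunk level, loss recovery and congestion control operate per segment rather than per chunk, which is the inefficiency the abstract identifies for chunks spanning more than 3-4 fragments.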