Experimental network research is challenging because experiment outcomes can be influenced by undesired effects of other activities in the network. In shared experiment networks, control over resources is often limited and QoS guarantees may not be available. When network conditions vary during a series of experiments, unwanted artifacts can be introduced into the results, reducing the reliability of the experiments. We propose a novel, systematic methodology in which network conditions are monitored during the experiments and information about the network is collected. This information, known as metadata, is analyzed statistically to identify periods during the experiments when network conditions were similar; data points collected during these periods are valid for comparison. Our hypothesis is that this methodology makes experiments more reliable. We present a proof-of-concept implementation of our method, deployed in the FEDERICA and PlanetLab networks.
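The core idea, grouping measurement periods by similarity of their metadata so that only data points from comparable network conditions are compared, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the window representation, the metadata features (mean CPU load, cross-traffic rate), and the greedy distance-threshold grouping are all assumptions made for the example.

```python
# Sketch: group experiment time windows by similarity of their metadata,
# so measurements are only compared within a group of similar conditions.
# Feature choice, threshold, and grouping strategy are illustrative assumptions.
import math

def euclidean(a, b):
    """Euclidean distance between two metadata vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_similar_windows(metadata, threshold):
    """Greedy grouping: each window joins the first existing group whose
    representative (its first member) is within `threshold`; otherwise it
    starts a new group. Returns a list of lists of window indices."""
    groups = []
    for i, m in enumerate(metadata):
        for g in groups:
            if euclidean(metadata[g[0]], m) <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical per-window metadata: (mean CPU load, cross-traffic in Mbit/s)
metadata = [(0.2, 10), (0.21, 11), (0.8, 55), (0.19, 9), (0.82, 57)]
groups = group_similar_windows(metadata, threshold=5.0)
# → [[0, 1, 3], [2, 4]]: windows 0, 1 and 3 share similar conditions,
# while the heavily loaded windows 2 and 4 form a separate group.
```

In a real deployment one would replace the greedy threshold rule with a proper statistical clustering of the metadata time series, but the selection principle stays the same: only results from windows in the same group are treated as directly comparable.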