A performance analysis of System S, S4, and Esper via two-level benchmarking
QEST'13 Proceedings of the 10th International Conference on Quantitative Evaluation of Systems
Event processing engines are used in diverse mission-critical scenarios such as fraud detection, traffic monitoring, and intensive care units. These scenarios, however, have very different operational requirements in terms of, e.g., event types, query/pattern complexity, throughput, latency, and number of sources and sinks. What are the performance bottlenecks? Will performance degrade gracefully under increasing load? In this paper we make a first attempt to answer these questions by running several micro-benchmarks on three different engines while varying query parameters such as window size, window expiration type, predicate selectivity, and data values. We also run experiments to assess the engines' scalability with respect to the number of queries, and we propose ways to evaluate their ability to adapt to changes in load conditions. Lastly, we show that similar queries can exhibit widely different performance on the same engine or across engines, and that no single engine dominates the other two in all scenarios.
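To make the benchmarked parameters concrete, the sketch below shows a toy stand-in for the kind of windowed query the micro-benchmarks vary: a count-based sliding window whose size and predicate selectivity determine the per-event work. This is a hypothetical illustration written for this summary, not the paper's actual harness or any engine's API; the function and variable names are invented.

```python
from collections import deque

def run_microbenchmark(events, window_size, predicate):
    """For each arriving event, count how many events in the current
    count-based sliding window satisfy the predicate. Window size and
    predicate selectivity are two of the parameters the paper varies."""
    window = deque(maxlen=window_size)  # oldest event expires when the window is full
    matches_per_event = []
    for ev in events:
        window.append(ev)
        matches_per_event.append(sum(1 for e in window if predicate(e)))
    return matches_per_event

# Varying selectivity with a fixed window size changes the match counts:
events = list(range(10))
low_sel = run_microbenchmark(events, window_size=4,
                             predicate=lambda v: v % 5 == 0)   # ~20% selective
high_sel = run_microbenchmark(events, window_size=4,
                              predicate=lambda v: v % 2 == 0)  # ~50% selective
```

A real harness would additionally measure throughput and latency around such queries and swap in time-based windows and different expiration policies, which is where the engines studied in the paper diverge.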