Organizations are increasingly choosing to implement service-oriented architectures to integrate distributed, loosely coupled applications. These architectures are implemented as services, which typically use XML-based messaging to communicate between service consumers and service providers across enterprise networks. We propose a scheme for caching fragments of service response messages to improve performance and service quality in service-oriented architectures. In our fragment caching scheme, we decompose responses into smaller fragments so that reusable components can be identified and cached in the XML routers of an XML overlay network within an enterprise network. Such caching reduces the processing load on providers and moves content closer to users, thus reducing bandwidth requirements on the network and improving service times. We describe the system architecture and caching algorithm details for our caching scheme, develop an analysis of its expected benefits, and present the results of both simulation and case-study experiments to show the validity and performance improvements of our approach. Our simulation results show up to a 60% reduction in bandwidth consumption and up to a 50% improvement in response time. Further, our case-study experiments demonstrate that when there is no resource bottleneck, the cache-enabled case reduces average response times by 40%--50% and increases throughput by 150% compared to the no-cache and full message caching cases. In experiments contrasting fragment caching with full message caching, we found that full message caching provides benefits when the number of possible unique responses is low, while the benefits of fragment caching increase as the number of possible unique responses grows. These experimental results clearly demonstrate the benefits of our approach.
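The core idea above, decomposing a response into fragments so that reusable pieces can be cached and later reassembled, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the fragment granularity (top-level child elements), the content-hash keys, and the function names are all assumptions made for the example.

```python
# Hypothetical sketch of fragment caching: split an XML response into
# element-level fragments, cache each fragment under a content hash,
# and rebuild a response from cached fragments by key.
import hashlib
import xml.etree.ElementTree as ET

cache = {}  # fragment hash -> serialized fragment bytes


def decompose(response_xml):
    """Split a response into its top-level child fragments (assumed granularity)."""
    root = ET.fromstring(response_xml)
    return [ET.tostring(child) for child in root]


def cache_response(response_xml):
    """Cache every fragment of a response; return the ordered fragment keys."""
    keys = []
    for frag in decompose(response_xml):
        key = hashlib.sha256(frag).hexdigest()
        cache.setdefault(key, frag)  # identical fragments are stored once
        keys.append(key)
    return keys


def assemble(keys, root_tag="response"):
    """Rebuild a full response by concatenating cached fragments under a root."""
    body = b"".join(cache[k] for k in keys)
    return b"<" + root_tag.encode() + b">" + body + b"</" + root_tag.encode() + b">"
```

A second response that shares fragments with an earlier one adds only its new fragments to the cache, which is the source of the bandwidth and processing savings described above when the number of possible unique responses is large.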