A predicate-based caching scheme for client-server database architectures
The VLDB Journal — The International Journal on Very Large Data Bases
With traditional caching, a system copies data from a slower device to a faster one to improve throughput. With predicate caching, a system applies a predicate to the data as it moves from one memory device to another. A predicate, in this context, is any predetermined computation performed by a program: examples include a numerical integration function, a database SQL query, a shortest-path calculation in a network, a complex weather computation, and a sorting program. Predicate caching is especially well suited to autonomous systems such as rule-based systems, expert systems, and autonomous intelligent agents. Some of these systems are data-intensive; that is, they operate against a very large database and respond to numerous, repetitive queries. Predicate caching improves memory utilization and the response time of repetitive queries by prestoring partial results in primary memory, thereby minimizing secondary-storage access. This article describes the predicate caching technique and, specifically, the page-predicate approach, which is designed to resolve problems of cache partitioning, management, and optimization.
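The core mechanism described above — answering repeated queries from partial results prestored in primary memory rather than rescanning secondary storage — can be sketched as follows. This is a simplified illustration under the assumption of exact predicate matching, not the paper's page-predicate design; all class and function names are hypothetical.

```python
# Minimal predicate-cache sketch (illustrative only; names are hypothetical).
# A "predicate" here is a filter over rows; the result set for a previously
# seen predicate is served from primary memory instead of re-reading storage.

class PredicateCache:
    def __init__(self):
        self._cache = {}   # predicate key -> prestored result rows
        self.hits = 0
        self.misses = 0

    def query(self, key, predicate, scan_storage):
        """Return the rows satisfying `predicate`.

        `key` identifies the predicate (e.g., a normalized query string);
        `scan_storage` stands in for the expensive secondary-storage scan.
        """
        if key in self._cache:          # repeat query: answer from memory
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        result = [row for row in scan_storage() if predicate(row)]
        self._cache[key] = result       # prestore the partial result
        return result


# Repeated, identical queries hit the cache after the first storage scan.
employees = [{"name": "a", "age": 34}, {"name": "b", "age": 51}]
cache = PredicateCache()
young = cache.query("age<40", lambda r: r["age"] < 40, lambda: employees)
again = cache.query("age<40", lambda r: r["age"] < 40, lambda: employees)
```

A real scheme must also address what this sketch ignores: partitioning the cache among overlapping predicates, detecting when one cached predicate subsumes another, and invalidating entries when the underlying data changes — the problems the page-predicate approach targets.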