Rotating memory processors for the matching of complex textual patterns
ISCA '78 Proceedings of the 5th annual symposium on Computer architecture
The design of system architectures for information retrieval
ACM '76 Proceedings of the 1976 annual conference
Associative/parallel processors for searching very large textual data bases
CAW '77 Proceedings of the 3rd workshop on Computer architecture: Non-numeric processing
Dynamic information and library processing
Planning a computer system: Project Stretch
ACM Computing Surveys (CSUR) - Annals of discrete mathematics, 24
Measuring hardware efficiency by distribution of resources within a single program
ACM-SE 17 Proceedings of the 17th annual Southeast regional conference
Hardware systems for text information retrieval
SIGIR '83 Proceedings of the 6th annual international ACM SIGIR conference on Research and development in information retrieval
Current research into specialized processors for text information retrieval
VLDB '78 Proceedings of the fourth international conference on Very Large Data Bases - Volume 4
In inverted file database systems, index lists consisting of pointers to items within the database are combined to form a list of items that potentially satisfy a user's query. This list merging is similar to the common data processing operation of combining two or more sorted input files to form a sorted output file, and it generally represents a large percentage of the computer time used by the retrieval system. Unfortunately, a general purpose digital computer is better suited to complicated numeric processing than to the simple combining of data: the overhead of adjusting and checking pointers, aligning data, and testing for completion of the operation overwhelms the processing of the data itself.

A specialized processor can perform most of these overhead operations in parallel with the processing of the data, offering speed increases by a factor of 10 to 100 over conventional computers, depending on whether a higher speed memory is used for storing the lists. These processors can also be combined into networks capable of directly forming the result of a complex expression, with another order of magnitude speed increase possible. The programming and operation of these processors and networks are discussed, and comparisons are made with the speed and efficiency of conventional general purpose computers.
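The list-merging operation described above can be sketched in software. The following is a minimal illustration (not taken from the paper) of merging two sorted posting lists of document pointers to answer a conjunctive query; the constant pointer checking and advancing in the loop is exactly the overhead a specialized processor would hide by performing it in parallel with the data flow:

```python
def intersect(a, b):
    """Merge-style intersection of two sorted posting lists.

    Each list holds document IDs in ascending order; the result is the
    sorted list of IDs present in both, i.e. items satisfying an AND query.
    """
    i = j = 0
    out = []
    while i < len(a) and j < len(b):   # completion test on every step
        if a[i] == b[j]:               # item appears in both lists
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:              # advance whichever pointer is behind
            i += 1
        else:
            j += 1
    return out

print(intersect([1, 3, 5, 7, 9], [2, 3, 5, 8, 9]))  # -> [3, 5, 9]
```

On a general purpose machine, the comparisons, pointer increments, and loop-bound tests dominate the useful work of emitting matching items, which is the inefficiency the abstract identifies.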