Duplicate detection consists of finding objects that, although represented differently in a database, correspond to the same real-world entity. This is typically achieved by comparing all objects to each other, which can be infeasible for large datasets. Strategies have been devised to reduce the number of objects to compare, at the cost of losing some duplicates. However, these strategies typically rely on user knowledge to discover a set of parameters that optimizes the comparisons while minimizing the loss, and they do not usually optimize the comparison between each pair of objects. In this paper, we propose a method that combines two optimization strategies: one to select which objects to compare and another to optimize the pair-wise object comparisons. In addition, we propose a machine learning approach to determine the required parameters without user intervention. Experiments performed on several datasets show that we are able not only to effectively determine the optimization parameters, but also to significantly improve efficiency while maintaining an acceptable loss of recall.
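As a rough illustration of the two optimization levels the abstract describes, the sketch below pairs a sorted-neighborhood candidate-selection pass with a cheap early-abort filter inside the pair-wise comparison. The window size, threshold, blocking key, and similarity measure here are illustrative assumptions, not the parameters or measures used in the paper; they stand in for the quantities the learning step would have to determine.

```python
from difflib import SequenceMatcher

def compare(a: str, b: str, threshold: float) -> float:
    """Pair-wise comparison with a cheap early-abort filter.

    SequenceMatcher's ratio is at most 2*min(len)/total length, so if
    that bound already falls below the threshold we can skip the
    expensive comparison entirely.
    """
    upper_bound = 2.0 * min(len(a), len(b)) / (len(a) + len(b))
    if upper_bound < threshold:
        return 0.0
    return SequenceMatcher(None, a, b).ratio()

def sorted_neighborhood(records, key, window=4, threshold=0.85):
    """Candidate selection: sort records by a blocking key and compare
    each record only to its `window - 1` successors, trading a possible
    loss of duplicates for far fewer comparisons than all pairs."""
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        for other in ordered[i + 1 : i + window]:
            if compare(rec, other, threshold) >= threshold:
                pairs.append((rec, other))
    return pairs

if __name__ == "__main__":
    names = ["john smith", "jon smith", "jane doe", "john smyth", "j. doe"]
    print(sorted_neighborhood(names, key=lambda r: r))
```

In a system along the lines the abstract proposes, the window size and similarity threshold would be learned from the data rather than fixed constants, since both control the trade-off between comparison cost and lost duplicates.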