The quality of a local search engine, such as Google Maps or Bing Maps, relies heavily on its geographic datasets. These datasets are typically aggregated from multiple sources, e.g., different vendors or public yellow-page websites. As a result, the same location entity, such as a restaurant, may have multiple records whose titles and addresses differ slightly across sources. For instance, 'Seattle Premium Outlets' and 'Seattle Premier Outlet Mall' describe the same outlet at the same place, yet their titles are not identical. This produces many near-duplicate records in a location database, which complicates data management and confuses users with redundant results for a single query. To detect these near-duplicate records, we propose a machine-learning-based approach comprising three steps: candidate selection, feature extraction, and training/inference. Three key features (name similarity, address similarity, and category similarity), together with corresponding metrics, are proposed to model the differences between two entity records. We evaluate our method with extensive experiments on a large-scale real dataset; both the precision and the recall of our method exceed 90%.
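The feature-extraction step described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the record field names, the use of normalized edit distance for names, token-level Jaccard for addresses, and an exact-match indicator for categories are all assumptions made for the example.

```python
# Hypothetical sketch of extracting the three similarity features
# (name, address, category) for a pair of location records.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def name_similarity(a: str, b: str) -> float:
    """1 minus edit distance normalized by the longer name's length."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a.lower(), b.lower()) / max(len(a), len(b))

def address_similarity(a: str, b: str) -> float:
    """Jaccard similarity over whitespace-delimited address tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def category_similarity(a: str, b: str) -> float:
    """Exact-match indicator; a taxonomy-aware metric could replace this."""
    return 1.0 if a.lower() == b.lower() else 0.0

def extract_features(r1: dict, r2: dict) -> list:
    """Feature vector for one candidate record pair."""
    return [name_similarity(r1["name"], r2["name"]),
            address_similarity(r1["address"], r2["address"]),
            category_similarity(r1["category"], r2["category"])]
```

In the training/inference step, these three-dimensional feature vectors would be fed to any standard binary classifier (e.g., logistic regression or gradient-boosted trees) to label a candidate pair as duplicate or not.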