Entity Ranking from Annotated Text Collections Using Multitype Topic Models
Focused Access to XML Documents
Conveying information about who, what, when, and where is a primary purpose of some genres of documents, news articles in particular. Statistical models that capture dependencies between named entities and topics can play an important role in handling such information. Although relationships between who-entities and where-entities are often expressed in such documents, no previous statistical topic model has explicitly addressed the textual interactions between a who-entity and a where-entity. This paper presents a statistical model that directly captures dependencies among an arbitrary number of word types mentioned in each document, such as who-entities, where-entities, and topics. Through experiments on predicting who-entities and the links between them, we show that this multitype topic model makes better predictions on entity networks, in which each vertex represents an entity and each edge weight represents how closely the pair of entities at its endpoints is related.
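The abstract describes a topic model in which several word types (ordinary words, who-entities, where-entities) are coupled through a shared per-document topic mixture. Below is a minimal, hypothetical sketch of such a generative process; the topic count, vocabulary sizes, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: K topics; each word type has its own vocabulary and its
# own per-topic word distribution phi[type], while a single document-topic
# mixture theta is shared across all types (the coupling the paper exploits).
K = 3
vocab_sizes = {"word": 20, "who": 5, "where": 5}

# Per-type, per-topic word distributions: phi[t] has shape (K, V_t).
phi = {t: rng.dirichlet(np.ones(V), size=K) for t, V in vocab_sizes.items()}

def generate_document(n_tokens_per_type, alpha=0.5):
    """Sample one document. The shared theta ties together the plain
    words, who-entities, and where-entities the document emits."""
    theta = rng.dirichlet(alpha * np.ones(K))    # document-topic mixture
    doc = {}
    for t, n in n_tokens_per_type.items():
        zs = rng.choice(K, size=n, p=theta)      # topic assignment per token
        doc[t] = [int(rng.choice(vocab_sizes[t], p=phi[t][z])) for z in zs]
    return doc

doc = generate_document({"word": 10, "who": 2, "where": 2})
```

Because who- and where-entities are drawn from the same theta, documents about the same topics tend to co-mention related entities, which is what makes link prediction on the entity network possible.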