Many classification tasks involve linked nodes, such as people connected by friendship links. For such networks, accuracy might be increased by including, for each node, the (a) labels or (b) attributes of neighboring nodes as model features. Recent work has focused on option (a), because early work showed it was more accurate and because option (b) fit poorly with discriminative classifiers. We show, however, that when the network is sparsely labeled, "relational classification" based on neighbor attributes often has higher accuracy than "collective classification" based on neighbor labels. Moreover, we introduce an efficient method that enables discriminative classifiers to be used with neighbor attributes, yielding further accuracy gains. We show that these effects are consistent across a range of datasets, learning choices, and inference algorithms, and that using both neighbor attributes and labels often produces the best accuracy.
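The distinction between options (a) and (b) can be illustrated with a small sketch. This is not the paper's implementation, only a toy example under assumed data structures (an adjacency dict, per-node attribute vectors, and a partial label map): it builds neighbor-label features, which collapse to zero when a node's neighbors are unlabeled, and neighbor-attribute features, which are always available.

```python
# Illustrative sketch (not the paper's method): per-node features from
# (a) neighbor labels vs. (b) neighbor attributes, in a sparsely
# labeled network. All data below is a toy assumption.
from collections import Counter

def label_features(node, adj, labels, label_set):
    """Option (a): proportion of each class among *labeled* neighbors."""
    neigh = [labels[v] for v in adj[node] if v in labels]
    counts = Counter(neigh)
    total = len(neigh)
    return [counts[c] / total if total else 0.0 for c in label_set]

def attribute_features(node, adj, attrs):
    """Option (b): mean of neighbor attribute vectors (usable even
    when every neighbor is unlabeled)."""
    vecs = [attrs[v] for v in adj[node]]
    dim = len(next(iter(attrs.values())))
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Toy network: nodes 2 and 3 are labeled, node 1 is not.
adj = {1: [2, 3], 2: [1], 3: [1]}
attrs = {1: [1.0, 0.0], 2: [0.0, 1.0], 3: [1.0, 1.0]}
labels = {2: "A", 3: "B"}

# Node 1 has labeled neighbors, so option (a) carries signal:
print(label_features(1, adj, labels, ["A", "B"]))   # [0.5, 0.5]
# Node 2's only neighbor (node 1) is unlabeled, so option (a) is
# all zeros while option (b) still produces features:
print(label_features(2, adj, labels, ["A", "B"]))   # [0.0, 0.0]
print(attribute_features(2, adj, attrs))            # [1.0, 0.0]
```

Either feature vector could then be concatenated with a node's own attributes and fed to an ordinary discriminative classifier; the abstract's point is that under sparse labeling, the attribute-based features often carry more usable signal.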