Co-clustering has emerged as an important technique for mining relational data, especially when the data are sparse and high-dimensional. Co-clustering simultaneously groups the different kinds of objects involved in a relation. Most co-clustering techniques leverage only the entries of the given contingency matrix to perform the two-way clustering; as a consequence, they cannot predict interaction values for new objects. In many applications, however, additional features associated with the objects of interest are available. The Infinite Hidden Relational Model (IHRM) was proposed to make use of these features, and it can therefore forecast relationships among previously unseen objects. The original work on IHRM, however, lacks an evaluation of the improvement that can be achieved by leveraging features when making predictions for unseen objects. In this work, we fill this gap and re-interpret IHRM from a co-clustering point of view. We focus on the empirical evaluation of forecasting relationships between previously unseen objects by leveraging object features. The empirical evaluation demonstrates the effectiveness of the feature-enriched approach and identifies the conditions under which the use of features is most useful, namely with sparse data.
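To make the matrix-only limitation concrete, here is a minimal sketch of partitional co-clustering for collaborative filtering (a simplified, assumed variant in the spirit of co-clustering CF frameworks, not the IHRM itself; all function and variable names are illustrative). Rows and columns are alternately reassigned to the cluster whose co-cluster means best fit their observed entries, and each missing rating is predicted by its co-cluster mean. Because assignments depend entirely on observed matrix entries, a user or item with no observed interactions cannot be meaningfully placed — the gap that object features are meant to fill:

```python
import numpy as np

def coclustering_predict(R, mask, k=2, l=2, iters=20, seed=0):
    """Alternating co-clustering on an observed ratings matrix.

    R    : (n_users x n_items) rating matrix
    mask : boolean matrix marking observed entries
    Returns a full matrix where each entry is its co-cluster mean.
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    rows = rng.integers(0, k, n)          # random initial row-cluster labels
    cols = rng.integers(0, l, m)          # random initial column-cluster labels
    global_mean = R[mask].mean()

    for _ in range(iters):
        # Co-cluster means over observed entries (empty blocks fall back
        # to the global mean).
        M = np.full((k, l), global_mean)
        for g in range(k):
            for h in range(l):
                block = R[np.ix_(rows == g, cols == h)]
                bmask = mask[np.ix_(rows == g, cols == h)]
                if bmask.any():
                    M[g, h] = block[bmask].mean()
        # Reassign each row to the row cluster minimizing squared error
        # on its observed entries.
        for i in range(n):
            errs = [((R[i, mask[i]] - M[g, cols[mask[i]]]) ** 2).sum()
                    for g in range(k)]
            rows[i] = int(np.argmin(errs))
        # Reassign each column symmetrically.
        for j in range(m):
            errs = [((R[mask[:, j], j] - M[rows[mask[:, j]], h]) ** 2).sum()
                    for h in range(l)]
            cols[j] = int(np.argmin(errs))

    # Predict every entry by its co-cluster mean.
    return M[rows][:, cols]
```

With `k = l = 1` the prediction degenerates to the global observed mean, which also shows what happens to a completely new row or column: without features, the model has nothing better than a fallback mean to offer.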