When we write or prepare to write a research paper, we usually have appropriate references in mind. However, there are most likely references we have missed and should have read and cited. A good citation recommendation system would therefore improve not only our papers but also the overall efficiency and quality of literature search. A citation's context usually contains explicit words explaining the citation. Exploiting this, we propose a method that "translates" research papers into references: by treating the citation contexts and the cited references in existing papers as parallel data written in two different "languages", we adopt a translation model to learn the relationship between these two "vocabularies". Experiments on both the CiteSeer and CiteULike datasets show that our approach outperforms the baseline methods, improving precision, recall, and F-measure by at least 5% to 10%, respectively. In addition, our approach runs much faster in both the training and recommendation stages, which demonstrates the effectiveness and scalability of our work.
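The "parallel data" idea above can be sketched with a toy IBM Model 1-style EM estimator that learns p(context word | reference) and then ranks candidate references for a new citation context. This is a minimal illustrative sketch, not the paper's implementation; the corpus, function names, and the small smoothing constant are assumptions introduced here for demonstration.

```python
from collections import defaultdict

# Toy "parallel corpus": each pair is (citation-context words, cited paper ids).
# The data is illustrative only.
corpus = [
    (["topic", "model", "inference"], ["plsi"]),
    (["topic", "model", "text"], ["plsi", "lda"]),
    (["statistical", "translation", "alignment"], ["ibm1"]),
    (["translation", "model", "retrieval"], ["ibm1"]),
]

def train_ibm1(pairs, iterations=10):
    """EM estimation of t(word | reference), in the style of IBM Model 1."""
    words = {w for ws, _ in pairs for w in ws}
    refs = {r for _, rs in pairs for r in rs}
    # Uniform initialization of the translation table.
    t = {(w, r): 1.0 / len(words) for w in words for r in refs}
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)   # normalizers per reference
        for ws, rs in pairs:
            for w in ws:
                norm = sum(t[(w, r)] for r in rs)
                for r in rs:
                    c = t[(w, r)] / norm  # soft alignment of w to r
                    count[(w, r)] += c
                    total[r] += c
        for (w, r) in t:  # M-step: renormalize expected counts
            if total[r]:
                t[(w, r)] = count[(w, r)] / total[r]
    return t, refs

def rank_references(context, t, refs):
    """Score each candidate reference by the product of t(w | ref) over context words."""
    scores = {}
    for r in refs:
        s = 1.0
        for w in context:
            s *= t.get((w, r), 1e-12)  # tiny floor for unseen pairs (assumption)
        scores[r] = s
    return sorted(scores, key=scores.get, reverse=True)

t, refs = train_ibm1(corpus)
print(rank_references(["topic", "model"], t, refs))
```

With this toy corpus, a context about topic models ranks the topic-modeling references above the translation-model one, mirroring how the learned word-to-reference translation table drives the recommendation.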