A standard approach to cross-language information retrieval (CLIR) uses Latent Semantic Analysis (LSA) in conjunction with a multilingual parallel aligned corpus. This approach has been shown to be successful in identifying similar documents across languages - or, more precisely, in retrieving the document in one language that is most similar to a query in another language. However, the approach has severe drawbacks when applied to a related task, that of clustering documents "language-independently", so that documents about similar topics end up closest to one another in the semantic space regardless of their language. The problem is that documents are generally more similar to other documents in the same language than they are to documents on the same topic in a different language. As a result, under multilingual LSA, documents in practice cluster by language, not by topic. We propose a novel application of PARAFAC2 (a variant of PARAFAC, which is a multi-way generalization of the singular value decomposition [SVD]) to overcome this problem. Instead of forming a single multilingual term-by-document matrix which, under LSA, is subjected to the SVD, we form an irregular three-way array, each slice of which is a separate term-by-document matrix for a single language in the parallel corpus. The goal is to compute an SVD for each language such that V (the matrix of right singular vectors) is the same across all languages. Effectively, PARAFAC2 imposes a constraint, not present in standard LSA, that the "concepts" in all documents in the parallel corpus are the same regardless of language. Intuitively, this constraint makes sense, since the whole purpose of using a parallel corpus is that exactly the same concepts are expressed in the translations. We tested this approach by comparing the performance of PARAFAC2 with that of standard LSA on a particular CLIR problem. From our results, we conclude that PARAFAC2 offers a very promising alternative to LSA, not only for multilingual document clustering but also for solving other problems in cross-language information retrieval.
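To make the construction concrete, below is a minimal sketch (not the authors' implementation) of the PARAFAC2 setup on a toy two-language parallel corpus. It assumes NumPy and TensorLy's parafac2 routine; the tiny vocabularies, the corpus, the rank, and the LSA-style "folding-in" of a query at the end are illustrative assumptions rather than details taken from the paper.

```python
"""Sketch of PARAFAC2 for cross-language clustering (assumes NumPy + TensorLy)."""
import numpy as np
from tensorly.decomposition import parafac2

# Toy parallel corpus: 4 aligned documents; docs 0-1 are about astronomy,
# docs 2-3 about cooking.  Each language has its own vocabulary, so the
# term-by-document slices have different numbers of rows but share columns.
vocab_en = ["star", "planet", "telescope", "recipe", "oven", "flour"]
vocab_es = ["estrella", "planeta", "telescopio", "receta", "horno", "harina", "cocina"]

X_en = np.array([            # English term-by-document counts (6 terms x 4 docs)
    [3, 2, 0, 0],            # star
    [1, 2, 0, 0],            # planet
    [2, 1, 0, 0],            # telescope
    [0, 0, 3, 1],            # recipe
    [0, 0, 1, 2],            # oven
    [0, 0, 2, 2],            # flour
], dtype=float)
X_es = np.array([            # Spanish term-by-document counts (7 terms x 4 docs)
    [2, 3, 0, 0],            # estrella
    [2, 1, 0, 0],            # planeta
    [1, 2, 0, 0],            # telescopio
    [0, 0, 2, 2],            # receta
    [0, 0, 2, 1],            # horno
    [0, 0, 1, 3],            # harina
    [0, 0, 1, 1],            # cocina
], dtype=float)

slices = [X_en, X_es]        # the irregular three-way array: one slice per language

# PARAFAC2 model: X_k ~= B_k diag(a_k) C^T, where C (the document factor, the
# analogue of V in the SVD) is constrained to be identical for every language.
rank = 2
decomposition = parafac2(slices, rank=rank, n_iter_max=2000, random_state=0)
weights, factors, projections = decomposition
A, B, C = factors            # A: per-language component weights, C: shared document factor
B_lang = [P @ B for P in projections]   # per-language term factors B_k = P_k B

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print("Shared document coordinates (rows = documents):")
print(np.round(C, 3))
print("cos(doc0, doc1) =", round(cosine(C[0], C[1]), 3))   # same topic
print("cos(doc0, doc2) =", round(cosine(C[0], C[2]), 3))   # different topics

# Folding a new English query into the shared space (an LSA-style folding-in
# analogue, an illustrative assumption rather than a step from the paper):
# a document column satisfies x_j ~= B_en diag(w * a_en) c_j, so invert that map.
q = np.zeros(len(vocab_en))
q[vocab_en.index("telescope")] = 1.0
q_hat = np.linalg.pinv(B_lang[0] * (weights * A[0])) @ q
print("query vs docs:", [round(cosine(q_hat, C[j]), 3) for j in range(C.shape[0])])
```

In the shared factor C, each aligned translation pair maps to a single point, so documents group by topic rather than language by construction; the per-language factors B_k are what carry language-specific term vectors (such as a monolingual query) into that common space.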