Most traditional supervised learning methods learn a model from labeled examples and use it to classify unlabeled examples into the same label space predefined by the training data. In many real-world applications, however, the label spaces of the labeled (training) and unlabeled (testing) examples may differ. To address this problem, this paper proposes the notion of Serendipitous Learning (SL), which covers learning scenarios in which the label space can be enlarged during the testing phase. In particular, a large margin approach to SL is proposed. The basic idea is to leverage the knowledge in the labeled examples to help identify novel/unknown classes; the large margin formulation incorporates both the classification loss on examples from the known categories and the clustering loss on examples from the unknown categories. An efficient optimization algorithm based on CCCP and the bundle method is proposed to solve the resulting optimization problem. Moreover, an efficient online learning method with a guaranteed regret bound is proposed to handle large-scale data in the online setting. Extensive experimental results on two synthetic datasets and two real-world datasets demonstrate the advantages of the proposed method over several baseline algorithms. One limitation of the proposed method is that the number of unknown classes must be given in advance; this constraint might be removed by modeling the number of classes non-parametrically. We also plan to experiment on more real-world applications in the future.
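The combined objective described above can be sketched in code. The following is a minimal toy version under stated assumptions, not the paper's exact formulation: it uses a linear scoring model with one weight row per class, a Crammer-Singer-style multiclass hinge loss on the labeled examples from known classes, and a clustering-style loss in which each unlabeled example pays the hinge loss of its best-scoring (known or novel) class. The function name `combined_loss` and all parameter names are illustrative.

```python
import numpy as np

def combined_loss(W, X_lab, y_lab, X_unl, n_known, n_unknown, lam=0.1):
    """Toy SL-style objective: multiclass hinge loss on labeled examples
    plus a clustering loss on unlabeled examples, which are assigned to
    whichever (known or novel) class fits them best.  W has shape
    (n_known + n_unknown, n_features).  Illustrative sketch only."""
    n_classes = n_known + n_unknown

    # Classification loss on labeled examples from the known categories.
    clf_loss = 0.0
    for x, y in zip(X_lab, y_lab):
        scores = W @ x
        margins = 1.0 + scores - scores[y]   # margin violations vs. true class
        margins[y] = 0.0
        clf_loss += max(0.0, margins.max())

    # Clustering loss on unlabeled examples: each point pays the hinge
    # loss of its best assignment.  The min over assignments makes the
    # objective non-convex, which is why a CCCP-style procedure is used.
    clu_loss = 0.0
    for x in X_unl:
        scores = W @ x
        per_class = []
        for c in range(n_classes):
            m = 1.0 + scores - scores[c]
            m[c] = 0.0
            per_class.append(max(0.0, m.max()))
        clu_loss += min(per_class)

    return clf_loss + clu_loss + lam * np.sum(W * W)
```

In a CCCP/bundle-method scheme, the inner `min` over class assignments would be fixed at the current iterate to obtain a convex upper bound, which is then minimized before re-assigning; the sketch above only evaluates the objective itself.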