Multiple-instance learning (MIL) has been studied actively in recent years, but it faces a computational challenge as data volumes grow. Parallel computing is an effective way to overcome this challenge. In this paper, we propose a new MIL method based on a multiple-instance back-propagation neural network (MIBP), an extension of the standard back-propagation neural network (BPNN) that takes labeled bags of instances as training data, and we use parallel computing to speed up the learning process. The proposed method finds a concept point t in the feature space that is close to instances from positive bags and far from instances in negative bags. The method proceeds as follows. First, train the MIBP with positive and negative bags. Second, extract t from the trained MIBP: for each positive bag, present all of its instances to the trained network and select the one with the maximal output value; t is then obtained by averaging the selected instances. Finally, perform a sensitivity analysis of the trained MIBP to obtain feature relevance/weighting information. Parallel computing is applied during the training of the MIBP. We conduct experiments to measure the classification performance of the obtained t and to evaluate the parallel computing method. The experimental results on the MUSK data set show that our method achieves better classification performance and is more computationally efficient than other well-established MIL methods.
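The concept-point extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trained MIBP is stood in for by a one-hidden-layer MLP with random (untrained) weights, and the bag dimensions, network size, and distance-threshold classifier are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical stand-in for a trained MIBP: a one-hidden-layer MLP
# forward pass. The weights below are random, not actually trained.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(166, 8))   # 166 features, as in the MUSK data set
b1 = rng.normal(size=8)
w2 = rng.normal(size=8)
b2 = 0.0

def mibp_output(x):
    """Network output for a single instance (sigmoid activations)."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def extract_concept_point(positive_bags):
    """For each positive bag, keep the instance with the maximal
    network output, then average those instances to obtain t."""
    reps = [max(bag, key=mibp_output) for bag in positive_bags]
    return np.mean(reps, axis=0)

def classify_bag(bag, t, threshold):
    """Assumed classifier for the sketch: a bag is positive if any
    of its instances lies within `threshold` of the concept point t."""
    return min(np.linalg.norm(x - t) for x in bag) <= threshold

# Toy data: three positive bags of five random 166-D instances each.
positive_bags = [rng.normal(size=(5, 166)) for _ in range(3)]
t = extract_concept_point(positive_bags)
```

Averaging the per-bag maximal-output instances implements the abstract's second step directly; a real use would substitute the parallel-trained MIBP for the random network above.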