In supervised learning, variable selection is used to find a subset of the available inputs that accurately predicts the output. This paper shows that some of the variables that variable selection discards can beneficially be used as extra outputs for inductive transfer. Using discarded input variables as extra outputs forces the model to learn mappings from the selected inputs to these extra outputs. Inductive transfer makes what is learned by these mappings available to the model being trained on the main output, often improving performance on that main output. We present three synthetic problems (two regression problems and one classification problem) where performance improves when some variables discarded by variable selection are used as extra outputs. We then apply variable selection to two real problems (DNA splice-junction and pneumonia risk prediction) and demonstrate the same effect: using some of the discarded input variables as extra outputs yields somewhat better performance on both problems than variable selection alone. This approach thus extends the benefit of variable selection: the learner can still profit from variables that would otherwise have been discarded, without the loss in performance that occurs when those variables are used as inputs.
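The mechanism can be sketched in code. The following is a minimal illustration, not the paper's actual experimental setup: it uses synthetic data and a small hand-rolled two-layer network (hypothetical architecture and hyperparameters) whose shared hidden layer feeds two output heads, one for the main target and one for a variable that variable selection would have discarded as an input.

```python
import numpy as np

# Hedged sketch (synthetic data, hypothetical network): a discarded
# variable z is used as an EXTRA OUTPUT rather than an input, so the
# shared hidden representation must also learn to predict it. This is
# the multitask / inductive-transfer mechanism the abstract describes.

rng = np.random.default_rng(0)

# Synthetic regression task: y depends on the two selected inputs;
# z is a noisy correlate of y that variable selection might discard.
n = 200
X = rng.normal(size=(n, 2))            # selected inputs
y = X[:, 0] + np.sin(X[:, 1])          # main output
z = y + 0.5 * rng.normal(size=n)       # discarded variable -> extra output
Y = np.column_stack([y, z])            # targets: [main, extra]

h = 8                                  # shared hidden units
W1 = 0.5 * rng.normal(size=(2, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.normal(size=(h, 2)); b2 = np.zeros(2)  # two output heads

def forward(X):
    H = np.tanh(X @ W1 + b1)           # shared hidden representation
    return H, H @ W2 + b2              # predictions for both outputs

_, P = forward(X)
initial_main_mse = np.mean((P[:, 0] - y) ** 2)

# Plain gradient descent on squared error summed over BOTH outputs;
# the gradient from the extra output shapes the shared hidden layer.
lr = 0.05
for _ in range(2000):
    H, P = forward(X)
    E = (P - Y) / n                    # error on main AND extra output
    gW2, gb2 = H.T @ E, E.sum(axis=0)
    gH = (E @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1, gb1 = X.T @ gH, gH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, P = forward(X)
final_main_mse = np.mean((P[:, 0] - y) ** 2)
print(initial_main_mse, final_main_mse)
```

Note that z never enters the input layer, so at prediction time only the selected inputs are required; the extra head exists solely to influence what the shared hidden layer learns during training.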