An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses, so there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis; algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias-shifting algorithms. The Baldwin effect was proposed in 1896 to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, which we constructed explicitly to illustrate the lessons that the Baldwin effect holds for research in bias-shifting algorithms. The main lesson is that a good strategy for shifting bias in a learning algorithm appears to be to begin with a weak bias and gradually shift toward a strong bias.
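The Hinton and Nowlan model mentioned above can be sketched in a few dozen lines. The version below is a simplified, scaled-down Python rendering, not the variation studied in this paper: genomes mix innate alleles (0 or 1) with plastic `?` alleles that learning fills in by random guessing each trial, and finding the all-ones target in fewer trials earns higher fitness, smoothing an otherwise needle-in-a-haystack landscape. The parameter values and the mutation-free GA are assumptions chosen for brevity (the original model used a population of 1000 and 1000 learning trials).

```python
import random

# Illustrative, scaled-down parameters (assumptions, not values from the paper).
GENOME_LEN = 20        # length of the single "good" target configuration
LEARNING_TRIALS = 500  # random guesses each individual may make per lifetime
POP_SIZE = 200
GENERATIONS = 5

def lifetime_fitness(genome, rng):
    """Fitness with learning: innate alleles (0/1) are fixed; '?' alleles are
    guessed afresh on each learning trial.  The target is all ones, so any
    innate 0 makes it unreachable and fitness stays at the baseline of 1.
    Earlier success earns higher fitness."""
    if 0 in genome:
        return 1.0
    unknowns = genome.count('?')
    for trial in range(1, LEARNING_TRIALS + 1):
        # Each '?' is guessed correctly with probability 1/2 on this trial.
        if all(rng.random() < 0.5 for _ in range(unknowns)):
            return 1.0 + (GENOME_LEN - 1) * (LEARNING_TRIALS - trial) / LEARNING_TRIALS
    return 1.0

def random_genome(rng):
    # Initial allele frequencies, roughly following the original model:
    # 25% innate-correct (1), 25% innate-wrong (0), 50% plastic ('?').
    return [rng.choice([1, 0, '?', '?']) for _ in range(GENOME_LEN)]

def evolve(seed=0):
    """Generational GA with fitness-proportional selection and one-point
    crossover; mutation is omitted for brevity."""
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        fits = [lifetime_fitness(g, rng) for g in pop]
        new_pop = []
        for _ in range(POP_SIZE):
            a = rng.choices(pop, weights=fits)[0]
            b = rng.choices(pop, weights=fits)[0]
            cut = rng.randrange(1, GENOME_LEN)
            new_pop.append(a[:cut] + b[cut:])
        pop = new_pop
    return pop
```

Because learning rescues genomes that are merely *close* to the target, selection can favor them long before a fully innate solution exists; over generations, plastic `?` alleles tend to be replaced by innate 1s, which is the Baldwin effect the abstract describes.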