We provide a discussion of bounded rationality learning beyond traditional learning mechanisms, i.e., Recursive Ordinary Least Squares and Bayesian Learning. These mechanisms for many reasons lack a behavioral interpretation and, following the Simon criticism, they appear to be 'substantively rational'. In this paper, analyzing the Cagan model, we explore two learning mechanisms which appear to be more plausible from a behavioral point of view and somehow 'procedurally rational': Least Mean Squares learning for linear models and Back Propagation for Artificial Neural Networks. Both algorithms seek a minimum of the variance of the forecasting error by means of a steepest-descent gradient procedure. The analysis of the Cagan model shows an interesting result: non-convergence of learning to the Rational Expectations Equilibrium is not due to the restriction to linear learning devices; Back Propagation learning for Artificial Neural Networks may also fail to converge to the Rational Expectations Equilibrium of the model.
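The steepest-descent idea behind both mechanisms can be sketched with a minimal Least Mean Squares example. This is an illustrative toy setup, not the paper's Cagan model: an agent forecasts an outcome with a linear rule and updates its coefficient by a gradient step on the squared forecast error, which drives down the error variance over time. All parameter names and values below are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear data-generating process: y_t = w_true * x_t + noise.
# The agent forecasts y_hat_t = w * x_t and learns w by LMS, i.e. a
# steepest-descent step on the instantaneous squared forecast error.
w_true = 0.8          # illustrative true coefficient
w = 0.0               # agent's initial belief
eta = 0.05            # learning-rate (gain) parameter

sq_errors = []
for t in range(2000):
    x = rng.normal()
    y = w_true * x + 0.1 * rng.normal()  # observed outcome
    e = y - w * x                        # forecast error
    w += eta * e * x                     # LMS gradient update
    sq_errors.append(e ** 2)

print(w)  # in this toy setting, w drifts toward w_true
```

In this well-behaved linear case LMS converges; the paper's point is that in a self-referential setting such as the Cagan model, where the agent's forecast feeds back into the outcome being forecast, neither this linear device nor its Back Propagation generalization is guaranteed to reach the Rational Expectations Equilibrium.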