Recursive Neural Networks are non-linear adaptive models that are able to learn deeply structured information. However, these models have not yet been broadly adopted, mainly because of their inherent complexity: not only are they intricate information-processing models, but their learning phase is also computationally expensive. The most popular training method for these models is back-propagation through structure. This algorithm has been shown to be ill-suited to structured processing owing to convergence problems, while more sophisticated training methods improve the speed of convergence at the expense of a significantly higher computational cost. In this paper, we first analyse the principles underlying these models in order to understand their computational power. Second, we propose an approximate second-order stochastic learning algorithm. The proposed algorithm dynamically adapts the learning rate throughout the training phase of the network without incurring excessive computational effort, and it operates in both on-line and batch modes. Furthermore, the resulting learning scheme is robust against the vanishing-gradient problem. The advantages of the proposed algorithm are demonstrated with a real-world application example.
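The abstract does not spell out the update rule, so the following is only a minimal sketch of the general idea behind approximate second-order stochastic learning with a dynamically adapted learning rate: a running mean of squared gradients serves as a cheap diagonal curvature proxy, so each parameter's effective rate shrinks along steep directions and grows along directions where gradients are nearly vanishing. All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def adaptive_sgd(grad_fn, w, eta0=0.1, beta=0.9, eps=1e-8, steps=200):
    """Stochastic gradient descent with a cheap diagonal second-order
    proxy: each parameter's step is divided by the root-mean-square of
    its recent gradients, so directions with persistently large
    gradients receive small rates while near-vanishing gradients are
    rescaled upward instead of stalling the update."""
    ms = np.zeros_like(w)                      # running mean of squared gradients
    for t in range(steps):
        g = grad_fn(w)
        ms = beta * ms + (1.0 - beta) * g**2   # per-parameter curvature proxy
        eta = eta0 / np.sqrt(t + 1.0)          # annealed global rate
        w = w - eta * g / np.sqrt(ms + eps)
    return w

# Toy quadratic with wildly different curvature per coordinate; a single
# global learning rate would either diverge along w[0] or crawl along w[1].
loss = lambda w: 0.5 * (100.0 * w[0]**2 + 0.01 * w[1]**2)
grad = lambda w: np.array([100.0 * w[0], 0.01 * w[1]])

w = adaptive_sgd(grad, np.array([1.0, 1.0]))
```

The same per-parameter rescaling is what makes such schemes resilient to vanishing gradients in deep or recursive architectures: a gradient component that has shrunk through many composition steps is divided by its own small running magnitude, restoring a usable step size.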