In this paper we explore the expressive power of recurrent networks with local feedback connections on symbolic data streams. Our analysis relies on the maximal set of strings that can be shattered by the concept class associated with these networks (i.e., strings that can be labeled positive or negative in every possible way). We find that the expressive power of these networks is inherently limited: there are sets of strings that cannot be shattered, regardless of the number of hidden units. Although the analysis holds for networks with hard-threshold units, we argue that the additional computational capability gained by using sigmoidal units comes at a severe cost in the robustness of the corresponding representation.
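The shattering notion used in the abstract can be made concrete with a small experiment. The sketch below is illustrative only and is not the paper's construction: it assumes a single hard-threshold unit with local (self-) feedback, sweeps its three parameters (`w`, `u`, `b`, all hypothetical names) over a coarse grid, and counts how many of the 2^n possible labelings of a small set of binary strings the unit can realize. A string set is shattered only if every labeling is achievable.

```python
import itertools

def classify(string, w, u, b):
    # Single hard-threshold unit with local feedback:
    # h_t = step(w * x_t + u * h_{t-1} + b); the string's label is the final h.
    h = 0.0
    for ch in string:
        x = float(ch)
        h = 1.0 if w * x + u * h + b > 0 else 0.0
    return int(h)

def achievable_labelings(strings, grid):
    # Collect every labeling of `strings` realizable over the parameter grid.
    labelings = set()
    for w, u, b in itertools.product(grid, repeat=3):
        labelings.add(tuple(classify(s, w, u, b) for s in strings))
    return labelings

strings = ["0", "1", "01", "10"]
grid = [v / 2 for v in range(-8, 9)]  # coarse grid over [-4, 4]
labs = achievable_labelings(strings, grid)
print(len(labs), "of", 2 ** len(strings), "labelings achievable")
```

If the count falls short of 2^n, this particular architecture cannot shatter the chosen strings; the paper's claim is the stronger statement that some string sets remain unshatterable no matter how many hidden units are added.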