The cascade-correlation learning architecture
Advances in neural information processing systems 2
C4.5: programs for machine learning
Two Apparent 'Counterexamples' to Marcus: A Closer Look
Minds and Machines
Computational theories of mind, and Fodor's analysis of neural network behaviour
Journal of Experimental & Theoretical Artificial Intelligence
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80] and generalizes to examples both outside and inside the range of the training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical, and that they use this near-identity relation to distinguish sentences consistent with a familiar grammar from those inconsistent with it. Recent simulations claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382], but they confounded syntactic types with speech sounds and did not apply standard statistical tests to the results.
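The near-identity mechanism described above can be illustrated with a minimal sketch. This is not the authors' cascade-correlation model; the word encodings, distance threshold, and ABB grammar below are hypothetical choices made only to show how near-identity of duplicate-word representations could separate grammar-consistent from grammar-inconsistent sentences.

```python
# Illustrative sketch (assumed encodings, not the published model):
# a sentence is judged consistent with a familiar ABB grammar when
# its 2nd and 3rd word encodings are nearly identical.
import math

# Hypothetical real-valued encodings of three syllables.
WORDS = {
    "ga": [0.9, 0.1, 0.2],
    "ti": [0.1, 0.8, 0.3],
    "na": [0.2, 0.3, 0.9],
}

def distance(u, v):
    """Euclidean distance between two encoding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def consistent_with_abb(sentence, tol=0.1):
    """A three-word sentence fits ABB if words 2 and 3 are near-identical."""
    codes = [WORDS[w] for w in sentence]
    return distance(codes[1], codes[2]) < tol

print(consistent_with_abb(["ga", "ti", "ti"]))  # ABB pattern -> True
print(consistent_with_abb(["ga", "ti", "ga"]))  # ABA pattern -> False
```

The point of the sketch is that no symbol-matching rule is needed: a graded distance over continuous encodings, plus a threshold, suffices to exploit the near-identity relation the abstract describes.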