Several algorithms exist for testing linear separability. The choice of testing algorithm affects the performance of constructive neural network algorithms that transform a non-linearly separable classification problem into a linearly separable one. This paper presents an empirical study of these effects in terms of the topology size, convergence time, and generalisation level of the resulting neural networks. Six methods for testing linear separability were compared: four exact methods and two approximate ones. Nine machine learning benchmarks were used in the study.
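To make the notion of a linear separability test concrete, the following is a minimal sketch of one classical approach: the perceptron learning rule. It is not necessarily one of the six methods studied in the paper; the `max_epochs` cutoff is an illustrative parameter introduced here. By the perceptron convergence theorem, the loop terminates if the data are linearly separable; if the budget is exhausted first, the result `False` is only a strong indication of non-separability, which is why perceptron-style tests are approximate unless an explicit mistake bound is enforced.

```python
# Hedged sketch: perceptron-based test for linear separability.
# max_epochs is a hypothetical cutoff, not taken from the paper.

def perceptron_separable(points, labels, max_epochs=1000):
    """points: sequence of d-tuples; labels: +1 or -1 per point.
    Returns (separable_flag, weight_vector_with_bias)."""
    d = len(points[0])
    w = [0.0] * (d + 1)               # last component acts as the bias
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(points, labels):
            xa = list(x) + [1.0]      # augment input with constant 1
            s = sum(wi * xi for wi, xi in zip(w, xa))
            if y * s <= 0:            # misclassified or on the boundary
                w = [wi + y * xi for wi, xi in zip(w, xa)]
                errors += 1
        if errors == 0:               # a full error-free pass: separating
            return True, w            # hyperplane found, data separable
    return False, w                   # budget exhausted: likely inseparable

# AND is linearly separable; XOR is the classic non-separable case.
AND = ([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, -1, -1, 1])
XOR = ([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, 1, 1, -1])
```

Exact tests (for example, those based on linear programming) instead certify non-separability definitively rather than relying on an iteration budget.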