Neural networks use their hidden layers to transform input data into linearly separable clusters, with a linear or perceptron-type output layer making the final projection onto the line perpendicular to the discriminating hyperplane. For complex data with multimodal distributions this transformation is difficult to learn. Projection onto k ≥ 2 line segments is the simplest extension of linear separability and defines a much easier goal for the learning process. The difficulty of learning non-linear data distributions is shifted to the separation of line intervals, making the main part of the transformation much simpler. For classification of difficult Boolean problems, such as the parity problem, a linear projection combined with k-separability is sufficient.
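The parity claim can be illustrated with a minimal sketch (assuming NumPy; variable names are illustrative, not from the paper): projecting n-bit inputs with the all-ones weight vector maps each input to its bit count, and the n + 1 resulting clusters on the line alternate in class, so parity becomes (n + 1)-separable under a single linear projection.

```python
import numpy as np
from itertools import product

n = 4  # number of input bits

# All 2^n binary inputs; the class label is the parity (XOR) of the bits.
X = np.array(list(product([0, 1], repeat=n)))
y = X.sum(axis=1) % 2

# Linear projection with weight vector w = (1, ..., 1): p(x) = number of set bits.
p = X @ np.ones(n)

# Each projected value 0..n forms one pure cluster; classes alternate along the line,
# so k = n + 1 intervals separate the two classes.
for value in range(n + 1):
    labels = set(y[p == value].tolist())
    assert labels == {value % 2}
```

No non-linear hidden-layer transformation is needed here; the hard part of the problem is reduced to assigning alternating class labels to consecutive intervals on a single line.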