Recursive distributed representations
Artificial Intelligence - On connectionist symbol processing
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Elements of information theory
The nature of statistical learning theory
The Random Subspace Method for Constructing Decision Forests
IEEE Transactions on Pattern Analysis and Machine Intelligence
On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems
Theoretical Computer Science
An introduction to Support Vector Machines and other kernel-based learning methods
Dictionary learning algorithms for sparse representation
Neural Computation
PAC Meditation on Boolean Formulas
Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation
Ensemble Methods in Machine Learning
MCS '00 Proceedings of the First International Workshop on Multiple Classifier Systems
BICA: a Boolean Independent Component Analysis Algorithm
HIS '05 Proceedings of the Fifth International Conference on Hybrid Intelligent Systems
Learning rule representations from data
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
We analyze the potential of an approach that represents general data records as Boolean vectors, in the spirit of ICA. We view these vectors as an intermediate step of a clustering procedure aimed at making decisions from data. Following a divide-and-conquer strategy, we first look for a suitable representation of the data and then assign the data to clusters. We assume a Boolean coding to be a proper representation of the input to the discrete function computing the assignments. We require three properties of this coding: it must preserve most of the information, so as to remain appropriate independently of the particular clustering task; it must be concise, in order to yield understandable assignment rules; and it must be sufficiently random, to prime statistical classification methods. In the paper we cast these properties in terms of entropic features and connectionist procedures, and validate them on a series of benchmarks.
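To make the three requirements on the Boolean coding concrete, the toy sketch below encodes real-valued records as Boolean vectors and measures the per-bit entropy of the result. This is not the BICA algorithm from the paper; it is a hypothetical baseline using random-hyperplane thresholding (all function names and parameters here are illustrative assumptions), meant only to show how "concise", "information-preserving" and "sufficiently random" can be checked on an actual code.

```python
import numpy as np

# Illustrative sketch, NOT the authors' BICA procedure: map each record to
# a short Boolean vector via random projections, then verify that the bits
# are close to maximally random (entropy near 1 bit per component).

rng = np.random.default_rng(0)

def boolean_code(X, n_bits):
    """Encode records as n_bits Boolean components via random hyperplanes."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_bits))    # random projection directions
    thresholds = np.median(X @ W, axis=0)   # median split balances each bit
    return (X @ W > thresholds).astype(int)

def bit_entropy(B):
    """Mean empirical entropy per bit (in bits); 1.0 = maximally random."""
    p = np.clip(B.mean(axis=0), 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())

X = rng.standard_normal((500, 10))          # 500 synthetic 10-d records
B = boolean_code(X, n_bits=16)              # concise 16-bit representation
print(B.shape, round(bit_entropy(B), 3))
```

Because each bit is thresholded at its median, the ones and zeros are balanced and the per-bit entropy comes out close to 1, i.e. the coding behaves like the "sufficiently random" input the abstract asks for; conciseness is controlled directly by `n_bits`.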