We present a method for finding nominally conditioned polynomials that fit multivariate data containing both numeric and nominal variables. Here each polynomial is accompanied by a nominal condition stating when the polynomial applies. Our method employs a four-layer multilayer perceptron (MLP) with shared weights. To obtain succinct polynomials, we employ a weight-sharing method called BCW, in which each weight is restricted to one of a set of common weights, and a near-zero common weight can be eliminated. BCW performs a bidirectional search to obtain an excellent set of common weights. Moreover, we employ the Bayesian Information Criterion (BIC) to efficiently select the optimal model. In our experiments the proposed method successfully restored the original polynomials for artificial data and found succinct polynomials for real data sets, showing excellent generalization.
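The abstract names two ingredients, nominally conditioned polynomials and BIC-based model selection, without detailing the MLP/BCW procedure itself. The Python sketch below is only an illustrative assumption of those two ideas: it fits one least-squares polynomial per value of a nominal variable and uses BIC to pick each polynomial's degree. It is not the authors' four-layer perceptron with BCW weight sharing, and the function names (bic, fit_poly, nominally_conditioned_fit) are hypothetical.

import numpy as np

def bic(rss, n, k):
    # BIC for regression with Gaussian noise: n*ln(RSS/n) + k*ln(n)
    return n * np.log(rss / n) + k * np.log(n)

def fit_poly(x, y, degree):
    # Least-squares polynomial fit; returns coefficients and residual sum of squares
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return coeffs, float(np.sum(resid ** 2))

def nominally_conditioned_fit(x, y, groups, max_degree=4):
    # For each value of the nominal variable (the "condition"), choose the
    # polynomial degree with the lowest BIC and return one polynomial per condition.
    models = {}
    for g in np.unique(groups):
        mask = groups == g
        best = None
        for d in range(1, max_degree + 1):
            coeffs, rss = fit_poly(x[mask], y[mask], d)
            score = bic(rss, int(mask.sum()), d + 1)  # d+1 free coefficients
            if best is None or score < best[0]:
                best = (score, coeffs)
        models[g] = best[1]
    return models

# Toy usage: y = 2x^2 under condition "A", y = -x under condition "B", plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
groups = rng.choice(["A", "B"], size=200)
y = np.where(groups == "A", 2 * x ** 2, -x) + 0.05 * rng.normal(size=200)
print(nominally_conditioned_fit(x, y, groups))

In this toy setting BIC should favor degree 2 for condition "A" and degree 1 for condition "B", mirroring how the paper uses BIC to keep the selected polynomials succinct.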