Communications of the ACM - Special issue on parallelism
Synthesizing Statistical Knowledge from Incomplete Mixed-Mode Data. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Instance-Based Learning Algorithms. Machine Learning.
On changing continuous attributes into ordered discrete attributes. EWSL-91: Proceedings of the European Working Session on Learning.
Comparing connectionist and symbolic learning methods. Proceedings of a Workshop on Computational Learning Theory and Natural Learning Systems (vol. 1): Constraints and Prospects.
Similarity metric learning for a variable-kernel classifier. Neural Computation.
Stochastic Complexity in Statistical Inquiry Theory.
A study of instance-based algorithms for supervised learning tasks: mathematical, empirical, and psychological evaluations.
A study of distance-based machine learning algorithms.
Toward Global Optimization of Case-Based Reasoning Systems for Financial Forecasting. Applied Intelligence.
Improved heterogeneous distance functions. Journal of Artificial Intelligence Research.
Simultaneous optimization of artificial neural networks for financial forecasting. Applied Intelligence.
This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). The approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms and examine it empirically, using both real-world and artificial data, to characterise the benefits of discretisation in lazy learning algorithms. Specifically, we have shown that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance for irrelevant continuous attributes.

The proposed approach constrains the representation space of lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. The generally better results we obtained with this more restricted representation language indicate that employing a powerful representation language in a learning algorithm is not always the best choice, as it can lead to a loss of accuracy.
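The pipeline described above can be sketched as follows. This is a minimal illustration only: equal-width binning stands in for the entropy-based MDL discretisation of Fayyad and Irani, a plain 1-NN learner stands in for the paper's lazy learning algorithms, and the data, bin count, and function names are invented for the example.

```python
def equal_width_bins(values, n_bins=4):
    """Map each continuous value to a nominal bin index (equal-width binning)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant attribute
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def discretise(X, n_bins=4):
    """Discretise each continuous attribute (column) of X independently."""
    binned_cols = [equal_width_bins(col, n_bins) for col in zip(*X)]
    return [list(row) for row in zip(*binned_cols)]

def overlap_distance(a, b):
    """Distance over nominal attributes: count of mismatching attribute values."""
    return sum(ai != bi for ai, bi in zip(a, b))

def nn_classify(train_X, train_y, x):
    """Lazy 1-NN: label x by its nearest training instance under overlap distance."""
    _, label = min((overlap_distance(tx, x), y) for tx, y in zip(train_X, train_y))
    return label

# Toy usage: discretise training data and a query together, then classify.
X = [[0.1, 5.0], [0.2, 4.8], [0.9, 1.0], [0.8, 1.2]]
y = ["low", "low", "high", "high"]
Xd = discretise(X + [[0.15, 4.9]])          # last row is the query
print(nn_classify(Xd[:4], y, Xd[4]))        # prints "low"
```

Because every attribute becomes nominal before any distance is computed, the regions of attribute space that share a nearest neighbour are unions of axis-aligned hyper-rectangles, which is exactly the representation-space restriction the abstract refers to.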