Communications of the ACM
Fractals everywhere
A general lower bound on the number of examples needed for learning
Information and Computation
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
What size net gives valid generalization?
Neural Computation
Identification of unions of languages drawn from an identifiable class
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Monotonic and non-monotonic inductive inference
New Generation Computing - Selected papers from the international workshop on algorithmic learning theory, 1990
The correct definition of finite elasticity: corrigendum to identification of unions
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
On the role of procrastination in machine learning
Information and Computation
An introduction to computational learning theory
Towards a mathematical theory of machine discovery from facts
Theoretical Computer Science - Special issue on algorithmic learning theory
Characterizations of monotonic and dual monotonic language learning
Information and Computation
On learning from noisy and incomplete examples
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Elementary formal systems, intrinsic complexity, and procrastination
Information and Computation
Machine Learning - Special issue on the ninth annual conference on computational theory (COLT '96)
Learning with restricted focus of attention
Journal of Computer and System Sciences
On the inductive inference of recursive real-valued functions
Theoretical Computer Science - Special issue on computability and complexity in analysis
Computability on subsets of Euclidean space I: closed and compact subsets
Theoretical Computer Science - Special issue on computability and complexity in analysis
Learning to Reason with a Restricted View
Machine Learning
Inference of Reversible Languages
Journal of the ACM (JACM)
Computable analysis: an introduction
A comparison of identification criteria for inductive inference of recursive real-valued functions
Theoretical Computer Science - Algorithmic learning theory
Algorithmic Program Debugging
Discretization: An Enabling Technique
Data Mining and Knowledge Discovery
Theoretical Computer Science
SODA '03 Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms
Quantization of Continuous Input Variables for Binary Classification
IDEAL '00 Proceedings of the Second International Conference on Intelligent Data Engineering and Automated Learning, Data Mining, Financial Engineering, and Intelligent Agents
The iRRAM: Exact Arithmetic in C++
CCA '00 Selected Papers from the 4th International Workshop on Computability and Complexity in Analysis
Monotonicity versus Efficiency for Learning Languages from Texts
AII '94 Proceedings of the 4th International Workshop on Analogical and Inductive Inference: Algorithmic Learning Theory
Inferability of Recursive Real-Valued Functions
ALT '97 Proceedings of the 8th International Conference on Algorithmic Learning Theory
Inference of omega-Languages from Prefixes
ALT '01 Proceedings of the 12th International Conference on Algorithmic Learning Theory
Learning Recursive Functions Refutably
ALT '01 Proceedings of the 12th International Conference on Algorithmic Learning Theory
A Thesis in Inductive Inference
Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic
Monotonic Versus Nonmonotonic Language Learning
Proceedings of the Second International Workshop on Nonmonotonic and Inductive Logic
Learning from examples with unspecified attribute values
Information and Computation
Refutable language learning with a neighbor system
Theoretical Computer Science - Selected papers in honour of Setsuo Arikawa
A symbolic representation of time series, with implications for streaming algorithms
DMKD '03 Proceedings of the 8th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery
Computability on subsets of metric spaces
Theoretical Computer Science - Topology in computer science
Necessary and Sufficient Pre-processing in Numerical Range Discretization
Knowledge and Information Systems
Discretization from data streams: applications to histograms and data mining
Proceedings of the 2006 ACM symposium on Applied computing
Pattern Recognition and Machine Learning (Information Science and Statistics)
Learning indexed families of recursive languages from positive data: A survey
Theoretical Computer Science
Learning recursive functions: A survey
Theoretical Computer Science
Partial observability and learnability
Artificial Intelligence
Learning figures with the Hausdorff metric by fractals
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Hypothesis spaces for learning
Information and Computation
Uncountable automatic classes and learning
Theoretical Computer Science
Prediction of recursive real-valued functions from finite examples
JSAI'05 Proceedings of the 2005 international conference on New Frontiers in Artificial Intelligence
We present learning of figures, nonempty compact sets in Euclidean space, based on Gold's learning model, aiming at a computable foundation for binary classification of multivariate data. Encoding real vectors with no numerical error requires infinite sequences, resulting in a gap between each real vector and the discretized representation used in the actual machine learning process. Our motivation is to provide an analysis of machine learning problems that explicitly tackles this aspect, which has been glossed over in the literature on binary classification as well as in other machine learning tasks such as regression and clustering. In this paper, we amalgamate two processes: discretization and binary classification. Each learning target, the set of real vectors classified as positive, is treated as a figure. A learning machine receives discretized vectors as input data and outputs a sequence of discrete representations of the target figure in the form of self-similar sets, known as fractals. The generalization error of each output is measured by the Hausdorff metric. Within this framework, we reveal a hierarchy of learnable classes under various learning criteria, following the traditional analysis based on Gold's learning model, and we establish a mathematical connection between machine learning and fractal geometry by measuring the complexity of learning with the Hausdorff dimension and the VC dimension. Moreover, we analyze the computability aspects of learning figures using the framework of Type-2 Theory of Effectivity (TTE).
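To make two ingredients of the abstract concrete, the sketch below approximates a self-similar set (the attractor of an iterated function system, IFS) at two levels of refinement and measures their discrete Hausdorff distance. This is a minimal illustrative sketch under assumptions of our own: the Sierpinski-style IFS, the Chebyshev ground metric, the function names, and the depths are all illustrative choices, not the paper's actual construction.

```python
# Illustrative sketch only: a toy IFS and a discrete Hausdorff distance.
# The IFS, the metric, and the depths are assumptions, not the paper's method.

def hausdorff(A, B):
    """Hausdorff distance between finite point sets (Chebyshev ground metric)."""
    def d(p, q):
        return max(abs(a - b) for a, b in zip(p, q))
    def directed(X, Y):
        # Farthest any point of X is from its nearest neighbor in Y.
        return max(min(d(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

def ifs_points(maps, depth):
    """Approximate an IFS attractor by iterating every map from the origin."""
    pts = {(0.0, 0.0)}
    for _ in range(depth):
        pts = {f(p) for f in maps for p in pts}
    return pts

# Three contractions with ratio 1/2 (a Sierpinski-triangle-style IFS).
sierpinski = [
    lambda p: (p[0] / 2, p[1] / 2),
    lambda p: (p[0] / 2 + 0.5, p[1] / 2),
    lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.5),
]

target = ifs_points(sierpinski, 5)      # finer approximation: the "figure"
hypothesis = ifs_points(sierpinski, 3)  # coarser approximation: a hypothesis
err = hausdorff(hypothesis, target)     # generalization error of the hypothesis
```

Since the origin is a fixed point of the first map, each coarser approximation is contained in the finer one, so `err` measures only how much of the target the hypothesis misses, and it shrinks at the contraction rate as the depth grows, mirroring how finer discrete representations converge to the figure in the Hausdorff metric.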