Communications of the ACM
On the complexity of inductive inference. Information and Control.
Probability and plurality for aggregations of learning machines. Information and Computation.
Probabilistic inductive inference. Journal of the ACM (JACM).
Trade-off among parameters affecting inductive inference. Information and Computation.
COLT '90 Proceedings of the third annual workshop on Computational learning theory.
Relations between probabilistic and team one-shot learners (extended abstract). COLT '91 Proceedings of the fourth annual workshop on Computational learning theory.
Breaking the probability ½ barrier in FIN-type learning. COLT '92 Proceedings of the fifth annual workshop on Computational learning theory.
Capabilities of probabilistic learners with bounded mind changes. COLT '93 Proceedings of the sixth annual conference on Computational learning theory.
The Power of Pluralism for Automatic Program Synthesis. Journal of the ACM (JACM).
Inductive Inference: Theory and Methods. ACM Computing Surveys (CSUR).
An Introduction to the General Theory of Algorithms.
Use of Reduction Arguments in Determining Popperian FIN-Type Learning Capabilities. ALT '93 Proceedings of the 4th International Workshop on Algorithmic Learning Theory.
Probabilistic and Pluralistic Learners with Mind Changes. MFCS '92 Proceedings of the 17th International Symposium on Mathematical Foundations of Computer Science.
Capabilities of Thoughtful Machines. Fundamenta Informaticae.
Taming teams with mind changes. Journal of Computer and System Sciences.
Learning Behaviors of Functions with Teams. Fundamenta Informaticae.
We consider the inductive inference model of Gold [15]. Suppose we are given a set of functions that are learnable with a certain number of mind changes and errors. Which properties of these functions remain learnable if we allow fewer mind changes or errors? To answer this question, this paper extends Gold's inductive inference model. A further motivation for the extension is to understand and characterize the properties that are learnable for a given set of functions. Our extension covers a wide range of properties of functions based on their input-output behavior; two specific properties are studied in this paper. The first, which we call modality, captures how the output of a function fluctuates. For example, consider a function that predicts the price of a stock. A brokerage company buys and sells stocks many times a day for its clients with the intent of maximizing their profit; if the company can predict the trend of the market "reasonably" accurately, it is bound to be very successful. The identification criterion for this property of a function f is called PREX: it predicts, for each x, whether f(x) is equal to, less than, or greater than f(x+1). In contrast to the constant tracking done by a brokerage company, an individual investor does not usually follow dynamic changes in stock values. Instead, the investor wants to move the investment to a less risky option when it rises above or falls below a certain threshold. We capture this notion with an identification criterion called TREX, which essentially predicts whether a function value is at, above, or below a threshold. Conceptually, modality prediction (i.e., PREX) and threshold prediction (i.e., TREX) are "easier" than EX learning. We show, however, that neither the number of errors nor the number of mind changes can be reduced when the learning criterion is eased from exact learning to learning modality or threshold. We also prove that PREX and TREX are genuinely different properties to predict; that is, the strategy of a brokerage company may not be a good strategy for an individual investor, and vice versa.
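As a concrete illustration of the two criteria, the following sketch (ours, not from the paper; the toy function price and the helper names modality_sequence and threshold_sequence are hypothetical) computes the prediction targets that a PREX learner and a TREX learner must eventually get right, per the informal definitions above.

    # Illustrative only: the prediction targets behind PREX and TREX.

    def modality_sequence(f, n):
        # PREX target: for each x < n, is f(x+1) greater than,
        # equal to, or less than f(x)?
        return ['>' if f(x + 1) > f(x) else
                '=' if f(x + 1) == f(x) else
                '<' for x in range(n)]

    def threshold_sequence(f, c, n):
        # TREX target: for each x <= n, is f(x) above, at,
        # or below the threshold c?
        return ['above' if f(x) > c else
                'at' if f(x) == c else
                'below' for x in range(n + 1)]

    price = lambda x: (x * x) % 7           # a toy "stock price" function
    print(modality_sequence(price, 6))      # the trend a broker tracks
    print(threshold_sequence(price, 3, 6))  # the alerts an investor wants

Intuitively, a PREX learner must converge to a correct predictor for the first sequence, and a TREX learner for the second.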