We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block exactly one decision value (i.e., their right-hand sides contain all but one of the possible decisions). In contrast, a rule of the second kind, called a bounded nondeterministic rule, may have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules, and this information is then used by a decision-making procedure. The reported experimental results show that the algorithms based on inhibitory decision rules often outperform those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include experimental results showing that classification quality can be improved by combining classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support.
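The lazy, generation-free use of inhibitory rules described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: the function name, the tuple-based data layout, the specific "inhibition count" score, and the tie-breaking are all illustrative choices. The idea shown is only the core one from the abstract: for a new object, information about which inhibitory rules ("decision ≠ d") the table supports is extracted directly, without materializing the rules themselves.

```python
from collections import Counter

def lazy_inhibitory_classify(table, decisions, x):
    """Classify x as the decision value that the decision table
    'inhibits' least often.

    table     -- list of attribute-value tuples (training objects)
    decisions -- list of decision values, aligned with table
    x         -- attribute-value tuple for the new object

    For each training object u, the common pattern of x and u (the
    attributes on which they agree) supports the inhibitory rule
    "decision != d" for every decision value d that never co-occurs
    with that pattern anywhere in the table.
    """
    values = set(decisions)
    inhibition = Counter()
    for u in table:
        # attributes on which x agrees with u
        agree = [i for i, (a, b) in enumerate(zip(x, u)) if a == b]
        if not agree:
            continue
        # decision values that co-occur with this pattern in the table
        seen = {d for v, d in zip(table, decisions)
                if all(v[i] == x[i] for i in agree)}
        # the pattern inhibits every decision value it never produces
        for d in values - seen:
            inhibition[d] += 1
    # pick the least-inhibited decision; break ties by majority class
    return min(values, key=lambda d: (inhibition[d], -decisions.count(d)))
```

For example, with `table = [(1, 0), (1, 0), (0, 1), (0, 1)]` and `decisions = ['a', 'a', 'b', 'b']`, the new object `(1, 0)` is classified as `'a'`: its common pattern with the first two objects inhibits `'b'` twice, while `'a'` is never inhibited. The pass over the table is polynomial in its size, consistent with the complexity claim in the abstract, although the procedure here is deliberately simplified.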