Predicting and explaining success and task duration in the Phoenix planner
Proceedings of the first international conference on Artificial intelligence planning systems
Learning decision lists using homogeneous rules
AAAI '94 Proceedings of the twelfth national conference on Artificial intelligence (vol. 1)
Efficient enumeration of frequent sequences
Proceedings of the seventh international conference on Information and knowledge management
Feature generation for sequence categorization
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
Sequence Mining in Categorical Domains: Algorithms and Applications
Sequence Learning - Paradigms, Algorithms, and Applications
Discovery of temporal patterns from process instances
Computers in Industry - Special issue: Process/workflow mining
Incremental personalized web page mining utilizing self-organizing HCMAC neural network
Web Intelligence and Agent Systems
Mining Minimal Distinguishing Subsequence Patterns with Gap Constraints
ICDM '05 Proceedings of the Fifth IEEE International Conference on Data Mining
Learning recurrent behaviors from heterogeneous multivariate time-series
Artificial Intelligence in Medicine
A Dichotomic Search Algorithm for Mining and Learning in Domain-Specific Logics
Fundamenta Informaticae - Advances in Mining Graphs, Trees and Sequences
Mining minimal distinguishing subsequence patterns with gap constraints
Knowledge and Information Systems
Mining sequential patterns for protein fold recognition
Journal of Biomedical Informatics
Data & Knowledge Engineering
Nearest-neighbor-based approach to time-series classification
Decision Support Systems
Classification algorithms are difficult to apply to sequential examples, such as text or DNA sequences, because a vast number of features are potentially useful for describing each example. Past work on feature selection has focused on searching the space of all subsets of the available features, which is intractable for large feature sets. The authors adapt data mining techniques to act as a preprocessor that selects features for standard classification algorithms such as Naive Bayes and Winnow. They apply their algorithm to a number of data sets and show experimentally that the features it produces improve classification accuracy by up to 20%.
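The pipeline the abstract describes — mine frequent subsequences first, then hand them to a standard classifier as binary features — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it mines only contiguous length-k subsequences (k-grams) by document frequency, and pairs them with a simple Bernoulli Naive Bayes; the function names and the support threshold are assumptions for the example.

```python
import math
from collections import Counter

def frequent_subsequences(seqs, k, min_support):
    """Mine contiguous length-k subsequences whose document frequency
    meets min_support -- a simplified stand-in for the sequence-mining
    preprocessor described in the abstract (hypothetical helper)."""
    df = Counter()
    for s in seqs:
        df.update({s[i:i + k] for i in range(len(s) - k + 1)})
    return sorted(g for g, c in df.items() if c >= min_support)

def featurize(seq, features):
    """Binary presence/absence vector over the mined features."""
    return [1 if f in seq else 0 for f in features]

def train_nb(X, y):
    """Bernoulli Naive Bayes with Laplace smoothing over binary features."""
    classes = sorted(set(y))
    prior = {c: math.log(y.count(c) / len(y)) for c in classes}
    cond = {}
    for c in classes:
        rows = [x for x, t in zip(X, y) if t == c]
        cond[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                   for j in range(len(X[0]))]
    return classes, prior, cond

def predict_nb(model, x):
    classes, prior, cond = model
    def score(c):
        return prior[c] + sum(math.log(p if v else 1 - p)
                              for v, p in zip(x, cond[c]))
    return max(classes, key=score)

# Toy DNA example: mine features, then classify unseen sequences.
seqs = ["ACGT", "ACGA", "TTGA", "TTGC"]
labels = ["pos", "pos", "neg", "neg"]
feats = frequent_subsequences(seqs, k=2, min_support=2)
X = [featurize(s, feats) for s in seqs]
model = train_nb(X, labels)
```

The point of the preprocessing step is that only subsequences frequent enough to be informative become features, so the classifier never has to search the exponentially large space of all possible subsequences.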