Item ranking and selection play a key role in constructing concise and informative educational tests. Traditional techniques based on item response theory (IRT) have been used to automate this task, but they require model parameters to be determined a priori for each item, and their application becomes more tedious with larger item banks. Machine-learning techniques can instead be used to build data-based models that relate the test result, as output, to the examinees' responses to the various test items, as inputs. With this approach, test item selection can benefit from the vast literature on feature selection in the many areas of machine learning and artificial intelligence characterized by high data dimensionality. This paper describes a novel technique for item ranking and selection using abductive network pass/fail classifiers based on the group method of data handling (GMDH). Experiments were carried out on a dataset consisting of the responses of 2000 examinees to 45 test items, together with each examinee's true ability level. The approach exploits the ability of GMDH-based learning algorithms to automatically select optimum input features from a set of available inputs. Rankings obtained by iteratively applying this procedure are similar to those based on the average item information function (IIF) at the pass/fail ability threshold, IIF(θ = 0), and on the average information gain (IG). An optimum item subset derived from the GMDH-based ranking contains only one third of the test items and performs pass/fail classification with 91.2% accuracy on a 500-case evaluation subset, compared to 86.8% for a randomly selected item subset of the same size and 92% for a subset of the 15 items having the largest values of IIF(θ = 0).
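The IIF(θ = 0) baseline used above comes from standard IRT: under the common two-parameter logistic (2PL) model, the information an item with discrimination a and difficulty b provides at ability θ is I(θ) = a²·P(θ)·(1 − P(θ)). A minimal sketch of ranking items by information at the pass/fail threshold θ = 0 (the item parameters below are hypothetical, not from the paper's 45-item bank):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item parameters: (discrimination a, difficulty b)
items = [(1.2, 0.0), (0.8, -1.0), (1.5, 0.5)]

# Rank item indices by information at the pass/fail threshold theta = 0
ranking = sorted(range(len(items)),
                 key=lambda i: item_information(0.0, *items[i]),
                 reverse=True)
print(ranking)  # -> [2, 0, 1]
```

Items with high discrimination and difficulty near the threshold score highest, which is why an IIF(θ = 0) subset is a natural benchmark for a pass/fail classifier.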
Item rankings obtained with the proposed approach compare favorably with those obtained using neural-network modeling and popular filter-type feature selection methods, and the approach is much faster than wrapper methods that employ genetic search.
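The iterative ranking procedure described above can be sketched generically: train a learner that performs its own feature selection, record the items it uses (these rank highest), remove them from the candidate pool, and repeat. The `select_subset` callback and the toy scores below are stand-ins for a GMDH training pass, not the paper's implementation:

```python
def iterative_ranking(all_items, select_subset):
    """Rank items by repeatedly applying a feature-selecting learner.

    Items chosen in earlier iterations rank higher; each round, the
    selected items are removed and the learner is "retrained" on the
    remaining candidates.  `select_subset` stands in for one training
    pass and must return the subset of candidates the model uses.
    """
    ranking, remaining = [], list(all_items)
    while remaining:
        chosen = select_subset(remaining)
        if not chosen:                 # learner used nothing: append the rest
            ranking.extend(remaining)
            break
        ranking.extend(chosen)
        remaining = [i for i in remaining if i not in chosen]
    return ranking

# Toy stand-in: pretend the learner always keeps the two best-scoring items
scores = {"q1": 0.9, "q2": 0.4, "q3": 0.7, "q4": 0.1}
pick_two = lambda cands: sorted(cands, key=scores.get, reverse=True)[:2]
print(iterative_ranking(list(scores), pick_two))  # -> ['q1', 'q3', 'q2', 'q4']
```

An optimum subset is then obtained by evaluating classifier accuracy on progressively longer prefixes of the ranking and keeping the shortest prefix whose accuracy is acceptable.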