In this paper, a new filter-based feature selection algorithm using the Lempel-Ziv Complexity (LZC) measure, called 'Lempel Feature Selection' (LFS), is proposed. LZC counts the number of unique patterns in a time series. For each feature, a time series is produced from its values and the LZC of that series is computed. The features are then ranked by their LZC values: features with higher LZCs are selected and those with lower LZCs are discarded. LFS requires very little computation, since it computes the LZC of each feature only once. LFS is tested on several real-world benchmark problems, including soybean, diabetes, ionosphere, card, thyroid, cancer, wine, and heart disease. The selected features are applied to a neural network (NN) learning model, and the NN produces better results with the selected features than with randomly selected features.
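The ranking procedure described above can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it uses a simplified LZ78-style dictionary parse to count unique patterns, binarizes each feature around its median to form the symbol sequence, and the function names (`lempel_ziv_complexity`, `binarize`, `select_features`) and the median-threshold binarization are assumptions.

```python
def lempel_ziv_complexity(sequence):
    """Count the distinct phrases in an LZ78-style parse of a symbol sequence.

    A new phrase is recorded whenever the current run of symbols has not
    been seen before; a trailing repeated fragment is not counted.
    """
    phrases = set()
    phrase = ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""  # start a new phrase
    return len(phrases)

def binarize(values):
    """Turn a numeric feature column into a binary string via a median split."""
    m = sorted(values)[len(values) // 2]
    return "".join("1" if v >= m else "0" for v in values)

def select_features(columns, k):
    """Rank feature columns by LZC and return the indices of the top k."""
    scores = [(lempel_ziv_complexity(binarize(col)), i)
              for i, col in enumerate(columns)]
    scores.sort(reverse=True)  # higher LZC first
    return [i for _, i in scores[:k]]
```

For example, a near-constant feature column yields a short symbol dictionary and is ranked below a column whose values vary irregularly, which matches the paper's rule of keeping high-LZC features.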