This paper studies Fisher linear discriminants (FLDs) through their classification accuracy on imbalanced datasets. An optimal threshold is derived from a series of empirical formulas that depend not only on the sample sizes but also on the regions over which the classes are distributed. A mixed binary-decimal coding scheme is proposed to make very dense datasets sparse and to enlarge the class margins, provided that the neighborhood relationships among the samples are approximately preserved. Within-class scatter matrices that are singular or nearly singular should be handled by a moderate reduction in dimensionality rather than by adding tiny perturbations. The weight vectors can be further refined by an epoch-limited (at most three epochs) iterative learning strategy, provided the training error rate decreases accordingly. Combining these ideas, this paper proposes a family of integrated FLDs. Extensive experiments on real-world datasets demonstrate that the integrated FLDs have clear advantages over conventional FLDs in both learning and generalization performance on imbalanced data.
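The two core ingredients of the abstract — an FLD direction that remains defined when the within-class scatter matrix is (near-)singular, and a decision threshold tuned for class imbalance — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's method: the pseudo-inverse stands in for the proposed dimensionality reduction, and a balanced-accuracy scan over candidate thresholds stands in for the paper's empirical threshold formulas. All function names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def fld_fit(X0, X1):
    """Fit a two-class Fisher linear discriminant direction.

    Uses the pseudo-inverse of the within-class scatter matrix so the
    direction is still defined when S_w is singular or nearly singular
    (a stand-in for the paper's moderate dimensionality reduction).
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - mu0).T @ (X0 - mu0)
    S1 = (X1 - mu1).T @ (X1 - mu1)
    Sw = S0 + S1
    return np.linalg.pinv(Sw) @ (mu1 - mu0)

def best_threshold(w, X0, X1):
    """Pick the projection threshold maximizing balanced accuracy.

    Instead of the usual midpoint of the projected class means, scan
    the projected training scores so that minority-class accuracy is
    not swamped by the majority class (illustrative stand-in for the
    paper's empirical threshold formulas).
    """
    s0, s1 = X0 @ w, X1 @ w
    cands = np.sort(np.concatenate([s0, s1]))
    best_t, best_bacc = cands[0], -1.0
    for t in cands:
        bacc = 0.5 * ((s0 <= t).mean() + (s1 > t).mean())
        if bacc > best_bacc:
            best_t, best_bacc = t, bacc
    return best_t

# Synthetic imbalanced data: 200 majority vs. 20 minority samples.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 5))   # majority class
X1 = rng.normal(1.5, 1.0, size=(20, 5))    # minority class
w = fld_fit(X0, X1)
t = best_threshold(w, X0, X1)
print((X1 @ w > t).mean())  # minority-class training accuracy
```

With the midpoint threshold, the minority class is typically pushed toward the majority side of the boundary; scanning for a balanced-accuracy optimum is one simple way to see why a sample-size-aware threshold matters on imbalanced data.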