Learning and Inferences of the Bayesian Network with Maximum Likelihood Parameters
ADMA '08 Proceedings of the 4th international conference on Advanced Data Mining and Applications
Computing the posterior probability distribution of a set of query variables given observed evidence is a central task of inference in a Bayesian network. In real applications, it is also necessary to make inferences when the evidence values are not contained in the training data. In this paper, we augment Bayesian network inference with a learning function, extending classical "search"-based inference to "search+learning"-based inference. Based on the support vector machine, we use a class of hyperplanes to construct the hypothesis space, and then solve for an optimal hyperplane to find a maximum-likelihood hypothesis for values not contained in the training data. Further, we give a convergent Gibbs sampling algorithm for approximate probabilistic inference in the presence of the maximum-likelihood parameters. Preliminary experiments show the feasibility of the proposed methods.
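To illustrate the Gibbs sampling step the abstract refers to, the following is a minimal generic sketch, not the paper's algorithm: it estimates a posterior in the classic three-node Rain/Sprinkler/WetGrass network with made-up CPT values, resampling each unobserved variable from its conditional given the current state and the evidence.

```python
import random

# Illustrative CPTs (assumed values, not from the paper).
P_RAIN = 0.2                                       # P(Rain = 1)
P_SPRINKLER = {True: 0.01, False: 0.4}             # P(Sprinkler = 1 | Rain)
P_WET = {(True, True): 0.99, (True, False): 0.90,  # P(WetGrass = 1 | Sprinkler, Rain)
         (False, True): 0.90, (False, False): 0.0}

def p_r(r):
    return P_RAIN if r else 1.0 - P_RAIN

def p_s_given_r(s, r):
    p = P_SPRINKLER[r]
    return p if s else 1.0 - p

def gibbs_rain_given_wet(n_samples=20000, burn_in=2000, seed=0):
    """Estimate P(Rain=1 | WetGrass=1) by Gibbs sampling over the unobserved nodes."""
    rng = random.Random(seed)
    rain, sprinkler = True, True   # initial state consistent with the evidence W=1
    hits = 0
    for step in range(burn_in + n_samples):
        # Resample Rain from P(R | S, W=1) ∝ P(R) P(S|R) P(W=1|S,R)
        w = {r: p_r(r) * p_s_given_r(sprinkler, r) * P_WET[(sprinkler, r)]
             for r in (True, False)}
        rain = rng.random() < w[True] / (w[True] + w[False])
        # Resample Sprinkler from P(S | R, W=1) ∝ P(S|R) P(W=1|S,R)
        w = {s: p_s_given_r(s, rain) * P_WET[(s, rain)]
             for s in (True, False)}
        sprinkler = rng.random() < w[True] / (w[True] + w[False])
        if step >= burn_in and rain:
            hits += 1
    return hits / n_samples
```

With these CPTs the exact posterior is P(Rain=1 | WetGrass=1) ≈ 0.385, and the chain's empirical frequency converges toward it; the paper's contribution is to run such a sampler when some conditional parameters are replaced by SVM-learned maximum-likelihood estimates rather than table entries.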