Diffusions for global optimization
SIAM Journal on Control and Optimization
A practical Bayesian framework for backpropagation networks
Neural Computation
On-Line Learning Fokker-Planck Machine
Neural Processing Letters
Numerical Recipes in C++: the art of scientific computing
Bayesian Learning for Neural Networks
Genetic Algorithms in Search, Optimization and Machine Learning
A Survey of Optimization by Building and Using Probabilistic Models
Computational Optimization and Applications
Recent approaches to global optimization problems through Particle Swarm Optimization
Natural Computing: an international journal
Introduction to Evolutionary Computing
Linearly Constrained Global Optimization and Stochastic Differential Equations
Journal of Global Optimization
Principles of Optimal Design
The Interplay of Optimization and Machine Learning Research
The Journal of Machine Learning Research
IWANN'07 Proceedings of the 9th International Work-Conference on Artificial Neural Networks
Stationary Fokker-Planck learning for the optimization of parameters in nonlinear models
MICAI'07 Proceedings of the 6th Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence
Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
IEEE Transactions on Pattern Analysis and Machine Intelligence
Bayesian inference based on stationary Fokker-Planck sampling
Neural Computation
The convergence properties of the stationary Fokker-Planck algorithm for estimating the asymptotic density of stochastic search processes are studied. Theoretical and empirical arguments are given to characterize the convergence of the estimate for both separable and nonseparable nonlinear optimization problems. Some implications of the convergence of stationary Fokker-Planck learning for the inference of parameters in artificial neural network models are outlined.
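The abstract concerns estimating the asymptotic (stationary) density of a stochastic search process. As a minimal, self-contained illustration of that idea (not a reproduction of the paper's stationary Fokker-Planck algorithm), the sketch below uses an assumed one-dimensional double-well objective and an overdamped Langevin search, dx = -V'(x) dt + sqrt(2T) dW, whose stationary Fokker-Planck solution is the Gibbs density p(x) ∝ exp(-V(x)/T); the empirical histogram of the search trajectory should approach that density as the run length grows. The potential, temperature T, and step sizes are illustrative choices, not values from the paper:

```python
import numpy as np

def langevin_search(grad_V, x0, T, dt=1e-3, n_steps=200_000, seed=0):
    """Overdamped Langevin search: dx = -V'(x) dt + sqrt(2 T) dW."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x += -grad_V(x) * dt + np.sqrt(2.0 * T * dt) * rng.standard_normal()
        samples[i] = x
    return samples

# Assumed toy objective: double-well potential V(x) = (x^2 - 1)^2,
# with global minima at x = -1 and x = +1.
V = lambda x: (x**2 - 1.0) ** 2
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)

T = 0.5
samples = langevin_search(grad_V, x0=0.0, T=T)

# Stationary Fokker-Planck solution for this dynamics: the Gibbs density
# p(x) proportional to exp(-V(x)/T), normalized numerically on a grid.
xs = np.linspace(-2.5, 2.5, 501)
dx = xs[1] - xs[0]
p = np.exp(-V(xs) / T)
p /= p.sum() * dx

# Empirical density of the search process after a burn-in period;
# it concentrates around the minima and approaches p for long runs.
n_burn = 50_000
hist, _ = np.histogram(samples[n_burn:], bins=50, range=(-2.5, 2.5), density=True)
```

Comparing `hist` against `p` on the same grid gives a direct check of how well the stationary density characterizes the long-time behavior of the search.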