The ranking problem, which has attracted much attention in machine learning in recent years, is to learn a real-valued function that induces a ranking over an instance space. This article analyzes the convergence performance of a neural-network ranking algorithm in terms of the given samples and the approximation properties of neural networks. The upper bounds on the convergence rate provided by our results can be considerably tight and are independent of the dimension of the input space when the target function satisfies a certain smoothness condition. These results imply that neural networks are able to adapt to the ranking function on the instance space, and hence can circumvent the curse of dimensionality under such a smoothness condition.
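To make the setting concrete, the sketch below learns a real-valued scoring function with a one-hidden-layer sigmoidal network trained on preference pairs via a pairwise logistic loss. This is an illustrative toy implementation, not the paper's exact algorithm: the network width, learning rate, loss, and the smooth target r(x) = sin(x0) + x1 are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OneHiddenLayerRanker:
    """Scoring function f(x) = c . sigmoid(W x + b); the induced ranking
    is obtained by sorting instances by their scores."""
    def __init__(self, d, h=16, lr=0.5):
        self.W = rng.normal(size=(h, d))
        self.b = np.zeros(h)
        self.c = rng.normal(scale=0.1, size=h)
        self.lr = lr

    def score(self, X):
        return sigmoid(X @ self.W.T + self.b) @ self.c

    def fit_pairs(self, X_pos, X_neg, epochs=300):
        # Minimize the pairwise logistic loss log(1 + exp(-(f(x+) - f(x-))))
        # by full-batch gradient descent over all preference pairs.
        for _ in range(epochs):
            Hp = sigmoid(X_pos @ self.W.T + self.b)
            Hn = sigmoid(X_neg @ self.W.T + self.b)
            margin = Hp @ self.c - Hn @ self.c
            g = -sigmoid(-margin)                      # dLoss/dmargin per pair
            self.c -= self.lr * (g[:, None] * (Hp - Hn)).mean(0)
            Gp = g[:, None] * self.c * Hp * (1 - Hp)   # backprop, preferred item
            Gn = -g[:, None] * self.c * Hn * (1 - Hn)  # backprop, other item
            self.W -= self.lr * (Gp.T @ X_pos + Gn.T @ X_neg) / len(g)
            self.b -= self.lr * (Gp + Gn).mean(0)

# Toy data: the target ranking is induced by the smooth function r(x) = sin(x0) + x1.
X = rng.uniform(-1, 1, size=(200, 2))
r = np.sin(X[:, 0]) + X[:, 1]
i, j = np.triu_indices(len(X), k=1)
hi = np.where(r[i] > r[j], i, j)   # preferred instance of each pair
lo = np.where(r[i] > r[j], j, i)

net = OneHiddenLayerRanker(d=2)
net.fit_pairs(X[hi], X[lo])
s = net.score(X)
acc = np.mean(s[hi] > s[lo])       # fraction of pairs ranked consistently
```

After training, `acc` should be close to 1 on this smooth two-dimensional target, illustrating how the learned scores reproduce the ordering induced by the target function.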