This paper describes an FPGA-based hardware acceleration system for the LambdaRank algorithm, a neural network (NN)-based learning-to-rank algorithm used intensively by web search engine companies to improve search relevance. Because i) the cost function for the ranking problem is far more complex than that of traditional back-propagation (BP) NNs, and ii) no coarse-grained parallelism exists, LambdaRank is hard to accelerate efficiently on GPUs or computer clusters. We present an FPGA-based accelerator that provides high computing performance. A compact deep pipeline handles the complex computation in the batch update, and its area scales linearly with the number of hidden nodes in the NN model. We also carefully design a data format that enables streaming consumption of the training data from the host computer. The accelerator achieves up to a 24.6× speedup over a pure software implementation on datasets from a commercial search engine.
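The complex cost function mentioned above comes from how LambdaRank forms its gradients: within each query, documents are paired, and the pairwise RankNet-style gradient is scaled by the NDCG change obtained by swapping the pair in the current ranking. A minimal software sketch of that lambda computation is below; the function names and the exact gain/discount formulas are illustrative assumptions, not taken from the paper or its hardware design.

```python
import math

def dcg_gain(rel, rank):
    # Common NDCG gain/discount choice (assumed): (2^rel - 1) / log2(rank + 1), 1-based rank
    return (2 ** rel - 1) / math.log2(rank + 1)

def lambda_updates(scores, labels):
    """Compute LambdaRank-style gradients ("lambdas") for one query.

    For each pair (i, j) with labels[i] > labels[j], the pairwise
    cross-entropy gradient is scaled by |delta NDCG|, the change in NDCG
    if documents i and j swapped positions in the current ranking.
    """
    n = len(scores)
    # Rank positions under the current model scores (1-based)
    order = sorted(range(n), key=lambda k: -scores[k])
    rank = {doc: pos + 1 for pos, doc in enumerate(order)}
    # Ideal DCG, used to normalise DCG deltas into NDCG deltas
    ideal = sum(dcg_gain(r, p + 1)
                for p, r in enumerate(sorted(labels, reverse=True)))
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue  # only pairs where i should rank above j
            # |NDCG change| if documents i and j traded rank positions
            delta = abs(dcg_gain(labels[i], rank[i]) + dcg_gain(labels[j], rank[j])
                        - dcg_gain(labels[i], rank[j]) - dcg_gain(labels[j], rank[i]))
            delta /= max(ideal, 1e-12)
            # RankNet-style pairwise gradient, scaled by |delta NDCG|
            rho = 1.0 / (1.0 + math.exp(scores[i] - scores[j]))
            lambdas[i] += delta * rho  # push the more relevant document up
            lambdas[j] -= delta * rho  # and the less relevant one down
    return lambdas
```

The per-query pairing is what blocks coarse-grained parallelism: every lambda depends on the full ranking of its query, so the batch update cannot be split into independent per-document computations, which is why a deep pipeline is a better fit than a GPU-style data-parallel mapping.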