Depth-Size Tradeoffs for Neural Computation
IEEE Transactions on Computers - Special issue on artificial neural networks
An artificial neural network (ANN) is commonly modeled by a threshold circuit: a network of interconnected processing units called linear threshold gates. It is shown that ANNs can be much more powerful than traditional logic circuits, assuming that each threshold gate can be built at a cost comparable to that of AND/OR logic gates. In particular, the main results show that powering and division can be computed by polynomial-size ANNs of depth 4, and that the multiple product can be computed by polynomial-size ANNs of depth 5. Moreover, using the techniques developed, a previous result is improved by showing that the sorting of n n-bit numbers can be carried out by a depth-3 polynomial-size ANN. Furthermore, this sorting network is shown to be optimal in depth.
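As an illustrative sketch (not from the paper itself), a linear threshold gate of the kind the abstract describes outputs 1 exactly when the weighted sum of its Boolean inputs meets a threshold. The function name and example below are hypothetical, chosen only to make the model concrete:

```python
def threshold_gate(inputs, weights, threshold):
    """Linear threshold gate: output 1 if sum(w_i * x_i) >= threshold, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Example: MAJORITY of 3 bits as a single threshold gate
# (all weights 1, threshold 2) -- a function that a single AND/OR gate cannot compute.
print(threshold_gate([1, 0, 1], [1, 1, 1], 2))  # -> 1
print(threshold_gate([1, 0, 0], [1, 1, 1], 2))  # -> 0
```

The MAJORITY example hints at why threshold circuits can be more powerful than AND/OR circuits of comparable cost: one gate here replaces a small subcircuit of conventional logic gates.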