On the Time Required to Perform Multiplication
Journal of the ACM (JACM)
Space and Time Hierarchies for Classes of Control Structures and Data Structures
Journal of the ACM (JACM)
Information transfer and area-time tradeoffs for VLSI multiplication
Communications of the ACM
Introduction to VLSI Systems
Some complexity questions related to distributive computing (Preliminary Report)
STOC '79 Proceedings of the eleventh annual ACM symposium on Theory of computing
The chip complexity of binary arithmetic
STOC '80 Proceedings of the twelfth annual ACM symposium on Theory of computing
A complexity theory for VLSI
The VLSI Complexity of Selected Graph Problems
Journal of the ACM (JACM)
Aspects of information flow in VLSI circuits
STOC '86 Proceedings of the eighteenth annual ACM symposium on Theory of computing
The power of randomness for communication complexity
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
Optimal VLSI circuits for sorting
Journal of the ACM (JACM)
Size-time complexity of Boolean networks for prefix computations
Journal of the ACM (JACM)
On the communication complexity of graph properties
STOC '88 Proceedings of the twentieth annual ACM symposium on Theory of computing
The communication complexity of several problems in matrix computation
SPAA '89 Proceedings of the first annual ACM symposium on Parallel algorithms and architectures
Multi-level logic synthesis using communication complexity
DAC '89 Proceedings of the 26th ACM/IEEE Design Automation Conference
AT²-Optimal Galois Field Multiplier for VLSI
IEEE Transactions on Computers
Extreme Area-Time Tradeoffs in VLSI
IEEE Transactions on Computers
The computational complexity of universal hashing
STOC '90 Proceedings of the twenty-second annual ACM symposium on Theory of computing
Upper and lower bounds on switching energy in VLSI
Journal of the ACM (JACM)
Randomized versus nondeterministic communication complexity
STOC '92 Proceedings of the twenty-fourth annual ACM symposium on Theory of computing
A coding theorem for distributed computation
STOC '94 Proceedings of the twenty-sixth annual ACM symposium on Theory of computing
Information Transfer in Distributed Computing with Applications to VLSI
Journal of the ACM (JACM)
Polynomial Time Testability of Circuits Generated by Input Decomposition
IEEE Transactions on Computers
Cellular automata: energy consumption and physical feasibility
Fundamenta Informaticae - Special issue on cellular automata
On Multipartition Communication Complexity
STACS '01 Proceedings of the 18th Annual Symposium on Theoretical Aspects of Computer Science
Two applications of information complexity
Proceedings of the thirty-fifth annual ACM symposium on Theory of computing
Las Vegas is better than determinism in VLSI and distributed computing (Extended Abstract)
STOC '82 Proceedings of the fourteenth annual ACM symposium on Theory of computing
Lower bounds on communication complexity
STOC '84 Proceedings of the sixteenth annual ACM symposium on Theory of computing
On notions of information transfer in VLSI circuits
STOC '83 Proceedings of the fifteenth annual ACM symposium on Theory of computing
Algorithmics – is there hope for a unified theory?
CSR'10 Proceedings of the 5th international conference on Computer Science: theory and Applications
Space-bounded communication complexity
Proceedings of the 4th conference on Innovations in Theoretical Computer Science
In this paper we explore the limitations imposed by entropic constraints, both in general and for specific problems. The main questions we address are the following:

(1) In the binary number system, addition is easy for VLSI while multiplication is hard. Is there an "ideal" number representation in which all arithmetic operations admit efficient VLSI implementations?

(2) Can one build multipliers for binary numbers that achieve both small area and fast average computation time?

(3) Thompson's technique applies only to multiple-output functions. How can one prove area-time bounds for single-output functions?

(4) What other ways are there to derive entropic constraints from considerations of data movement?

Answers to these questions are discussed in the ensuing sections.
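As background for question (1), one classical candidate for a representation in which both addition and multiplication are cheap is the residue number system (RNS): with pairwise coprime moduli, both operations act independently on each residue channel, with no carry propagation across channels. The sketch below (an illustrative example with hypothetical moduli, not a construction from this paper) shows this digit-parallel structure; note that operations such as comparison and overflow detection remain hard in RNS, which is part of why no representation is "ideal" for all operations.

```python
# Sketch: residue number system (RNS) arithmetic. With pairwise coprime
# moduli, addition and multiplication are componentwise, i.e. fully
# parallel across residue channels. Moduli here are illustrative only.
from math import prod

MODULI = (7, 11, 13)  # pairwise coprime; representable range is 7*11*13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each channel adds independently -- no inter-channel carries.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    # Multiplication is likewise channelwise.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Reconstruct the integer via the Chinese Remainder Theorem.
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M

a, b = to_rns(123), to_rns(45)
assert from_rns(rns_add(a, b)) == (123 + 45) % 1001   # 168
assert from_rns(rns_mul(a, b)) == (123 * 45) % 1001   # 530
```

The conversion back to binary (CRT reconstruction) is itself costly, which is where the entropic constraints discussed in the following sections reassert themselves.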