It has become clear that large embedded configurable memory arrays will be essential in future field-programmable gate arrays (FPGAs). Embedded arrays provide high-density, high-speed implementations of the storage parts of circuits. Unfortunately, they require the FPGA vendor to partition the device into memory and logic resources at manufacture time, which wastes chip area for customers who do not use all of the storage provided. This chip area need not be wasted, and can in fact be used very efficiently, if the arrays are configured as multi-output ROMs and used to implement logic. In this paper, we describe two versions of a new technology mapping algorithm that identifies parts of circuits that can be efficiently mapped to an embedded array and performs this mapping. The first version of the algorithm places no constraint on the depth of the final circuit; on a set of 29 sequential and combinational benchmarks, the tool is able to map, on average, 59.7 4-LUTs into a single 2-Kbit memory array while increasing the critical path by 7%. The second version places a constraint on the depth of the final circuit; it maps, on average, 56.7 4-LUTs into the same memory array while increasing the critical path by only 2.3%. This paper also considers the effect of the memory array architecture on the algorithm's ability to pack logic into memory. We show that the algorithm performs best when each array has between 512 and 2048 bits and a word width that can be configured as 1, 2, 4, or 8 bits.
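The logic capacity of an array used as a multi-output ROM follows directly from its geometry: a ROM with n address lines and w data lines holds one w-bit truth-table row per word, so it implements any w-output function of n inputs. The sketch below (not from the paper; the array size and widths are the ones studied in the abstract) tabulates the configurations of a 2-Kbit array for word widths of 1, 2, 4, or 8 bits.

```python
import math

ARRAY_BITS = 2048  # the 2-Kbit embedded array considered in the paper


def rom_logic_shape(array_bits, word_width):
    """Return (address_inputs, outputs) for an array used as a ROM.

    With array_bits / word_width words, the ROM has log2(words)
    address inputs and word_width data outputs, so it can realize
    any word_width-output Boolean function of that many inputs.
    """
    words = array_bits // word_width
    return int(math.log2(words)), word_width


for w in (1, 2, 4, 8):
    n, outs = rom_logic_shape(ARRAY_BITS, w)
    print(f"width {w}: any {outs}-output function of {n} inputs")
```

For example, at width 8 the 2-Kbit array holds 256 words and so implements any 8-input, 8-output function; at width 1 it implements a single function of 11 inputs. This trade-off between input count and output count is what the configurable word width exploits.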