Warp is a programmable systolic-array computer developed by Carnegie Mellon and produced by GE. A 10-cell Warp machine can perform 100 million floating-point operations per second (100 MFLOPS). A variety of applications have been mapped onto Warp. Experience has shown that the mapping itself is not a real problem; in fact, a near-optimal mapping is usually relatively easy to obtain, and its implementation on the machine can often be automated. This paper explains why this is the case by examining some computational models that are frequently used on Warp. Carnegie Mellon and Intel are jointly developing a VLSI version of Warp, called iWarp. It is expected that many applications can be mapped efficiently onto low-cost iWarp arrays to achieve an effective computation bandwidth of about one GFLOPS.
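To make the idea of mapping a computation onto a linear systolic array concrete, the following is an illustrative sketch (not Warp's actual programming model or toolchain) of a classic systolic computation: an FIR filter on a 1-D array of cells, where each cell holds one filter weight and data streams cell to cell once per beat. The function name `systolic_fir` and the cell-state representation are hypothetical, chosen only for this example.

```python
# Illustrative sketch of a 1-D systolic pipeline computing an FIR
# filter y[n] = sum_k w[k] * x[n-k]. Each "cell" holds one weight
# and the input sample most recently passed to it by its left
# neighbor; on each beat, samples shift one cell to the right and
# every cell contributes weight * held-sample to the result.

def systolic_fir(weights, xs):
    ncells = len(weights)
    held = [0.0] * ncells          # sample currently held in each cell
    out = []
    # Pad the input stream with zeros so the pipeline fully drains.
    for x in list(xs) + [0.0] * (ncells - 1):
        held = [x] + held[:-1]     # systolic shift: data moves one cell
        out.append(sum(w * h for w, h in zip(weights, held)))
    return out

# Convolving the stream [1, 1, 1] with weights [1, 2]:
print(systolic_fir([1.0, 2.0], [1.0, 1.0, 1.0]))  # → [1.0, 3.0, 3.0, 2.0]
```

Because every cell does the same work each beat and data only moves between neighbors, adding cells scales throughput near-linearly; this regularity is one reason near-optimal mappings of such computations onto an array like Warp are relatively easy to obtain.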