Random testing of C calling conventions
Proceedings of the sixth international symposium on Automated analysis-driven debugging
Building compilers that generate correct code is difficult. In this paper we present a compiler testing technique that closes the gap between actual compiler implementations and correct compilers. Using formal specifications of procedure calling conventions, we have built a target-sensitive test-suite generator that produces test cases for one specific aspect of compiler code generators: the procedure calling sequence generator. By exercising compilers with these target-specific test suites, our automated testing tool has exposed bugs in every compiler tested, including compilers that have been in heavy use for many years. The detected bugs cause more than 14,000 test cases to fail.