Fault tolerant network on chip switching with graceful performance degradation
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems - Special issue on the 2009 ACM/IEEE international symposium on networks-on-chip
Digital preservation: communicating across cyberspace and time
Proceedings of the 1st International Digital Preservation Interoperability Framework Symposium
Digital preservation: communicating across cyberspace and time
Proceedings of the 2010 Roadmap for Digital Preservation Interoperability Framework Workshop
Extending Amdahl's law and Gustafson's law by evaluating interconnections on multi-core processors
The Journal of Supercomputing
Progress in computer technology over the last four decades has been spectacular, driven by Moore's law which, though initially an observation, has become a self-fulfilling prophecy and a boardroom planning tool. Although Gordon Moore expressed his vision of progress simply in terms of the number of transistors that could be manufactured economically on an integrated circuit, the means of achieving this progress was based principally on shrinking transistor dimensions, and with that came collateral gains in performance, power-efficiency and, last but not least, cost. The semiconductor industry appears to be confident in its ability to continue to shrink transistors, at least for another decade or so, but the game is already changing. We can no longer assume that smaller circuits will go faster, or be more power-efficient. As we approach atomic limits, device variability is beginning to hurt, and design costs are going through the roof. These factors are impacting the economics of design in ways that will affect the entire computing and communications industries. For example, on the desktop there is a trend away from high-speed uniprocessors towards multi-core processors, despite the fact that general-purpose parallel programming remains one of the greatest unsolved problems of computer science. If computers are to benefit from future advances in technology then there are major challenges ahead, involving understanding how to build reliable systems on increasingly unreliable technology and how to exploit parallelism increasingly effectively, not only to improve performance, but also to mask the consequences of component failure. Biological systems demonstrate many of the properties we aspire to incorporate into our engineered technology, so perhaps biology suggests a source of ideas that we could seek to incorporate into future novel computing systems?
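The tension between the multi-core trend and the difficulty of parallel programming is captured by the two scaling laws named in the journal article listed above. As a minimal illustrative sketch (not taken from the source), Amdahl's law bounds the speedup of a fixed-size workload by its serial fraction, while Gustafson's law models the scaled speedup when the problem size grows with the core count:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup of a fixed workload with parallel
    fraction p running on n cores, S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)


def gustafson_speedup(p, n):
    """Gustafson's law: scaled speedup when the parallel part of the
    problem grows with n, S = (1 - p) + p * n."""
    return (1.0 - p) + p * n


# Even a 95%-parallel program is capped near 1/0.05 = 20x under
# Amdahl's law, no matter how many cores are added.
print(amdahl_speedup(0.95, 1024))    # approximately 19.6
print(gustafson_speedup(0.95, 1024)) # approximately 973
```

The contrast motivates the abstract's point: simply adding cores yields little unless the serial fraction is driven down or, as Gustafson's model assumes, workloads scale up with the hardware.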