Software Optimization for High Performance Computers

  • Authors:
  • Isom L. Crawford; Kevin R. Wadleigh

  • Venue:
  • Software Optimization for High Performance Computers
  • Year:
  • 2000

Abstract

From the Book: Preface

Once you start asking questions, innocence is gone.
- Mary Astor

The purpose of this book is to document many of the techniques used by people who implement applications on modern computers and want their programs to execute as quickly as possible.

There are four major components that determine the speed of an application: the architecture, the compiler, the source code, and the algorithm. You usually don't have control over the architecture you use, but you need to understand it so you'll know what it is capable of achieving. You do have control over your source code and how compilers are used on it. This book discusses how to perform source code modifications and use the compiler to generate better-performing applications. The final, and arguably the most important, component is the algorithms used. By replacing the algorithms you have or were given with better-performing ones, or even tweaking the existing ones, you can reap huge performance gains and solve problems that had previously been unachievable.

There are many reasons to want applications to execute quickly. Sometimes it is the only way to make sure that a program finishes execution in a reasonable amount of time. For example, the decision to bid or no-bid an oil lease is often determined by whether a seismic image can be completed before the bid deadline. A new automotive body design may or may not appear in next year's model depending on whether the structural and aerodynamic analysis can be completed in time. Since developers of applications would like an advantage over their competitors, speed can sometimes be the differentiator between two similar products. Thus, writing programs to run quickly can be a good investment.

P.1 A Tool Box

We like to think of this book as a tool box. The individual tools are the various optimization techniques discussed. As expected, some tools are more useful than others. Reducing the memory requirements of an application is a general tool that frequently results in better single-processor performance. Other tools, such as the techniques used to optimize a code for parallel execution, have a more limited scope.

These tools are designed to help applications perform well on computer system components. You can apply them to existing code to improve performance or use them to design efficient code from scratch. As you become proficient with the tools, some general trends become apparent. All applications have a theoretical performance limit on any computer. The first attempts at optimization may involve choosing between basic compiler options. This doesn't take much time and can help performance considerably. The next steps may involve more complicated compiler options, modifying a few lines of source code, or reformulating an algorithm. The theoretical peak performance is like the speed of light: as more and more energy, or time, is expended, the theoretical peak is approached but never quite achieved.

Before optimizing applications, it is prudent to consider how much time you can, or should, commit to optimization. In the past, one of the problems with tuning code was that even with a large investment of time the optimizations quickly became outdated. For example, many applications that had been optimized for vector computers subsequently had to be completely reoptimized for massively parallel computers. This sometimes took many person-years of effort. Since massively parallel computers never became plentiful, much of this effort had very short-term benefit.

In the 1990s, many computer companies either went bankrupt or were purchased by other companies as the cost of designing and manufacturing computers skyrocketed. As a result, there are very few computer vendors left today, and most of today's processors have similar characteristics. For example, nearly all of them have high-speed caches. Thus, making sure that code is structured to run well on cache-based systems ensures that the code runs well across almost all modern platforms.

The examples in this book are biased in favor of the UNIX operating system and RISC processors, since these are most characteristic of modern high performance computing. The recent EPIC (IA-64) processors have cache structures identical to those of RISC processors, so the examples also apply to them.

P.2 Language Issues

This book uses lots of examples. They are written in Fortran, C, or a language-independent pseudo-code. Fortran examples use uppercase letters while the others use lowercase. For example,

    DO I = 1,N
      Y(I) = Y(I) + A * X(I)
    ENDDO

takes a scalar A, multiplies it by a vector X of length N, and adds the result to a vector Y of length N.
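For comparison with the Fortran loop above, here is what the same operation might look like in C, following the lowercase convention just described. This is a minimal sketch of our own; the function name, argument order, and types are illustrative assumptions, not taken from the book.

    /* Sketch (ours, not from the book): the same scalar-times-vector
       update written in C, using the lowercase naming convention. */
    void daxpy(int n, double a, const double *x, double *y)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] = y[i] + a * x[i];
    }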
Languages such as Fortran 90/95 and C++ are very powerful and allow vector or matrix notation. For example, if X and Y are two-dimensional arrays and A is a scalar, writing

    Y = Y + A * X

means to multiply the array X by A and add the result to the matrix Y. This notation has been avoided since it can obscure the analysis performed. The notation may also make it more difficult for compilers to optimize the source code.

There is an entire chapter devoted to language specifics, but pseudo-code and Fortran examples assume that multidimensional arrays such as Y(200,100) have their data stored in memory in column-major order. Thus the elements of Y(200,100) are stored as Y(1,1), Y(2,1), Y(3,1), ..., Y(200,1), Y(1,2), Y(2,2), ... This is the opposite of C data storage, where data is stored in row-major order; a short C sketch at the end of this preface illustrates the difference.

P.3 Notation

When terms are defined, we'll use italics to set the term apart from other text. Courier font will be used for all examples. Mathematical terms and equations use italic font. We'll use lots of prefixes for the magnitude of measurements, so the standard ones are defined in the following table.

Table P-1: Standard Prefixes

    Prefix   Factor (power of 10)   Factor (power of 2)
    tera     10^12                  2^40
    giga     10^9                   2^30
    mega     10^6                   2^20
    kilo     10^3                   2^10
    milli    10^-3                  -
    micro    10^-6                  -
    nano     10^-9                  -

Note that some prefixes are defined using both powers of 10 and powers of two. The exact arithmetic values are somewhat different: observe that 10^6 = 1,000,000 while 2^20 = 1,048,576. This can be confusing, but when quantifying memory, cache, or data in general, associate the prefixes with powers of two. Otherwise, use the more common powers of 10.

Finally, optimizing applications should be fun. It's really a contest between you and the computer. Computers sometimes give up performance grudgingly, so understand what the computer is realistically capable of and see that you get it. Enjoy the challenge!
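As promised in P.2, the following C sketch illustrates row-major storage. It is our own illustration under assumed names and array shape (nothing here is from the book): in C the last subscript varies fastest in memory, the reverse of Fortran's column-major order.

    #include <stdio.h>

    int main(void)
    {
        double y[3][4];   /* 3 rows, 4 columns, stored row by row in C */
        int i, j;

        /* Row-major traversal: the LAST subscript varies fastest, so
           this loop nest touches y[0][0], y[0][1], y[0][2], ... in
           increasing address order. Fortran's column-major order is
           the reverse: the FIRST subscript varies fastest. */
        for (i = 0; i < 3; i++)
            for (j = 0; j < 4; j++)
                y[i][j] = 0.0;

        /* Neighboring last-index elements are adjacent in memory ... */
        printf("%td\n", &y[0][1] - &y[0][0]);   /* prints 1 */
        /* ... while stepping the first index skips a whole row. */
        printf("%td\n", &y[1][0] - &y[0][0]);   /* prints 4 */
        return 0;
    }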