Loop unrolling


Loop unrolling, also known as loop unwinding, is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, which is an approach known as space–time tradeoff. The transformation can be undertaken manually by the programmer or by an optimizing compiler. On modern processors, loop unrolling is often counterproductive, as the increased code size can cause more cache misses; cf. Duff's device.[1]

The goal of loop unwinding is to increase a program's speed by reducing or eliminating instructions that control the loop, such as pointer arithmetic and "end of loop" tests on each iteration;[2] reducing branch penalties; as well as hiding latencies, including the delay in reading data from memory.[3] To eliminate this computational overhead, loops can be re-written as a repeated sequence of similar independent statements.[4]

Loop unrolling is also part of certain formal verification techniques, in particular bounded model checking.[5]

Advantages

The overhead in "tight" loops often consists of instructions to increment a pointer or index to the next element in an array (pointer arithmetic), as well as "end of loop" tests. If an optimizing compiler or assembler is able to pre-calculate offsets to each individually referenced array variable, these can be built into the machine code instructions directly, therefore requiring no additional arithmetic operations at run time.
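
As a minimal C sketch of the point about pre-calculated offsets (the function names, the array a, and the assumption that n is a multiple of 4 are this sketch's own), compare a loop that updates a pointer for every element with an unrolled body whose constant offsets can be folded into each instruction's "base plus offset" address:

void zero_rolled(int *a, int n)
{
    while (n-- > 0)
        *a++ = 0;               /* pointer updated and tested per element  */
}

void zero_unrolled(int *a, int n)   /* n assumed to be a multiple of 4     */
{
    for (int i = 0; i < n; i += 4) {
        a[i]     = 0;           /* the constant offsets 0..3 can be built  */
        a[i + 1] = 0;           /* directly into the addressing of each    */
        a[i + 2] = 0;           /* store, so no per-element pointer        */
        a[i + 3] = 0;           /* arithmetic is required                  */
    }
}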

  • Significant gains can be realized if the reduction in executed instructions compensates for any performance reduction caused by any increase in the size of the program.
  • Branch penalty is minimized.[6]
  • If the statements in the loop are independent of each other (i.e. where statements that occur earlier in the loop do not affect statements that follow them), the statements can potentially be executed in parallel.
  • Can be implemented dynamically if the number of array elements is unknown at compile time (as in Duff's device).

Optimizing compilers will sometimes perform the unrolling automatically, or upon request.
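
For example, GCC and compatible compilers can be asked to unroll loops either globally with the -funroll-loops option or per loop with a pragma (GCC 8 and later); the function and loop body below are purely illustrative:

/* A hedged sketch of requesting unrolling from the compiler rather than
   performing it by hand.                                                */
void scale(double *a, int n)
{
    #pragma GCC unroll 4        /* applies to the loop that follows */
    for (int i = 0; i < n; i++)
        a[i] *= 2.0;
}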

Disadvantages

  • Increased program code size, which can be undesirable—particularly for embedded applications—can also cause an increase in instruction cache misses, which may adversely affect performance.
  • Unless performed transparently by an optimizing compiler, the code may become less readable.
  • If the code in the body of the loop involves function calls, it may not be possible to combine unrolling with inlining, since the increase in code size might be excessive. Thus, there can be a trade-off between the two optimizations.
  • On hardware where software pipelining is necessary to improve performance alongside loop unrolling (i.e. hardware which lacks register renaming or implements in-order superscalar execution), additional registers may need to be used to store temporary variables from multiple iterations that could otherwise reuse the same register.[7]
  • Apart from very small and simple code, unrolled loops that contain branches are even slower than recursions.[8]

Static/manual loop unrolling

Manual (or static) loop unrolling involves the programmer analyzing the loop and rewriting the iterations as a sequence of instructions that reduces the loop overhead. This is in contrast to dynamic unrolling, which is accomplished by the compiler.

Simple manual example in C

Consider a procedure in a computer program that deletes 100 items from a collection. This is normally accomplished by means of a for-loop which calls the function delete(item_number). If this part of the program is to be optimized, and the overhead of the loop requires significant resources compared to those of the delete(x) function, unwinding can be used to speed it up.

Normal loop:

 int x;
 for (x = 0; x < 100; x++)
 {
     delete(x);
 }

After loop unrolling:

 int x;
 for (x = 0; x < 100; x += 5)
 {
     delete(x);
     delete(x + 1);
     delete(x + 2);
     delete(x + 3);
     delete(x + 4);
 }

As a result of this modification, the new program has to make only 20 iterations instead of 100. Only 20% of the jumps and conditional branches now need to be taken, which, over many iterations, represents a potentially significant decrease in the loop administration overhead. To produce the optimal benefit, no variables that require pointer arithmetic should be specified in the unrolled code; this usually requires "base plus offset" addressing rather than indexed referencing.

On the other hand, this manual loop unrolling expands the source code from 3 lines to 7 lines that have to be produced, checked, and debugged, and the compiler may have to allocate more registers to store variables in the expanded loop iteration. In addition, the loop control variables and the number of operations inside the unrolled loop structure have to be chosen carefully so that the result is indeed the same as in the original code (assuming this is a later optimization of already working code). For example, consider the implications if the iteration count were not divisible by 5; one common remedy, sketched below, is to unroll only the evenly divisible part and handle the remainder separately. The manual amendments required also become somewhat more complicated if the test conditions are variables. See also Duff's device.
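
A minimal sketch of that remedy, in the same style as the example above (the element count n is this sketch's own parameter, and delete() remains the hypothetical function from the example):

/* Unroll by 5 over the evenly divisible part of the count, then finish
   the remaining 0-4 iterations in a short cleanup loop.                 */
void delete_all(int n)
{
    int x;
    int limit = n - (n % 5);        /* largest multiple of 5 not above n */
    for (x = 0; x < limit; x += 5) {
        delete(x);
        delete(x + 1);
        delete(x + 2);
        delete(x + 3);
        delete(x + 4);
    }
    for (; x < n; x++)              /* cleanup loop for the leftovers    */
        delete(x);
}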

Early complexity

In the simple case, the loop control is merely an administrative overhead that arranges the productive statements. The loop itself contributes nothing to the desired results, merely saving the programmer the tedium of replicating the code a hundred times, which could instead have been done by a pre-processor or a text editor generating the replications. Similarly, if-statements and other flow-control statements could be replaced by code replication, except that code bloat can be the result. Computer programs easily track the combinations, but programmers find this repetition boring and make mistakes. Consider:

Normal loop:

for i := 1:8 do
    if i mod 2 = 0 then do_even_stuff(i)
                   else do_odd_stuff(i);
    next i;

After loop unrolling:

do_odd_stuff(1); do_even_stuff(2);
do_odd_stuff(3); do_even_stuff(4);
do_odd_stuff(5); do_even_stuff(6);
do_odd_stuff(7); do_even_stuff(8);

But of course, the code performed need not be the invocation of a procedure, and this next example involves the index variable in computation:

Normal loop:

x(1) := 1;
for i := 2:9 do
    x(i) := x(i - 1) * i;
    print i, x(i);
    next i;

After loop unrolling:

x(1) := 1;
x(2) := x(1) * 2; print 2, x(2);
x(3) := x(2) * 3; print 3, x(3);
x(4) := x(3) * 4; print 4, x(4);
... etc.

which, if compiled, might produce a lot of code (print statements being notorious), but further optimization is possible. This example references only x(i) and x(i - 1) in the loop (the latter only to develop the new value x(i)); therefore, given that there is no later reference to the array x developed here, its usages could be replaced by a simple variable. Such a change would, however, mean a simple variable whose value changes, whereas staying with the array lets the compiler's analysis note that the array's values are constants, each derived from a previous constant, and carry the constant values forward so that the code becomes

print 2, 2;
print 3, 6;
print 4, 24;
...etc.

In general, the content of a loop might be large, involving intricate array indexing. These cases are probably best left to optimizing compilers to unroll. Replicating innermost loops might allow many possible optimizations yet yields only a small gain unless the number of iterations is large.

Unrolling WHILE loops

Consider a pseudocode WHILE loop similar to the following:

Normal loop:

WHILE (condition) DO
    action
ENDWHILE

After loop unrolling:

WHILE (condition) DO
    action
    IF NOT(condition) THEN GOTO break
    action
    IF NOT(condition) THEN GOTO break
    action
ENDWHILE
LABEL break:

Unrolled & "tweaked" loop:

IF (condition) THEN
    REPEAT
        action
        IF NOT(condition) THEN GOTO break
        action
        IF NOT(condition) THEN GOTO break
        action
    WHILE (condition)
LABEL break:

In this case, unrolling is faster because the ENDWHILE (a jump to the start of the loop) will be executed 66% less often.

Even better is the "tweaked" pseudocode example, which may be produced automatically by some optimizing compilers, eliminating unconditional jumps altogether.
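
Expressed in C (a sketch only; condition() and action() are placeholders for the pseudocode's condition and action), the "tweaked" form corresponds to:

/* C sketch of the unrolled and "tweaked" WHILE loop. */
if (condition()) {
    do {
        action();
        if (!condition()) break;
        action();
        if (!condition()) break;
        action();
    } while (condition());
}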

Dynamic unrolling

Since the benefits of loop unrolling are frequently dependent on the size of an array—which may often not be known until run time—JIT compilers (for example) can determine whether to invoke a "standard" loop sequence or instead generate a (relatively short) sequence of individual instructions for each element. This flexibility is one of the advantages of just-in-time techniques over static or manual optimization in the context of loop unrolling. In this situation, the savings can still be worthwhile even for relatively small values of n, while requiring only a small (if any) overall increase in program size (the generating code might be included just once, as part of a standard library).

Assembly language programmers (including optimizing compiler writers) are also able to benefit from the technique of dynamic loop unrolling, using a method similar to that used for efficient branch tables. Here, the advantage is greatest where the maximum offset of any referenced field in a particular array is less than the maximum offset that can be specified in a machine instruction (which will be flagged by the assembler if exceeded).

Assembler example (IBM/360 or Z/Architecture)

This example is for IBM/360 or Z/Architecture assemblers and assumes a field of 100 bytes (at offset zero) is to be copied from array FROM to array TO—both having 50 entries with element lengths of 256 bytes each.

* The return address is in R14.
* Initialize registers R15, R0, R1, and R2 from data defined at the end of 
* the program starting with label INIT/MAXM1.
         LM    R15,R2,INIT                  Set R15 = maximum number of MVC
*                                           instructions (MAXM1 = 16), 
*                                           R0 = number of entries of array,
*                                           R1 = address of 'FROM' array, and
*                                           R2 = address of 'TO' array.
*
* The loop starts here.
LOOP     EQU   *                            Define LOOP label.
* At this point, R15 will always contain the number 16 (MAXM1).
         SR    R15,R0                       Subtract the remaining number of 
*                                           entries in the array (R0) from R15.
         BNP   ALL                          If R15 is not positive, meaning 16
*                                           or more entries remain in the
*                                           array, jump to do the entire
*                                           MVC sequence and then repeat.
*
* Calculate an offset (from start of MVC sequence) for unconditional branch to 
* the 'unwound' MVC loop below.
* If the number of remaining entries in the arrays is zero, R15 will be 16, so 
* all the MVC instructions will be bypassed.
         MH    R15,=AL2(ILEN)               Multiply R15 by the length of one
*                                           MVC instruction.
         B     ALL(R15)                     Jump to ALL+R15, the address of the
*                                           calculated specific MVC instruction 
*                                           with drop through to the rest of them.
*
* MVC instruction 'table'. 
* First entry has maximum allowable offset with single register = hexadecimal F00
* (15*256) in this example.
* All 16 of the following MVC ('move character') instructions use base-plus-offset 
* addressing and each to/from offset decreases by the length of one array element
* (256). This avoids pointer arithmetic being required for each element up to a 
* maximum permissible offset within the instruction of hexadecimal FFF 
* (15*256+255). The instructions are in order of decreasing offset, so the last 
* element in the set is moved first.
ALL      MVC   15*256(100,R2),15*256(R1)    Move 100 bytes of 16th entry from 
*                                           array 1 to array 2 (with 
*                                           drop-through).
ILEN     EQU   *-ALL                        Set ILEN to the length of the previous
*                                           MVC instruction.
         MVC   14*256(100,R2),14*256(R1)    Move 100 bytes of 15th entry.
         MVC   13*256(100,R2),13*256(R1)    Move 100 bytes of 14th entry.
         MVC   12*256(100,R2),12*256(R1)    Move 100 bytes of 13th entry.
         MVC   11*256(100,R2),11*256(R1)    Move 100 bytes of 12th entry.
         MVC   10*256(100,R2),10*256(R1)    Move 100 bytes of 11th entry.
         MVC   09*256(100,R2),09*256(R1)    Move 100 bytes of 10th entry.
         MVC   08*256(100,R2),08*256(R1)    Move 100 bytes of 9th entry.
         MVC   07*256(100,R2),07*256(R1)    Move 100 bytes of 8th entry.
         MVC   06*256(100,R2),06*256(R1)    Move 100 bytes of 7th entry.
         MVC   05*256(100,R2),05*256(R1)    Move 100 bytes of 6th entry.
         MVC   04*256(100,R2),04*256(R1)    Move 100 bytes of 5th entry.
         MVC   03*256(100,R2),03*256(R1)    Move 100 bytes of 4th entry.
         MVC   02*256(100,R2),02*256(R1)    Move 100 bytes of 3rd entry.
         MVC   01*256(100,R2),01*256(R1)    Move 100 bytes of 2nd entry.
         MVC   00*256(100,R2),00*256(R1)    Move 100 bytes of 1st entry.
*
         S     R0,MAXM1                     Reduce the number of remaining entries
*                                           to process.
         BNPR  R14                          If no more entries to process, return
*                                           to address in R14.
         AH    R1,=AL2(16*256)              Increment 'FROM' array pointer beyond
*                                           first set.
         AH    R2,=AL2(16*256)              Increment 'TO' array pointer beyond
*                                           first set.
         L     R15,MAXM1                    Reload the maximum number of MVC 
*                                           instructions per batch into R15
*                                           (destroyed by the calculation in the 
*                                           first instruction of the loop).
         B     LOOP                         Execute loop again.
*
* Static constants and variables (these could be passed as parameters, except 
* MAXM1).
INIT     DS    0A                           4 addresses (pointers) to be 
*                                           pre-loaded with the 'LM' instruction
*                                           in the beginning of the program.
MAXM1    DC    A(16)                        Maximum number of MVC instructions
*                                           executed per batch.
N        DC    A(50)                        Number of actual entries in array (a 
*                                           variable, set elsewhere).
         DC    A(FROM)                      Address of start of array 1 
*                                           ("pointer").
         DC    A(TO)                        Address of start of array 2 
*                                           ("pointer").
*
* Static arrays (these could be dynamically acquired).
FROM     DS    50CL256                      Array of 50 entries of 256 bytes each.
TO       DS    50CL256                      Array of 50 entries of 256 bytes each.

In this example, approximately 202 instructions would be required with a "conventional" loop (50 iterations), whereas the above dynamic code would require only about 89 instructions (or a saving of approximately 56%). If the array had consisted of only two entries, it would still execute in approximately the same time as the original unwound loop. The increase in code size is only about 108 bytes – even if there are thousands of entries in the array.

Similar techniques can of course be used where multiple instructions are involved, as long as the combined instruction length is adjusted accordingly. For example, in this same example, if it is required to clear the rest of each 256-byte array entry to nulls immediately after the 100-byte field copied, an additional clear instruction, XC xx*256+100(156,R2),xx*256+100(R2) (which zeroes the remaining 156 bytes of the destination entry by XORing the field with itself), can be added immediately after every MVC in the sequence (where xx matches the value in the MVC above it).

It is, of course, perfectly possible to generate the above code "inline" using a single assembler macro statement, specifying just four or five operands (or alternatively, make it into a library subroutine, accessed by a simple call, passing a list of parameters), making the optimization readily accessible.

C example

The following example demonstrates dynamic loop unrolling for a simple program written in C. Unlike the assembler example above, pointer/index arithmetic is still generated by the compiler in this example because a variable (i) is still used to address the array element. Full optimization is only possible if absolute indexes are used in the replacement statements.

#include <stdio.h>

/* The number of entries processed per loop iteration.                        */
/* Note that this number is a compile-time constant reflecting the code below. */
#define BUNCHSIZE (8)

int main(void)
{ 
  int i = 0;                                    /* counter */
  int entries = 50;                             /* total number to process    */
  int repeat;                                   /* number of while repetitions*/
  int left = 0;                                 /* remainder (process later)  */ 
 
  /* Work out how many complete bunches of BUNCHSIZE the while loop can      */
  /* process, and how many elements are left over for the switch below.      */

  repeat = (entries / BUNCHSIZE);                /* number of times to repeat */
  left   = (entries % BUNCHSIZE);                /* calculate remainder       */

  /* Unroll the loop in 'bunches' of 8                                        */ 
  while (repeat--) 
  { 
    printf("process(%d)\n", i    );
    printf("process(%d)\n", i + 1); 
    printf("process(%d)\n", i + 2); 
    printf("process(%d)\n", i + 3); 
    printf("process(%d)\n", i + 4); 
    printf("process(%d)\n", i + 5); 
    printf("process(%d)\n", i + 6); 
    printf("process(%d)\n", i + 7);

    /* update the index by amount processed in one go                         */ 
    i += BUNCHSIZE;
  }

  /* Use a switch statement to process remaining by jumping to the case label */ 
  /* at the label that will then drop through to complete the set             */ 
  switch (left) 
  {
     case 7 : printf("process(%d)\n", i + 6);   /* process and rely on drop 
                                                   through                    */
     case 6 : printf("process(%d)\n", i + 5); 
     case 5 : printf("process(%d)\n", i + 4);  
     case 4 : printf("process(%d)\n", i + 3);  
     case 3 : printf("process(%d)\n", i + 2); 
     case 2 : printf("process(%d)\n", i + 1);   /* two left                   */
     case 1 : printf("process(%d)\n", i);       /* just one left to process   */ 
     case 0 : ;                                 /* none left                  */
  } 
}

Code duplication could be avoided by writing the two parts together as in Duff's device.
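
A hedged sketch of that combined form, the classic Duff's device layout applied to the same processing step as the program above (like the original formulation, it assumes entries is greater than zero):

#include <stdio.h>

int main(void)
{
  int i = 0;
  int entries = 50;                          /* total number to process      */
  int repeat = (entries + 7) / 8;            /* number of passes, rounded up */

  /* The remainder switch and the unrolled do-while share one body: the      */
  /* first (partial) pass enters at the matching case label, and every       */
  /* later pass runs all eight calls.                                        */
  switch (entries % 8)
  {
     case 0: do { printf("process(%d)\n", i++);
     case 7:      printf("process(%d)\n", i++);
     case 6:      printf("process(%d)\n", i++);
     case 5:      printf("process(%d)\n", i++);
     case 4:      printf("process(%d)\n", i++);
     case 3:      printf("process(%d)\n", i++);
     case 2:      printf("process(%d)\n", i++);
     case 1:      printf("process(%d)\n", i++);
             } while (--repeat > 0);
  }
}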

C to MIPS assembly language loop unrolling example[9]

The following example will compute a dot product of two 100-entry vectors A and B of type double. Here is the code in C:

double dotProduct = 0;
for (int i = 0; i < 100; i++) {
  dotProduct += A[i]*B[i];
}
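
For comparison, the same loop hand-unrolled by a factor of 4 in C (a sketch that relies on the trip count of 100 being divisible by 4, so no cleanup code is needed) looks like this; the unrolled MIPS code further below follows the same structure:

double dotProduct = 0;
for (int i = 0; i < 100; i += 4) {
  dotProduct += A[i]     * B[i];
  dotProduct += A[i + 1] * B[i + 1];
  dotProduct += A[i + 2] * B[i + 2];
  dotProduct += A[i + 3] * B[i + 3];
}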

Converting to MIPS assembly language

The following is MIPS assembly code that will compute the dot product of two 100-entry vectors, A and B, before implementing loop unrolling. The code below omits the loop initializations:

  • Initialize loop count ($7) to 100.
  • Initialize dot product ($f8) to 0.
  • Initialize A[i] pointer ($5) to the base address of A.
  • Initialize B[i] pointer ($6) to the base address of B.

Note that the size of one element of the arrays (a double) is 8 bytes.

    loop3:
            l.d     $f10, 0($5)       ; $f10 ← A[i]
            l.d     $f12, 0($6)       ; $f12 ← B[i]
            mul.d   $f10, $f10, $f12  ; $f10 ← A[i]*B[i]
            add.d   $f8, $f8, $f10    ; $f8 ← $f8 + A[i]*B[i]
            addi    $5, $5, 8         ; increment pointer for A[i] by the size
                                      ; of a double.
            addi    $6, $6, 8         ; increment pointer for B[i] by the size
                                      ; of a double.
            addi    $7, $7, -1        ; decrement loop count
    test:
            bgtz    $7, loop3         ; Continue if loop count > 0

Unrolling the Loop in MIPS

The following is the same as above, but with loop unrolling implemented at a factor of 4. Note again that the size of one element of the arrays (a double) is 8 bytes; this accounts for the 0, 8, 16, and 24 displacements used within the loop body and for the pointer increments of 32 at the end of each iteration.

    loop3:
            l.d     $f10, 0($5)         ; iteration with displacement 0
            l.d     $f12, 0($6)
            mul.d   $f10, $f10, $f12
            add.d   $f8, $f8, $f10

            l.d     $f10, 8($5)         ; iteration with displacement 8
            l.d     $f12, 8($6)
            mul.d   $f10, $f10, $f12
            add.d   $f8, $f8, $f10

            l.d     $f10, 16($5)        ; iteration with displacement 16
            l.d     $f12, 16($6)
            mul.d   $f10, $f10, $f12
            add.d   $f8, $f8, $f10

            l.d     $f10, 24($5)        ; iteration with displacement 24
            l.d     $f12, 24($6)
            mul.d   $f10, $f10, $f12
            add.d   $f8, $f8, $f10

            addi    $5, $5, 32
            addi    $6, $6, 32
            addi    $7, $7, -4
    test:
            bgtz    $7, loop3           ; Continue loop if $7 > 0
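
Note that in the unrolled body above every add.d still accumulates into the single register $f8, so the additions form a serial dependency chain. As noted under Advantages, unrolled statements can only execute in parallel if they are independent; a common refinement, shown here only as a hedged C sketch (it is not part of the cited MIPS example, and it reorders the floating-point additions), keeps several partial sums and combines them after the loop:

double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
for (int i = 0; i < 100; i += 4) {
  s0 += A[i]     * B[i];      /* four independent accumulators */
  s1 += A[i + 1] * B[i + 1];
  s2 += A[i + 2] * B[i + 2];
  s3 += A[i + 3] * B[i + 3];
}
double dotProduct = (s0 + s1) + (s2 + s3);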

References

  1. ^ Tso, Ted (August 22, 2000). "Re: [PATCH] Re: Move of input drivers, some word needed from you". lkml.indiana.edu. Linux kernel mailing list. Retrieved August 22, 2014. Jim Gettys has a wonderful explanation of this effect in the X server. It turns out that with branch predictions and the relative speed of CPU vs. memory changing over the past decade, loop unrolling is pretty much pointless. In fact, by eliminating all instances of Duff's Device from the XFree86 4.0 server, the server shrunk in size by _half_ _a_ _megabyte_ (!!!), and was faster to boot, because the elimination of all that excess code meant that the X server wasn't thrashing the cache lines as much.
  2. ^ Ullman, Jeffrey D.; Aho, Alfred V. (1977). Principles of compiler design. Reading, Mass: Addison-Wesley Pub. Co. pp. 471–2. ISBN 0-201-10073-8.
  3. ^ Petersen, W. P.; Arbenz, P. (2004). Introduction to Parallel Computing. Oxford University Press. p. 10.
  4. ^ Nicolau, Alexandru (1985). "Loop Quantization: Unwinding for Fine-Grain Parallelism Exploitation". Dept. of Computer Science Technical Report. Ithaca, NY: Cornell University. OCLC 14638257.
  5. ^ Model Checking Using SMT and Theory of Lists
  6. ^ Fog, Agner (2012-02-29). "Optimizing subroutines in assembly language" (PDF). Copenhagen University College of Engineering. p. 100. Retrieved 2012-09-22. 12.11 Loop unrolling
  7. ^ Sarkar, Vivek (2001). "Optimized Unrolling of Nested Loops". International Journal of Parallel Programming. 29 (5): 545–581. doi:10.1023/A:1012246031671. S2CID 3353104.
  8. ^ Adam Horvath "Code unwinding - performance is far away"
  9. ^ "Loop Unrolling". University of Minnesota.
