Copyright © 2001 David Schmidt

8.7.3 Sorting

When an array is used as a database, where elements are fetched and updated frequently, there is a distinct advantage to ordering the elements by their keys---it becomes far easier to locate an element. The process of ordering an array's elements is called sorting.

Algorithms for sorting have a rich history, and we cannot do the subject justice here. Instead, we focus upon the development of two traditional sorting methods, selection sort and insertion sort. To simplify the algorithms that follow, we work with arrays of integers, where we sort the elements so that they are ordered in value from smallest integer to largest. (Of course, we can use the same techniques to sort elements by their key values.)

The idea behind selection sort is simple: Locate the least integer in the array, and move it to the front. Then, find the next least integer, and move it second to the front. Repeat this process until all integers have been selected in order of size. The algorithm that sorts array r in this manner goes

for ( i = 0;  i != r.length;  i = i+1 )
    { Find the least element in  r  within the range
        r[i]  to  r[r.length-1]; say that it is at  r[j].
      Exchange  r[i]  with  r[j]. }

Here is the algorithm in action. Say that we have this array, r:

When selection sorting starts, its loop finds the least element in the range r[0]..r[4] at index 2 and exchanges the elements at indexes 0 and 2:

The second loop iteration locates the least element in the range r[1]..r[4] at index 3, and the elements at indexes 1 and 3 are exchanged:

The algorithm next considers the elements in range r[2]..r[4] and so on until it reaches this end result:

Figure 15 shows the method.
FIGURE 15: selection sort================================================

/** selectionSort sorts the elements of its array parameter
  * @param  r - the array to be sorted  */
public void selectionSort(int[] r)
{ for ( int i = 0;  i != r.length;  i = i+1 )
      // invariant: subarray r[0]..r[i-1] is sorted
      { int j = findLeast(r, i, r.length-1);  // get index of least element
        int temp = r[i];
        r[i] = r[j];
        r[j] = temp;
      }
}

/** findLeast finds the index of the least element in  r[start]..r[end]
  * @param  r - the array to be searched
  * @param  start - the starting element for the search
  * @param  end - the ending element for the search
  * @return the index of the smallest element in  r[start]..r[end]  */
private int findLeast(int[] r, int start, int end)
{ int least = start;
  for ( int i = start+1;  i <= end;  i = i+1 )
      // invariant: least is index of least element in range r[start]..r[i-1]
      { if ( r[i] < r[least] ) { least = i; } }
  return least; }
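To see the method run, here is a self-contained sketch of Figure 15's algorithm; the methods are declared static so the demo can run without an object, and the five sample values are an assumption chosen for illustration:

```java
public class SelectionSortDemo
{ public static void selectionSort(int[] r)
  { for ( int i = 0;  i != r.length;  i = i+1 )
        { int j = findLeast(r, i, r.length-1);  // index of least element
          int temp = r[i];   // exchange  r[i]  with  r[j]
          r[i] = r[j];
          r[j] = temp;
        }
  }

  private static int findLeast(int[] r, int start, int end)
  { int least = start;
    for ( int i = start+1;  i <= end;  i = i+1 )
        { if ( r[i] < r[least] ) { least = i; } }
    return least;
  }

  public static void main(String[] args)
  { int[] r = {11, 8, -2, 10, 5};  // sample values are an assumption
    selectionSort(r);
    System.out.println(java.util.Arrays.toString(r));  // prints [-2, 5, 8, 10, 11]
  }
}
```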


There is more than one way to sort an array; a second classic approach, called insertion sort, rearranges elements the way most people sort a hand of playing cards: Start with the first card (element), then take the second card (element) and insert it either before or after the first card, so that the two cards are in order; then take the third card and insert it in its proper position so that the three cards are ordered, and so on. Eventually, all the cards are inserted where they belong in the ordering.

The algorithm based on this idea is simply stated as:

for ( i = 1;  i < r.length;  i = i+1 )
    { Insert  r[i]  in its proper place within the already sorted prefix,  r[0]..r[i-1]. }

If we apply the algorithm to the example array, r, seen above,

we see that the algorithm first inserts the 8 where it belongs with respect to the 11:

This makes the prefix, r[0]..r[1], correctly sorted. Next, the -2 must be inserted in its proper place with respect to the sorted prefix:

To make room for -2 at its proper position, r[0], the two elements, 8 and 11, must be shifted one position to the right. Now, r[0]..r[2] is correctly sorted. The last two elements are inserted similarly.

Figure 16 shows the insertion sorting method.

FIGURE 16: insertion sort===============================================

/** insertionSort sorts the elements of its array parameter
  * @param  r - the array to be sorted  */
public static void insertionSort(int[] r)
{ for ( int i = 1;  i < r.length;  i = i+1 )
      // invariant: prefix  r[0]..r[i-1] is sorted
      { int v = r[i];  // v  is the next element to insert into the prefix
        int j = i;
        while ( j != 0  &&  r[j-1] > v )
              // invariants:
              // (i) the original prefix, r[0]..r[i-1],
              //     is now arranged as  r[0]..r[j-1], r[j+1]..r[i];
              // (ii) all of  r[j+1]..r[i]  are greater than  v
              { r[j] = r[j-1];
                j = j-1;
              }
        r[j] = v;
      }
}

The method's most delicate step is searching the sorted prefix to find a space for v---the while-loop searches from right to left, shifting values one by one, until it encounters a value that is not larger than v. At all iterations, position r[j] is reserved for v; when the iterations stop, v is inserted at r[j].
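Figure 16's method can likewise be exercised with a small driver; this sketch declares the method static so it runs without an object, and the sample values are an assumption:

```java
public class InsertionSortDemo
{ public static void insertionSort(int[] r)
  { for ( int i = 1;  i < r.length;  i = i+1 )
        { int v = r[i];  // the next element to insert into the sorted prefix
          int j = i;
          while ( j != 0  &&  r[j-1] > v )
                { r[j] = r[j-1];  // shift larger elements one place right
                  j = j-1;
                }
          r[j] = v;  // v lands in its reserved position
        }
  }

  public static void main(String[] args)
  { int[] r = {11, 8, -2, 10, 5};  // sample values are an assumption
    insertionSort(r);
    System.out.println(java.util.Arrays.toString(r));  // prints [-2, 5, 8, 10, 11]
  }
}
```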


  1. Try selection sort and insertion sort on these arrays: {4, 3, 2, 2}; {1, 2, 3, 4}; {1}; { } (the array of length 0).
  2. Explain which of the two sorting methods might finish faster when the array to be sorted is already or nearly sorted; when the array's elements are badly out of order.
  3. Explain why the for-loop in method selectionSort iterates one more time than it truly needs.
  4. Why is the test expression, j != 0, required in the while-loop in method insertionSort?
  5. Another sorting technique is bubble sort: over and over, compare pairs of adjacent elements and exchange them if the one on the right is less than the one on the left. In this way, the smaller elements move like ``bubbles'' to the left (``top'') of the array. The algorithm goes:
    boolean did_exchanges = true;
    while ( did_exchanges )
          { did_exchanges = false;
            for ( int i = 1;  i < r.length;  i = i+1 )
                { If  r[i] < r[i-1],
                      then exchange them and assign  did_exchanges = true. } }
    Program this sorting method.

8.7.4 Searching

Once an array is sorted, it becomes simpler to locate an element within it---rather than examining items one by one, from left to right, we can start searching in the middle, at approximately where the item might appear in the sorted collection. (This is what we do when we search for a word in a dictionary.) A standard searching algorithm, called binary search, exploits this idea.

Given a sorted array of integers, r, we wish to determine where a value, item, lives in r. We start searching in the middle of r; if the middle element is not item, we compare item to it: If item is less than the middle element, we next search the lower half of the array; if item is greater, we search the upper half. We repeat this strategy until item is found or the range of search narrows to nothing, which means that item is not present.

The algorithm goes

Set searching = true.
Set the lower bound of the search to be 0 and the upper bound of the
    search to be the last index of array,  r.
while ( searching  &&  lower bound <= upper bound )
      { index = (lower bound + upper bound) / 2;
        if   ( item == r[index] ) { found the item---set  searching = false; }
        else if ( item < r[index] )  { reset  upper bound = index-1; }
        else { reset  lower bound = index+1; }
      }

Figure 17 shows the method, which is a standard example of the searching pattern of iteration.
FIGURE 17: binary search=================================================

/** binarySearch  searches for an item in a sorted array
  * @param  r - the array to be searched
  * @param  item - the desired item in array  r
  * @return the index where  item  resides in  r;  if  item  is not
  *   found, then return  -1  */
public int binarySearch(int[] r, int item)
{ int lower = 0; 
  int upper = r.length - 1; 
  int index = -1;
  boolean searching = true;
  while ( searching  &&  lower <= upper )
        // (1) searching == true  implies  item is in range  r[lower]..r[upper],
        //       if it exists in  r  at all.
        // (2) searching == false  implies that  r[index] == item.
        { index = (lower + upper) / 2;
          if ( r[index] == item )
               { searching = false; }
          else if ( r[index] < item )
               { lower = index + 1; }
          else { upper = index - 1; }
        }
  if ( searching )
       { index = -1; }  // implies  lower > upper,  hence  item  not in  r
  return index;
}


If we searched for the item 10 in the sorted array r seen in the examples in the previous section, the first iteration of the loop in binarySearch gives us this configuration:

The search starts exactly in the middle, and the loop examines r[2] to see if it is 10. It is not, and since 10 is larger than 8, the value found at r[2], the search is revised as follows:

Searching the upper half of the array, which is just two elements, moves the search to r[3], which locates the desired item.
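The search just traced can be replayed with a self-contained sketch of Figure 17's method; the sorted five-element array is an assumption chosen so that, as in the example, the first probe lands on 8 at index 2 and the second locates 10 at index 3:

```java
public class BinarySearchDemo
{ public static int binarySearch(int[] r, int item)
  { int lower = 0;
    int upper = r.length - 1;
    int index = -1;
    boolean searching = true;
    while ( searching  &&  lower <= upper )
          { index = (lower + upper) / 2;   // probe the middle of the range
            if ( r[index] == item )
                 { searching = false; }
            else if ( r[index] < item )
                 { lower = index + 1; }    // search the upper half
            else { upper = index - 1; }    // search the lower half
          }
    if ( searching ) { index = -1; }       // item not present
    return index;
  }

  public static void main(String[] args)
  { int[] r = {-2, 5, 8, 10, 11};  // a sorted array (values are an assumption)
    System.out.println(binarySearch(r, 10));  // two probes: r[2], then r[3]; prints 3
    System.out.println(binarySearch(r, 7));   // prints -1
  }
}
```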

Notice that a linear search, that is,

int index = 0;
boolean searching = true;
while ( searching  &&  index != r.length )
      { if ( r[index] == item )
             { searching = false; }
        else { index = index + 1; }
      }

would examine four elements of the array to locate element 10. The binary search examined just two. Binary search's speedup for larger arrays is enormous and is discussed in the next section.
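For comparison, the linear-search fragment can be wrapped as a complete method; this sketch (the array values and the -1 not-found convention match the binary search figure, but the wrapping is ours) counts left-to-right examinations:

```java
public class LinearSearchDemo
{ /** linearSearch examines elements left to right;
    * returns the index of  item,  or  -1  if  item  is absent  */
  public static int linearSearch(int[] r, int item)
  { int index = 0;
    boolean searching = true;
    while ( searching  &&  index != r.length )
          { if ( r[index] == item )
                 { searching = false; }
            else { index = index + 1; }
          }
    return searching ? -1 : index;  // still searching means item was never found
  }

  public static void main(String[] args)
  { int[] r = {-2, 5, 8, 10, 11};  // sample sorted values (an assumption)
    System.out.println(linearSearch(r, 10));  // examines r[0]..r[3]; prints 3
    System.out.println(linearSearch(r, 7));   // prints -1
  }
}
```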

Binary search is a well-known programming challenge because it is easy to formulate incorrect versions. (Although the loop in Figure 17 is small, its invariant suggests that a lot of thought is embedded within it.) Also, small adjustments lead to fascinating variations. Here is a clever reformulation, due to N. Wirth:

public int binarySearch(int[] r, int item)
{ int lower = 0; 
  int upper = r.length-1; 
  int index = -1;
  while ( lower <= upper )
        // (1) lower != upper+2  implies that  item  is in range
        //     r[lower]..r[upper],  if it exists in  r  at all
        // (2) lower == upper+2  implies that  r[index] == item
        { index = (lower + upper) / 2;
          if ( item <= r[index] )
             { upper = index - 1; }
          if ( item >= r[index] )
             { lower = index + 1; }
        }
  if ( lower != upper+2 )
     { index = -1; }
  return index;
}
This algorithm folds the role of variable searching in Figure 17 into the lower and upper bounds of the search so that the loop's test becomes simpler. This alters the loop invariant so that the discovery of item is indicated by lower == upper+2. Both searching algorithms must terminate, because the value of upper-lower decreases at each iteration, ensuring that the loop's test eventually goes false.


  1. Use the binary search method in Figure 17 on the sorted array, {1, 2, 2, 4, 6}: Ask the method to search for 6; for 2; for 3. Write execution traces for these searches.
  2. Here is a binary search method due to R. Howell:
    public int search(int[] r, int item)
    { int answer = -1;
      if ( r.length > 0 )
         { int lower = 0;
           int upper = r.length;
           while ( upper - lower > 1 )
                 // item  is in  r[lower]..r[upper-1], if it is in  r
                 { int index = (lower + upper) / 2;
                   if ( r[index] > item )
                        { upper = index; }
                   else { lower = index; }
                 }
           if ( r[lower] == item ) { answer = lower; }
         }
      return answer;
    }
    Explain why the invariant and the termination of the loop ensure that the method returns a correct answer. Explain why the loop must terminate. (This is not trivial because the loop makes one extra iteration before it quits.)

8.7.5 Time-Complexity Measures

The previous section stated that binary search computes its answer far faster than does linear search. We can state how much faster by doing a form of counting analysis on the respective algorithms. The analysis will introduce us to a standard method for computing the time complexity of an algorithm. We then apply the method to analyze the time complexity of selection sort and insertion sort.

To analyze a searching algorithm, one counts the number of elements the algorithm must examine to find an item (or to report failure). Consider linear search: If array r has, say, N elements, we know that in the very worst case a linear search must examine all N elements to find the desired item or report failure. Of course, over many randomly generated test cases, the number of elements examined will average to about N/2, but in any case, the number of examinations is directly proportional to the array's length, and we say that the algorithm has order N (also known as linear) time complexity.

For example, a linear search of an array of 256 elements will require at most 256 examinations and 128 examinations on the average.

Because it halves its range of search at each element examination, binary search does significantly better than linear time complexity: For example, a worst case binary search of a 256-element array makes one examination in the middle of the 256 elements, then one examination in the middle of the remaining 128 elements, then one examination in the middle of the remaining 64 elements, and so on---a maximum of only 9 examinations are required!

We can state this behavior more precisely with a recursive definition. Let E(N) stand for the number of examinations binary search makes (in worst case) to find an item in an array of N elements.

Here is the exact number of examinations binary search does:

E(N) = 1 + E(N/2),  for  N > 1
E(1) = 1
The first equation states that a search of an array with multiple elements requires an examination of the array's middle element, and assuming the desired item is not found in the middle, a subsequent search of an array of half the length. An array of length 1 requires just one examination to terminate the search.

To simplify our analysis of the above equations, say the array's length is a power of 2, that is, N = 2^M, for some positive M. (For example, for N = 256, M is 8. Of course, not all arrays have a length that is exactly a power of 2, but we can always pretend that an array is ``padded'' with extra elements to make its length a power of 2.)

Here are the equations again:

E(2^M) = 1 + E(2^(M-1)),  for  M > 0
E(2^0) = 1
After several calculations with this definition (and a proof by induction---see the Exercises), we can convince ourselves that
E(2^M) = M + 1
a remarkably small answer!

We say that the binary search algorithm has order log N (or logarithmic) time complexity. (Recall that log N, or more precisely, log_2 N, is N's base-2 logarithm, that is, the exponent, M, such that 2^M equals N. For example, log 256 is 8, and log 100 falls between 6 and 7.) Because we started our analysis with the assumption that N = 2^M, we conclude that

E(N) = (log N) + 1
which shows that binary search has logarithmic time complexity.
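The recurrence for E can be checked mechanically; this small sketch (the method name E is ours) computes the count directly from its two defining equations and prints M + 1 for each power of 2:

```java
public class ExamCount
{ /** E computes the worst-case examination count from the recurrence:
    *  E(1) = 1;  E(N) = 1 + E(N/2),  for N > 1  */
  public static int E(int n)
  { return (n == 1) ? 1 : 1 + E(n / 2); }

  public static void main(String[] args)
  { for ( int m = 0;  m <= 8;  m = m+1 )
        { int n = 1 << m;  // n = 2^m
          System.out.println("E(" + n + ") = " + E(n));  // always prints m + 1
        }
  }
}
```

In particular, E(256) evaluates to 9, matching the nine examinations counted for the 256-element array above.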

It takes only a little experimentation to see, for large values of N, that log N is significantly less than N itself. This is reflected in the speed of execution of binary search, which behaves significantly better than linear search for large-sized arrays.

Analysis of Sorting Algorithms

Of course, binary search assumes that the array it searches is sorted, so we should calculate as well the time complexity of the sorting algorithms we studied. The two factors in the performance of a sorting algorithm are (i) the number of comparisons of array elements, and (ii) the number of exchanges of array elements. If either of these measures is high, this slows the algorithm.

Consider selection sort first (Figure 15); it locates and exchanges the smallest element, then the next smallest element, and so on. For an array of length N, it uses N-1 comparisons to find the smallest element, N-2 comparisons to find the next smallest element, and so on. The total number of comparisons is therefore

(N-1) + (N-2) + ...downto... + 2 + 1
From number theory (and an induction proof), we can discover that this sequence totals
(1/2) * N * (N-1)
that is, (1/2)N^2 - (1/2)N. When N has a substantial positive value, only the N^2 factor matters, so we say that the algorithm has order N^2 (quadratic) time complexity.

Algorithms with quadratic time complexity perform significantly slower than logarithmic and linear algorithms, and this slowness can be annoying when N is very large (e.g., for N equal to 100, N^2 is 10,000).

It is easy to see that selection sort does exactly N-1 exchanges of elements---a linear time complexity---so the exchanges are not the costly part of the algorithm.
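One way to confirm the comparison count is to instrument selection sort with a counter; in this sketch (a restructured selection sort with the counting added by us, and the reverse-ordered input an assumption), sorting a 100-element array performs exactly (1/2) * 100 * 99 = 4950 comparisons:

```java
public class ComparisonCount
{ public static int comparisons = 0;  // counts element comparisons

  public static void selectionSort(int[] r)
  { for ( int i = 0;  i != r.length;  i = i+1 )
        { int least = i;
          for ( int j = i+1;  j <= r.length-1;  j = j+1 )
              { comparisons = comparisons + 1;  // one comparison per inner step
                if ( r[j] < r[least] ) { least = j; }
              }
          int temp = r[i];  r[i] = r[least];  r[least] = temp;
        }
  }

  public static void main(String[] args)
  { int n = 100;
    int[] r = new int[n];
    for ( int i = 0;  i != n;  i = i+1 ) { r[i] = n - i; }  // reverse-ordered input
    selectionSort(r);
    System.out.println(comparisons);  // prints 4950, which is (1/2)*100*99
  }
}
```

Note that the count is the same for every input of length 100: selection sort's comparisons do not depend on the initial ordering.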

Next, we consider insertion sort (Figure 16); recall that it shifts elements, one by one, from right to left into their proper places. In worst case, insertion sort encounters an array whose elements are in reverse order. In this case, the algorithm's first iteration makes one comparison and one exchange; the second iteration makes two comparisons and two exchanges; and so on. The total number of comparisons and exchanges are the same, namely,

1 + 2 + ... + (N-2) + (N-1)
This is the same sequence we encountered in our analysis of selection sort, so we conclude that insertion sort also has quadratic time complexity.

Although selection sort's time complexity is stable across all possible permutations of arrays to be sorted, insertion sort executes much faster when it is given an almost completely sorted array to sort. This is because insertion sort shifts elements only when they are out of order. For example, if insertion sort is given an array of length N+1 where only one element is out of order, it will take only order N (linear) time to shift the element to its proper position. For this reason, insertion sort is preferred for sorting almost-sorted arrays.

In contrast, insertion sort does badly at exchanging elements when sorting an arbitrary array---it makes order N^2 exchanges, whereas selection sort limits its exchanges to at worst order N. Therefore, selection sort is preferred if there is substantial difficulty in moving elements of the array. (But this is not normally the case for Java arrays, because the elements of a Java array are either primitive values, like numbers, or addresses of objects. These values are easy to exchange.)


  1. To get intuition about time complexities, calculate the values of N, 5*N, log N, N^2, and (1/2)N^2 - (1/2)N for each of the following values of N: 4; 64; 128; 512; 1024; 16384.

    Then, reexamine the time complexities of the searching and sorting algorithms and describe how the algorithms would behave on arrays of size N, for the above values of N. (To give some perspective to the analysis, pretend that your computer is very slow and takes 0.1 seconds to perform a comparison or exchange operation.)

  2. Modify class Database in Figure 3 so that its insert method sorts the base array after a new record is added. (Warning---watch for null values in the array!) Because the contents of base are already sorted when a new element is inserted, does this simplify the sorting process? What form of sorting is better for this application---selection sort or insertion sort?

    Next, modify locationOf so that it uses binary search.

  3. Perform time-complexity analyses of the following methods:
    1. For Figure 1, Chapter 7, measure the time complexity of summation(N), depending on the value of N. Count the number of assignments the method makes.
    2. For Figure 3, Chapter 7, measure the time complexity of findChar(c, s), depending on the lengths of string s. Count the number of charAt operations the method makes.
    3. For Figure 13, measure the time complexity of paint, depending on the size of array that must be painted. Count the number of invocations of paintPiece.
  4. Our time-complexity analyses are a bit simplistic: a precise time-complexity analysis would count every operation that a computer's processor makes, that is, every arithmetic operation, every comparison operation, every variable reference, every assignment, every method invocation, every method return, etc. Perform such a detailed analysis for the algorithms in the previous Exercise; for linear search; for binary search. Are your answers significantly different than before?
  5. Use mathematical induction to prove that E(2^M) = M + 1, for all nonnegative values of M. This requires that you prove these two claims:
  6. Use mathematical induction to prove that (N-1) + (N-2) + ...downto... + 2 + 1 equals (1/2)N^2 - (1/2)N, for all values of N that are 2 or larger. This requires that you prove these two claims:

8.7.6 Divide-and-Conquer Algorithms

In the previous section, we saw that the binary search algorithm has a significantly better time complexity than the linear search algorithm. The time measurement for binary search was expressed by a recursive definition, which suggests that a recursion might be a factor in binary search's performance. This is indeed the case---binary search is an example of a style of recursion known as divide and conquer, which we study in this section.

First, Figure 18 shows binary search written in recursive style.

FIGURE 18: binary search by recursion=================================

/** binarySearch  searches for an item within a segment of a sorted array
  * @param  r - the array to be searched
  * @param item - the desired item
  * @param lower - the lower bound of the segment
  * @param upper - the upper bound of the segment
  * @return the index where  item  resides in  r[lower]..r[upper];
  *  return -1, if  item  is not present in the segment of  r  */
public int binarySearch(int[] r, int item, int lower, int upper)
{ int answer = -1;
  if ( lower <= upper )
     { int index = (lower + upper) / 2;
       if ( r[index] == item )
               { answer = index; }
       else if ( r[index] < item )
               { answer = binarySearch(r, item, index + 1, upper); }
       else { answer = binarySearch(r, item, lower, index - 1); }
     }
  return answer;
}

To search an entire array, a, for a value, v, the method is invoked as binarySearch(a, v, 0, a.length-1). The method clearly shows that, at each recursive invocation, the segment searched is divided in half. Eventually, the desired item is found or the segment is divided into nothing.

The method in the Figure is an example of a divide-and-conquer algorithm, so called because the algorithm divides its argument, the array, into smaller segments at each invocation. The divide-and-conquer pattern uses recursion correctly, because each recursive invocation operates on parameters (the array segments) that grow smaller until they reach a stopping value (size 0).

Merge sort

Sorting can be accomplished with a divide-and-conquer algorithm, which proceeds as follows: To sort a complete array, r,

  1. Divide the array into two smaller segments, call them s1 and s2.
  2. Sort s1.
  3. Sort s2.
  4. Merge the two sorted segments to form the completely sorted array.
The merge step goes as follows: Say that you have a deck of cards you wish to sort. You divide the deck in half and somehow sort each half into its own pile. You merge the two piles by playing this ``game'': Turn over the top card from each pile. (The top cards represent the lowest-valued cards of the two piles.) Take the lower-valued of the two cards, form a new pile with it, and turn over the next card from the pile from which you took the lower-valued card. Repeat the game until all the cards are moved into the third pile, which will be the entire deck, sorted.

Figure 19 shows the method based on this algorithm, called merge sort.

FIGURE 19: merge sort=================================================

/** mergeSort builds a sorted array segment
  * @param r - the array
  * @param lower - the lower bound of the segment to be sorted
  * @param upper - the upper bound of the segment to be sorted
  * @return a sorted array whose elements are those in  r[lower]..r[upper]  */
public int[] mergeSort(int[] r, int lower, int upper)
{ int[] answer;
  if ( lower > upper )  // is it an empty segment?
     { answer = new int[0]; }
  else if ( lower == upper ) // is it a segment of just one element?
     { answer = new int[1];
       answer[0] = r[lower];
     }
  else // it is a segment of length 2 or more, so divide and conquer:
     { int middle = (lower + upper) / 2;
       int[] s1 = mergeSort(r, lower, middle);
       int[] s2 = mergeSort(r, middle+1, upper);
       answer = merge(s1, s2);
     }
  return answer;
}

/** merge builds a sorted array by merging its two sorted arguments
  * @param r1 - the first sorted array
  * @param r2 - the second sorted array
  * @return a sorted array whose elements are exactly those of  r1 and r2  */
private int[] merge(int[] r1, int[] r2)
{ int length = r1.length + r2.length;
  int[] answer = new int[length];
  int index1 = 0;
  int index2 = 0;
  for ( int i = 0;  i != length;  i = i+1 )
      // invariant: answer[0]..answer[i-1]  is sorted and holds the elements of 
      //   r1[0]..r1[index1-1]  and  r2[0]..r2[index2-1]
      { if (    index1 == r1.length
           || ( index2 != r2.length  &&  r2[index2] < r1[index1] ) )
             { answer[i] = r2[index2];
               index2 = index2 + 1;
             }
        else { answer[i] = r1[index1];
               index1 = index1 + 1;
             }
      }
  return answer;
}

Like the recursive version of binary search, mergeSort is first invoked as mergeSort(a, 0, a.length-1) to indicate that all the elements in array a should be sorted. The method returns a new array that contains a's elements reordered.

Method mergeSort first verifies that the segment of the array it must sort has at least two elements; if it does, the segment is divided in two, the subsegments are sorted, and merge combines the two sorted subarrays into the answer.
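Here is Figure 19's algorithm as a self-contained sketch; the static modifiers and the five sample values are assumptions added for the demonstration:

```java
public class MergeSortDemo
{ public static int[] mergeSort(int[] r, int lower, int upper)
  { int[] answer;
    if ( lower > upper )                 // empty segment
       { answer = new int[0]; }
    else if ( lower == upper )           // one-element segment
       { answer = new int[1];
         answer[0] = r[lower];
       }
    else                                 // divide, conquer, and merge
       { int middle = (lower + upper) / 2;
         int[] s1 = mergeSort(r, lower, middle);
         int[] s2 = mergeSort(r, middle+1, upper);
         answer = merge(s1, s2);
       }
    return answer;
  }

  private static int[] merge(int[] r1, int[] r2)
  { int[] answer = new int[r1.length + r2.length];
    int index1 = 0;
    int index2 = 0;
    for ( int i = 0;  i != answer.length;  i = i+1 )
        { // take from r2 when r1 is exhausted or r2's next element is smaller:
          if (    index1 == r1.length
              || ( index2 != r2.length  &&  r2[index2] < r1[index1] ) )
               { answer[i] = r2[index2];
                 index2 = index2 + 1;
               }
          else { answer[i] = r1[index1];
                 index1 = index1 + 1;
               }
        }
    return answer;
  }

  public static void main(String[] args)
  { int[] r = {11, 8, -2, 10, 5};  // sample values are an assumption
    int[] sorted = mergeSort(r, 0, r.length-1);
    System.out.println(java.util.Arrays.toString(sorted));  // prints [-2, 5, 8, 10, 11]
  }
}
```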

The time complexity of merge sort is significantly better than that of the other sorting algorithms seen so far; we consider the number of comparisons the algorithm makes. (The analysis of element exchanges goes the same way.)

First, we note that merge(r1, r2) makes at least as many comparisons as there are elements in the shorter of its two array parameters, but it will be convenient to overestimate and state that no more than r1.length + r2.length comparisons are ever made.

Next, we define the comparisons made by mergeSort on an array of length N as the quantity, C(N):

C(N) = C(N / 2) + C(N / 2) + N,  if  N > 1
C(1) = 0
The first equation states that the total comparisons to sort an array of length 2 or more is the sum of the comparisons needed to sort the left segment, the comparisons needed to sort the right segment, and the comparisons needed to merge the two sorted segments. Of course, an array of length 1 requires no comparisons.

Our analysis of these equations goes more simply if we pretend the array's length is a power of 2, that is, N = 2^M, for some nonnegative M:

C(2^M) = C(2^(M-1)) + C(2^(M-1)) + 2^M
C(2^0) = 0

These equations look like the ones discovered in the analysis of binary search. Indeed, if we divide both sides of the first equation by 2^M, we see the pattern in the binary search equation:

C(2^M)       C(2^(M-1))
------   =   ----------  + 1
  2^M          2^(M-1)

As with the binary search equation, we can conclude that

C(2^M)
------  =  M
  2^M

When we multiply both sides of the above solution by 2^M, we see that
C(2^M) = 2^M * M
and since N = 2^M, we have that
C(N) = N * log N
We say that merge sort has order N log N time complexity. Such algorithms perform almost as well as linear-time algorithms, so our discovery is significant.
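The recurrence for C can also be evaluated directly; this sketch (the method name C is ours) confirms that C(2^M) equals 2^M * M for small powers of 2:

```java
public class MergeCount
{ /** C computes merge sort's comparison bound from the recurrence:
    *  C(1) = 0;  C(N) = C(N/2) + C(N/2) + N,  for N > 1  */
  public static long C(long n)
  { return (n == 1) ? 0 : 2 * C(n / 2) + n; }

  public static void main(String[] args)
  { for ( int m = 1;  m <= 10;  m = m+1 )
        { long n = 1L << m;  // n = 2^m
          System.out.println("C(" + n + ") = " + C(n)
                             + "  (n * log n = " + (n * m) + ")");
        }
  }
}
```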

Alas, mergeSort suffers from a significant flaw: When it sorts an array, it creates additional arrays for merging---this will prove expensive when sorting large arrays. The method in Figure 19 freely created many extra arrays, but if we are careful, we can write a version of mergeSort that creates no more than one extra array the same size as the original, unsorted array. For arrays that model large databases, even this might be unacceptable, unfortunately.


Quicksort

A brilliant solution to the extra-array problem was presented by C.A.R. Hoare in the guise of the ``quicksort'' algorithm. Like merge sort, quicksort uses the divide-and-conquer technique, but it cleverly rebuilds the sorted array segments within the original array: It replaces the merge step, which occurred after the recursive invocations, with a partitioning step, which occurs before the recursive invocations.

The idea behind partitioning can be understood this way: Say that you have a deck of unsorted playing cards. You partition the cards by (i) choosing a card at random from the deck and (ii) creating two piles from the remaining cards by placing those cards whose values are less than the chosen card in one pile and placing those cards whose values are greater than the chosen card in the other.

It is a small step from partitioning to sorting: If you sort the cards in each pile, then the entire deck is sorted by just concatenating the piles. This is a classic divide-and-conquer strategy and forms the algorithm for quicksort. Given an array, r, whose elements are numbered r[lower] to r[upper]:

  1. Rearrange (partition) r into two nonempty subarrays so that there is an index, m, such that all the elements in r[lower]..r[m] are less than or equal to all elements in r[m+1]..r[upper].
  2. Sort the partition r[lower]..r[m].
  3. Sort the partition r[m+1]..r[upper].
The end result must be the array entirely sorted.

Figure 20 gives the quickSort method, which is invoked as quickSort(r, 0, r.length-1), for array r. The hard work is done by partition(r, lower, upper), which partitions the elements in the range r[lower]..r[upper] into two groups. The method uses the element at r[lower] as the ``pivot'' value for partitioning as it scans the elements from left to right, moving those values less than the pivot to the left side of the subarray. Once all the elements are scanned, the ones less than the pivot form the first partition, and the ones greater-or-equal to the pivot form the second partition.

FIGURE 20: quicksort===================================================

/** quickSort sorts an array within the indicated bounds
  * @param r - the array to be sorted
  * @param lower - the lower bound of the elements to be sorted
  * @param upper - the upper bound of the elements to be sorted  */
public void quickSort(int[] r, int lower, int upper)
{ if ( lower < upper )
     { int middle = partition(r, lower, upper);
       quickSort(r, lower, middle);
       quickSort(r, middle+1, upper);
     }
}

/** partition rearranges an array's elements into two nonempty partitions
  * @param r - an array of length 2 or more
  * @param lower - the lower bound of the elements to be partitioned
  * @param upper - the upper bound of the elements to be partitioned
  * @return the index,  m,  such that all elements in the nonempty partition,
  *   r[lower]..r[m],  are  <=  all elements in the nonempty partition, 
  *   r[m+1]..r[upper]  */
private int partition(int[] r, int lower, int upper)
{ int v = r[lower];   // the ``pivot'' value used to make the partitions
  int m = lower - 1;  // marks the right end of the first partition
  int i = lower + 1;  // marks the right end of the second partition
  while ( i <= upper )
        // invariant: (i) all of  r[lower]..r[m]  are < v
        //           (ii) all of  r[m+1]..r[i-1]  are >= v,
        //                and the partition is nonempty
        { if ( r[i] < v )
             { // insert r[i] at the end of the first partition
               // by exchanging it with  r[m+1]:
               m = m + 1;
               int temp = r[i];
               r[i] = r[m];
               r[m] = temp;
             }
          i = i + 1;
        }
  if ( m == lower - 1 )  // after all the work, is the first partition empty?
     { m = m + 1; }      // then place  r[lower], which is  v,  into it
  return m;
}


It is essential that both of the partitions created by partition are nonempty. For this reason, a conditional statement after the while-loop asks whether the partition of elements less than the pivot is empty. If it is, this means the pivot is the smallest value in the subarray, no exchanges were made, and the pivot remains at r[lower]. In this case, the pivot value itself becomes the first partition.

We can see partitioning at work in an example. Say that we invoke quickSort(r, 0, 6), which immediately invokes partition(r, 0, 6) for the array r shown below. The variables in partition are initialized as follows:

We position m and i under the array to indicate the variables' values. The pivot value is r[0]---5. Values less than the pivot will be moved to the left; the other values will move to the right.

Within partition's while-loop, i moves right, searching for a value less than 5; it finds one at element 2. This causes r[2] to be moved to the end of the first partition---it is exchanged with r[m+1], and both m and i are incremented. Here is the resulting situation:

A check of the loop invariant verifies that the elements in the range r[0] to r[m] are less than the pivot and that the values in the range r[m+1] to r[i-1] are greater than or equal to the pivot.

Immediately, i has located another value to be moved to the first partition. An exchange is undertaken between r[1] and r[3], producing the following:

The process continues; one more exchange is made. When the method finishes, here is the partitioned array:

Since m is 2, the partitions are r[0]..r[2] and r[3]..r[6].

Once a partitioning step is complete, quickSort recursively sorts the two partitions. This causes each subarray, r[0]..r[2] and r[3]..r[6], to be partitioned and recursively sorted. (That is, the invocation quickSort(r, 0, 2) invokes partition(r, 0, 2), the invocation quickSort(r, 3, 6) invokes partition(r, 3, 6), and so on.) Eventually, partitions of size 1 are reached, stopping the recursive invocations.
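The pattern of recursive invocations can be observed with a small driver. The sketch below is illustrative (the class name and array values are our own, and the methods are made static so that main can call them): it prints the bounds of each invocation that does real work, then the sorted array.

```java
// QuickSortTrace: an illustrative driver for watching the recursion.
// The class name and array values are invented for this demonstration.
import java.util.Arrays;

public class QuickSortTrace
{ static void quickSort(int[] r, int lower, int upper)
  { if ( lower < upper )
       { System.out.println("quickSort(r, " + lower + ", " + upper + ")");
         int middle = partition(r, lower, upper);
         quickSort(r, lower, middle);
         quickSort(r, middle+1, upper);
       }
  }

  static int partition(int[] r, int lower, int upper)
  { int v = r[lower];
    int m = lower - 1;
    for ( int i = lower + 1;  i <= upper;  i = i+1 )
        { if ( r[i] < v )
             { m = m + 1;
               int temp = r[i];  r[i] = r[m];  r[m] = temp;
             }
        }
    if ( m == lower - 1 )
       { m = m + 1; }
    return m;
  }

  public static void main(String[] args)
  { int[] r = {5, 8, 2, 3, 9, 7, 6};
    quickSort(r, 0, r.length - 1);
    System.out.println(Arrays.toString(r));  // prints  [2, 3, 5, 6, 7, 8, 9]
  }
}
```

Each printed line names a subarray of size 2 or more; invocations on subarrays of size 1 (or less) return immediately and are not printed.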

For quickSort to perform at its best, the partition method must generate partitions of equal size. In that case, each recursive invocation of quickSort operates on an array segment half the size of the previous one, and the time complexity is the same as mergeSort's---order N log N. But there is no guarantee that partition will always break an array into two equally sized partitions---if the pivot value, v, is the largest (or smallest) value in an array segment of size N, then partition creates one partition of size 1 and one of size N-1. For example, if array r were already sorted

and we invoked partition(r, 0, 6), then partition would choose the pivot to be 1 and would create the partitions r[0] and r[1]..r[6]. The subsequent recursive invocation to quickSort(r, 1, 6) causes another such partitioning: r[1] and r[2]..r[6]. This behavior repeats for all the recursive calls.

In a case like the above, quickSort degenerates into a variation of insertion sort and operates with order N^2 time complexity. Hence, if quickSort is applied often to sorted or almost-sorted arrays, partition should choose a pivot value from the middle of the array rather than from the end (see the Exercises below). Studies of randomly generated arrays show that quickSort behaves, on the average, with order N log N time complexity.
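This contrast can be measured with a small experiment. The sketch below is our own addition, not part of the text's code: it counts the element comparisons made by partition, first on an already sorted array and then on a randomly generated one of the same length. (The counter field, class name, seed, and array size are invented for the demonstration.)

```java
// QuickSortCost: an illustrative experiment that counts the element
// comparisons made by  partition  on a sorted array versus a random one.
// The counter field, class name, seed, and array size are our inventions.
import java.util.Random;

public class QuickSortCost
{ static long comparisons = 0;

  static void quickSort(int[] r, int lower, int upper)
  { if ( lower < upper )
       { int middle = partition(r, lower, upper);
         quickSort(r, lower, middle);
         quickSort(r, middle+1, upper);
       }
  }

  static int partition(int[] r, int lower, int upper)
  { int v = r[lower];
    int m = lower - 1;
    for ( int i = lower + 1;  i <= upper;  i = i+1 )
        { comparisons = comparisons + 1;  // one comparison of r[i] with v
          if ( r[i] < v )
             { m = m + 1;
               int temp = r[i];  r[i] = r[m];  r[m] = temp;
             }
        }
    if ( m == lower - 1 )
       { m = m + 1; }
    return m;
  }

  public static void main(String[] args)
  { int n = 1000;
    int[] sorted = new int[n];
    for ( int i = 0;  i != n;  i = i+1 )
        { sorted[i] = i; }
    quickSort(sorted, 0, n-1);
    System.out.println("sorted input: " + comparisons);  // 999 + 998 + ... + 1 = 499500

    comparisons = 0;
    int[] random = new int[n];
    Random gen = new Random(42);  // a fixed seed, chosen arbitrarily
    for ( int i = 0;  i != n;  i = i+1 )
        { random[i] = gen.nextInt(); }
    quickSort(random, 0, n-1);
    System.out.println("random input: " + comparisons);  // roughly 1.4 * N * log2(N)
  }
}
```

On the sorted input, every partition splits off a single element, so the counts form the sum 999 + 998 + ... + 1 = 499500---order N^2 behavior; the random input typically needs only on the order of N log N comparisons.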


  1. To gain understanding, apply iterative binarySearch in Figure 17 and recursive binarySearch in Figure 18 to locate the value, 9, in the array, int[] r = {-2, 5, 8, 9, 11, 14}. Write execution traces.
  2. Write an execution trace of mergeSort applied to the array int[] r = {5, 8, -2, 11, 9}.
  3. Rewrite mergeSort in Figure 19 so that it does not create multiple new arrays. Instead, use this variant:
    /** mergeSort sorts a segment of an array, r
      * @param r - the array whose elements must be sorted
      * @param scratch - an extra array that is the same length as  r
      * @param lower - the lower bound of the segment to be sorted
      * @param upper - the upper bound of the segment to be sorted  */
    public void mergeSort(int[] r, int[] scratch, int lower, int upper)
    { ...
      mergeSort(r, scratch, lower, middle);
      mergeSort(r, scratch, middle+1, upper);
      ...
    }
    The method is initially invoked as follows: mergeSort(a, new int[a.length], 0, a.length-1).
  4. Finish the execution traces for the example in this section that uses quickSort.
  5. Write a partition algorithm for use by quickSort that chooses a pivot value in the middle of the subarray to be partitioned.
  6. Because quickSort's partition method is sensitive to the pivot value it chooses for partitioning, a standard improvement is to revise partition so that, when it partitions a subarray of size 3 or larger, partition chooses 3 array elements from the subarray and picks the median (the ``middle value'') as the pivot. Revise partition in this way.