
Friday, 10 February 2012

Sorting algorithms


Introduction

I grew up with the bubble sort, in common, I am sure, with many colleagues. Having learnt one sorting algorithm, there seemed little point in learning any others; it was hardly an exciting area of study. Mundane sorting may be, but it is also central to many tasks carried out on a computer. Prompted by its inclusion in the AEB 'A' level syllabus, I looked at the process again in detail. The efficiency with which sorting is carried out will often have a significant impact on the overall efficiency of a program. Consequently there has been much research, and it is interesting to see the range of alternative algorithms that have been developed.
It is not always possible to say that one algorithm is better than another, as relative performance can vary depending on the type of data being sorted. In some situations, most of the data are in the correct order, with only a few items needing to be sorted. In other situations the data are completely mixed up in a random order and in others the data will tend to be in reverse order. Different algorithms will perform differently according to the data being sorted. Four common algorithms are the exchange or bubble sort, the selection sort, the insertion sort and the quick sort.
The selection sort is a good one to use with students. It is intuitive and very simple to program. It offers quite good performance, its particular strength being the small number of exchanges needed. For a given number of data items, the selection sort always goes through a set number of comparisons and exchanges, so its performance is predictable.
procedure SelectionSort(var d: DataArrayType; n: integer); {n is the number of elements}
var
  k, j, small: integer;
begin
  for k := 1 to n - 1 do
  begin
    small := k;
    for j := k + 1 to n do
      if d[j] < d[small] then small := j;
    {Swap elements k and small}
    Swap(d, k, small)
  end
end;
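For readers who prefer a runnable version, here is the same algorithm sketched in Python (zero-based indexing; the function name and sample data are illustrative, not part of the original listing):

```python
def selection_sort(d):
    """Sort the list d in place by repeatedly selecting the
    smallest remaining element and swapping it into place."""
    n = len(d)
    for k in range(n - 1):
        small = k
        # find the smallest element in the unsorted tail d[k+1:]
        for j in range(k + 1, n):
            if d[j] < d[small]:
                small = j
        # swap elements k and small: at most one exchange per pass
        d[k], d[small] = d[small], d[k]

data = [27, 63, 1, 72, 64, 58, 14, 9]
selection_sort(data)
print(data)  # [1, 9, 14, 27, 58, 63, 64, 72]
```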

Below are some common sorting algorithms:
  1. Exchange (Bubble) Sort
  2. Insertion Sort
  3. Selection Sort
  4. Shell Sort
  5. Merge Sort
  6. Heap Sort
  7. Quick Sort
  8. Quick3 Sort

Comparing the Algorithms

There are two important factors when measuring the performance of a sorting algorithm. The algorithms have to compare the magnitude of different elements and they have to move the different elements around. So counting the number of comparisons and the number of exchanges or moves made by an algorithm offer useful performance measures. When sorting large record structures, the number of exchanges made may be the principal performance criterion, since exchanging two records will involve a lot of work. When sorting a simple array of integers, then the number of comparisons will be more important.
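Both measures are easy to collect by instrumenting an algorithm. As a sketch (the function name is my own), here is a selection sort that returns its comparison and exchange counts; for n elements it always performs n(n-1)/2 comparisons and n-1 exchanges, whatever the data:

```python
import random

def selection_sort_counted(d):
    """Selection sort returning (comparisons, exchanges) made sorting d in place."""
    comparisons = exchanges = 0
    n = len(d)
    for k in range(n - 1):
        small = k
        for j in range(k + 1, n):
            comparisons += 1          # one comparison per inner step
            if d[j] < d[small]:
                small = j
        d[k], d[small] = d[small], d[k]
        exchanges += 1                # one exchange per pass
    return comparisons, exchanges

data = [random.randrange(1000) for _ in range(50)]
print(selection_sort_counted(data))  # (1225, 49) for any 50-element list
```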
It has been said that the only thing going for the bubble (exchange) sort is its catchy name. The logic of the algorithm is simple to understand and it is fairly easy to program. It can also be programmed to detect when it has finished sorting. The selection sort, by comparison, always goes through the same amount of work regardless of the data and the quick sort performs particularly badly with ordered data. However, in general the bubble sort is a very inefficient algorithm.
The insertion sort is a little better and whilst it cannot detect that it has finished sorting, the logic of the algorithm means that it comes to a rapid conclusion when dealing with sorted data.
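The bubble sort's ability to detect that it has finished, mentioned above, can be made concrete. A minimal Python sketch (my naming): if a complete pass makes no exchanges, the data must already be in order, so the sort stops:

```python
def bubble_sort(d):
    """Bubble sort that stops as soon as a full pass makes no exchanges."""
    n = len(d)
    for k in range(n - 1):
        swapped = False
        for j in range(n - 1 - k):
            if d[j] > d[j + 1]:
                d[j], d[j + 1] = d[j + 1], d[j]
                swapped = True
        if not swapped:
            break  # no exchanges on this pass: already sorted

data = [1, 2, 3, 5, 4]       # nearly sorted
bubble_sort(data)            # finishes after just two passes
print(data)                  # [1, 2, 3, 4, 5]
```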
The first three algorithms all offer O(n²) performance; that is, sorting times increase with the square of the number of elements being sorted. That means that if you double the number of elements being sorted, there will be a four-fold increase in the time taken. Ten times more elements increases the time taken by a factor of 100! This is not a problem with small data sets, but with hundreds or thousands of elements it becomes very significant. With most large data sets, the quick sort is a vastly superior algorithm (although, as you might expect, it is much more complex), as the table below shows.

Random Data Set: Number of comparisons made

Sort \ Elements      50     100     200     300     400      500
Selection Sort     1225    4950   19900   44850   79800   124750
Exchange Sort      1410    5335   20300   45650   79866   126585
Insertion Sort     1391    5399   20473   44449   78779   123715
Quick Sort          399     990    1954    3384    5066     6256
It should be pointed out that the methods above all belong to one family, they are all internal sorting algorithms. This means that they can only be used when the entire data structure to be sorted can be held in the computer's main memory. There will be situations where this is not possible, for example when sorting a very large transaction file which is stored on, say, magnetic tape or disc. Then an external sorting algorithm will be needed.

Quick Sort

Element     1   2   3   4   5   6   7   8
Data       27  63   1  72  64  58  14   9
1st pass    1   9  63  72  64  58  14  27
2nd pass    1   9  14  27  64  58  72  63
3rd pass    1   9  14  27  58  63  72  64
4th pass    1   9  14  27  58  63  64  72   sorted!
The quick sort takes the last element (9) and places it such that all the numbers in the left sub-list are smaller and all the numbers in the right sub-list are bigger. It then quick sorts the left sub-list ({1}) and then quick sorts the right sub-list ({63, 72, 64, 58, 14, 27}). This is a recursive algorithm, since it is defined in terms of itself. Recursion reduces the complexity of programming it; however, it is the least intuitive of the four algorithms.
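The partition step can be sketched in Python using the last element as pivot. This sketch uses the Lomuto scheme, so the intermediate orderings differ from the hand trace above, but the outcome of each partition is the same: the pivot lands in its final position with smaller elements to its left and larger ones to its right:

```python
def partition(d, lo, hi):
    """Partition d[lo..hi] around the last element; return the pivot's final index."""
    pivot = d[hi]
    k = lo                       # boundary: d[lo:k] holds elements < pivot
    for i in range(lo, hi):
        if d[i] < pivot:
            d[i], d[k] = d[k], d[i]
            k += 1
    d[k], d[hi] = d[hi], d[k]    # place the pivot between the two sub-lists
    return k

def quick_sort(d, lo=0, hi=None):
    if hi is None:
        hi = len(d) - 1
    if lo < hi:
        p = partition(d, lo, hi)
        quick_sort(d, lo, p - 1)     # quick sort the left sub-list
        quick_sort(d, p + 1, hi)     # quick sort the right sub-list

data = [27, 63, 1, 72, 64, 58, 14, 9]
quick_sort(data)
print(data)  # [1, 9, 14, 27, 58, 63, 64, 72]
```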

Discussion

When carefully implemented, quick sort is robust and has low overhead. When a stable sort is not needed, quick sort is an excellent general-purpose sort -- although the 3-way partitioning version should always be used instead.
The 2-way partitioning code shown below is written for clarity rather than optimal performance; it exhibits poor locality and, critically, takes O(n²) time when there are few unique keys. A more efficient and robust 2-way partitioning method is given in Quicksort is Optimal by Robert Sedgewick and Jon Bentley. The robust partitioning produces balanced recursion when there are many values equal to the pivot, yielding probabilistic guarantees of O(n·lg(n)) time and O(lg(n)) space for all inputs.
With both sub-sorts performed recursively, quick sort requires O(n) extra space for the recursion stack in the worst case when recursion is not balanced. This is exceedingly unlikely to occur, but it can be avoided by sorting the smaller sub-array recursively first; the second sub-array sort is a tail recursive call, which may be done with iteration instead. With this optimization, the algorithm uses O(lg(n)) extra space in the worst case.
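This smaller-sub-array-first optimisation can be sketched in Python (a hypothetical name, with a Lomuto-style partition standing in for whichever partition is used): the smaller side is sorted recursively, while the loop takes over the tail call on the larger side, bounding the stack depth:

```python
def quick_sort_small_first(d, lo=0, hi=None):
    """Quick sort that recurses only into the smaller partition;
    the larger partition is handled by the loop (the eliminated
    tail call), so the stack depth is at most O(lg n)."""
    if hi is None:
        hi = len(d) - 1
    while lo < hi:
        # Lomuto-style partition around the last element
        pivot, k = d[hi], lo
        for i in range(lo, hi):
            if d[i] < pivot:
                d[i], d[k] = d[k], d[i]
                k += 1
        d[k], d[hi] = d[hi], d[k]
        # recurse on the smaller side, iterate on the larger side
        if k - lo < hi - k:
            quick_sort_small_first(d, lo, k - 1)
            lo = k + 1
        else:
            quick_sort_small_first(d, k + 1, hi)
            hi = k - 1
```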

Algorithm

# choose pivot
swap a[1,rand(1,n)]

# 2-way partition
k = 1
for i = 2:n, if a[i] < a[1], swap a[++k,i]
swap a[1,k]
→ invariant: a[1..k-1] < a[k] <= a[k+1..n]

# recursive sorts
sort a[1..k-1]
sort a[k+1..n]
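A direct Python translation of this pseudocode (zero-based indices; `quick_sort_2way` is my name for it) keeps the random pivot choice and the 2-way partition invariant:

```python
import random

def quick_sort_2way(a, lo=0, hi=None):
    """Quick sort with a random pivot and 2-way partitioning, as in the pseudocode."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # choose pivot: swap a random element into position lo
    r = random.randint(lo, hi)
    a[lo], a[r] = a[r], a[lo]
    # 2-way partition
    k = lo
    for i in range(lo + 1, hi + 1):
        if a[i] < a[lo]:
            k += 1
            a[k], a[i] = a[i], a[k]
    a[lo], a[k] = a[k], a[lo]
    # invariant: a[lo..k-1] < a[k] <= a[k+1..hi]
    # recursive sorts
    quick_sort_2way(a, lo, k - 1)
    quick_sort_2way(a, k + 1, hi)
```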

Properties

  • Not stable
  • O(lg(n)) extra space (see discussion)
  • O(n²) time in the worst case, but typically O(n·lg(n)) time
  • Not adaptive

References

Programming Pearls by Jon Bentley. Addison Wesley, 1986.
Quicksort is Optimal by Robert Sedgewick and Jon Bentley, Knuthfest, Stanford University, January, 2002.
Bubble-sort with Hungarian ("Csángó") folk dance YouTube video, created at Sapientia University, Tirgu Mures (Marosvásárhely), Romania.
Select-sort with Gypsy folk dance YouTube video, created at Sapientia University, Tirgu Mures (Marosvásárhely), Romania.
The Beauty of Sorting YouTube video, Dynamic Graphics Project, Computer Systems Research Group, University of Toronto.