Bubble Sort
This sort makes a pass from low to high, comparing adjacent elements and swapping the higher value up to the next position. You can see how it gets its name in the viewer, as each pass percolates, or bubbles, the element with the maximum value up to the highest position. Passes are made repeatedly until a pass completes with no swaps.
Since each pass moves the highest remaining value to the top, you will see that each succeeding pass does not look at (compare) the element placed on the previous pass. This routine has been further modified to begin comparing only a single element below the first swap of the previous pass, since the earlier passes have already ordered the lower elements. This is my contribution to the classic routine.
Although generally the bubble sort is the slowest, it is still frequently used because of the ease of programming it. Interestingly, it is one of the fastest sorts if very few elements are out of order. This may be useful in a situation where an array is to be ordered after the addition of a single element.
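For reference, here is a minimal Python sketch of a plain bubble sort with early termination, without the first-swap refinement described above; the name bubble_sort and the list-based interface are just for illustration.

    def bubble_sort(a):
        n = len(a)
        while True:
            swapped = False
            # One low-to-high pass, swapping any adjacent pair that is out of order.
            for i in range(n - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            n -= 1            # the highest value just bubbled into place
            if not swapped:   # a pass with no swaps means the array is sorted
                break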
Bi-Bubble Sort
Imagine this sort as a bubble sort that works in both directions. First the highest value is bubbled up, and then the lowest value is bubbled down. This continues until no swaps are made in one direction. Note that neither the highest nor the lowest element placed by the previous pass needs to be compared again, because each is guaranteed to have been bubbled into its end position.
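A rough Python sketch of the idea, in the spirit of the description rather than the viewer's own routine; note that this version stops when a full up-and-down cycle makes no swaps, and the name bi_bubble_sort is only illustrative.

    def bi_bubble_sort(a):
        lo, hi = 0, len(a) - 1
        swapped = True
        while swapped and lo < hi:
            swapped = False
            for i in range(lo, hi):            # upward pass: bubble the largest up
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            hi -= 1                            # highest element is now in place
            for i in range(hi, lo, -1):        # downward pass: bubble the smallest down
                if a[i] < a[i - 1]:
                    a[i], a[i - 1] = a[i - 1], a[i]
                    swapped = True
            lo += 1                            # lowest element is now in place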
Count Sort
This is an intriguingly simple and rocket fast sort. It makes two passes through the array. The first examines or counts the values, and the second pass distributes them.
To do this, a second array is dimensioned large enough to cover all the possible values. The position (index) in that array then represents a value, and the number stored at that position is the count of elements having that value.
While it is really beyond the scope of the viewer to include sort methods that involve multiple arrays (since you can't see the other array), this method seemed so novel to me (and by far the fastest) that I was compelled to include it.
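Here is a minimal Python sketch of the two-pass idea, assuming the values are non-negative integers no larger than max_value; the names are illustrative only.

    def count_sort(a, max_value):
        counts = [0] * (max_value + 1)    # one slot for every possible value
        for x in a:                       # first pass: count the values
            counts[x] += 1
        i = 0
        for value, count in enumerate(counts):
            for _ in range(count):        # second pass: distribute them back
                a[i] = value
                i += 1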
Heap Sort
The technical discussion of this sort rapidly gets into vocabulary such as trees, nodes, leaves, parents, and children. Suffice it to say that the largest element is magically sifted down and then placed at the other end of the array, so the portion left to sort is one element smaller on the next pass.
It seems like the oddest sort because you get the feeling that it is going in the wrong direction, and then suddenly it changes its mind and goes the right way. Fantastically, it is one of the fastest sorts going, and it has none of the stack problems of the recursive quick sort. This combination has made it one of the most popular sorts in modern applications.
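For readers who want to see the sift-and-swap cycle spelled out, here is a compact Python sketch of a textbook heap sort; it is not the viewer's routine, and the helper name sift_down is just for illustration.

    def heap_sort(a):
        n = len(a)

        def sift_down(root, end):
            # Push the value at root down until neither child is larger.
            while True:
                child = 2 * root + 1
                if child > end:
                    break
                if child + 1 <= end and a[child + 1] > a[child]:
                    child += 1               # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    break

        # Build a max-heap, then repeatedly move the largest value to the end.
        for root in range(n // 2 - 1, -1, -1):
            sift_down(root, n - 1)
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]      # largest goes to the other end
            sift_down(0, end - 1)            # the range left to sort shrinks by one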
Insertion Sort
The insertion sort is one step up the evolutionary ladder from the bubble sort. It works by a kind of reverse-osmosis process: it makes a single forward pass, but when it encounters a lower value it carries that value all the way back down to its proper position. You will see that the one big advantage of this is that once the low element is dropped off in its position, the sort resumes back up where it first found that element.
It is faster than a bubble sort and is useful when the array has only a few elements out of position.
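A minimal Python sketch of the idea; it moves values by the shuffle technique mentioned in the general notes below rather than by repeated swaps, and the name insertion_sort is illustrative.

    def insertion_sort(a):
        for i in range(1, len(a)):
            temp = a[i]                   # the element to carry back down
            j = i - 1
            while j >= 0 and a[j] > temp:
                a[j + 1] = a[j]           # shuffle the larger value up one slot
                j -= 1
            a[j + 1] = temp               # drop it off in its proper position
            # The outer loop then resumes at i + 1, where temp was first found.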
Interpolation Sort
This is totally my idea, written from scratch after seeing the phrase interpolation sort but being unable to find a definition or an example on the net. First a single-pass selection sort is performed to place the minimum and maximum values. Then the positions of the remaining values are determined by a simple interpolation between these min and max values.
Only one such pass can effectively be made, putting as many elements as possible into approximately the right position; a cleanup sort is then required. It happens to work here without cleanup because no two elements have the same value. In the Sort Timer the follow-up is accomplished with an insertion sort.
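Since the routine above is the author's own, the following Python sketch is only one possible reading of the description: a single placement pass using interpolated target positions, followed by an insertion-sort cleanup. The min/max scan here stands in for the single-pass selection step, and all names are illustrative.

    def interpolation_sort(a):
        n = len(a)
        if n < 2:
            return
        lo, hi = min(a), max(a)            # stand-in for the min/max placement step
        if lo == hi:
            return
        # One pass: swap each element toward its interpolated target position.
        for i in range(n):
            target = int((a[i] - lo) * (n - 1) / (hi - lo))
            if target != i:
                a[i], a[target] = a[target], a[i]
        # Cleanup pass: a plain insertion sort finishes the job.
        for i in range(1, n):
            temp, j = a[i], i - 1
            while j >= 0 and a[j] > temp:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = temp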
Merge Sort
This sort is totally my idea, written after viewing other so-called merge sorts. Doubtless someone somewhere invented it first. This is an inherently recursive sort that keeps halving the array and then works back up, merging the halves. Like the recursive quick sort, it may produce stack problems for very large arrays.
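For comparison, here is a textbook-style recursive merge sort in Python. Unlike the routine described above, this sketch merges through temporary lists rather than in place, and it returns a new list (used as data = merge_sort(data)); the names are illustrative.

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])        # keep halving the array...
        right = merge_sort(a[mid:])
        merged, i, j = [], 0, 0           # ...then merge the sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])           # append whatever remains
        merged.extend(right[j:])
        return merged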
Quick Sort
The procedure chooses a midpoint value (not a middle element) in the hope that it will be close to the median value, and then swaps elements on either side of it. Variations of this method attempt to choose the median by means other than the midpoint. Still, it is one of the fastest and most popular methods to be found, with one significant drawback...
It is inherently recursive. That means that in order to accomplish its goal, it continues to call itself to sort parts of the array. At a point in the sort it may be hundreds of instances deep. When such a recursive call is made, parameter values and return addresses must be placed on the stack - a finite allocation of your computer's memory. This can prove to be a fatal problem with very large arrays.
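A short Python sketch of a recursive quick sort with a Hoare-style partition. It takes the value of the middle element of the range as its pivot, which is one common way to pick a midpoint value; the exact pivot choice in the routine described above may differ, and the names are illustrative.

    def quick_sort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[(lo + hi) // 2]         # value at the midpoint of the range
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:           # scan up for a value on the wrong side
                i += 1
            while a[j] > pivot:           # scan down for a value on the wrong side
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]   # swap elements on either side of the pivot
                i += 1
                j -= 1
        quick_sort(a, lo, j)              # each call here adds a stack frame
        quick_sort(a, i, hi)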
Selection Sort
This is a double selection sort, sometimes called a shaker sort. The routine simply goes through the whole array, finds the minimum and maximum values, and then swaps them into the appropriate end positions. Subsequent passes do not have to look at the end positions already placed. A classic selection sort finds only the minimum in each pass.
Although interesting, it is not widely used since bubble and insertion sorts are easier to write and about as fast.
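A minimal Python sketch of the double selection idea: each pass finds both extremes and swaps them to the ends. The names, and the handling of the corner case where the maximum starts at the low end, are my own.

    def double_selection_sort(a):
        lo, hi = 0, len(a) - 1
        while lo < hi:
            min_i = max_i = lo
            # One pass finds both the minimum and the maximum of the range.
            for i in range(lo, hi + 1):
                if a[i] < a[min_i]:
                    min_i = i
                if a[i] > a[max_i]:
                    max_i = i
            a[lo], a[min_i] = a[min_i], a[lo]    # minimum to the low end
            if max_i == lo:                      # the maximum just moved
                max_i = min_i
            a[hi], a[max_i] = a[max_i], a[hi]    # maximum to the high end
            lo += 1                              # both end positions are now fixed
            hi -= 1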
Shell Sort
This might be considered an insertion sort that compares items more than one element apart. This distance, or gap, is initially half the array in the classic shell sort and is halved again on each subsequent pass. The routine also classically continues carrying a swapped element down as far as possible, as in an insertion sort.
In my study of this classic approach, the effect of continuing to carry down the swapped element proved to be very unproductive and time-consuming, so I eliminated it until the gap gets to one (swapping contiguous elements). I also found it considerably more effective to use a divisor of 1.3 instead of the classic 2.
This is still a popular sorting method because, next to the heap sort, it is the fastest and safest (with respect to the stack) sort.
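For reference, here is the classic shell sort in Python, with the gap halved each pass and the full carry-down; the modified routine described above instead divides the gap by 1.3 and skips the carry-down until the gap reaches one. The name shell_sort is illustrative.

    def shell_sort(a):
        n = len(a)
        gap = n // 2                      # classic starting distance: half the array
        while gap > 0:
            # A gapped insertion sort: compare items 'gap' elements apart
            # and carry each value down as far as it will go.
            for i in range(gap, n):
                temp = a[i]
                j = i
                while j >= gap and a[j - gap] > temp:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = temp
            gap //= 2                     # halve the gap for the next pass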
General Notes:
1. Shuffling vs. Swapping
When you view many of these sorts you will note that it appears that most movement of elements is accomplished by a series of next element swaps. This is a bit misleading.
A considerably faster method is to first remember an element in a temp variable, then shuffle (push up) the contiguous elements by simple replacement for as long as the comparison of the next element against the temp value holds. When the comparison fails, the temp is placed in that ending position and the shuffle ends. Although it sounds complicated, it is rather simple once you get the idea, and it can save up to 66% of the assignment operations: a swap costs three assignments per position, while a shuffle costs only one per position plus two for the temp.
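A small Python illustration of the shuffle, using the inner loop of an insertion sort; shuffle_down is an illustrative name.

    def shuffle_down(a, i):
        temp = a[i]                       # remember the element once
        while i > 0 and a[i - 1] > temp:
            a[i] = a[i - 1]               # one assignment per position moved
            i -= 1
        a[i] = temp                       # one final placement
        # A swap-based version would spend three assignments per position moved.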
2. In Place Sorts
Note that all these sorts with the exception of the count sort are in place sorts. That is, they sort all the elements of the array within the bounds of the array. Except for the temp variable mentioned above, no additional elements or members are required. There are other sort methods that rely on dimensioning one or more additional arrays to hold the sorted or partially sorted results. One such sort would be a radix (or postman) sort. These are beyond the scope of this tutorial since additional windows would have to be opened to show the additional arrays.