## Heap Algorithms

Definition:

A Heap is a binary tree storing keys with the following properties:

1. key( child ) <= key( parent ) (the max-heap property)
2. left-filled levels
• the last level is filled from the left
• all other levels are full

Properties:

• The cost of Heapify is proportional to the number of levels visited: T( n ) = O( lg n )
• Cost of BuildHeap is T( n ) = O( n )
• Cost of HeapInsert is T( n ) = O( lg n )
• Cost of RemoveMax is T( n ) = O( lg n )
• Cost of HeapSort is T( n ) = O( n lg n )
• leftchild( i ) = 2i
• rightchild( i ) = 2i + 1
• parent( i ) = floor( i/2 )
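The index arithmetic above can be sanity-checked with a short sketch (Python is used only for illustration; the heap is assumed to be stored in a 1-indexed array, and the function names are mine):

```python
# Index arithmetic for a binary heap stored in a 1-indexed array.
def left_child(i):
    return 2 * i

def right_child(i):
    return 2 * i + 1

def parent(i):
    return i // 2  # floor(i / 2)

# Node 2's children sit at indices 4 and 5, and both point back to node 2.
print(left_child(2), right_child(2))  # 4 5
print(parent(4), parent(5))           # 2 2
```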

Heapify Algorithm

Assumes the left and right subtrees of node i are already heaps.

Pseudocode of Heapify:

```
Heapify ( A, i )
    l = LEFT( i )
    r = RIGHT( i )
    if ( l <= heapSize( A ) and A[ l ] > A[ i ] )
        then largest = l
        else largest = i
    if ( r <= heapSize( A ) and A[ r ] > A[ largest ] )
        then largest = r
    if ( largest != i )
        then exchange A[ i ] and A[ largest ]
             Heapify( A, largest )
End
```

Pseudocode of Building a Heap:

```
BuildHeap ( A )
    for i = floor( n/2 ) down to 1 do
        Heapify( A, i )
End
```

In the figure, A is written as a binary heap. There are 10 nodes in the tree. The Heapify algorithm starts to run from the 5th node.

Step A: i=5, There is no change.

Step B: i=4, 2 exchanges with 14.

Step C: i=3, 3 exchanges with 10.

Step D: i=2, 1 exchanges with 16, then 1 exchanges with 7.

Step E: i=1, 4 exchanges with 16, then 4 exchanges with 14, then 4 exchanges with 8.

Step F: Final Heap.
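Steps A–F above can be reproduced with a runnable sketch (Python for illustration; 0-indexed arrays shift the index formulas by one, and the starting array is the one implied by the walkthrough):

```python
def max_heapify(a, i, heap_size):
    """Sink a[i] until the subtree rooted at i is a max-heap (0-indexed)."""
    l, r = 2 * i + 1, 2 * i + 2  # 0-indexed children
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def build_max_heap(a):
    # Heapify from the last internal node up to the root, as in BuildHeap.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

A = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]  # array implied by the walkthrough
build_max_heap(A)
print(A)  # [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
```

Tracing `build_max_heap` on this array performs exactly the exchanges listed in steps A–E.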

Pseudocode of RemoveMax:

```
HeapRemoveMax( A )
    max = A[ 1 ]
    A[ 1 ] = A[ heapSize( A ) ]
    heapSize( A )--
    Heapify( A, 1 )
    return max
End
```
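A Python sketch of RemoveMax (0-indexed; `max_heapify` is the same sinking routine as Heapify above, and the names are mine):

```python
def max_heapify(a, i, heap_size):
    """Sink a[i] until the subtree rooted at i is a max-heap (0-indexed)."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heap_remove_max(a):
    """Remove and return the root of a max-heap stored as a Python list."""
    maximum = a[0]
    a[0] = a[-1]  # move the last element to the root
    a.pop()       # shrink the heap by one
    if a:
        max_heapify(a, 0, len(a))  # restore the heap property from the root
    return maximum

h = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
print(heap_remove_max(h))  # 16
print(h)                   # [14, 8, 10, 4, 7, 9, 3, 2, 1]
```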

Pseudocode of Heap Insert:

```
HeapInsert ( A, key )
    heapSize( A )++
    i = heapSize( A )
    while ( i > 1 and A[ parent( i ) ] < key )
        A[ i ] = A[ parent( i ) ]
        i = parent( i )
    A[ i ] = key
End
```
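A Python sketch of the same idea (0-indexed, so parent( i ) becomes (i - 1) // 2; smaller parents are shifted down rather than swapped, exactly as in the pseudocode):

```python
def heap_insert(a, key):
    """Append key and bubble it up, shifting smaller parents down."""
    a.append(key)
    i = len(a) - 1
    while i > 0 and a[(i - 1) // 2] < key:
        a[i] = a[(i - 1) // 2]  # shift the smaller parent down one level
        i = (i - 1) // 2
    a[i] = key

h = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
heap_insert(h, 15)
print(h)  # [16, 15, 10, 8, 14, 9, 3, 2, 4, 1, 7]
```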

Heap Sort:

Pseudocode of Heap Sort:

```
HeapSort ( A )
    BuildHeap( A )
    for i = n down to 2 do
        Exchange A[ 1 ] and A[ i ]
        heapSize( A )--
        Heapify( A, 1 )
    endfor
End
```

How does Heap Sort work?

Step A: 1 exchanges with 16. HeapSize decreases (write 16 at the end of the array now), then Heapify.

Step B: 1 exchanges with 14. HeapSize decreases (write 14 after 16), then Heapify.

Step C: 2 exchanges with 10. HeapSize decreases (write 10 after 14), then Heapify.

Step D: 2 exchanges with 9. HeapSize decreases (write 9 after 10), then Heapify.

Step E: 1 exchanges with 8. HeapSize decreases (write 8 after 9), then Heapify.

Step F: 2 exchanges with 7. HeapSize decreases (write 7 after 8), then Heapify.

Step G: 1 exchanges with 4. HeapSize decreases (write 4 after 7), then Heapify.

Step H: 1 exchanges with 3. HeapSize decreases (write 3 after 4), then Heapify.

Step I: 1 exchanges with 2. HeapSize decreases (write 2 after 3), then Heapify.

Step J: Only 1 remains, write 1 after 2. And list is sorted.
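The whole procedure, BuildHeap followed by repeated exchange / shrink / Heapify, fits in a short runnable sketch (Python, 0-indexed, function names are mine):

```python
def max_heapify(a, i, heap_size):
    """Sink a[i] until the subtree rooted at i is a max-heap (0-indexed)."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heapsort(a):
    for i in range(len(a) // 2 - 1, -1, -1):  # BuildHeap
        max_heapify(a, i, len(a))
    for end in range(len(a) - 1, 0, -1):      # move the max to the back
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)                # heap size shrinks to `end`
    return a

print(heapsort([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]))
# [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]
```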

## Depth First Search Algorithm (DFS)

Definition:

Depth First Search is an algorithm that traverses the nodes of a graph or tree. It always expands the deepest node in the tree. It uses a stack data structure (LIFO – last in, first out).

In the worst case, the time complexity of DFS is O( b^m ) and the space complexity is O( bm ). (b: branching factor, m: maximum depth of any node)

In backtracking search, a variant of DFS, memory usage is less than in BFS: its space complexity is O( m ).

Properties of DFS:

• Complete if the graph or tree is finite (incomplete on an infinite tree or graph),
• Uses a stack (LIFO),
• Time complexity is O( b^m ),
• Space complexity is O( bm ).

How Does It Work?

1. The root node is expanded.

2. The successor nodes are expanded.

3. The first node in the stack (the last node that entered the stack) is popped and expanded.

4. This goes on until the last node is expanded (visited).

In the figure below, you can see the order in which nodes are expanded (figure: Order of Nodes Expanded in DFS).


Pseudocode:

A recursive implementation of DFS: 

```
procedure DFS(G, v):
    label v as discovered
    for all edges from v to w in G.adjacentEdges(v) do
        if vertex w is not labeled as discovered then
            recursively call DFS(G, w)
```

A non-recursive implementation of DFS: 

```
procedure DFS-iterative(G, v):
    let S be a stack
    S.push(v)
    while S is not empty
        v ← S.pop()
        if v is not labeled as discovered:
            label v as discovered
            for all edges from v to w in G.adjacentEdges(v) do
                S.push(w)
```
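Both pseudocode versions translate directly to Python. A minimal sketch (the adjacency-list graph is a made-up example; `reversed` in the iterative version only makes the two variants visit neighbours in the same order):

```python
def dfs_recursive(graph, v, discovered=None):
    """Recursive DFS: visit v, then recurse into undiscovered neighbours."""
    if discovered is None:
        discovered = []
    discovered.append(v)
    for w in graph[v]:
        if w not in discovered:
            dfs_recursive(graph, w, discovered)
    return discovered

def dfs_iterative(graph, v):
    """Iterative DFS with an explicit stack (LIFO)."""
    discovered, stack = [], [v]
    while stack:
        v = stack.pop()
        if v not in discovered:
            discovered.append(v)
            for w in reversed(graph[v]):  # push so neighbours pop in listed order
                stack.append(w)
    return discovered

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(dfs_recursive(graph, 'A'))  # ['A', 'B', 'D', 'C']
print(dfs_iterative(graph, 'A'))  # ['A', 'B', 'D', 'C']
```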

References:

 Goodrich and Tamassia; Cormen, Leiserson, Rivest, and Stein

 Kleinberg and Tardos

S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Third Edition

Figures from:  http://en.wikipedia.org

## Breadth First Search Algorithm (BFS)

Definition:

Breadth-First Search is a graph search algorithm. It begins at the root node and expands all neighbouring nodes level by level.

In artificial intelligence, it is categorized as Uninformed (Blind) Search.

In the worst case, the time complexity of this algorithm is O( b^d ). (b: maximum branching factor of the search tree; d: depth of the shallowest solution)

In the worst case, the space complexity of this algorithm is also O( b^d ).

Properties of BFS:

• Complete,
• Finds the shallowest solution,
• Generates b + b^2 + b^3 + … + b^d = O( b^d ) nodes,
• Space complexity (frontier size) is O( b^d ).

How Does It Work?

1. The root node is expanded.
2. Then,  all successors of the root node are expanded,
3. Then all their successors.

In detail: (from Wikipedia)

1. Enqueue the root node,
2. Dequeue a node
• If the element sought is found in this node, quit the search and return a result.
• Otherwise enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node on the graph has been examined – quit the search and return “not found”.
4. If the queue is not empty, repeat from Step 2.

In the figure below, you can see BFS in action (figure: Animated Figure of BFS).

In the figure below, you can see the order in which nodes are expanded (visited) (figure: Order of Nodes Expanded in BFS).


Pseudocode:

```
procedure BFS(G, v) is
    create a queue Q
    create a set V of discovered vertices
    add v to V
    enqueue v onto Q
    while Q is not empty loop
        t ← Q.dequeue()
        if t is what we are looking for then
            return t
        end if
        for all edges (t, u) in G.adjacentEdges(t) loop
            if u is not in V then
                add u to V
                enqueue u onto Q
            end if
        end loop
    end loop
    return none
end BFS
```
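The detailed steps above translate directly to Python (the goal test and the example graph are made up for illustration; `collections.deque` provides the FIFO queue):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return goal if reachable from start, exploring level by level; else None."""
    discovered = {start}
    q = deque([start])
    while q:
        t = q.popleft()             # dequeue the oldest node (FIFO)
        if t == goal:
            return t
        for u in graph[t]:
            if u not in discovered:  # enqueue each successor only once
                discovered.add(u)
                q.append(u)
    return None                      # queue empty: goal not found

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs(graph, 1, 4))  # 4
print(bfs(graph, 4, 1))  # None
```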

References:

S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Third Edition

Figures from:  http://en.wikipedia.org

## Quick Sort

Definition:

Quick Sort is an algorithm that sorts unsorted lists and arrays.

In the worst case, the time complexity of Quick Sort is O( n^2 ). In the average and best cases, its time complexity is O( n logn ).

It is a divide-and-conquer algorithm, invented by Tony Hoare.

It is also an in-place algorithm, so it doesn't require extra memory. This property makes the algorithm popular in practice.

Properties of Quick Sort:

• Not stable,
• O( logn ) extra space (recursion stack),
• Often faster in practice than other O( n logn ) algorithms,
• In-place algorithm.

How does it work?

1. Pick an element from the array. This element is called the pivot.

2. Reorder the array around the pivot: elements smaller than the pivot come before it, and elements bigger than the pivot come after it. This operation is called partitioning. The pivot ends up in its final position.

3. Recursively repeat the same steps for the subarrays on each side of the pivot.

In the figure below, you can see the algorithm in action (figure: Animated Figure of Quick Sort).

In the example below, you can see the pivot (4), the partition algorithm, and the partitioning of the subarrays (figure: Quick Sort).


Pseudocode:

```
QuickSort:
quicksort(A, i, k):
    if i < k:
        p := partition(A, i, k)
        quicksort(A, i, p - 1)
        quicksort(A, p + 1, k)

Partition:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right - left + 1
partition(array, left, right)
    pivotIndex := choose-pivot(array, left, right)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]
    storeIndex := left
    for i from left to right - 1
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // Move pivot to its final place
    return storeIndex
```
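The same scheme in runnable Python (a sketch using the Lomuto partition; `choose-pivot` is fixed to the rightmost element here for simplicity):

```python
def partition(a, left, right):
    """Lomuto partition with the rightmost element as pivot."""
    pivot = a[right]
    store = left
    for i in range(left, right):
        if a[i] <= pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]  # pivot to its final place
    return store

def quicksort(a, i, k):
    if i < k:
        p = partition(a, i, k)
        quicksort(a, i, p - 1)   # sort elements before the pivot
        quicksort(a, p + 1, k)   # sort elements after the pivot

data = [3, 7, 8, 5, 2, 1, 9, 5, 4]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 5, 7, 8, 9]
```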

References:

Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms (second ed.), MIT Press and McGraw-Hill,

Figures from:  http://en.wikipedia.org

## Bubble Sort

Definition:

Bubble sort is another sorting algorithm that sorts unsorted lists and arrays.

In the worst and average cases its time complexity is O( n^2 ). In the best case, its time complexity is O( n ).

Properties of Bubble Sort:

• Stable,
• O( n^2 ) comparisons and swaps,
• O(1) memory space.

Its properties are largely the same as those of insertion sort.

How does it work?

In bubble sort,

1. Start at the beginning of the unsorted list and compare every adjacent pair.

2. If the elements of a pair are not in the right order, swap them.

3. The largest (or smallest) element moves to the end of the list.

4. This iteration continues until the list is sorted.

Elements in the list move like bubbles in water: they rise toward the surface.

In the figure below, you can see an animated bubble sort example (figure: Animated Bubble Sort Example).

Another example of bubble sort (figure: Bubble Sort Example).


Pseudocode:

```
for i = 1:n,
    swapped = false
    for j = n:i+1,
        if a[j] < a[j-1],
            swap a[j, j-1]
            swapped = true
    → invariant: a[1..i] in final position
    break if not swapped
end
```
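A runnable Python version of the pseudocode above (same right-to-left sweep and the same early exit when a pass makes no swaps):

```python
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1, i, -1):  # sweep right-to-left, as in the pseudocode
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
                swapped = True
        # invariant: a[0..i] now holds the i+1 smallest elements in final position
        if not swapped:                # best case O(n): list already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```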

References:

Algorithms in Java, Parts 1-4, 3rd edition by Robert Sedgewick. Addison Wesley, 2003.

Programming Pearls by Jon Bentley. Addison Wesley, 1986.

http://www.sorting-algorithms.com/

Figure from:  http://en.wikipedia.org
