
Breadth-First Search Algorithm (BFS)

Definition:

Breadth-First Search is a graph search algorithm. It begins at the root node and expands all neighbouring nodes one level at a time, so every node at a given depth is visited before any node one level deeper.

In artificial intelligence, it is categorized as Uninformed (Blind) Search.

In the worst case, the time complexity of this algorithm is O(b^d). (b: maximum branching factor of the search tree; d: depth of the shallowest solution)

In the worst case, the space complexity of this algorithm is also O(b^d).
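
To get a feeling for this growth, take b = 10 and d = 5 (illustrative numbers, not taken from the text): BFS may generate up to 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110 nodes, and in the worst case nearly all of them have to be kept in memory at the same time. In practice the memory requirement, rather than the running time, is usually the first limit you hit.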

Properties of BFS:

  • Complete,
  • Finds the shallowest goal node (optimal when all step costs are equal),
  • Generates b + b^2 + b^3 + … + b^d = O(b^d) nodes,
  • Space complexity (frontier size) is O(b^d).

How Does It Work?

  1. The root node is expanded.
  2. Then all successors of the root node are expanded.
  3. Then all their successors, and so on, level by level.

In detail: (from Wikipedia)

  1. Enqueue the root node,
  2. Dequeue a node
    • If the element sought is found in this node, quit the search and return a result.
    • Otherwise enqueue any successors (the direct child nodes) that have not yet been discovered.
  3. If the queue is empty, every node on the graph has been examined – quit the search and return “not found”.
  4. If the queue is not empty, repeat from Step 2.

In the figure below, you can see the animated figure of BFS.


Animated Figure of BFS

In the figure below, you can see the order in which nodes are expanded (visited).


Order of Nodes Expanded in BFS

Animation:

For better understanding, go to this animation page.

Pseudocode:

procedure BFS(G, v) is
    create a queue Q
    create a set V of discovered vertices
    enqueue v onto Q
    add v to V
    while Q is not empty loop
        t ← Q.dequeue()
        if t is what we are looking for then
            return t
        end if
        for all edges e in G.adjacentEdges(t) loop
            u ← G.adjacentVertex(t, e)
            if u is not in V then
                add u to V
                enqueue u onto Q
            end if
        end loop
    end loop
    return none
end BFS
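
The same procedure can also be written as runnable Python. This is only a sketch: the function name bfs, the adjacency-list dictionary representation and the example graph below are illustrative choices, not part of the pseudocode above.

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list dict; returns goal if reachable, else None."""
    queue = deque([start])        # FIFO queue Q
    discovered = {start}          # set V of discovered vertices

    while queue:
        node = queue.popleft()            # dequeue the shallowest node
        if node == goal:                  # is this what we are looking for?
            return node
        for neighbour in graph.get(node, []):
            if neighbour not in discovered:
                discovered.add(neighbour)
                queue.append(neighbour)
    return None                           # queue empty: goal not found

# Example graph (illustrative only)
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['F']}
print(bfs(graph, 'A', 'F'))               # prints F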

 

References:

S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Third Edition.

Figures from:  http://en.wikipedia.org

 


Quick Sort

Definition:

Quick Sort is an algorithm that sorts unsorted lists and arrays.

In the worst case, the time complexity of Quick Sort is O(n^2); this happens when the chosen pivot is repeatedly the smallest or largest element. In the average and best case, its time complexity is O(n log n).

It is a divide-and-conquer algorithm and was invented by Tony Hoare.

It is also an in-place algorithm, so apart from the O(log n) recursion stack it does not require extra memory. This property makes it popular in practice.

Properties of Quick Sort:

  • Not stable,
  • Not adaptive,
  • O(log n) extra space,
  • Often faster in practice than other O(n log n) algorithms,
  • In-place algorithm.

How does it work?

1. Pick an element from the array. This element is called the pivot.

2. Reorder the array around the pivot: elements smaller than the pivot come before it, and elements larger than the pivot come after it. This operation is called partitioning. After partitioning, the pivot is in its final position.

3. Recursively apply the same steps to the subarrays on either side of the pivot.
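
As a small illustrative example (the numbers here are made up, not taken from the figures below): partitioning [7, 2, 9, 1, 6] around the pivot 6 puts 2 and 1 before it and 9 and 7 after it, e.g. [2, 1, 6, 9, 7] (the order inside each half depends on the partition scheme); the pivot 6 is now in its final position, and quicksort is applied recursively to the left and right subarrays.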

In the figure below, you can see the animated figure of this algorithm.


Animated Figure of Quick Sort

In the example below, you can see the pivot (4), the partition algorithm and the partitioning of the subarrays.


Quick Sort

Animation:

For better understanding, go to this animation page.

Pseudocode:

QuickSort:
quicksort(A, i, k):
  if i < k:
    p := partition(A, i, k)
    quicksort(A, i, p - 1)
    quicksort(A, p + 1, k)


Partition:
  // left is the index of the leftmost element of the subarray
  // right is the index of the rightmost element of the subarray (inclusive)
  // number of elements in subarray = right-left+1
  partition(array, left, right)
     pivotIndex := choose-pivot(array, left, right)
     pivotValue := array[pivotIndex]
     swap array[pivotIndex] and array[right]
     storeIndex := left
     for i from left to right - 1
         if array[i] ≤ pivotValue
             swap array[i] and array[storeIndex]
             storeIndex := storeIndex + 1
     swap array[storeIndex] and array[right]  // Move pivot to its final place
     return storeIndex
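
The two pieces of pseudocode above translate almost line by line into Python. The sketch below is one possible version, assuming choose-pivot simply returns the rightmost index (the Lomuto scheme); the function names and the example input are illustrative.

def partition(a, left, right):
    """Partition a[left..right] around a[right]; return the pivot's final index."""
    pivot_value = a[right]                    # rightmost element as pivot
    store = left
    for i in range(left, right):
        if a[i] <= pivot_value:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]   # move pivot to its final place
    return store

def quicksort(a, i=0, k=None):
    """Sort list a in place between indices i and k (inclusive)."""
    if k is None:
        k = len(a) - 1
    if i < k:
        p = partition(a, i, k)
        quicksort(a, i, p - 1)                # elements before the pivot
        quicksort(a, p + 1, k)                # elements after the pivot

data = [4, 7, 1, 9, 3, 4, 2]                  # illustrative input
quicksort(data)
print(data)                                   # [1, 2, 3, 4, 4, 7, 9]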

References:

Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms (second ed.), MIT Press and McGraw-Hill.

Figures from:  http://en.wikipedia.org


Bubble Sort

Definition:

Bubble sort is another sorting algorithm that sorts unsorted lists and arrays.

In the worst and average case, its time complexity is O(n^2) (n squared). However, in the best case the time complexity of this algorithm is O(n).
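
For example, if the input is already sorted, the first pass makes no swaps at all; with the "swapped" flag used in the pseudocode below, the algorithm can then stop after a single pass of n-1 comparisons, which is where the O(n) best case comes from.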

Properties of Bubble Sort:

  • Stable,
  • Adaptive,
  • O(n^2) comparisons and swaps,
  • O(1) memory space.

Its properties are mostly the same as those of insertion sort.

How does it work?

In bubble sort,

1. It starts at the beginning of the unsorted list and compares every adjacent pair.

2. If the elements of a pair are not in the right order, they are swapped.

3. The largest (or smallest) element moves to the end of the list.

4. This iteration is repeated until the list is sorted.

Elements in the list move like bubbles in water: they rise towards the surface.

In the figure below, you can see the animated bubble sort example.

Animated Bubble Sort Example

Another example for bubble sort:


Bubble Sort Example

Animation:

For better understanding, go to this animation page.

Pseudocode:

for i = 1:n,
    swapped = false
    for j = n:i+1, 
        if a[j] < a[j-1], 
            swap a[j,j-1]
            swapped = true
    → invariant: a[1..i] in final position
    break if not swapped
end
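
A runnable Python version of this idea is sketched below (the function name and the test input are illustrative). Note that it uses 0-based indices and bubbles the largest element towards the end of the list, whereas the pseudocode above sweeps from the other side and bubbles the smallest element towards the front; the behaviour is otherwise the same, including the early exit when no swaps occur.

def bubble_sort(a):
    """Sort list a in place; stop early if a pass makes no swaps."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # after pass i, the last i elements are already in their final position
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:            # adjacent pair out of order
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                    # already sorted: O(n) best case
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))        # [1, 2, 4, 5, 8]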

 

References:

Algorithms in Java, Parts 1-4, 3rd edition by Robert Sedgewick. Addison Wesley, 2003.

Programming Pearls by Jon Bentley. Addison Wesley, 1986.

http://www.sorting-algorithms.com/

Figure from:  http://en.wikipedia.org

 

For more algorithms, go to Algorithms Section.


Selection Sort

Definition:

Selection sort is a sorting algorithm that sorts unsorted arrays.

In the worst, average and best case, its time complexity is O(n^2) (n squared). It is inefficient on large data sets. However, it has a simple implementation and does not require extra memory.

Its properties:

  • Not stable,
  • O(n^2) time complexity,
  • Not adaptive,
  • O(1) memory space,
  • In-place comparison sort,
  • O(n) swaps.

How does it work?

In selection sort, the array is divided into two parts: a sorted list and an unsorted list.

  1. Initially, the sorted list is empty and the unsorted list contains the entire input.
  2. Find the smallest element in the unsorted list and move it to the end of the sorted list; repeat until the unsorted list is empty.
  3. At the end, the array is sorted.

Example:

70  52  45   5   8   // initially
 5  52  45  70   8   // sorted list = 5
 5   8  45  70  52   // sorted list = 5 8
 5   8  45  70  52   // sorted list = 5 8 45
 5   8  45  52  70   // sorted list = 5 8 45 52
 5   8  45  52  70   // sorted list = 5 8 45 52 70

 

The figure below shows selection sort:


Animated Selection Sort

Animation:

For better understanding, go to this animation page.

Pseudocode:

for i = 1:n,
    k = i
    for j = i+1:n, if a[j] < a[k], k = j
    → invariant: a[k] smallest of a[i..n]
    swap a[i,k]
    → invariant: a[1..i] in final position
end
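
The same pseudocode in Python, as a sketch with 0-based indices (the function name is illustrative; the test input reuses the example array from above):

def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum of the unsorted part."""
    n = len(a)
    for i in range(n - 1):
        k = i
        for j in range(i + 1, n):          # find the smallest remaining element
            if a[j] < a[k]:
                k = j
        a[i], a[k] = a[k], a[i]            # a[0..i] is now in its final position
    return a

print(selection_sort([70, 52, 45, 5, 8]))  # [5, 8, 45, 52, 70]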

References:

Algorithms in Java, Parts 1-4, 3rd edition by Robert Sedgewick. Addison Wesley, 2003.

Programming Pearls by Jon Bentley. Addison Wesley, 1986.

http://www.sorting-algorithms.com/

Figure from:  http://en.wikipedia.org

 

For more algorithms, go to Algorithms Section.


Merge Sort

Definition:

Merge Sort is a sorting algorithm. It is a divide-and-conquer algorithm and was invented by John von Neumann.

In the worst, average and best case, its time complexity is O(n log n).
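
Intuitively, the array can only be halved about log n times, and every level of merging touches all n elements, so the total work is about n log n comparisons regardless of the input order; that is why the best, average and worst cases are the same.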

Its properties in short:

  • Stable,
  • O(n log n) time complexity,
  • O(n) extra space for arrays,
  • O(log n) extra space for linked lists,
  • Does not require random access to data,
  • Not adaptive.

How does it work?

In merge sort,

  1. First, the unsorted array is divided into n sublists, until each sublist contains only one element,
  2. Then the sublists are repeatedly merged in sorted order, producing new sorted sublists until only one sorted array remains.

In the figure below, you can see an animated merge sort example.


Animated Merge Sort Example

Another example of merge sort:


Example for Merge Sort

Animation:

For better understanding, go to this animation page.

Pseudocode:

# split in half
m = n / 2

# recursive sorts
sort a[1..m]
sort a[m+1..n]

# merge sorted sub-arrays using temp array
b = copy of a[1..m]
i = 1, j = m+1, k = 1
while i <= m and j <= n,
    a[k++] = (a[j] < b[i]) ? a[j++] : b[i++]
    → invariant: a[1..k] in final position
while i <= m,
    a[k++] = b[i++]
    → invariant: a[1..k] in final position
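
For comparison, here is a self-contained Python sketch of the same idea. Unlike the pseudocode above, which merges back into the original array using a temporary copy of the left half, this version simply returns a new sorted list, which keeps the example short; the function names and the test input are illustrative.

def merge_sort(a):
    """Return a new sorted list containing the elements of a."""
    if len(a) <= 1:                        # 0 or 1 elements: already sorted
        return a
    m = len(a) // 2                        # split in half
    left = merge_sort(a[:m])               # recursive sorts
    right = merge_sort(a[m:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:             # take the smaller head element
            result.append(right[j]); j += 1
        else:
            result.append(left[i]); i += 1
    result.extend(left[i:])                # one of these two is already empty
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]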

 

References:

Algorithms in Java, Parts 1-4, 3rd edition by Robert Sedgewick. Addison Wesley, 2003.

Programming Pearls by Jon Bentley. Addison Wesley, 1986.

http://www.sorting-algorithms.com/merge-sort

Figures from:  http://en.wikipedia.org

 

For more algorithms, go to Algorithms Section.