Appendix C. COMPUTATIONAL METHODS

C.2. Graph Search

One primitive operation that will be used over and over in motion planning is known as graph search. In this context, the vertices of the graph are usually points in C-space and edges indicate some notion of local connectivity between points (such as "a straight line exists from point A to point B"). In other contexts, the vertices denote a region of C-space, and edges indicate connectivity between regions.

Observe that a graph only encodes a set of alternatives (edges) at each vertex, but does not contain any information about which edge to take. The purpose of graph search is to determine, exactly and efficiently, the path --- a sequence of edges --- in the graph that connects a start vertex to a goal vertex. Search is a process of systematically exploring a sequence of alternatives, and we will use it to find paths in discretized versions of continuous free spaces.

Big-O notation

Throughout this book we use Big-O notation to describe the performance of various quantities, including the size of data structures and running times of calculations. This notation allows us to specify the most dominant factors affecting performance while ignoring constant terms. Specifically, the statement: $$g(n) \text{ is } O(f(n))$$ means that above a certain value of $n$, $g(n)$ will be less than some constant times $f(n)$. More specifically, it is defined as the following statement: there exists a value $N$ and a constant $c$ such that for all $n > N$, $g(n) < c f(n)$. In other words, $g$ is asymptotically bounded by $f$

Even if $g(n)$ is a complex expression, it is possible to determine a simple expression for $f(n)$. For example, if $g(n) = 30 n^4 - 12 n^2 + 1000 n$, the dominant term as $n$ grows is the $n^4$ term, and we can say $g(n)$ is $O(n^4)$. We will seek the simplest expression for $f$ that fulfills the big-O requirement while also remaining a tight bound.

Some common Big-O expressions include:

  • $O(1)$: constant.
  • $O(n)$: upper bounded by a linear function.
  • $O(n^2)$: upper bounded by a quadratic function.

Big-O notation also generalizes to functions of multiple variables. For example, the statement "$4 mn - 30 m^{1.5} + 200n$ is $O(mn + m^{1.5})$" holds because for any fixed value of $m > 1$, the big-O expression holds for $n$, and likewise for any fixed $n > 1$ the expression holds for $m$.

Given start and goal vertices, respectively $s,g \in V$, the goal of graph search is to find a sequence of vertices: $$v_0 = s, v_1, \ldots, v_k = g \text{ such that }(v_{i-1},v_{i}) \in E\text{ for all }i=1,...k.$$ The number of steps $k$ in the path is not fixed. If there does not exist a path (that is, $s$ and $g$ are disconnected), then search should return "no path."

We may also ascribe a notion of cost $c(u,v) > 0$ to each edge, in which case our goal is to find the optimal path, that is, the sequence of vertices such that the total path cost $$\sum_{i=1}^{k} c(v_{i-1},v_i)$$ is minimized among all paths connecting the start and goal. If no cost is given, then all edges are assumed to have uniform cost and we wish to optimize the total number of edges in a path.

Dijkstra's algorithm

The most famous graph search method is Dijkstra's algorithm. It calculates an optimal path when one exists, and works in $O(|E| + |V| \log |V|$) time and $O(|V|$) space, when implemented with an appropriate priority queue data structure. The general idea is to iterate through all unexplored vertices, ordered in increasing cost from the start (cost-to-come) $d[v]$ . All vertices have estimated cost-to-come set to $d[v] = \infty$ at the beginning, except for the start vertex which has cost-to-come 0. At each iteration, the vertex $v$ with the lowest cost-to-come is marked as explored, and the costs of all of $v$'s unexplored neighbors $w$ in the graph are updated if the path to $w$ through $(v,w)$ has a lower cost than the previous value $d[w]$. Pseudocode is given in Algorithm Dijkstra's.


Algorithm Dijkstras$(G=(V,E),s,g)$

  1. $d[v] \gets \infty$ for all $v\in V$
  • $d[s] \gets 0$
  • $p[v] \gets nil$ for all $v\in V$
  • $Q \gets \{ s \}$
  • while $Q$ is not empty do
  •    $v \gets \text{vertex in } Q \text{ that minimizes }d[v]$
  •    $Q \gets Q \setminus \{ v \}$
  •    if $v = g$
  •        return the path leading to $g$ via the predecessor list $p$
  •    for all $w$ such that $(v,w) \in E$ do
  •        $d_{cand} \gets d[v] + d(v,w)$
  •        if $d_{cand} < d[w]$
  •            $d[w] \gets d_{cand}$
  •            $p[w] \gets v$
  •            $Q \gets Q \cup \{w\}$
  • return "no path"

Here the predecessor list $p$ stores the previous vertex on the optimal path to a vertex, and at the end of the algorithm, it is traversed to provide the optimal path from $s$ to $g$. This traversal is given in Algorithm Predecessor-Traversal:


Algorithm PredecessorTraversal$(p,s,g)$

  1. $L \gets$ empty list
  • $v \gets g$
  • while $v \neq nil$ do
  •    Prepend $v$ to $L$
  •    $v \gets p[v]$
  • return $L$

It can be proven that Dijkstra's algorithm satisfies the invariant that any time a vertex is removed from $Q$, then its cost-to-come is the optimal amongst all paths to it. Furthermore, since costs are never negative, then the cost of the vertex in $Q$ with minimum cost always increases. In this way, Dijkstra's algorithm can be likened to a "brush fire" that fills in correct costs in a progressively expanding ring surrounding the start vertex. It is also guaranteed to terminate in finite time, since it will never expand the same vertex more than once.

In [1]:
#Demonstration of Dijkstra's algorithm. Note that this code can be found in rsbook_code.utilities.search
from __future__ import print_function,division

#dijkstra's algorithm
import heapq  #for a fast priority queue implementation

class AdjListGraph:
    """A very simple adjacency list graph structure.  For higher performance use
    in Python, you will probably want to learn a library like networkx, which will
    have graph search algorithms built in."""
    def __init__(self,vertices,edges):
        self.vertices = vertices
        self.edges = dict((v,[]) for v in vertices)
        for (v,w) in edges:
            self.edges[v].append(w)
    def neighbors(self,v):
        return self.edges[v]

def predecessor_traverse(p,s,g):
    """Used by dijkstra's algorithm to traverse a predecessor dictionary"""
    L = []
    v = g
    while v is not None:
        L.append(v)
        v = p.get(v,None)
    #rather than prepending, we appended and now we'll reverse.  This is a more efficient than prepending
    return L[::-1]
    
def dijkstras(G,s,g,cost=(lambda v,w:1),verbose=1):
    """Completes a shortest-path search on graph G.
    
    Args:
        G (AdjListGraph): the graph, given by an AdjListGraph
        s: the start node
        g: the goal node
        cost (optional): a callback function c(v,w) that returns the edge cost
        verbose (optional): if nonzero, will print information about search
            progress.
    
    Returns:
        list or None: either the path of nodes from s to g with minimum cost,
        or None if no path exists.
    """
    d = dict((v,float('inf')) for v in G.vertices)
    p = dict((v,None) for v in G.vertices)
    d[s] = 0
    Q = [(0,s)]   #each element is a tuple (c,v) with c=cost from start, v=vertex
    nnodes = 0
    while len(Q) > 0:
        c,v = heapq.heappop(Q)  #get the element in the queue with the least value of c
        nnodes += 1
        if v == g:
            #found a path
            if verbose: print("Dijkstra's succeeded in",nnodes,"iterations")
            return predecessor_traverse(p,s,g)
        for w in G.neighbors(v):
            dcand = d[v] + cost(v,w)   #this is the cost of going through v to w
            if dcand < d[w]:
                #going through v is optimal
                #if the predecessor of w is not None, then we'll have to adjust the heap
                if p[w] is not None:
                    Q = [(c,x) for (c,x) in Q if x is not w]
                    heapq.heapify(Q)
                d[w] = dcand
                p[w] = v
                #put w on the queue
                heapq.heappush(Q,(dcand,w))
    #no path found
    if verbose: print("Dijkstra's failed in",nnodes,"iterations")
    return None

G = AdjListGraph(['A','B','C','D','E'],[('A','B'),('A','C'),('C','B'),('C','D'),('B','E'),('E','B'),('E','D')])
print("Simple graph adj list:",G.edges)
print("Running dijkstra's...")
path = dijkstras(G,'A','D')
print("Path in simple graph:",path)
Simple graph adj list: {'C': ['B', 'D'], 'D': [], 'A': ['B', 'C'], 'E': ['B', 'D'], 'B': ['E']}
Running dijkstra's...
Dijkstra's succeeded in 4 iterations
Path in simple graph: ['A', 'C', 'D']
In [2]:
def make_grid_graph(M,N):
    """Makes a grid graph"""
    G = AdjListGraph([],[])
    for i in range(M):
        for j in range(N):
            n = (i,j)
            G.vertices.append(n)
            G.edges[n] = []
            if i > 0:
                G.edges[n].append((i-1,j))
            if j > 0:
                G.edges[n].append((i,j-1))
            if i+1 < M:
                G.edges[n].append((i+1,j))
            if j+1 < N:
                G.edges[n].append((i,j+1))
    return G

G = make_grid_graph(10,10)
path = dijkstras(G,(0,0),(5,5))
print("Path on grid from (0,0) to (5,5):",path)
Dijkstra's succeeded in 60 iterations
Path on grid from (0,0) to (5,5): [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 5), (2, 5), (3, 5), (4, 5), (5, 5)]

In many cases, the uniformly expanding strategy of Dijkstra's algorithm is a waste because it is known that the goal lies in a particular direction. To make search faster, it is possible to bias the search ordering using a heuristic that encodes an estimated distance to the goal. In particular, suppose we develop a heuristic function $h(v)$ that evaluates an approximation of the cost from $v$ to $g$ (e.g., the length of the line segment from a configuration to the goal). Then, by replacing line 6 in Dijkstra's algorithm with the line:

  1. $v \gets$ vertex in $Q$ that minimizes $d[v] + h(v)$

we obtain a method called $A^ search$*. This method is proven to calculate optimal paths under the conditions that $h(v)$ is *admissible* and *consistent* . Admissibility means that $h(v)$ is a lower bound on the true cost from $v$ to $g$ in the graph, and consistency means that $h(v)$ becomes more accurate as $v$ approaches $g$. (Specifically, $h(u) \leq h(v) + c(u,w)$ for all edges $(u,v) \in E$.)

In [3]:
#Below is code for the A* algorithm. Note that this code can be found in rsbook_code.utilities.search

from IPython.display import display,Markdown

def astar(G,s,g,cost=(lambda v,w:1),heuristic=(lambda v:0),verbose=1):
    """Completes an A* search on graph G.
    
    Args:
        G (AdjListGraph): the graph, given by an AdjListGraph
        s: the start node
        g: the goal node
        cost (optional): a callback function c(v,w) that returns the edge cost
        heuristic (optional): a callback function h(v) that returns the 
            heuristic cost-to-go between v and g
        verbose (optional): if nonzero, will print information about search
            progress.
    
    Returns:
        list or None: either the path of nodes from s to g with minimum cost,
        or None if no path exists.
    """
    d = dict((v,float('inf')) for v in G.vertices)
    p = dict((v,None) for v in G.vertices)
    d[s] = 0
    Q = [(0,0,s)]   #each element is a tuple (f,-c,v) with f=c + heuristic(v), c=cost from start, v=vertex
    nnodes = 0
    while len(Q) > 0:
        f,minus_c,v = heapq.heappop(Q)  #get the element in the queue with the least value of c
        nnodes += 1
        if v == g:
            #found a path
            if verbose: print("A* succeeded in",nnodes,"iterations")
            return predecessor_traverse(p,s,g)
        for w in G.neighbors(v):
            dcand = d[v] + cost(v,w)   #this is the cost of going through v to w
            if dcand < d[w]:
                #going through v is optimal
                #if the predecessor of w is not None, then we'll have to adjust the heap
                if p[w] is not None:
                    Q = [(f,c,x) for (f,c,x) in Q if x is not w]
                    heapq.heapify(Q)
                d[w] = dcand
                p[w] = v
                #put w back on the queue, with the heuristic value as its priority
                heapq.heappush(Q,(dcand+heuristic(w),-dcand,w))
    #no path found
    if verbose: print("A* failed in",nnodes,"iterations")
    return None

import math
G = make_grid_graph(10,10)
start = (0,0)
goal = (5,5)

display(Markdown("### Euclidean distance heuristic"))
def h1(x):
    """Euclidean distance heuristic"""
    return math.sqrt((x[0]-goal[0])**2+(x[1]-goal[1])**2)
path = astar(G,start,goal,heuristic=h1)
print("Path, heuristic 1:",path)

display(Markdown("### Manhattan distance heuristic"))
def h2(x):
    """Manhattan distance heuristic"""
    return abs(x[0]-goal[0])+abs(x[1]-goal[1])
path = astar(G,start,goal,heuristic=h2)
print("Path, heuristic 2:",path)

Euclidean distance heuristic

A* succeeded in 27 iterations
Path, heuristic 1: [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (3, 4), (4, 4), (4, 5), (5, 5)]

Manhattan distance heuristic

A* succeeded in 11 iterations
Path, heuristic 2: [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 5), (2, 5), (3, 5), (4, 5), (5, 5)]

Search on infinite or large graphs

Search can also be performed on an implicit graph that may be infinite or impractically large to construct in advance. The hope is that only a small portion of the graph needs to be explored in order to find a solution (or to prove that no path exists). We can do this by dynamically generating portions of the graph using the successor function $N(v)$ and using sparse data structures for $p$ and $d$ that do not store $nil$ and $\infty$ values.

In this way we only need to provide the start, goal, successor function $N$, and costs $c$. The algorithm will generate as little of the graph as necessary to find a solution path. However, if the graph is infinite and the goal cannot be reached, then the search may not terminate.

Alternatively, one can construct a search tree that reconstructs a small part of $G$. Each node of the tree stores its successors, parent, and depth. However, to construct a search tree properly it is important to detect when the same vertex can be reached with multiple paths so as to keep only the node whose ancestors trace out the shortest path. This requires auxiliary techniques for revisited state detection.

In [4]:
#Below is code for the implicit A* algorithm. Note that this code can be found in rsbook_code.utilities.search

def astar_implicit(successors,s,g,cost=(lambda v,w:1),heuristic=(lambda v:0),verbose=1):
    """Completes an A* search on a large/infinite implicit graph.
    
    Args:
        successors: a callback function s(v) that returns a list of neighbors 
            of a node v.
        s: the start node
        g: the goal node or goal test
        cost (optional): a callback function c(v,w) that returns the edge cost
        heuristic (optional): a callback function h(v) that returns the 
            heuristic cost-to-go between v and g
        verbose (optional): if nonzero, will print information about search
            progress.
    
    Returns:
        list or None: either the path of nodes from s to g with minimum cost,
        or None if no path exists.
    """
    if not callable(g):
        gtest = lambda x,goal=g: x==g
    else:
        gtest = g
    inf = float('inf')
    d = dict()
    p = dict()
    d[s] = 0
    Q = [(0,0,s)]   #each element is a tuple (f,-c,v) with f=c + heuristic(v), c=cost from start, v=vertex
    nnodes = 0
    while len(Q) > 0:
        f,minus_c,v = heapq.heappop(Q)  #get the element in the queue with the least value of c
        nnodes += 1
        if gtest(v):
            #found a path
            if verbose: print("A* succeeded in",nnodes,"iterations")
            return predecessor_traverse(p,s,v)
        for w in successors(v):
            dcand = d[v] + cost(v,w)   #this is the cost of going through v to w
            if dcand < d.get(w,float('inf')):
                #going through v is optimal
                #if the predecessor of w is not None, then we'll have to adjust the heap
                if w in p:
                    Q = [(f,c,x) for (f,c,x) in Q if x is not w]
                    heapq.heapify(Q)
                d[w] = dcand
                p[w] = v
                #put w back on the queue, with the heuristic value as its priority
                heapq.heappush(Q,(dcand+heuristic(w),-dcand,w))
    #no path found
    if verbose: print("A* failed in",nnodes,"iterations")
    return None

import math

#defines a fairly dense graph, with step 0.1
delta = 0.1
#defines a coarse graph, with step 1
#delta = 1

start = (0,0)
goal = (5,5)

def length_cost(v,w):
    """Euclidean length"""
    return math.sqrt(sum((a-b)**2 for (a,b) in zip(v,w)))

def successors(n):
    c1 = (n[0]+delta,n[1])
    c2 = (n[0]-delta,n[1])
    c3 = (n[0],n[1]+delta)
    c4 = (n[0],n[1]-delta)
    return [c1,c2,c3,c4]

def successors_8connected(n):
    c5 = (n[0]+delta,n[1]+delta)
    c6 = (n[0]-delta,n[1]+delta)
    c7 = (n[0]-delta,n[1]+delta)
    c8 = (n[0]-delta,n[1]-delta)
    return successors(n) + [c5,c6,c7,c8]

#may be needed for rounding errors
def goaltest(n):
    return all(abs(xn-xg) < delta*0.5 for (xn,xg) in zip(n,goal))


display(Markdown("### Dijkstra, implicit graph"))
path = astar_implicit(successors,start,goaltest,cost=length_cost)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

display(Markdown("### Euclidean heuristic, implicit graph"))
def h1(x):
    """Euclidean distance heuristic"""
    return math.sqrt((x[0]-goal[0])**2+(x[1]-goal[1])**2)
path = astar_implicit(successors,start,goaltest,cost=length_cost,heuristic=h1)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

display(Markdown("### Manhattan heuristic, implicit graph"))
def h2(x):
    """Manhattan distance heuristic"""
    return abs(x[0]-goal[0])+abs(x[1]-goal[1])
path = astar_implicit(successors,start,goaltest,cost=length_cost,heuristic=h2)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

display(Markdown("### Dijkstra, 8-connected implicit graph"))
path = astar_implicit(successors_8connected,start,goaltest,cost=length_cost)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

display(Markdown("### Euclidean heuristic, 8-connected implicit graph"))
def h1(x):
    """Euclidean distance heuristic"""
    return math.sqrt((x[0]-goal[0])**2+(x[1]-goal[1])**2)
path = astar_implicit(successors_8connected,start,goaltest,cost=length_cost,heuristic=h1)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

display(Markdown("### Manhattan heuristic, 8-connected implicit graph"))
def h2(x):
    """Manhattan distance heuristic"""
    return abs(x[0]-goal[0])+abs(x[1]-goal[1])
path = astar_implicit(successors_8connected,start,goaltest,cost=length_cost,heuristic=h2)
print("Path length",len(path),"cost",sum(length_cost(a,b) for (a,b) in zip(path[:-1],path[1:])))

Dijkstra, implicit graph

A* succeeded in 116171 iterations
Path length 101 cost 9.99999999999998

Euclidean heuristic, implicit graph

A* succeeded in 6886 iterations
Path length 101 cost 9.99999999999998

Manhattan heuristic, implicit graph

A* succeeded in 108 iterations
Path length 101 cost 9.99999999999998

Dijkstra, 8-connected implicit graph

A* succeeded in 68441 iterations
Path length 51 cost 7.071067811865471

Euclidean heuristic, 8-connected implicit graph

A* succeeded in 51 iterations
Path length 51 cost 7.071067811865471

Manhattan heuristic, 8-connected implicit graph

A* succeeded in 51 iterations
Path length 51 cost 7.071067811865471

It is also easy to perform a multi-source and/or multi-goal graph search, that is, to find the shortest path between any source and goal in some designated sets. Given graph $G$, a set of possible start vertices $S \subset V$, and a set of goal vertices $T \subset V$, we can simply construct a new graph $G^\prime= (V^\prime, E^\prime)$ augmented with a virtual start vertex $s$ and a virtual goal vertex $g$ that are connected to $S$ and $T$ respectively.

More precisely, let $V^\prime = V \cup \{s,g\}$ and $E^\prime = E \cup \{ (s,v)\quad | \quad v \in S \} \cup \{ (v,g) \quad |\quad v\in T\}$. A search on $G^\prime$ will yield a path that passes through the optimal path amongst all pairs of vertices in $S$ and $T$.

Another way of implementing this is to replace in Dijkstra's algorithm the goal test in Line 8 with the condition if $v \in T$, and then set $g=v$.

In [ ]: