
I was going through the tutorial on SLAM by Hugh Durrant-Whyte and Tim Bailey and found this Time-Update equation (equation 4 in the paper):

$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \int P\left(\mathrm{x}_{k}|\mathrm{x}_{k-1},\mathrm{u}_{k}\right) \times P\left(\mathrm{x}_{k-1},\mathrm{m}|\mathrm{Z}_{0:k-1},\mathrm{U}_{0:k-1},\mathrm{x}_{0}\right)\mathrm{dx}_{k-1}$

where $\mathrm{m}$ is the set of landmark positions (one can call it the map) and the other symbols have their usual meanings: $\mathrm{x}_{k}$ is the robot location, $\mathrm{u}_{k}$ is the control input, and $\mathrm{z}_{k}$ is the observation made by the robot. Is the right-hand side derived using the law of total probability? I am aware of the Markov assumptions that $\mathrm{x}_{k}$ depends on $\mathrm{u}_{k}$ and $\mathrm{x}_{k-1}$, and that $\mathrm{z}_{k}$ depends on $\mathrm{x}_{k}$ and $\mathrm{m}$. Still, I am not able to derive the right-hand side from the left-hand side, especially the fact that $\mathrm{m}$ occurs with $\mathrm{x}_{k}$ in the joint probability on the left-hand side but gets attached to $\mathrm{x}_{k-1}$ on the right-hand side, if I assume we are using the identity

$P(a,b|c) = \int_{a'} P(a,b|a',c)P(a'|c) da'$

Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms

Chuck

1 Answer


Our goal is to find a recursive expression for $ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right)$. This expression is called the belief for the robot's state $\mathrm{x}_{k}$ and the map $\mathrm{m}$ given all the measurements $\mathrm{Z}_{0:k} = (\mathrm{z}_{0},..., \mathrm{z}_{k})$, the control actions $\mathrm{U}_{0:k} = (\mathrm{u}_{0},..., \mathrm{u}_{k})$ and the robot's initial state $\mathrm{x}_{0}$.

  1. First, let us write down the Markov assumptions:

    • Motion model: $P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) = P\left(\mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right)$

    • Observation model: $P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) = P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) $

  2. By using Bayes' rule, we can pull $\mathrm{z}_{k}$ to the left of the "$|$":

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) P\left( \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) $$

  3. By using the Markov assumption regarding the observation model, we can simplify the term $P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) P\left( \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) $$

  4. By using the law of total probability, we can rewrite the term $P\left( \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$ as a marginalization integral over $\mathrm{x}_{k-1}$:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0}, \mathrm{x}_{k-1} \right) \mathrm{dx}_{k-1} $$

  5. By using the product rule, we can factor the term $P\left( \mathrm{x}_{k}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0}, \mathrm{x}_{k-1} \right)$ into a conditional probability times a joint probability:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) P\left( \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) \mathrm{dx}_{k-1} $$

  6. By using the Markov assumption regarding the motion model, we can simplify the term $P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right) P\left( \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) \mathrm{dx}_{k-1} $$

  7. By using the product rule again, we can factor the term $P\left( \mathrm{x}_{k-1}, \mathrm{m}, \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$ into a conditional probability times a joint probability:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right) P\left( \mathrm{x}_{k-1}, \mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) P\left( \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)\mathrm{dx}_{k-1} $$

  8. By using the fact that $(\mathrm{x}_{k-1}, \mathrm{m})$ is independent of the future action $\mathrm{u}_{k}$, we can simplify the term $P\left( \mathrm{x}_{k-1}, \mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right) P\left( \mathrm{x}_{k-1}, \mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k-1}, \mathrm{x}_{0} \right) P\left( \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right) \mathrm{dx}_{k-1} $$

  9. The term $P\left( \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k}, \mathrm{x}_{0} \right)$ is constant with respect to the integration variable $\mathrm{x}_{k-1}$, so it can be pulled out of the integral and absorbed into the normalizer $\eta$:

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \eta P\left( \mathrm{z}_{k} | \mathrm{x}_{k}, \mathrm{m} \right) \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right) P\left( \mathrm{x}_{k-1}, \mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k-1}, \mathrm{x}_{0} \right) \mathrm{dx}_{k-1} $$

Finally, the integral expression, which computes the belief before incorporating the measurement $\mathrm{z}_{k}$, is the time-update equation (also known as the prediction step):

$$ P\left(\mathrm{x}_{k},\mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k},\mathrm{x}_{0}\right) = \int P\left( \mathrm{x}_{k} | \mathrm{x}_{k-1}, \mathrm{u}_{k} \right) \times P\left( \mathrm{x}_{k-1}, \mathrm{m} | \mathrm{Z}_{0:k-1}, \mathrm{U}_{0:k-1}, \mathrm{x}_{0} \right) \mathrm{dx}_{k-1} $$
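As a sanity check, the time update can be sketched numerically on a discrete grid, where the integral over $\mathrm{x}_{k-1}$ becomes a sum. The grid sizes, motion model, and uniform prior below are hypothetical, chosen only to illustrate the marginalization; the point is that the map variable $\mathrm{m}$ rides along untouched while the robot state is predicted forward:

```python
import numpy as np

# Hypothetical discrete SLAM world: 5 robot cells, 3 possible maps.
n_x, n_m = 5, 3

# Prior joint belief P(x_{k-1}, m | Z_{0:k-1}, U_{0:k-1}, x_0),
# shape (n_x, n_m), here simply uniform.
belief = np.full((n_x, n_m), 1.0 / (n_x * n_m))

# Motion model P(x_k | x_{k-1}, u_k): the action u_k tries to move the
# robot one cell to the right. Rows index x_k, columns index x_{k-1};
# each column sums to 1, as a proper conditional distribution must.
motion = np.zeros((n_x, n_x))
for x_prev in range(n_x):
    x_next = min(x_prev + 1, n_x - 1)
    motion[x_next, x_prev] += 0.8  # move succeeds
    motion[x_prev, x_prev] += 0.2  # move fails, robot stays put

# Time update: the integral over x_{k-1} becomes a matrix product,
#   predicted[x_k, m] = sum_{x_{k-1}} motion[x_k, x_{k-1}] * belief[x_{k-1}, m]
# Note that m is not summed over: the motion model leaves the map alone.
predicted = motion @ belief

# The result is still a normalized joint distribution over (x_k, m).
assert np.isclose(predicted.sum(), 1.0)
```

This is exactly the structure of the final equation: the motion model only relates $\mathrm{x}_{k}$ to $\mathrm{x}_{k-1}$ and $\mathrm{u}_{k}$, so $\mathrm{m}$ stays paired with whichever robot state appears inside the prior belief being marginalized.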

Octavius