
Saturday, August 16, 2014

Surviving a Kidnapping: Particle Filter Style

In this article we will see how particle filters used for mobile robot localization (Sequential Monte Carlo localization) can be implemented to be versatile enough to handle a robot kidnapping -- that is, a robot being randomly removed from its track. This has applications in competitive robotics, such as robot soccer, where the players are frequently picked up and moved around by humans, as well as in real-world robotics, for example autonomous underwater vehicles that can be swept away by a strong ocean current or an animal. These random motions can throw off the strong particles in a filter and ultimately lead to localization failure, because no particle in the distribution agrees with the sensor measurements and motions of the actual robot.





If you haven't read up on particle filters or don't know how they work, I have a blog post that explains particle filters using Matlab code, along with visuals. You can review the code and the content in that blog post by visiting this link.

What this post will introduce is the concept of re-localization, i.e. localizing after failure. How is this different from the standard resampling process in our old particle filter? Well, let's say your robot is localized by having a cluster of particles surrounding it. In other words, the particle filter has a high belief as to where the robot is. Suddenly, the robot is displaced, meaning it didn't follow its intended trajectory even with the noise factored in: from one time step to the next, the robot has moved, say, 100 meters. What does this do to the particle filter? The result is particle depletion. None of the previously clustered particles' predicted sensor values will be remotely near the robot's actual sensor readings, so they will all receive overwhelmingly small importance weights, leading to a loss in particle diversity. This loss of diversity causes overall localization failure.

This can be remedied by injecting particles during times of localization failure. The intuition here is that, with a certain probability, a particle will be replaced by a new particle instead of being resampled proportionally to its importance weight. A common first idea is to replace this particle with a new, randomly initialized particle. However, if the world of the robot is large, a random particle may not help at all in re-localizing the robot. Instead, we sample a probable pose based on the sensor measurements, which lets us generate poses that are far more likely than ones chosen at random. This stems from our assumption that the robot can take both range and bearing measurements to each landmark it can detect. Given a particular range, R, and bearing, \(\phi\), to a certain landmark, we can generate particles lying on a circle of radius R around that landmark; this works because the bearing value \(\phi\) can be achieved from any point on that circle by choosing the particle's orientation appropriately. Thus, by picking a random angle \(\upsilon \in [0,2\pi]\) around the circle, we can retrieve the particle's x and y coordinates by saying:

x = landmark.x + R * cos(\(\upsilon\))
y = landmark.y + R * sin(\(\upsilon\))
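
To make this concrete, here is a minimal sketch of the ring-sampling idea. The Pose struct, the PI constant, the function name, and the use of std::mt19937 are just illustrative choices for this sketch, not part of the filter code shown later; the heading is chosen so that the landmark is seen at the measured bearing \(\phi\), which is the same computation used later in resample().

#include <cmath>
#include <random>

const double PI = 3.141592653589793;

struct Pose { double x, y, theta; };

// Sample one pose on the circle of radius R around a landmark at
// (landmark_x, landmark_y), such that the landmark would be observed
// at range R and bearing phi from the sampled pose.
Pose sample_pose_on_ring(double landmark_x, double landmark_y,
                         double R, double phi, std::mt19937& gen)
{
 std::uniform_real_distribution<double> angle(0.0, 2.0 * PI);
 double upsilon = angle(gen);           // random angle around the circle

 Pose p;
 p.x = landmark_x + R * std::cos(upsilon);
 p.y = landmark_y + R * std::sin(upsilon);
 p.theta = upsilon - PI - phi;          // orient so the landmark appears at bearing phi
 if(p.theta < 0) p.theta += 2 * PI;
 return p;
}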

Clearly, generating particles according to this model is much better than picking particles at random, because here we are at least creating particles that are statistically more likely to be near the actual robot. Given enough of these "good" choices, the filter can re-localize the robot by assigning high importance weights to these new particles. Below, you will see a visualization of the poses that could be generated using this procedure. The blue triangle represents the landmark, and the green triangles represent the possible poses:




The algorithm generates a ring around the actual landmark, because the robot could have produced those sensor measurements from any location on that circle. Any particles that end up near the robot will survive future resampling runs and will aid in the re-localization of the robot.


Before delving any further into the re-localization code, it is important to first describe the structure of a particle. Localization in 2D space requires 3 variables, so we need x, y, and heading direction \(\theta\). On top of this, we also need to keep track of the importance weight for each particle, the noise variables for the sensors as well as the motors, and the measurement values, i.e. each particle's computed sensor values to each observable landmark. This is implemented with the following C++ class:




#include <iostream>
#include <vector>
#include <utility>
using namespace std;

class particle
{
public:
 // pose: position (x, y) and heading theta
 double x, y, theta;
 // noise parameters for the sensors (range, bearing) and the motors (forward, rotation)
 double range_noise, bearing_noise, forward_noise, rotation_noise;
 // how consistent this particle is with the robot's actual sensor readings
 double importance_weight;
 // predicted (range, bearing) measurements to each observable landmark
 vector<pair<double,double>> measurements;

 particle(double x1, double y1, double t, double range_n, double bn, double fn, double rn)
 {
  x = x1; y = y1; theta = t;
  range_noise = range_n; bearing_noise = bn; forward_noise = fn; rotation_noise = rn;
  importance_weight = 1.0;
 }

 friend ostream& operator << (ostream& os, const particle& p)
 {
  os << "x: " << p.x << " y: " << p.y << " theta: " << p.theta;
  return os;
 }
};

Nothing too complicated here. We can construct a particle by passing in the appropriate parameters and can even cout << it.
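
As a quick sanity check, here is how a particle can be constructed and printed (the noise values are arbitrary, chosen just for illustration):

particle p(10.0, 20.0, 0.0,   // x, y, theta
           0.5, 0.1,          // range and bearing sensor noise
           0.05, 0.05);       // forward and rotation motion noise
cout << p << endl;            // prints: x: 10 y: 20 theta: 0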

Going back to re-localization, we must address the issue of when to initiate the pose sampling. We have to take into consideration a few things:


  1. We need a way to measure the quantity \(P(z_{t} | z_{1:t-1},\mu_{1:t}, m)\), where z is a sensor measurement, \(\mu\) denotes the motion commands, and m is a map of the environment. This is necessary because whenever the robot is thrown off course this probability should be low, since the new measurements will not coincide with the previous measurements and motions.
  2. Just because we receive a low value from step 1 does NOT mean our robot was thrown off course. We must account for noise in the sensors. We should only consider choosing new particles if we consistently see low values from step 1. 
  3. We need to balance out between conservative and liberal ideologies when it comes to adding new particles. This is because adding new particles that are wrong (i.e. far away from the robot) will just be killed off, so they won't really have too big of a negative effect on our localization. However, this potential kill-off comes at the expense of not resampling a particle. Remember, choosing a new particle means we will not resample a particle from the old set. If we happen to not resample a good particle, this can reduce the quality of our localization.
Our solution to step 1 will be to use averages. What I mean is that the expression \(P(z_{t} | z_{1:t-1},\mu_{1:t}, m)\) can be approximated by averaging the particle importance weights. The intuition is that the importance weights, to a certain extent, represent how certain we are about the "goodness" of a particle: high importance weights correspond to particles that are consistent with the robot's sensor measurements, while low importance weights correspond to not-so-likely particles. Thus, given our prior belief (the current set of particles), \(P(z_{t} | z_{1:t-1},\mu_{1:t}, m)\) can be understood as the average over all particle importance weights.

Solving the other steps involves keeping moving averages, meaning we track averages of this quantity over several iterations of the particle filter. Ideally, one of these averages is a "slow" average and the other a "fast" average. The slow average updates its value as slow = slow + \(\alpha(P(z_{t} | z_{1:t-1},\mu_{1:t}, m) - \)slow\()\), and similarly for the fast moving average: fast = fast + \(\beta(P(z_{t} | z_{1:t-1},\mu_{1:t}, m) - \)fast\()\). The values of \(\alpha\) and \(\beta\) determine how strongly the difference between the current value of \(P(z_{t} | z_{1:t-1},\mu_{1:t}, m)\) and the respective average influences that average. The slow moving average should thus have a smaller value of \(\alpha\) compared to \(\beta\), because the slow average should fold in new results gradually, while the fast moving average should respond quickly to new values. This then allows us to pick a newly sampled pose with probability \(1 - \frac{fast}{slow}\). This quantity plays nicely with the constraints we had above. With a single noisy sensor reading, the fraction \(\frac{fast}{slow}\) only changes slightly, so hardly any new particles are injected. With sustained low probabilities, however, the fast average drops much more quickly than the slow one, the ratio \(\frac{fast}{slow}\) shrinks towards 0, and the probability of picking a new particle approaches 1. Note that \(1 - \frac{fast}{slow}\) can fall below 0 if the robot is never kidnapped or is very well localized, so the probability for picking a new particle should really be max(0, 1 - \(\frac{fast}{slow}\)). These concepts are implemented below by the functions importance_weights() and resample().
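
To get a feel for how the two averages interact, here is a small standalone sketch. The ALPHA values and the sequence of average weights are made up purely for illustration: while the average importance weight is steady, fast stays close to slow and essentially no particles are injected; once the average collapses for several consecutive steps, as it would right after a kidnapping, fast drops well below slow and the injection probability climbs toward 1.

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
 // illustrative values only: the slow average adapts sluggishly,
 // while the fast average tracks the most recent measurements closely
 const double ALPHA_SLOW = 0.05;
 const double ALPHA_FAST = 0.5;

 // pretend average importance weights: steady at first, then a sudden
 // collapse, as would happen right after the robot is kidnapped
 vector<double> avg_weights = {0.8, 0.8, 0.8, 0.8, 0.01, 0.01, 0.01, 0.01};

 double slow = 0.8, fast = 0.8;
 for(double avg : avg_weights)
 {
  slow += ALPHA_SLOW * (avg - slow);
  fast += ALPHA_FAST * (avg - fast);
  double p_inject = max(0.0, 1.0 - fast / slow);
  cout << "avg = " << avg << "  slow = " << slow
       << "  fast = " << fast << "  P(inject) = " << p_inject << "\n";
 }
}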


The function importance_weights() calculates the importance weights for the particles and updates the moving averages. It accepts a vector of particles, the actual robot sensor data, and references to the values of slow and fast.

In line 6 we iterate through all the particles and compute each particle's "sensor" values for every landmark in lines 8 through 12 (a rough sketch of what get_data() computes is shown after the listing). The global variable landmarks is defined as vector<vector<double>>, because each landmark is described as a vector<double>. From lines 13 through 25 we compute the importance weight by applying a normal distribution to the values of the real robot and the particle. Lines 26 and 27 keep track of the total importance weight as well as the average. Lines 30 and 31 update the moving averages, and the rest of the function normalizes the importance weights so that they sum to 1. As a side note, on line 31 I use the constant ALPHA_FAST; this is equivalent to the value of \(\beta\) I was talking about earlier, I just called it ALPHA_FAST in the program.
 1  inline void importance_weights(vector<particle>& particles, const vector<pair<double,double>>& real_data, double& slow, double& fast)
 2  {
 3   double total_sum = 0.0;
 4   double avg = 0;
 5   int M = particles.size();
 6   for(auto& p : particles)
 7   {
 8    for(const auto& landmark : landmarks)
 9    {
10     auto data_vec = get_data(p,landmark,0);
11     p.measurements.push_back(make_pair(data_vec[0],data_vec[1]));
12    }
13    int i = 0;
14    for(const auto& pr : p.measurements)
15    {
16
17     double expon_range = exp(-1 * pow(pr.first-real_data[i].first,2) / (2 * p.range_noise));
18     expon_range /= (sqrt(2 * PI * p.range_noise));
19     double expon_bear = exp(-1 * pow(pr.second-real_data[i].second,2) / (2 * p.bearing_noise));
20     expon_bear /= (sqrt(2 * PI * p.bearing_noise));
21     p.importance_weight *= (expon_range + expon_bear);
22     ++i;
23
24
25    }
26    total_sum += p.importance_weight;
27    avg += (p.importance_weight/M);
28   }
29
30   slow += ALPHA_SLOW * (avg-slow);
31   fast += ALPHA_FAST * (avg-fast);
32   for(auto& p : particles)
33   {
34    p.importance_weight /= total_sum;
35   }
36
37  }
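
The helper get_data() used on line 10 comes from the full source linked at the end of the post. Roughly, it computes the range and bearing from a particle's pose to a landmark; a simplified, noise-free sketch of it (with the third argument treated here as an unused noise flag) looks like this:

// simplified, noise-free sketch of get_data(): range and bearing from a
// particle's pose to a landmark stored as {x, y}; the third argument is
// treated as an unused noise flag in this sketch
inline vector<double> get_data(const particle& p, const vector<double>& landmark, int noise_flag)
{
 double dx = landmark[0] - p.x;
 double dy = landmark[1] - p.y;
 double range = sqrt(dx * dx + dy * dy);    // Euclidean distance to the landmark
 double bearing = atan2(dy, dx) - p.theta;  // angle to the landmark relative to the heading
 if(bearing < 0)
  bearing += 2 * PI;                        // keep the bearing in [0, 2*pi)
 return {range, bearing};
}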


The function resample() implements the standard particle-filter resampling step, plus the ability to pick new particles when we suspect the robot might have been kidnapped. We create a vector of importance weights for convenient access to the weights and initialize a few other variables. On line 14 the probability of picking new particles is computed. The rest of the code is the same resampling code I posted in my original blog post about particle filters, only translated into C++. However, on line 24 we check if a random number in the range [0,1) is less than our computed probability. If it is, we proceed to create a new particle; otherwise, we resample as per usual.

Generating a new pose involves picking a random value in [0,2\(\pi\)], which is done on line 28, and then figuring out the respective values of x, y, and \(\theta\). We then create a new particle based on these values and add it to our new set of particles.


 1  vector<particle> resample(const vector<particle>& prior, double slow, double fast, int correspondence)
 2  {
 3   auto r = robot;
 4   vector<particle> posterior;
 5   vector<double> importance_weights;
 6   for(const auto& x : prior)
 7    importance_weights.push_back(x.importance_weight);
 8   std::uniform_real_distribution<double> add_random(0,1);
 9   std::uniform_real_distribution<double> yeta_dist(0,2*PI);
10   std::normal_distribution<double> range_dist(0.0,prior[0].range_noise);
11   std::normal_distribution<double> bear_dist(0.0,prior[0].bearing_noise);
12   std::default_random_engine generator;
13   double vl = 0;
14   double threshold = 1.0 - (fast/slow);
15   cout << "\nthreshold = " << threshold << "\n\n";
16   if(threshold > vl) vl = threshold;
17   int index = rand() % prior.size();
18   double beta = 0.0;
19
20   double max_val = *max_element(importance_weights.begin(),importance_weights.end());
21
22   for(int i = 0; i < prior.size(); ++i)
23   {
24    if(add_random(generator) < vl)
25    {
26     // add random pose according to the circle of possible poses around the landmark
27
28     double yeta = yeta_dist(generator);
29     double range_prime = r.measurements[correspondence].first + range_dist(generator);
30     double bear_prime = r.measurements[correspondence].second + bear_dist(generator);
31     double x_prime = landmarks[correspondence][0] + (range_prime * cos(yeta));
32     double y_prime = landmarks[correspondence][1] + (range_prime * sin(yeta));
33     double theta_prime = yeta - PI - bear_prime;
34     if(theta_prime < 0)
35      theta_prime += 2*PI;
36     particle sampled(x_prime,y_prime,theta_prime,prior[i].range_noise,prior[i].bearing_noise,prior[i].forward_noise,prior[i].rotation_noise);
37     posterior.push_back(sampled);
38    }
39    else
40    {
41     beta += fmod(rand(), (2 * max_val));
42     while(importance_weights[index] < beta)
43     {
44      beta -= importance_weights[index];
45      ++index;
46      index %= prior.size();
47     }
48     particle new_particle = prior[index];
49     new_particle.importance_weight = 1.0;
50     new_particle.measurements = vector<pair<double,double>>();
51     posterior.push_back(new_particle);
52
53
54    }
55   }
56   return posterior;
57
58  }
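
One more building block before the results: between measurement updates, the robot and the particles are moved with a circular motion model. Here is a minimal, noise-free sketch of one common form of such an update; the exact parametrization and noise handling in the full code may differ, and the velocity arguments v and w and the move() name are just illustrative.

// move a particle along a circular arc: v is the forward velocity,
// w the rotational velocity, dt the length of the time step;
// a real implementation would perturb v and w with the particle's
// forward_noise and rotation_noise before applying the update
void move(particle& p, double v, double w, double dt)
{
 if(fabs(w) > 1e-6)
 {
  // travel along a circular arc of radius v / w
  p.x += (v / w) * (sin(p.theta + w * dt) - sin(p.theta));
  p.y += (v / w) * (cos(p.theta) - cos(p.theta + w * dt));
 }
 else
 {
  // negligible rotation: straight-line motion
  p.x += v * dt * cos(p.theta);
  p.y += v * dt * sin(p.theta);
 }
 p.theta = fmod(p.theta + w * dt, 2 * PI);
 if(p.theta < 0) p.theta += 2 * PI;
}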


The motion model itself is exactly the one I described in the previous blog post: circular motion. I tested the code by first allowing 5 iterations of the normal particle filter without the robot being kidnapped, and these were my results:


As you can see, the standard particle filter is working well and the threshold 1 - \(\frac{fast}{slow}\) is negative, so our average particle weights are high. After the 5th iteration, I set the robot's x and y coordinates to (50, 50), thus simulating a random kidnapping. Here were my results before the algorithm began inserting particles:




As you can see, the particle filter is not tracking the robot accurately, but the threshold is increasing, which in turn increases the probability that new particles will be injected into the filter. After a few more iterations, particles that were near the robot were selected and the threshold drops again, leaving us with the following results:



The particles began re-tracking and re-localizing the robot, and the threshold went back into negative territory! The injection algorithm worked!

The full code, if you'd like to test it yourself, is here:

http://goo.gl/JBKXxj (code for the augmented particle filter)

P.S.
Picking the values for ALPHA_SLOW and ALPHA_FAST (the parameters for the moving averages) is a bit of an art, but it is fun to see the results of twiddling around with these values. Setting them too high will cause the system to inject new particles whenever noisy measurements come in, while setting them too low will cause it to hardly ever inject new particles. This is where the interplay between conservative and liberal mindsets comes into play.

I hope this post was informative! Let me know what you think!
