The idea and intuition behind the particle filter is similar to that of evolution -- survival of the fittest. In the case of mobile robot localization, each particle represents a possible robot state, which for a ground vehicle we can usually restrict to 3 dimensions by representing the state as \(x, y, \theta\), where \(x\) and \(y\) are the coordinates of the robot in its environment and \(\theta\) is its heading direction (between 0 and \(2\pi\)). The evolutionary aspect comes from the fact that as the robot gains more information about its environment from its sensors and motions, some particles will not be consistent with the robot's measurements and are likely to be ruled out. Over multiple iterations, only particles that remain consistent with the robot's sensor and motion updates will survive, leaving us with particles that approximate the position of the robot. For this article, I will assume the robot is able to measure the range to nearby obstacles. This is generally a reasonable assumption, as many autonomous vehicles are equipped with laser rangefinders. Other sensor readings such as bearing can also be used, but for this article range-finding will be sufficient to show the effect of information on the particles.
Initially, we are in a state of global uncertainty: nothing is known about the robot's position or environment. In statistical terms, we have a uniform distribution over all possible states, so \(P(X) = k\) for all \(X\). Visually, we can view this state of global uncertainty with this particle representation:
Where the red circle represents the robot, and the blue dots are the particles. If the particles represent an approximation to the robot's location, it is clear that we are in a state of global uncertainty. This environment isn't very interesting though, since the robot cannot sense anything. For particle filters we usually incorporate static (nonmoving) landmarks so the robot has something to sense. Our new environment is now:
The purple/magenta circles represent the landmarks.
Clearly, certain particles such as those in the top right corner are not good approximations to the robot's actual pose, but in practice this fact is unknown because we usually cannot determine the starting location of the robot. Since this represents the global uncertainty case, every particle is just as valid. So what will begin to diminish these particles that are clearly not good approximations? Motion, information, and resampling. Mobile robots obviously have the ability to move, and this robot depicted by the red circle is no different: it can move around in its imaginary environment. Moving around in the environment also changes the ranges to the landmarks that the robot picks up. For example, if this robot moves up by 1 "unit" it will be closer to some landmarks, and thus the rangefinder will pick up different results. This fact will allow us to discriminate between "good" and "bad" particles. When we move the actual robot up by 1, all particles will follow this same motion. This, of course, is subject to noise due to inconsistencies and imperfections in the robot's motors and robotic motion in general. We can model this robot's motion as circular because that captures all the ways this robot can move. To travel in an arc, we set the appropriate turning angle and distance, and if we want to travel straight ahead, the "circular" motion is parameterized by a turning angle of 0 (hence, we're not turning the steering wheel). Gaussian noise is added to the parameters of the circular motion to be consistent with the fact that robot motion is not 100% accurate. In the Matlab file linked below, circular motion is applied to the particles and the robot. The math behind it is basic trigonometry, and you can find the derivation in Probabilistic Robotics (Thrun, et al.) as well as in Professor Thrun's Udacity CS373 class.
Matlab motion function: http://goo.gl/foVsa1
If you've seen this before, it's the same code as in the CS373 class, with some slight modifications, translated into Matlab.
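For intuition, here's a minimal sketch of what such a noisy motion update might look like. This is a hypothetical simplification, not the linked file: it does a turn-then-move step rather than a true circular arc, and it assumes the robot struct defined later in this post, with s_noise and d_noise interpreted as the steering and distance noise.

function robot = move(robot, distance, angle)
    % perturb the commanded turn and distance with zero-mean Gaussian noise
    theta = mod(robot.theta + angle + robot.s_noise * randn, 2*pi);
    d = distance + robot.d_noise * randn;
    % advance along the new heading
    robot.x = robot.x + d * cos(theta);
    robot.y = robot.y + d * sin(theta);
    robot.theta = theta;
end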
After motion, the robot takes its "sensor" measurements to each of the landmarks. Since this is a simulated robot, we simulate the sensors by computing the distance formula to each landmark. Each particle also takes its respective distance to each landmark using the distance formula. Note, however, that in a real application of the particle filter the robot takes its measurements from sensors while the particles do not. It should be obvious that particles that are close to the robot will have distance results similar to the "sensor" readings. How can we measure this "goodness" between particles and the robot? Given that we have noise parameters associated with the sensor (sensors aren't perfect either), we can use this noise as the variance in a Gaussian distribution to model how similar the two readings are. Because we are only working with range, this can be modeled as a univariate Gaussian with \(\mu = 0, \sigma^{2} = noise\). In the general case we would have a covariance matrix \(\Sigma\) and multiple sensor readings to determine a "goodness" rating. I will denote each particle as lowercase "a", sensor measurements as lowercase "z", particle predicted measurements as lowercase "h", and motions as lowercase "u" (and not "p", since I am using P to denote probabilities). In essence, what we're calculating for each particle is P(a | z). We have multiple landmarks, however, so we are actually computing P(a | Z), where capital "Z" is a vector of sensor measurements. This can be decomposed with Bayes' rule as
$$ P(a \mid Z) = \frac{P(Z \mid a)\,P(a)}{P(Z)} $$
We can assume each measurement is independent of the others given the particle's state, so the term \(P(Z|a)\) can be further decomposed into a product over the individual measurements. Concretely,
\(P(Z \mid a) = \prod_{z \in Z} P(z \mid a) = \prod_{z \in Z} \mathcal{N}(z - h;\, 0, \sigma^{2})\), where each factor evaluates the zero-mean Gaussian at \(x = z - h\), the difference between the measured and predicted range.
This equation tells us the "goodness" of a certain particle. If a particle is very close to the actual robot trajectory, the values it receives from the Gaussian distribution are near the peak, while less consistent particles get values in the tails, resulting in low products. The product of these values is known as the importance weight of a particle. Following our evolutionary analogy, the next "generation" of particles is picked, with replacement, in proportion to the importance weights. In other words, the higher the importance weight of a particle, the more likely it is to be picked.
The following Matlab files implement taking the "sensor" measurements from the robot, simulating the measurements for each particle and assigning each particle its importance weight, and the Gaussian function used to compute the weight values.
- Robot "sensors": http://goo.gl/DsAJ66
- Computing weights /simulated measurements: http://goo.gl/derI5a
- Gaussian function: http://goo.gl/pRcQ3w
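As a rough sketch of the weighting step (the linked files are the canonical versions; the function and field names here are my own), each particle's predicted range h to every landmark is compared to the measured range z, exactly as in the product formula above. Per the text, the sensor noise is used as the variance \(\sigma^{2}\).

function w = importance_weight(particle, Z, landmarks, sensor_noise)
    w = 1.0;
    for i = 1:size(landmarks, 1)
        % predicted range h from this particle to landmark i
        h = sqrt((particle.x - landmarks(i,1))^2 + (particle.y - landmarks(i,2))^2);
        % evaluate the zero-mean Gaussian at x = z - h, with variance = sensor_noise,
        % and multiply into the running product of measurement likelihoods
        x = Z(i) - h;
        w = w * exp(-x^2 / (2 * sensor_noise)) / sqrt(2 * pi * sensor_noise);
    end
end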
Returning to our Bayes' rule equation, \( \frac{P(Z|a)P(a)}{P(Z)}\), the prior \(P(a)\) is represented by the set of particles we currently have: the particles encode our belief at the time, so the prior before the measurement is simply the fact that this particle is in our set. And as usual with Bayes' rule, the normalizer term \(P(Z)\) could be computed with total probability, but in practice it is just the sum of the importance weights over all particles; dividing by it gives us values that sum to 1 (and can be treated as probabilities).
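In code, assuming W is the vector of unnormalized importance weights, this normalization is a single line:

W = W / sum(W);   % weights now sum to 1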
There are several ways to resample the particles. The method I used is from the CS373 class I mentioned earlier, which uses a "roulette wheel" to pick particles in proportion to their importance weights.
- Resampling code: http://goo.gl/iTt7ht
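As a sketch of the wheel (the linked file is the real version; the names here are mine), with W the weight vector and M the number of particles:

function new_particles = resample(particles, W, M)
    index = randi(M);                        % start at a random spot on the wheel
    beta = 0;
    wmax = max(W);
    for m = 1:M
        beta = beta + rand * 2 * wmax;       % spin forward by a random amount
        while beta > W(index)
            beta = beta - W(index);
            index = mod(index, M) + 1;       % step to the next particle, wrapping around
        end
        new_particles(m) = particles(index); % pick with replacement
    end
end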
The meat and potatoes of our filter is taken care of; the only other things needed are some housekeeping to generate the particles, get the sum of the X and Y locations from the particles (so we can average them and estimate our location), plot them, and automate this process.
- Particle generator: http://goo.gl/hknuGi
- Throw everything together ( the actual particle filter): http://goo.gl/xpBRnU
- Retrieve sum of X,Y: http://goo.gl/72kDP0
Using the code is very straightforward. To generate a set of 1000 particles with some noise, you say:
particles = make_particles(1000,3.0,1.0,1.0);
You also need to create a robot struct, which in Matlab couldn't be easier:
robot.x = 7;
robot.y = 7;
robot.theta = pi/4;
robot.s_noise = 1.0;
robot.d_noise = 1.0;
robot.sensor_noise = 3.0;
You can also define the locations of landmarks using a matrix, where each row corresponds to a new landmark. Example:
landmarks = [10 20; 15 35; 30 40; 25 16;35 37;40 10];
To use the actual particle filter, you need to specify the current particle set (the prior), the robot, the magnitude of movement, the angle of movement, the landmark matrix, and the number of particles.
Since we initialized our robot to be at \(\theta = \pi/4\) (45 degrees), we can tell it to advance \(\sqrt{2}\) units without turning, so it travels to location (8,8) (subject to noise). We accomplish this with:
[particles,X,Y,robot] = particle_filter(particles,robot,sqrt(2),0.0,landmarks,1000);
Now we have our updated and resampled particle set! To see this visually over several iterations, here are a few pictures:
Particles after 2 iterations of particle filtering
Particles clustering
Notice the progression from the state of global uncertainty to now! The most likely particles cluster around the robot. The reason we return [X,Y] is so we can actually get our approximation of the robot's pose: computing sum(X)/M and sum(Y)/M, where M is the number of particles, gives us an approximation of the robot's x,y position. It isn't perfect, obviously, but it's a method of filtering and works very well.
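For example, with the M = 1000 particles used above (assuming X and Y hold the particles' per-particle coordinates, as the sum(X)/M expression suggests):

M = 1000;
x_est = sum(X) / M;   % estimated x position of the robot
y_est = sum(Y) / M;   % estimated y position of the robot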
Things to consider:
- By adjusting the noise values, you can avoid such a wide spread of particles that can contribute to inaccurate estimates, but you risk "overfitting" and believing too strongly in only a few particles. Remember, the importance weights are governed by Gaussians, so lowering the noise parameter also lowers the variance, and then even small differences between predicted and actual values will result in unnecessarily low importance weights; some particles may even get a weight of 0, essentially killing them. This phenomenon is known as particle depletion. However, making the noise too high also has its repercussions, namely too much particle diversity. It's a Goldilocks scenario.
- Particle filters grow exponentially with respect to the number of state dimensions. They work very well in small dimensions, but once you start working with larger state spaces, the particle filter requires exponentially more particles to approximate the state. In this situation, you may be better off using a Kalman filter.
- You can also speed up particle killing by employing certain heuristics. For example, if the robot moves and one of its particles updates its position to be "out of bounds" or in a wall, the particle can be killed immediately (importance weight set to 0); a minimal sketch of this check follows the list.
- The wheel approach is one way to resample; other approaches may suit your needs better.
- Have fun with the particle filter! There are several papers on particle filters and the different ways of employing them.
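Here is that hypothetical out-of-bounds check, assuming a 50-by-50 world with no interior walls (the world size is an assumption for illustration, not from the code above):

if p.x < 0 || p.x > 50 || p.y < 0 || p.y > 50
    w = 0;   % out of bounds: kill this particle at resampling
end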
I hope this post was informative, please leave a comment below!