
Saturday, July 26, 2014

Extended Kalman Filter Example With Code

In this article we will look at using an Extended Kalman Filter (EKF) to estimate the state of a (simulated) moving vehicle. This article is inspired by the Udacity CS373: Programming a Robotic Car course. We can use a Kalman Filter to estimate the next pose of a car that moves according to the following kinematics model:

$$\begin{bmatrix} \theta_{t} \\ x_{t} \\ y_{t} \end{bmatrix} = \begin{bmatrix} (\theta_{t-1} + \alpha_{t}) \bmod 2\pi \\ x_{t-1} + R\cos(\theta_{t}) \\ y_{t-1} + R\sin(\theta_{t}) \end{bmatrix}$$

Where \(\theta\) is our heading direction and x, y are our coordinates. Confining our robot to a 2D environment thus gives us 3 degrees of freedom. Our robot can steer on every motion by an angle \(\alpha\). This leads to the equation that our new heading direction is equal to our previous heading direction plus the steering angle, confined to stay within \(2\pi\). Our new x and y locations are determined by multiplying the distance we're going to travel, R, by the cosine and sine of our heading direction, respectively, and adding the results to our current coordinates. These formulas come from simple trig:

We know our location (the red circle) and our coordinates. We would like to find out where we are going to be (the blue circle) and our new coordinates, x' and y'. The angle we form to get to the blue circle is \(\theta + \alpha\), as discussed earlier. To get to our x' coordinate, we have to compute the quantity dx, our displacement from our current position to the goal position: $$\cos(\theta + \alpha) = \frac{dx}{R}$$ Since we know R, we can multiply through to get $$dx = R\cos(\theta + \alpha)$$ Thus, our new location is \(x + dx\), or equivalently, \(x + R\cos(\theta + \alpha)\). The same logic applies to deriving the y location, except we use sine instead of cosine to calculate dy. Since we took care of adding \(\alpha\) to \(\theta\) before the x, y updates, we can just use \(\theta\) as opposed to \(\theta + \alpha\) in those updates.
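As a minimal sketch (my own function and variable names, not the course code), this motion model translates directly into Matlab:

function state = move(state, alpha, R)
  % state is [theta; x; y]; alpha is the steering angle, R the distance traveled
  theta = mod(state(1) + alpha, 2*pi); % update the heading first, confined to [0, 2*pi)
  x = state(2) + R * cos(theta);       % move R units along the new heading
  y = state(3) + R * sin(theta);
  state = [theta; x; y];
end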

Although this model does not accurately describe how a real car can move, it can serve as a segue to circular motion, which more accurately describes car kinematics. For the remainder of this article, however, we will stick to the simple "straight-line" car motion. So, our goal is to use a Kalman Filter to predict the car's next location. Although this isn't an entirely useful application by itself, prediction is often used for object tracking, which has a wide array of applications in the real world.

There's a slight problem when trying to apply a Kalman Filter to this problem. Kalman Filters are optimal linear state estimators. Clearly, our car motion is not linear, as we are dealing with sine and cosine values, so a Kalman Filter as it stands would not produce optimal results for us. Instead we can use an EKF to better estimate our state. EKFs are not optimal for nonlinear state dynamics either, but they tend to produce more accurate results than standard Kalman Filters for nonlinear states. What separates EKFs from normal Kalman Filters is the fact that EKFs can handle nonlinear state transition and measurement prediction models. The clever trick that the EKF employs is that it linearizes the nonlinear models via a first order Taylor expansion. Thus, the only additional things we need to provide for an EKF are the Jacobian matrices (the first partial derivatives with respect to each variable) of our state transition model and measurement prediction model. Specifically, our state X, which is modeled by \(\theta, x, y\) (as shown above), will now be computed by a function, as opposed to matrix multiplications, and so long as our functions are differentiable, we can use the EKF. Now we have that,
$$x_{t} = f(x_{t-1}, \mu_{t})$$ Where \(x_{t-1}\) is our updated belief state from the previous iteration and \(\mu_{t}\) is the additional motion input at time \(t\), which in our case is the steering angle \(\alpha\) and distance R. Similarly, our measurement model is also now modeled by a differentiable function: $$z_{t} = h(x_{t})$$

As we'll see later on in the post, we express these as functions as opposed to matrix multiplications to generalize our model; the function can still simply be a matrix multiplication. By using functions we are relaxing the constraint on how we update the state and measurement predictions, but standard matrix multiplication is still allowed. We use the fact that these functions are differentiable to update our state belief covariance matrix, as well as the Kalman Gain matrix. The EKF equations stem from the original Kalman Filter equations, but use a first order Taylor approximation to estimate the nonlinear function at a certain point. A more mathematically precise derivation can be found in Probabilistic Robotics (Thrun et al.) as well as on Wikipedia.
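For reference, here are the predict and update equations we will translate to Matlab, in the Wikipedia notation, where \(F_{t}\) and \(H_{t}\) are the Jacobians of \(f\) and \(h\) (note that \(R_{t}\) below denotes the measurement noise covariance, not the distance R from the motion model):

Predict: $$\hat{x}_{t|t-1} = f(\hat{x}_{t-1|t-1}, \mu_{t}), \qquad P_{t|t-1} = F_{t}P_{t-1|t-1}F_{t}^{T} + Q_{t}$$

Update: $$K_{t} = P_{t|t-1}H_{t}^{T}(H_{t}P_{t|t-1}H_{t}^{T} + R_{t})^{-1}$$ $$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_{t}(z_{t} - h(\hat{x}_{t|t-1})), \qquad P_{t|t} = (I - K_{t}H_{t})P_{t|t-1}$$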

Computing the Jacobian is as easy as computing the first partial derivatives of your update equations with respect to each variable at hand. Luckily for us, Matlab provides a function called jacobian() to do this for us. To get "simulated" values for \(\theta\), x, and y, we can declare them as symbolic variables with syms. Since our state prediction function is \(f\), I'll denote its Jacobian as F.

Matlab Code For Jacobian:

syms theta x y alpha R  % symbolic state variables and motion inputs
theta_calc = mod(theta + alpha, 2*pi); % equation for updating theta
x_calc = x + R * cos(theta);           % equation for updating x
y_calc = y + R * sin(theta);           % equation for updating y
f_all = [theta_calc x_calc y_calc];    % throw it all together (avoids shadowing the built-in all())
F = jacobian(f_all, [theta x y])       % compute the Jacobian with respect to theta, x, and y
The result of this is:

[    1 mod 2*pi, 0, 0]
[ -R*sin(theta), 1, 0]
[  R*cos(theta), 0, 1]
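Since the mod term differentiates to 1 almost everywhere, a minimal numeric version of this Jacobian (a sketch, assuming theta and R now hold the current numeric values rather than symbols) is:

F = [ 1,              0, 0; ...
     -R * sin(theta), 1, 0; ...
      R * cos(theta), 0, 1 ]; % numeric state transition Jacobian at the current heading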

So on each iteration of the Kalman Filter predict/update loop, we compute this by plugging in the associated values, as sketched above. Similarly, we need a Jacobian matrix for the measurement function. In this example, our input consists of the robot's actual x, y locations, so from our current state \(x_{t}\) we need to extract the x and y components, which can actually be expressed as a matrix multiplication. Given our vector \(\begin{bmatrix} \theta_{t} \\ x_{t} \\ y_{t} \end{bmatrix}\), we can extract the x and y values by doing:

$$ z = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \theta_{t} \\ x_{t} \\ y_{t} \end{bmatrix} = \begin{bmatrix} x_{t} \\ y_{t} \end{bmatrix} $$

Even though here we're using matrix multiplication as the function \(h\), it still falls under the general class of being a function of our state vector \(x_{t}\). We still need to provide the Jacobian matrix for this function. The derivatives of this one are actually pretty simple, and Matlab isn't necessarily needed here, since the end result of the matrix multiplication is \(\begin{bmatrix} x_{t} \\ y_{t} \end{bmatrix}\). The Jacobian of this with respect to \(\theta\), x, and y is \(\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\). Look familiar? This Jacobian is the same matrix we used for the function \(h\)! This is because the function \(h\) is simply \(0\cdot\theta + 1\cdot x\) in the first row and \(0\cdot\theta + 1\cdot y\) in the second, so we get back the same matrix. We will call this Jacobian matrix H. By following the Wikipedia model, all of our Jacobian matrices are filled out. What's left is to translate the math from the Wikipedia page to Matlab code. Now that we have our Jacobian matrices in place, we can begin the EKF update equations!
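Putting the pieces together, a single predict/update iteration might look like the following sketch (my own variable names; here x is the state vector, z the measurement, Q the process noise covariance, and R_meas the measurement noise covariance, renamed to avoid clashing with the distance R):

H = [0 1 0; 0 0 1];             % constant measurement Jacobian
% Predict
x = move(x, alpha, R);          % x = f(x, u), using the motion model sketched earlier
F = [1 0 0; -R*sin(x(1)) 1 0; R*cos(x(1)) 0 1]; % state transition Jacobian at the new heading
P = F * P * F' + Q;             % propagate the state covariance
% Update
y = z - H * x;                  % innovation: measurement minus predicted measurement, h(x) = H*x
S = H * P * H' + R_meas;        % innovation covariance
K = (P * H') / S;               % Kalman Gain (matrix right-division instead of inv())
x = x + K * y;                  % corrected state estimate
P = (eye(3) - K * H) * P;       % corrected covariance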

For testing this, I will follow the model of the Udacity CS373 class and assume that our inputs are the x, y locations of the robot being simulated. It turns out that some GPS systems actually perform Extended Kalman Filter operations to smooth out the errors in GPS signals, as well as to give better guidance by predicting your next location. For this first part, we will assume little noise on the input, but since this is a Kalman Filter, adding noise is as simple as modifying a matrix, so this can be extended to more realistic settings.

For a test run that I did given the CS373 class code, the actual robot path was:

Init = [x=29.684 y=81.650 orient=3.4692]
myrobot = [x=31.293 y=76.916 orient=5.0400]
myrobot = [x=36.027 y=78.525 orient=0.3276]
myrobot = [x=34.418 y=83.259 orient=1.8984]
myrobot = [x=29.684 y=81.650 orient=3.4692]
myrobot = [x=31.293 y=76.916 orient=5.0400]
myrobot = [x=36.027 y=78.525 orient=0.3276]
myrobot = [x=34.418 y=83.259 orient=1.8984]
myrobot = [x=29.684 y=81.650 orient=3.4692]
myrobot = [x=31.293 y=76.916 orient=5.0400]
myrobot = [x=36.027 y=78.525 orient=0.3276]

All of this data was fed into the EKF during the predict-update loop. The next pose of the robot which was NOT fed to the EKF was:

myrobot = [x=34.418 y=83.259 orient=1.8984]
The EKF predicted:

  \(\theta\) =  1.8984
  \(x\) =  34.4179
  \(y\) = 83.2591

As you can see, this is pretty accurate! The EKF's predicted x value was off by only 0.0001, as was the y value. So the EKF did a nice job of predicting the next step!

The code that was used to implement the EKF is a direct translation of the math shown on the Wikipedia page into Matlab. I also added some plotting throughout the code to show how the EKF's predicted next state compares with the actual robot trajectory. I initialized the belief state to all zeros, so it takes around 3 or 4 belief/Kalman Gain updates to start predicting with high accuracy.

You can find the code that I used here:

Just some technical details about the code:
  1. As input, the code takes a set of measurements (the robot's x, y locations) and the motion input (turning angle and distance to travel).
  2. The function returns a vector [x,P,Q,R] where x is the predicted next state and P is the covariance matrix associated with the state. Q is the covariance of the noise in the process, i.e., how random is the motion? This value can usually be determined empirically, but twiddling around with the parameter is a fun way to see how it affects the result of the EKF. R is the covariance of the measurement model, i.e., how noisy are our robot sensors? As I stated earlier, we will assume low noise for this model, so the Matlab call random('norm',0.0,1) gives us Gaussian noise with mean 0 and variance 1, which should give us relatively low noise terms. We use the Gaussian distribution since Kalman Filters in general expect Gaussian random noise for the process and measurements. A hypothetical driver loop is sketched below.
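To make the usage concrete, here is a hypothetical driver loop; the function name ekf_step and its signature are my own stand-ins and may differ from the linked code:

% Hypothetical driver loop -- ekf_step and its signature are stand-ins, not the linked code.
x = zeros(3, 1);                 % initial belief state: all zeros, as described above
P = 1000 * eye(3);               % large initial uncertainty in the belief
Q = 0.1 * eye(3);                % process noise covariance (tune empirically)
R_meas = eye(2);                 % measurement noise covariance (low noise)
for t = 1:size(measurements, 1)
    z = measurements(t, :)' + random('norm', 0.0, 1, 2, 1); % noisy x, y reading
    [x, P] = ekf_step(x, P, z, alpha, dist, Q, R_meas);     % one predict/update cycle
end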

I hope you found this blog post informative and that you can learn more about EKFs by reading the code and twiddling around with the EKF's parameters and noise terms! Let me know what you think in a comment below.
