From: Tim Wescott on
HardySpicer wrote:
> On Apr 28, 6:43 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
>> HardySpicer wrote:
>>> On Apr 27, 2:53 pm, Tim Wescott <t...(a)seemywebsite.now> wrote:
>>>> HardySpicer wrote:
>>>>> On Apr 27, 4:40 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
>>>>>> Cagdas Ozgenc wrote:
>>>>>>> Hello,
>>>>>>> In Kalman filtering does the process noise have to be Gaussian or
>>>>>>> would any uncorrelated covariance stationary noise satisfy the
>>>>>>> requirements?
>>>>>>> When I follow the derivations of the filter I haven't encountered any
>>>>>>> requirements on Gaussian distribution, but in many sources Gaussian
>>>>>>> tag seems to go together.
>>>>>> The Kalman filter is only guaranteed to be optimal when:
>>>>>> * The modeled system is linear.
>>>>>> * Any time-varying behavior of the system is known.
>>>>>> * The noise (process and measurement) is Gaussian.
>>>>>> * The noise's time-dependent behavior is known
>>>>>> (note that this means the noise doesn't have to be stationary --
>>>>>> just that its time-dependent behavior is known).
>>>>>> * The model exactly matches reality.
>>>>>> None of these requirements can be met in reality, but the math is at its
>>>>>> most tractable when you assume them. Often the Gaussian noise
>>>>>> assumption comes the closest to being true -- but not always.
>>>>>> If your system matches all of the above assumptions _except_ the
>>>>>> Gaussian noise assumption, then the Kalman filter that you design will
>>>>>> have the lowest error variance of any possible _linear_ filter, but
>>>>>> there may be nonlinear filters with better (perhaps significantly
>>>>>> better) performance.
>>>>> Don't think so. You can design an H infinity linear Kalman filter
>>>>> which is only a slight modification and you don't even need to know
>>>>> what the covariance matrices are at all.
>>>>> H infinity will give you the minimum of the maximum error.
>>>> But strictly speaking the H-infinity filter isn't a Kalman filter. It's
>>>> certainly not what Rudi Kalman cooked up. It is a state-space state
>>>> estimator, and is one of the broader family of "Kalmanesque" filters,
>>>> however.
>>>> And the H-infinity filter won't minimize the error variance -- it
>>>> minimizes the min-max error, by definition.
>>>> --
>>>> Tim Wescott
>>>> Control system and signal processing consulting www.wescottdesign.com
>>> Who says that minimum mean-square error is the best? That's just one
>>> convenient criterion.
>> Not me! I made the point in another branch of this thread -- my
>> "optimum" may well not be your "optimum". Indeed, my "optimum" may be a
>> horrendous failure to fall inside the bounds of your "good enough".
>>
>> Minimum mean-square error certainly makes the math easy, though.
>>
>>> For example, the optimal control problem with a Kalman filter is
>>> pretty bad. It doesn't even have integral action.
>>> Simple PID gives better results on many occasions.
>> OTOH, if you model the plant as having an uncontrolled integrator and
>> you track that integrator with your Kalman, you suddenly have an 'I' term.
>>
>> --
>> Tim Wescott
>> Control system and signal processing consulting www.wescottdesign.com
>
> That's right, and that's what people did, but it doesn't come out naturally,
> whereas it does in H infinity control.
> Kalman filters are not robust to changes in the plant either.

No, and H-infinity filters are. The biggest drawback from the
perspective of my current project is that H-infinity filters require a
lot of computation at design time, and I'm working on an extended Kalman
filter (it's actually morphed into a hybrid extended-unscented filter),
for which the filter must compute the gains -- essentially doing a
design cycle -- at each iteration. The gain computation is easy with a
Kalman-Kalman, but extracting all the eigenvalues for an
H-infinity-Kalman is _expensive_.
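
For anyone following along, the per-sample "design cycle" is just the usual
covariance propagation and gain computation.  A rough numpy sketch of that
one step -- the A, C, Q and R below are toy numbers standing in for whatever
your model gives you, not anything out of my actual filter:

import numpy as np

def kalman_gain_step(P, A, C, Q, R):
    # Propagate the error covariance through the model, then compute the
    # gain that the filter "designs" for itself at this sample.
    P_pred = A @ P @ A.T + Q                    # a priori covariance
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    P_post = (np.eye(len(P)) - K @ C) @ P_pred  # a posteriori covariance
    return K, P_post

# Toy 2-state / 1-measurement numbers, purely for illustration:
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
P = np.eye(2)
for _ in range(50):
    K, P = kalman_gain_step(P, A, C, Q, R)

That much is cheap enough to run every sample; it's the H-infinity
equivalent of this step that gets expensive.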

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
From: HardySpicer on
On Apr 28, 8:04 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
> HardySpicer wrote:
> > On Apr 28, 6:43 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
> >> HardySpicer wrote:
> >>> On Apr 27, 2:53 pm, Tim Wescott <t...(a)seemywebsite.now> wrote:
> >>>> HardySpicer wrote:
> >>>>> On Apr 27, 4:40 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
> >>>>>> Cagdas Ozgenc wrote:
> >>>>>>> Hello,
> >>>>>>> In Kalman filtering does the process noise have to be Gaussian or
> >>>>>>> would any uncorrelated covariance stationary noise satisfy the
> >>>>>>> requirements?
> >>>>>>> When I follow the derivations of the filter I haven't encountered any
> >>>>>>> requirements on Gaussian distribution, but in many sources Gaussian
> >>>>>>> tag seems to go together.
> >>>>>> The Kalman filter is only guaranteed to be optimal when:
> >>>>>> * The modeled system is linear.
> >>>>>> * Any time-varying behavior of the system is known.
> >>>>>> * The noise (process and measurement) is Gaussian.
> >>>>>> * The noise's time-dependent behavior is known
> >>>>>>    (note that this means the noise doesn't have to be stationary --
> >>>>>>    just that its time-dependent behavior is known).
> >>>>>> * The model exactly matches reality.
> >>>>>> None of these requirements can be met in reality, but the math is at its
> >>>>>> most tractable when you assume them.  Often the Gaussian noise
> >>>>>> assumption comes the closest to being true -- but not always.
> >>>>>> If your system matches all of the above assumptions _except_ the
> >>>>>> Gaussian noise assumption, then the Kalman filter that you design will
> >>>>>> have the lowest error variance of any possible _linear_ filter, but
> >>>>>> there may be nonlinear filters with better (perhaps significantly
> >>>>>> better) performance.
> >>>>> Don't think so. You can design an H infinity linear Kalman filter
> >>>>> which is only a slight modification and you don't even need to know
> >>>>> what the covariance matrices are at all.
> >>>>> H infinity will give you the minimum of the maximum error.
> >>>> But strictly speaking the H-infinity filter isn't a Kalman filter.  It's
> >>>> certainly not what Rudi Kalman cooked up.  It is a state-space state
> >>>> estimator, and is one of the broader family of "Kalmanesque" filters,
> >>>> however.
> >>>> And the H-infinity filter won't minimize the error variance -- it
> >>>> minimizes the min-max error, by definition.
> >>>> --
> >>>> Tim Wescott
> >>>> Control system and signal processing consulting www.wescottdesign.com
> >>> Who says that minimum mean-square error is the best? That's just one
> >>> convenient criterion.
> >> Not me!  I made the point in another branch of this thread -- my
> >> "optimum" may well not be your "optimum".  Indeed, my "optimum" may be a
> >> horrendous failure to fall inside the bounds of your "good enough".
>
> >> Minimum mean-square error certainly makes the math easy, though.
>
> >>> For example, the optimal control problem with a Kalman filter is
> >>> pretty bad. It doesn't even have integral action.
> >>> Simple PID gives better results on many occasions.
> >> OTOH, if you model the plant as having an uncontrolled integrator and
> >> you track that integrator with your Kalman, you suddenly have an 'I' term.
>
> >> --
> >> Tim Wescott
> >> Control system and signal processing consulting www.wescottdesign.com
>
> > That's right, and that's what people did, but it doesn't come out naturally,
> > whereas it does in H infinity control.
> > Kalman filters are not robust to changes in the plant either.
>
> No, and H-infinity filters are.  The biggest drawback from the
> perspective of my current project is that H-infinity filters require a
> lot of computation at design time, and I'm working on an extended Kalman
> filter (it's actually morphed into a hybrid extended-unscented filter),
> for which the filter must compute the gains -- essentially doing a
> design cycle -- at each iteration.  The gain computation is easy with a
> Kalman-Kalman, but extracting all the eigenvalues for an
> H-infinity-Kalman is _expensive_.
>
> --
> Tim Wescott
> Control system and signal processing consulting www.wescottdesign.com

I'm always suspicious of extended Kalman filters, since they are not
guaranteed to converge.
I would do a separate estimation of the plant with, say, a Volterra-type
LMS estimator and use that in some way to feed an estimator of the
states.
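
Something along these lines, roughly -- a toy second-order Volterra LMS
sketch in numpy, with the memory length, step size and test signal pulled
out of the air just to show the shape of the thing:

import numpy as np

def volterra2_lms(x, d, M=4, mu=0.01):
    # Identify a truncated second-order Volterra model of d[n] from x[n]
    # with plain LMS: the regressor is the linear taps plus all the
    # quadratic cross-products of the last M samples.
    w = np.zeros(M + M * (M + 1) // 2)
    y = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]             # newest sample first
        quad = [u[i] * u[j] for i in range(M) for j in range(i, M)]
        phi = np.concatenate([u, quad])          # Volterra regressor
        y[n] = w @ phi
        e = d[n] - y[n]
        w += mu * e * phi                        # LMS update
    return w, y

# Toy use: identify a mildly nonlinear "plant" from noisy data.
x = np.random.randn(2000)
d = 0.8 * x + 0.3 * np.roll(x, 1) + 0.2 * x**2 + 0.01 * np.random.randn(2000)
w, y = volterra2_lms(x, d)

How you then hand the identified model to the state estimator is the part
I'd have to think harder about.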


Hardy
From: Tim Wescott on
HardySpicer wrote:
> On Apr 28, 8:04 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
>> HardySpicer wrote:
>>> On Apr 28, 6:43 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
>>>> HardySpicer wrote:
>>>>> On Apr 27, 2:53 pm, Tim Wescott <t...(a)seemywebsite.now> wrote:
>>>>>> HardySpicer wrote:
>>>>>>> On Apr 27, 4:40 am, Tim Wescott <t...(a)seemywebsite.now> wrote:
>>>>>>>> Cagdas Ozgenc wrote:
>>>>>>>>> Hello,
>>>>>>>>> In Kalman filtering does the process noise have to be Gaussian or
>>>>>>>>> would any uncorrelated covariance stationary noise satisfy the
>>>>>>>>> requirements?
>>>>>>>>> When I follow the derivations of the filter I haven't encountered any
>>>>>>>>> requirements on Gaussian distribution, but in many sources Gaussian
>>>>>>>>> tag seems to go together.
>>>>>>>> The Kalman filter is only guaranteed to be optimal when:
>>>>>>>> * The modeled system is linear.
>>>>>>>> * Any time-varying behavior of the system is known.
>>>>>>>> * The noise (process and measurement) is Gaussian.
>>>>>>>> * The noise's time-dependent behavior is known
>>>>>>>> (note that this means the noise doesn't have to be stationary --
>>>>>>>> just that its time-dependent behavior is known).
>>>>>>>> * The model exactly matches reality.
>>>>>>>> None of these requirements can be met in reality, but the math is at its
>>>>>>>> most tractable when you assume them. Often the Gaussian noise
>>>>>>>> assumption comes the closest to being true -- but not always.
>>>>>>>> If your system matches all of the above assumptions _except_ the
>>>>>>>> Gaussian noise assumption, then the Kalman filter that you design will
>>>>>>>> have the lowest error variance of any possible _linear_ filter, but
>>>>>>>> there may be nonlinear filters with better (perhaps significantly
>>>>>>>> better) performance.
>>>>>>> Don't think so. You can design an H infinity linear Kalman filter
>>>>>>> which is only a slight modification and you don't even need to know
>>>>>>> what the covariance matrices are at all.
>>>>>>> H infinity will give you the minimum of the maximum error.
>>>>>> But strictly speaking the H-infinity filter isn't a Kalman filter. It's
>>>>>> certainly not what Rudi Kalman cooked up. It is a state-space state
>>>>>> estimator, and is one of the broader family of "Kalmanesque" filters,
>>>>>> however.
>>>>>> And the H-infinity filter won't minimize the error variance -- it
>>>>>> minimizes the min-max error, by definition.
>>>>>> --
>>>>>> Tim Wescott
>>>>>> Control system and signal processing consulting www.wescottdesign.com
>>>>> Who says that minimum mean-square error is the best? That's just one
>>>>> convenient criterion.
>>>> Not me! I made the point in another branch of this thread -- my
>>>> "optimum" may well not be your "optimum". Indeed, my "optimum" may be a
>>>> horrendous failure to fall inside the bounds of your "good enough".
>>>> Minimum mean-square error certainly makes the math easy, though.
>>>>> For example, the optimal control problem with a Kalman filter is
>>>>> pretty bad. It doesn't even have integral action.
>>>>> Simple PID gives better results on many occasions.
>>>> OTOH, if you model the plant as having an uncontrolled integrator and
>>>> you track that integrator with your Kalman, you suddenly have an 'I' term.
>>>> --
>>>> Tim Wescott
>>>> Control system and signal processing consulting www.wescottdesign.com
>>> That's right, and that's what people did, but it doesn't come out naturally,
>>> whereas it does in H infinity control.
>>> Kalman filters are not robust to changes in the plant either.
>> No, and H-infinity filters are. The biggest drawback from the
>> perspective of my current project is that H-infinity filters require a
>> lot of computation at design time, and I'm working on an extended Kalman
>> filter (it's actually morphed into a hybrid extended-unscented filter),
>> for which the filter must compute the gains -- essentially doing a
>> design cycle -- at each iteration. The gain computation is easy with a
>> Kalman-Kalman, but extracting all the eigenvalues for an
>> H-infinity-Kalman is _expensive_.
>>
>> --
>> Tim Wescott
>> Control system and signal processing consulting www.wescottdesign.com
>
> I'm always suspicious of extended Kalman filters, since they are not
> guaranteed to converge.
> I would do a separate estimation of the plant with, say, a Volterra-type
> LMS estimator and use that in some way to feed an estimator of the
> states.

This particular Kalman was pretty strongly dependent on 3-D angles; it
worked OK as an extended Kalman, but really started to shine when it got
turned into an unscented Kalman.

I didn't consider doing a Volterra series, because what are a few more
polynomial terms going to do against a transform as severely nonlinear as
3-D angles? But the unscented version is working like dynamite -- and not
in the sense that it's blowing up in my face.
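
For the curious, the unscented trick is just pushing a small set of sigma
points through the nonlinearity instead of linearizing it the way the EKF
does.  A bare sketch of the sigma-point generation (standard Wan/van der
Merwe scaling constants, not my actual code):

import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    # Build the 2n+1 sigma points, and their weights, that the unscented
    # transform propagates through the nonlinear model.
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)      # matrix square root of scaled P
    pts = np.vstack([x]
                    + [x + S[:, i] for i in range(n)]
                    + [x - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))   # weights for the mean
    Wc = Wm.copy()                             # weights for the covariance
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

The estimated mean and covariance then come from weighted sums of the
transformed points -- no Jacobians anywhere, which is a big part of why it
copes better with the 3-D angle stuff.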

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com