From: fisico32 on
>On 07/02/2010 07:01 AM, fisico32 wrote:
>>> On 07/01/2010 07:35 AM, fisico32 wrote:
>>>>> On 07/01/2010 06:53 AM, fisico32 wrote:
>>>>>>> On 06/30/2010 12:03 PM, fisico32 wrote:
>>>>>>>> Hello Forum
>>>>>>>>
>>>>>>>> given an LTI system, I want to make sure I understand the meaning
>>>>>>>> of "zero state response" and "zero input response".
>>>>>>>>
>>>>>>>> The ZSR is the impulse response, existing only if the system is
>>>>>>>> acted upon by an external force. That assumes that if there is no
>>>>>>>> external force, the system does nothing.
>>>>>>>
>>>>>>> Correct.
>>>>>>>
>>>>>>> As an aside, you don't have to make your assumption: if the system
>>>>>>> is LTI and the states are all zero then by definition the system
>>>>>>> does nothing.
>>>>>>>
>>>>>>>> The ZIR is the natural response due to some initial conditions
>>>>>>>> that then instantaneously disappear. The system may decay or not
>>>>>>>> depending on absorption.....
>>>>>>>>
>>>>>>>> If this distinction is correct, could an initial condition be seen
>>>>>>>> as an external forcing function that exists only for an instant of
>>>>>>>> time, so it can be considered a special case of ZSR.... correct?
>>>>>>>
>>>>>>> Almost.  It is possible for a system to have states, or linear
>>>>>>> combinations of states (called modes), that cannot be affected by
>>>>>>> the input.  These uncontrollable modes can still affect the output,
>>>>>>> and if they have non-zero initial values then you'll see that in
>>>>>>> the output.
>>>>>>>
>>>>>>> You'll never see these modes in a transfer function -- transfer
>>>>>>> functions more or less by definition only show controllable and
>>>>>>> observable modes.  But they can be there if you describe the system
>>>>>>> in state space.
>>>>>>>
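(To check that I follow, here's a little scipy sketch of what I think this
looks like -- the particular matrices and numbers are my own made-up
example:

import numpy as np
from scipy import signal

# Two decoupled modes, at s = -1 and s = -2.  The input drives only the
# first state, but the output is the sum of both states, so the second
# mode is uncontrollable yet observable.
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

# The transfer function works out to (s+2)/((s+1)(s+2)) = 1/(s+1):
# the uncontrollable mode at -2 cancels and never shows up.
num, den = signal.ss2tf(A, B, C, D)
print(num, den)

# With zero input but a nonzero initial value on the second state, the
# output still decays like exp(-2*t) -- the mode is there, it just can't
# be reached from the input.
t = np.linspace(0.0, 5.0, 200)
_, y, _ = signal.lsim((A, B, C, D), U=np.zeros_like(t), T=t, X0=[0.0, 1.0])
print(y[0], y[-1])

Is that the right picture?)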
>>>>>>
>>>>>> Thanks Tim.
>>>>>>
>>>>>> So, for example, a system like y(t)=x(t)+5
>>>>>> It is pseudo-linear and not a zero response state system. If x(t)=0,
>>>>>> the output is not zero but y(t)=0.
>>>>>
>>>>> The term you're looking for isn't pseudo-linear, it is "affine".  With
>>>>> a shift of origin the system is linear.  Alternately, you can model
>>>>> this as a linear system with an extra input that you happen to set to
>>>>> 5, or as a linear system with an extra integrator whose initial
>>>>> condition you set to 5.
>>>>>
>>>>> Check your math.  If x(t) = 0 then y(t) must be 5 -- and normally you
>>>>> would take y(t) as the output.
>>>>>>
>>>>>> How about the system y(t) = x(t) + 3t^2? Even if the input does not
>>>>>> exist, the output seems to have its own existence and increases as
>>>>>> t^2... What type of system is that? It does not seem nonlinear, but
>>>>>> it is not linear either. It is not zero state.. is it a zero input
>>>>>> response system?
>>>>>
>>>>> Strictly speaking it is a nonlinear system, because it doesn't pass
>>>>> the superposition test.  But it's also an affine system like the one
>>>>> above, just with a time-varying offset (or input, or with a chain of
>>>>> three integrators with appropriate initial values).
>>>>>
>>>>>> I would say that if a system's output is nonzero even when x(t)=0,
>>>>>> then it must have some memory. Would it correspond to an LTI system
>>>>>> of IIR type?
>>>>>
>>>>> Does the system y(t) = 0 * x(t) + 1 have memory? Does it exhibit
>>>>> superposition?
>>>>>
>>>>>> It is said that the "total" response is the sum of ZSR+ZIR. This is
>>>>>> because the system could have both an initial condition and a
>>>>>> persisting external force applied to it....
>>>>>
>>>>> For a linear system, yes (note that this is true even for a linear
>>>>> time-varying system).  If you want to pay attention to observability
>>>>> and controllability, then say "visible response" instead of
>>>>> "response" and you've covered your bases.
>>>>>
>>>>
>>>> Tim,
>>>> by the way, since you mention LTV systems, a professor of mine said
>>>> that
>>>>
>>>> ".....An LTV system is a small-signal behavior of a nonlinear
>>>> autonomous (time-invariant) system. In other words, a nonlinear
>>>> (time-invariant) system can be written as a composite function
>>>> y(t) = h o x(t) and, if the system is analytic, can be expanded as an
>>>> infinite sum of regular homogeneous linear terms. The constant term in
>>>> this series is interpreted as the initial state (zero-input) response.
>>>> The LTI system is the first-degree (linear) homogeneous term, the LTV
>>>> is the second-degree (linear) homogeneous term, and so on. In that
>>>> sense, the delta function is a homogeneous linear term of zero degree.
>>>> The constant term is also called the zero-degree impulse response, the
>>>> corresponding LTI term is called a first-degree impulse response, the
>>>> LTV term is called a second-degree impulse response, and so on. Using
>>>> this terminology, the unit-impulse function is called a minus-one
>>>> homogeneous linear term. It is said that the LTV system is the
>>>> least-order (second-degree) system that can satisfy the requirements
>>>> of a nonlinear autonomous (time-invariant) system.
>>>
>>> The key term that's buried in there is "small-signal behavior".
>>> "Small-signal behavior" means "in the limit as our input signal goes
>>> to zero" (with emphasis on the zero), or perhaps "There's a bear!  I
>>> bet we can poke it if we don't poke it hard!"
>>>
>>> So just as you can model a transistor amplifier as a linear system by
>>> looking at its "small signal AC behavior" (i.e. by noting that for
>>> small inputs it looks like an affine system, and with blocking caps on
>>> both input and output it looks pretty linear), if you have a nonlinear
>>> system that's doing something (like oscillating) you can -- sometimes
>>> -- treat that as an affine time-varying system.
>>>
>>> An example of this is a regenerative radio receiver.  These clever
>>> little gadgets were invented by Major Edwin Armstrong
>>> (http://en.wikipedia.org/wiki/Regenerative_receiver).  They work by
>>> feeding an oscillator circuit with a weak signal from an antenna.  When
>>> you're receiving Morse code the signal from the antenna gets multiplied
>>> by the oscillator's signal in the oscillator's amplifying element (tube
>>> or transistor), and rectified to low frequencies, where it can be
>>> amplified and applied to a speaker.  In this case the whole system is
>>> quite nonlinear, yet for a small enough input signal (so you can ignore
>>> blocking, Jerry) its behavior is that of a linear time-varying system.
>>>
>>> OTOH, when you waltz up to your new car and push the little button on
>>> the key fob to make the doors unlock, chances are high that the
>>> receiver in the car that senses the key fob is a superregenerative
>>> receiver.  There isn't a useful description of a superregen that
>>> doesn't take the nonlinearities into account -- the way that a
>>> superregen achieves its astounding sensitivity is by turning an
>>> oscillator on and off (with a nonlinear element), and rectifying its
>>> output (which is a nonlinear behavior).  The oscillator starts faster
>>> in the presence of a signal on the antenna, which makes the output
>>> bigger right before quench; because the signal is effectively amplified
>>> a bazillion times by the oscillator before quench, vast sensitivity is
>>> achieved with just a few really cheap components.
>>>
>>>> Note that an autonomous LTV system is classified as time-invariant
>>>> because the system behaviour depends on |t1 - t2|, i.e., the distance
>>>> between the system time (t1) and the signal time (t2) (this is called
>>>> a norm in mathematics), not on the variations of the individual time
>>>> arguments!...."
>>>>
>>>>
>>>> I am still trying to get my head around it. Do you agree and
>>>> understand what he is stating?
>>>> thanks
>>>
>>> I don't quite get that last part -- but the first part does make some
>>> sense, and in fact there's a lot of useful work that you can do with
>>> it.  Where I've used that approximation most is in controlling a
>>> nonlinear system -- you close one eye and claim that the system's
>>> apparent time-variance is slow enough that it won't affect the
>>> moment-to-moment behavior of the system (yea, right), then you make a
>>> family of linear system approximations over the state space of the
>>> nonlinear system, then you design a controller that will achieve your
>>> desired performance over the whole state space (or you find out that
>>> you can't, and go back to the drawing board).
>>>
>>> When I was in the process of getting out of school, and for the first
>>> few years after, I was really pissed off about the emphasis on
>>> small-signal behavior, because it is emphasized so much yet no one
>>> really goes into depth on just how often the assumption is violated.
>>> Sometimes "small signal" is uselessly teeny.  Sometimes (e.g. with a
>>> push-pull amp) a device shows its nonlinearities _more_ with small
>>> signals, and less with large ones, at least in proportion to the input
>>> signal.
>>>
>>> So I probably haven't paid as much attention to anything that invokes
>>> "small signal" behavior as I should.  Now I do accept that it is
>>> useful -- but you have to be exquisitely aware of the limitations of
>>> the model at all times, so that you know when a small-signal assumption
>>> is appropriate, and when it is just paving on the garden path that
>>> you're leading yourself down.
>>>
>> Ok, Tim, forgive my low level of understanding: I understand what small
>> signal behavior is: approximating a nonlinear system as a linear one by
>> linearizing it;
>> I understand what a composite function is and how it could be expanded
>> into a series. In this case the functions in the composite function are
>> the input x(t) and h.
>> But I don't get why the 2nd term of this expansion is the LTV system....
>> What are the basis functions used in this series expansion?
>
>It sounds like the guy is using something very like a plain ol' Taylor's
>series expansion of the state evolution (or state transition) function.
> So if the system is defined by
>
>dx/dt = f(x)
>
>then you can do a Taylor's series expansion around x0:
>
>dx/dt = f(x0) + (x - x0) * d/dx f(x0) + ...
>        + ((x - x0)^n * d^n/dx^n f(x0))/n!
>
>The first term is constant, the second term appears to be a simple gain,
>and we close our eyes and ignore the rest of the terms. So, one way to
>linearize a system would be to choose some x0 and take the linearization
>as gospel.
>
>> How is it possible that the nonlinear system is time-invariant while
>> the second term in this expansion is a linear "time-variant" system?
>> That seems a contradiction...what am I missing?
>
>But assume that you've got a nonlinear system that you're going to
>perturb ever so slightly, and that you want to know its behavior over a
>time period that's long enough so that your two-term Taylor's series
>expansion isn't going to be good enough.  One thing you could do is just
>accept the rest of the terms in the Taylor's series, but then you've
>forked yourself back into full-on nonlinear analysis.
>
>Another thing you could do -- if your perturbations are small enough --
>is to find the system's "ideal" response. Then, instead of taking x0 as
>a constant, take x0 as a function over time. This makes those first two
>terms of your Taylor's expansion into functions of time -- but you still
>have a description of a system that's affine (which makes it trivial to
>turn into a linear system by subtracting f(x0(t))) and time varying
>(because x0(t) is now a function of time). So you can use linear
>time-varying analysis, which, while harder than linear time-invariant
>analysis, can still be loads better than nonlinear analysis.
>


Ok, I see. Great explanation.
So the series expansion is linear (affine) simply because it keeps only
the constant and linear terms, but time varying because the linear term
varies with time.
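For instance (my own toy example, so correct me if I set it up wrong):
take dx/dt = -x^3 and linearize around a nominal solution x0(t). Writing
x = x0(t) + dx, the perturbation obeys, to first order,

  d(dx)/dt = -3*x0(t)^2 * dx

which is linear in dx, but time varying, because the gain -3*x0(t)^2
changes as the nominal solution evolves.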

As you mention, the other choice would be to keep the whole nonlinear
system with all the expansion terms.
Such a nonlinear system is time invariant because its input-output
behavior does not depend on when the input is applied. But linearizing
around a time-varying operating point makes the linear approximation a
system with a time-changing impulse response, so that it better
approximates the nonlinear system....
Why not simply keep the original time-invariant linear term (the 2nd
term) and add to it the same perturbation we applied to the nonlinear
system?

I guess the approximation would not be as good as when we instead let
the linear part itself be time-varying.....

By the way, a nonlinear system with memory can be described by a
Volterra series, which involves many kernels, the first two being
h(tau) and h(tau1, tau2).
Does that mean that in a 2nd-order approximation the nonlinear system
has two impulse responses? What does that mean?
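(As I understand it -- and I may be garbling the notation -- those first
two terms contribute

  y(t) = integral h(tau) x(t - tau) dtau
       + double integral h(tau1, tau2) x(t - tau1) x(t - tau2) dtau1 dtau2

to the output.)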

What if a nonlinear system is memoryless... How is the 2nd impulse
response h(tau1, tau2) modified? Is it still a function of both tau1
and tau2?

Clearly, even a simple LTI system generally has memory. What would the
impulse response of a memoryless LTI system look like? Is it a delta?

Thanks for any thoughts!





From: Tim Wescott on
On 07/02/2010 12:36 PM, fisico32 wrote:
> <snip earlier discussion>
>
>
> Ok, I see. Great explanation.
> So the series expansion is linear (affine) simply because it keeps only
> the constant and linear terms, but time varying because the linear term
> varies with time.
>
> As you mention, the other choice would be to keep the whole nonlinear
> system with all the expansion terms.
> Such a nonlinear system is time invariant because its input-output
> behavior does not depend on when the input is applied. But linearizing
> around a time-varying operating point makes the linear approximation a
> system with a time-changing impulse response, so that it better
> approximates the nonlinear system....
> Why not simply keep the original time-invariant linear term (the 2nd
> term) and add to it the same perturbation we applied to the nonlinear
> system?
>
> I guess the approximation would not be as good as when we instead let
> the linear part itself be time-varying.....
>
> By the way, a nonlinear system with memory can be described by a
> Volterra series, which involves many kernels, the first two being
> h(tau) and h(tau1, tau2).
> Does that mean that in a 2nd-order approximation the nonlinear system
> has two impulse responses? What does that mean?

It doesn't mean anything wrt the impulse response of the system, because
the concept of an impulse response is meaningless for a nonlinear system.
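
A quick numerical illustration (my own toy example, a memoryless
nonlinearity y = x + x^2, nothing to do with Volterra kernels in
particular): scale the input impulse and the output does not scale with
it, so no single h(tau) can characterize the system.

import numpy as np

# Toy nonlinear, memoryless system: y[n] = x[n] + x[n]^2.
def system(x):
    return x + x**2

n = np.arange(8)
delta = (n == 0).astype(float)

for a in (1.0, 2.0, 3.0):
    y = system(a * delta)                # response to a scaled impulse
    y_scaled = a * system(delta)         # what superposition would predict
    print(a, y[0], y_scaled[0])          # 2 vs 2, 6 vs 4, 12 vs 6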

> What if a nonlinear system is memoryless... How is the 2nd impulse
> response h(tau1, tau2) modified? Is it still a function of both tau1
> and tau2?
>
> Clearly, even a simple LTI system generally has memory. What would the
> impulse response of a memoryless LTI system look like? Is it a delta?

What is a memoryless LTI system? Answer that question, and the answer
to your impulse response question will spring out by inspection.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html