From: Michael Plante on
Clay wrote:
>The book (chapters at least) used to be on the web
>available for free. Try looking here: http://www.nr.com/

You have to install some truly nasty plugin for Acrobat, last I checked.
They *used* to be available for free, and I may have saved them to a backup
somewhere (foresight is nice, when available). But the book is really
worth the money for the time saved, in spite of the fact that the coding
style looks like it came from someone more interested in job security
than in teaching.

From: Robert E. Beaudoin on
Rune Allnor wrote:
> On 11 Mar, 22:33, Clay <c...(a)claysturner.com> wrote:
>> On Mar 11, 4:23 pm, spop...(a)speedymail.org (Steve Pope) wrote:
>>> Clay <c...(a)claysturner.com> wrote:
>>>> On Mar 11, 8:55 am, "Rachel2010" <HUGHMOLO...(a)GMAIL.COM> wrote:
>>>>> I have a few periods of a signal converted to an ASCII *.txt file that I
>>>>> want to apply Autocorrelation & Burg Method to separately and then plot the
>>>>> frequency response for each. Does anyone have optimized C code or Java for
>>>>> doing so? Thank you.
>>>> The book "Numerical Recipes in C" does have a routine for Maximum
>>>> Entropy Method. The book (chapters at least) used to be on the web
>>>> available for free. Try looking here: http://www.nr.com/
>>> I think the MEM is unrelated to Rachel's problem, although
>>> I could be mistaken.
>>> I assume the problem is to generate LPC coefficients by the
>>> two different methods (autocorrelation followed by e.g.
>>> Levinson-Durbin; or a lattice analyser using Burg arithmetic);
>>> and to then compare the frequency response of the all-pole
>>> filters generated by these two methods.
>>> ("Frequency response" possibly meaning the PSD of the
>>> impulse response of these filters.)
>>> Steve
>> It is highly related. With autocorrelation you generate a nice
>> symmetric Toeplitz matrix. Burg originated the recursive method for
>> solving the matrix. This allows you to find a good estimate of the
>> power spectrum of the data set. There are multiple names for this
>> method: All-Pole Model, Maximum Entropy Method, and Autoregressive
>> Model. Rachel mentioned she wanted to use autocorrelation and find the
>> power spectrum. So this is the way.
>
> I never really understood the *practical* differences between
> the usual Yule-Walker equations and Burg's method. The way
> I saw this, the Yule-Walker equations pop out from an LMS
> analysis of the AR predictor, while Burg's method was based
> on maximizing the entropy of the residual.
>
> While the derivations and arguments leading up to the results
> differed, the end results - the normal equations - were the
> same for both methods. And that's where I lost interest:
> As far as I was concerned, the same normal equations are
> solved the same way numerically, even if there were several
> possible derivations that lead up to the conclusion that the
> normal equations are useful.
>
> Where did I go wrong?
>
> Rune

I suspect some people use "Yule-Walker equations" to refer to the
equations relating the actual (but generally unknown) autocovariances of
the AR process to the AR coefficients, while others may be referring to
the equations in which the actual autocovariances are replaced by one or
another estimate formed from the available sample data (which are what
will generally have to be solved in practice). Let's call the first YW1
and the second YW2. The maximum entropy principle also leads to the YW1
equations, which is interesting but would not in practice affect how
you'd solve them. The differences in methods arise from how one
approximates YW1 with some YW2 (or in the case of Burg's method, how one
finesses this issue). When I hear "autocorrelation method" I think of a
particular instance of YW2 (obtained by one method of estimating the
autocovariance); another popular alternative is the so-called
"covariance method" (in which one solves a different set of YW2
equations obtained with a different estimator of the autocovariance).
Some methods yield a Toeplitz coefficient matrix in YW2, which can be
solved by Levinson-Durbin recursion; others do not. Burg developed a
similar (but distinct) recursion which simultaneously solves certain
YW2-type equations and computes autocovariance estimates (to produce the
next YW2 equation). The YW2 equations solved by distinct methods are
different, and will give different answers (though the differences
should be small with large enough amounts of data, as all the YW2
equations should converge to the YW1 equations).
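
To make the autocorrelation method concrete, here is a rough, untested
C sketch (the names and conventions are mine): form biased
autocovariance estimates from the samples, then solve the resulting
order-p Toeplitz YW2 system by Levinson-Durbin recursion.

/* Autocorrelation method: biased autocovariance estimates r[0..p]
 * from x[0..n-1], then Levinson-Durbin solves the order-p Toeplitz
 * YW2 system for a[1..p] in A(z) = 1 + a[1]z^-1 + ... + a[p]z^-p.
 * *perr receives the final prediction error power.
 */
#include <stddef.h>

static void autocov(const double *x, size_t n, double *r, int p)
{
    for (int k = 0; k <= p; ++k) {
        double s = 0.0;
        for (size_t t = (size_t)k; t < n; ++t)
            s += x[t] * x[t - (size_t)k];
        r[k] = s / (double)n;   /* biased estimate: keeps the YW2
                                   matrix Toeplitz and nonnegative */
    }
}

static int levinson(const double *r, int p, double *a, double *perr)
{
    double e = r[0];
    if (e <= 0.0)
        return -1;
    a[0] = 1.0;
    for (int m = 1; m <= p; ++m) {
        double acc = r[m];
        for (int k = 1; k < m; ++k)
            acc += a[k] * r[m - k];
        double kref = -acc / e;   /* reflection coefficient, order m */
        a[m] = kref;
        for (int k = 1; k <= m / 2; ++k) {   /* simultaneous update */
            double tmp = a[k] + kref * a[m - k];
            a[m - k] += kref * a[k];
            a[k] = tmp;
        }
        e *= 1.0 - kref * kref;   /* error power shrinks each order */
    }
    *perr = e;
    return 0;
}

The AR spectrum estimate is then perr / |A(f)|^2, with
A(f) = 1 + sum_k a[k] exp(-j 2 pi f k), which can be evaluated on a
frequency grid (e.g. by zero-padded FFT of the a[] sequence) to get
the plots the OP asked about.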

There's lots of information about this topic at
http://www.eos.ubc.ca/research/cdsst/Tad_home/regress.pdf
(not that any of this helps the OP).

Robert E. Beaudoin
From: Rune Allnor on
On 12 Mar, 03:23, "Robert E. Beaudoin" <r...(a)comcast.net> wrote:
> Rune Allnor wrote:
> > [snip]
>
> I suspect some people use "Yule-Walker equations" to refer to the
> equations relating the actual (but generally unknown) autocovariances of
> the AR process to the AR coefficients, while others may be referring to
> the equations in which the actual autocovariances are replaced by one or
> another estimate formed from the available sample data (which are what
> will generally have to be solved in practice).  Let's call the first YW1
> and the second YW2.

I remember somebody using the term 'Yule-Walker equations'
for the expression with true autocovariances, and 'normal
equations' for the *Toeplitz* version with estimates. Can't
find any references in a hurry, though.

> The maximum entropy principle also leads to the YW1
> equations, which is interesting but would not in practice affect how
> you'd solve them.

Which is what I figured, and why I stopped digging into Burg's
work.

> The differences in methods arise from how one
> approximates YW1 with some YW2 (or in the case of Burg's method, how one
> finesses this issue).  When I hear "autocorrelation method" I think of a
> particular instance of YW2 (obtained by one method of estimating the
> autocovariance); another popular alternative is the so-called
> "covariance method" (in which one solves a different set of YW2
> equations obtained with a different estimator of the autocovariance).

I used to know a bit too much about those things. Used to work
with one of Kay and Marple's solvers for a particular sum-of-sines
predictor. I haven't seen Marple's 1987 book for over a decade, but
as I recall, it was 400 pages packed with details about what to do
in which situations, depending on the most minute and obscure details
in the estimator for the covariance matrix.

> Some methods yield a Toeplitz coefficient matrix in YW2, which can be
> solved by Levinson-Durbin recursion; others do not.  Burg developed a
> similar (but distinct) recursion which simultaneously solves certain
> YW2-type equations and computes autocovariance estimates (to produce the
> next YW2 equation).  The YW2 equations solved by distinct methods are
> different, and will give different answers (though the differences
> should be small with large enough amounts of data, as all the YW2
> equations should converge to the YW1 equations).
>
> There's lots of information about this topic at
> http://www.eos.ubc.ca/research/cdsst/Tad_home/regress.pdf

I think I already have enough material to find the differences,
now that I know what to look for. The only problem is that the
books I'd need are at the bottom of the pile of books, which itself
is at the bottom of the pile of piles of books...

Maybe I should actually get some of those bookshelves some
time soon...

Rune
From: Rune Allnor on
On 12 Mar, 10:40, Rune Allnor <all...(a)tele.ntnu.no> wrote:
> On 12 Mar, 03:23, "Robert E. Beaudoin" <r...(a)comcast.net> wrote:
>
> [snip]
>
> > The maximum entropy principle also leads to the YW1
> > equations, which is interesting but would not in practice affect how
> > you'd solve them.
>
> Which is what I figured, and why I stopped digging into Burg's
> work.

I found a reference[*] that mentions the difference between
the Levinson recursion and Burg's algorithm: In what is
generally recognized as the Levinson recursion, one
iteratively solves for the coefficients of the predictor
polynomial, and regards the reflection coefficients as
internal variables. In what this reference terms "Burg's
algorithm" one uses the Levinson recursion, but instead of
storing the predictor polynomial coefficients one stores
the reflection coefficients for use in a lattice realization
of the predictor.

I must admit I never saw those two approaches as two
distinct algorithms, but rather as two variants of the
same method.
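
To see how little separates the two sets of books, here is a rough,
untested C sketch (naming is mine) of the "step-up" recursion that
rebuilds the predictor polynomial from stored reflection coefficients;
it is exactly the order update the Levinson recursion performs
internally:

/* Step-up recursion: rebuild the predictor polynomial
 * A(z) = 1 + a[1]z^-1 + ... + a[p]z^-p from reflection
 * coefficients k[1..p].
 */
static void step_up(const double *k, int p, double *a)
{
    a[0] = 1.0;
    for (int m = 1; m <= p; ++m) {
        a[m] = k[m];
        for (int i = 1; i <= m / 2; ++i) {  /* simultaneous update */
            double tmp = a[i] + k[m] * a[m - i];
            a[m - i] += k[m] * a[i];
            a[i] = tmp;
        }
    }
}

A lattice realization just uses k[] directly and never forms a[]
at all.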

Rune

[*] Proakis, Rader, Ling & Nikias: "Advanced Signal Processing",
    Macmillan, 1992.
From: Rune Allnor on
On 12 Mar, 14:14, Rune Allnor <all...(a)tele.ntnu.no> wrote:
> On 12 Mar, 10:40, Rune Allnor <all...(a)tele.ntnu.no> wrote:
>
> [snip]
>
> I found a reference[*] that mentions the difference between
> the Levinson recursion and Burg's algorithm: In what is
> generally recognized as the Levinson recursion, one
> iteratively solves for the coefficients of the predictor
> polynomial, and regards the reflection coefficients as
> internal variables. In what this reference terms "Burg's
> algorithm" one uses the Levinson recursion, but instead of
> storing the predictor polynomial coefficients one stores
> the reflection coefficients for use in a lattice realization
> of the predictor.
>
> I must admit I never saw those two approaches as two
> distinct algorithms, but rather as two variants of the
> same method.
>
> Rune
>
> [*] Proakis, Rader, Ling & Nikias: "Advanced Signal Processing",
>     Macmillan, 1992.

(Sigh...)

I hadn't so much as sipped my coffee after posting that
before I saw my faithful DSP companion of nearly two
decades,

Therrien: "Discrete Random Signals and Statistical Signal
Processing", Prentice Hall, 1994

protruding, as accurately predicted, from the bottom of the
pile at the bottom of the pile of piles of books. Therrien,
as I have come to both expect and rely on, knows the
details:

The distinction between the Yule-Walker equations and the
normal equations has nothing to do with 'true' vs. 'estimated'
covariances, but with reversed vs. unreversed covariance
matrices, or forward vs. backward predictors (pp. 414-415).

Burg's method *does* compute reflection coefficients
similar to those that appear in the Levinson recursion,
but it uses a different algorithm, one that does not
explicitly compute estimates of the autocovariance
(ch. 9.4.4). So it actually is a different algorithm
from the Levinson recursion altogether.
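
For completeness, here is a rough, untested C sketch of Burg's
recursion as I read Therrien's description (all names are mine): at
each order the reflection coefficient is computed straight from the
forward and backward prediction error sequences, and no autocovariance
estimate appears anywhere.

/* Burg's recursion: reflection coefficients k[1..p] computed
 * directly from the forward (f) and backward (b) prediction
 * error sequences for input x[0..n-1]. Returns 0 on success.
 */
#include <stdlib.h>

static int burg(const double *x, size_t n, int p, double *k)
{
    if (p < 1 || (size_t)p >= n)
        return -1;
    double *f = malloc(n * sizeof *f);
    double *b = malloc(n * sizeof *b);
    if (!f || !b) { free(f); free(b); return -1; }
    for (size_t t = 0; t < n; ++t)
        f[t] = b[t] = x[t];               /* order-0 errors */
    for (int m = 1; m <= p; ++m) {
        double num = 0.0, den = 0.0;
        for (size_t t = (size_t)m; t < n; ++t) {
            num += f[t] * b[t - 1];
            den += f[t] * f[t] + b[t - 1] * b[t - 1];
        }
        k[m] = (den > 0.0) ? -2.0 * num / den : 0.0;
        for (size_t t = n - 1; t >= (size_t)m; --t) {
            double fm = f[t] + k[m] * b[t - 1];   /* order update */
            b[t] = b[t - 1] + k[m] * f[t];
            f[t] = fm;
        }
    }
    free(f);
    free(b);
    return 0;
}

Running the resulting k[1..p] through a step-up recursion (or a
lattice directly) then gives the same kind of all-pole model as the
Levinson route, just fitted by a different criterion.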

Now, if anybody out there carries *any* kind of clout or
leverage with either Therrien himself, his representatives,
or the publishing houses, could you *please* try to pull
some strings to get his book re-issued?

It's arguably one of the best hands-on DSP texts ever
written. It should be available, if in no other way then
at least as a Dover reprint.

Rune