From: Clay on
On Mar 12, 8:39 am, Rune Allnor <all...(a)tele.ntnu.no> wrote:
> On 12 Mar, 14:14, Rune Allnor <all...(a)tele.ntnu.no> wrote:
>
> > On 12 Mar, 10:40, Rune Allnor <all...(a)tele.ntnu.no> wrote:
>
> > > On 12 Mar, 03:23, "Robert E. Beaudoin" <r...(a)comcast.net> wrote:
>
> > > > Rune Allnor wrote:
> > > > > On 11 Mar, 22:33, Clay <c...(a)claysturner.com> wrote:
> > > > >> On Mar 11, 4:23 pm, spop...(a)speedymail.org (Steve Pope) wrote:
>
> > > > >>> Clay  <c...(a)claysturner.com> wrote:
> > > > >>>> On Mar 11, 8:55 am, "Rachel2010" <HUGHMOLO...(a)GMAIL.COM> wrote:
> > > > >>>>> I have a few periods of a signal converted to an ASCII *.txt file that I
> > > > >>>>> want to apply Autocorrelation & Burg Method to separately and then plot the
> > > > >>>>> frequency response for each. Does anyone have optimized C or Java code for
> > > > >>>>> doing so? Thank you.
> > > > >>>> The book "Numerical Recipes in C" does have a routine for the Maximum
> > > > >>>> Entropy Method. The book (chapters at least) used to be available on
> > > > >>>> the web for free. Try looking here: http://www.nr.com/
> > > > >>> I think the MEM is unrelated to Rachel's problem, although
> > > > >>> I could be mistaken.
> > > > >>> I assume the problem is to generate LPC coefficients by the
> > > > >>> two different methods (autocorrelation followed by e.g.
> > > > >>> Levinson-Durbin; or a lattice analyser using Burg arithmetic);
> > > > >>> and to then compare the frequency response of the all-pole
> > > > >>> filters generated by these two methods.
> > > > >>> ("Frequency response" possibly meaning the PSD of the
> > > > >>> impulse response of these filters.)
> > > > >>> Steve
> > > > >> It is highly related. With autocorrelation you generate a nice
> > > > >> symmetric Toeplitz matrix. Burg originated the recursive method for
> > > > >> solving the matrix. This allows you to find a good estimate of the
> > > > >> power spectrum of the data set. There are multiple names for this
> > > > >> method: All Poles Model, Maximum Entropy Method, and Autoregressive
> > > > >> Model. Rachel mentioned she wanted to use autocorrelation and find the
> > > > >> power spectrum. So this is the way.
>
> > > > > I never really understood the *practical* differences between
> > > > > the usual Yule-Walker equations and Burg's method. The way
> > > > > I saw this, the Yule-Walker equations pop out from an LMS
> > > > > analysis of the AR predictor, while Burg's method was based
> > > > > on maximizing the entropy of the residual.
>
> > > > > While the derivations and arguments leading up to the results
> > > > > differed, the end results - the normal equations - were the
> > > > > same for both methods. And that's where I lost interest:
> > > > > as far as I was concerned, the same normal equations are
> > > > > solved the same way numerically, even if there were several
> > > > > possible derivations that led up to the conclusion that the
> > > > > normal equations are useful.
>
> > > > > Where did I go wrong?
>
> > > > > Rune
>
> > > > I suspect some people use "Yule-Walker equations" to refer to the
> > > > equations relating the actual (but generally unknown) autocovariances of
> > > > the AR process to the AR coefficients, while others may be referring to
> > > > the equations in which the actual autocovariances are replaced by one or
> > > > another estimate formed from the available sample data (which are what
> > > > will generally have to be solved in practice).  Let's call the first YW1
> > > > and the second YW2.
>
> > > I remember somebody using the term 'Yule-Walker equations'
> > > for the expression with true autocovariances, and 'normal
> > > equations' for the *Toeplitz* version with estimates. Can't
> > > find any references in a hurry, though.
>
> > > > The maximum entropy principle also leads to the YW1
> > > > equations, which is interesting but would not in practice affect how
> > > > you'd solve them.
>
> > > Which is what I figured, and why I stopped digging into Burg's
> > > work.
>
> > I found a reference[*] that mentions the difference between
> > the Levinson recursion and Burg's algorithm: In what is
> > generally recognized as the Levinson recursion, one
> > iteratively solves for the coefficients of the predictor
> > polynomial, and regards the reflection coefficients as
> > internal variables. In what this reference terms "Burg's
> > algorithm" one uses the Levinson recursion, but instead of
> > storing the predictor polynomial coefficients one stores
> > the reflection coefficients for use in a lattice realization
> > of the predictor.
>
> > I must admit I never saw those two approaches as two
> > distinct algorithms, but rather as two variants of the
> > same method.
>
> > Rune
>
> > [*] Proakis, Rader, Ling & Nikias: Advanced Signal Processing
> >     Macmillan, 1992.
>
> (Sigh...)
>
> I hadn't so much as sipped my coffee after posting that
> before I saw my faithful DSP companion for nearly two
> decades,
>
> Therrien: "Discrete Random Signals and Statistical Signal
>     Processing", Prentice Hall, 1994
>
> protruding, predictably, from the bottom pile in the
> pile of piles of books. Therrien, as I have come to both
> expect and rely on, knows the details:
>
> The Yule-Walker equations vs the normal equations have nothing
> to do with 'true' vs 'estimated' covariances, but with
> reversed / unreversed covariance matrices, or forward /
> backward predictors (pp. 414-415).
>
> Burg's method *does* compute reflection coefficients
> similar to those that appear in the Levinson recursion,
> but uses a different algorithm that does not explicitly
> compute estimates of the autocovariance (ch. 9.4.4).
> So it is actually a different algorithm from the Levinson
> recursion altogether.
>
> Now, if anybody out there carries *any* kind of clout or
> leverage with either Therrien himself, his representatives
> or the publishing houses, could you *please* try and pull
> some strings to get his book re-issued?
>
> It's arguably one of the best hands-on DSP texts ever
> written. It should be available - if in no other way,
> then at least as a Dover reprint.
>
> Rune

The differences I know of are whether you use autocorrelation or
covariance, and whether you window the data or not. So yes, Burg's
method and the autocorrelation method are highly similar but not
identical, and experimentation demonstrates that the two yield
similar results. The autocorrelation method enjoys a slight
computational advantage: Durbin's recursion is quite fast. If you
are using a fixed-point machine, you may want to look at Le Roux
and Gueguen's method for finding the reflection coefficients
directly, if you don't need the LPCs.
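
A minimal sketch of Durbin's recursion, assuming the autocorrelation
lags r[0..p] have already been estimated from the data (the function
name and calling convention here are illustrative, not from any
particular library):

#include <stddef.h>

/* Solve the normal equations for AR coefficients a[1..p] (a[0] = 1)
   given autocorrelation lags r[0..p]. Also stores the reflection
   coefficients in k[1..p]. Returns the final prediction error. */
double levinson_durbin(const double *r, double *a, double *k, size_t p)
{
    double err = r[0];
    a[0] = 1.0;
    for (size_t i = 1; i <= p; i++) {
        double acc = r[i];
        for (size_t j = 1; j < i; j++)
            acc += a[j] * r[i - j];
        k[i] = -acc / err;               /* reflection coefficient */
        a[i] = k[i];
        for (size_t j = 1; j <= i / 2; j++) {  /* symmetric in-place update */
            double tmp = a[j] + k[i] * a[i - j];
            a[i - j] += k[i] * a[j];
            a[j] = tmp;
        }
        err *= 1.0 - k[i] * k[i];        /* error variance shrinks each order */
    }
    return err;
}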

Clay


Details may be found in "Practical Approaches to Speech Coding" by
Panos E. Papamichalis.
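
To get from the AR coefficients to the frequency response Rachel
wants to plot, one evaluates the all-pole PSD
P(f) = err / |1 + sum_{m=1..p} a[m] exp(-j 2 pi f m)|^2. A sketch,
again with illustrative names, over normalized frequency [0, 0.5):

#include <math.h>
#include <stddef.h>

/* Evaluate the AR(p) power spectral density at nfreq points. */
void ar_psd(const double *a, size_t p, double err,
            double *psd, size_t nfreq)
{
    const double two_pi = 6.283185307179586;
    for (size_t i = 0; i < nfreq; i++) {
        double f = 0.5 * (double)i / (double)nfreq;
        double re = 1.0, im = 0.0;
        for (size_t m = 1; m <= p; m++) {    /* A(e^{j 2 pi f}) */
            re += a[m] * cos(two_pi * f * (double)m);
            im -= a[m] * sin(two_pi * f * (double)m);
        }
        psd[i] = err / (re * re + im * im);  /* all-pole spectrum */
    }
}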
From: Steve Pope on
Rune Allnor <allnor(a)tele.ntnu.no> wrote:

>I found a reference[*] that mentions the difference between
>the Levinson recursion and Burg's algorithm: In what is
>generally recognized as the Levinson recursion, one
>iteratively solves for the coefficients of the predictor
>polynomial, and regards the reflection coefficients as
>internal variables. In what this reference terms "Burg's
>algorithm" one uses the Levinson recursion, but instead of
>storing the predictor polynomial coefficients one stores
>the reflection coefficients for use in a lattice realization
>of the predictor.

You might want to look at the link I posted yesterday, equation (1),
showing "Burg's Method" for the lattice realization.

What I'm unsure of is the relationship of this method to the
"Burg's Method" behind MathWorks' implementation of AR coefficient
or PSD estimation. Maybe they are mathematically related; maybe they
are two different, unrelated formulations concocted by the same
Dr. Burg.
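
For what it's worth, the AR-estimation flavor is usually described as
the recursion below on forward/backward prediction errors, with no
explicit autocorrelation estimate - which matches Therrien's
distinction upthread. A sketch under that usual description (names
and conventions are mine, not MathWorks'):

#include <stdlib.h>

/* Burg recursion: estimate reflection coefficients k[1..p] directly
   from the data x[0..n-1]. Returns 0 on success, -1 on alloc failure. */
int burg_reflections(const double *x, size_t n, size_t p, double *k)
{
    double *f = malloc(n * sizeof *f);   /* forward prediction errors  */
    double *b = malloc(n * sizeof *b);   /* backward prediction errors */
    if (!f || !b) { free(f); free(b); return -1; }
    for (size_t i = 0; i < n; i++)
        f[i] = b[i] = x[i];
    for (size_t m = 1; m <= p; m++) {
        double num = 0.0, den = 0.0;
        for (size_t i = m; i < n; i++) {
            num += f[i] * b[i - 1];
            den += f[i] * f[i] + b[i - 1] * b[i - 1];
        }
        k[m] = -2.0 * num / den;         /* minimizes fwd+bwd error power */
        for (size_t i = n - 1; i >= m; i--) {  /* order update, in place */
            double fi = f[i];
            f[i] = fi + k[m] * b[i - 1];
            b[i] = b[i - 1] + k[m] * fi;
        }
    }
    free(f);
    free(b);
    return 0;
}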

Steve