From: Richard Owlett on
*DT* defined - DIVERGING from 'topic'


Jerry Avins wrote:
> kork wrote:
>>> Tim Wescott wrote:
>>>> On Thu, 04 Feb 2010 09:57:49 -0600, kork wrote:
>>>>
>>>>
>>>>>> kork wrote:
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> I'm going to develop a quality control application that inspects
>>>>>>> recently imported audio files for a number of checks. One of them
>>>>>>> is the detection of counterphase fragments in the file. With
>>>>>>> counterphase I mean a 180 degree (or pi rad, if you prefer) phase
>>>>>>> shift between the two audio channels in the (stereo) file. In a
>>>>>>> radio broadcast of the file this is killing when it is listened to
>>>>>>> through a mono receiver.
>>>>>>>
>>>>>>> I was thinking of subtracting one channel from the other (or
>>>>>>> reversing one channel and adding it to the other), then flagging
>>>>>>> the audio fragments as counterphase when the resulting signal
>>>>>>> differs a lot from zero during a certain amount of time. But since
>>>>>>> it is likely that the two channels are anything but equal, I may
>>>>>>> never get to see a flatline.
>>>>>>>
>>>>>>> I thought maybe you DSP guys can give me some insights on this?
>>>>>>> Maybe there's a test in the frequency domain I could use?
>>>>>> Compute (L+R) and (L-R), rectify, accumulate, compare. It is very
>>>>>> obvious whether the stereo channels are in phase or out of phase.
>>>>>>
>>>>>>
>>>>>> Vladimir Vassilevsky
>>>>>> DSP and Mixed Signal Design Consultant http://www.abvolt.com
>>>>> Hi Vladimir,
>>>>>
>>>>> Thanks for your answer.
>>>>> Would you mind elaborating a bit on the "rectify" and "accumulate"
>>>>> suggestions? They're not such obvious terms to me in this domain.
>>>>> Thanks again.
>>>> "Rectify": take the absolute value.
>>>>
>>>> "Accumulate": sum up a bunch of samples.
>>>>
>>>> Then compare the relative strengths of the L+R and L-R channels --
>>>> normally L-R should be significantly smaller than L+R. In fact, this
>>>> is why the 'wrong' way is a broadcast-killer -- the FM stereo
>>>> broadcast protocol depends on this property, won't work without it,
>>>> etc.
>>>>
>>>> I'll charge you money for answers, too, but only if the question
>>>> takes more than a few lines to answer.
>>> The accumulation should be lossy; i.e., include a "forgetting
>>> factor". Alternatively, you could dump the result after a suitable
>>> time and start over.
>>>
>>> Jerry
>>
>> Thanks Tim and Jerry,
>>
>> I appreciate the jargon explanation.
>> This sounds pretty straightforward to implement. I'll have a go at it.
>>
>> Jerry, your "forgetting factor" sounds logical. I was thinking of just
>> testing separate successive chunks of samples, so I won't have any
>> "memory effect".
>
> That will require counting and branching. Forgetting is actually
> simpler. The convention is that x[n] is the input and y[n] is the
> output. Set y[n+1] = (1-a)*y[n] + a*x[n+1]. For stability, 0 < a < 1.
> Larger values forget faster. This is called an exponential averager.
>
> Jerry
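
For reference, here is a rough sketch in Python of the check suggested
above (NumPy assumed; the function name, the threshold, and the value of
a are just placeholders): form L+R and L-R, rectify, accumulate each with
the exponential averager, and compare the two levels. A stretch where the
L-R level stays above the L+R level is the counterphase material to flag.

import numpy as np

def flag_counterphase(left, right, a=0.01, threshold=1.0):
    """Flag samples where the rectified L-R level dominates L+R."""
    mid  = np.abs(left + right)    # rectified (L+R)
    side = np.abs(left - right)    # rectified (L-R)

    flags = np.zeros(len(left), dtype=bool)
    y_mid = y_side = 0.0
    for n in range(len(left)):
        # exponential averager: y[n+1] = (1-a)*y[n] + a*x[n+1]
        y_mid  = (1.0 - a) * y_mid  + a * mid[n]
        y_side = (1.0 - a) * y_side + a * side[n]
        flags[n] = y_side > threshold * y_mid
    return flags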

Why is this called an "exponential averager"?

I have heard of "boxcar" and "running" averages.
What is/are the difference(s)?
What other averagers exist?

The OP apparently says he is looking at *NON*overlapping chunks
of data. Is there not an *INTRINSIC* forgetting factor in that?
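
For concreteness, a rough Python sketch of the three accumulation
schemes under discussion (function names made up, NumPy arrays assumed).
The "exponential" name comes from the weighting: a sample that arrived k
steps ago contributes with weight a*(1-a)^k, which falls off
exponentially, whereas the non-overlapping block scheme forgets
everything at once at each chunk boundary.

import numpy as np

def block_average(x, block_len):
    # One value per NON-overlapping chunk; memory is dumped at each
    # chunk boundary.
    n_blocks = len(x) // block_len
    return x[:n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)

def running_average(x, window):
    # "Boxcar" (rectangular) window slid one sample at a time, i.e. a
    # moving average.
    return np.convolve(x, np.ones(window) / window, mode="valid")

def exponential_average(x, a):
    # y[n+1] = (1-a)*y[n] + a*x[n+1]; a sample k steps back is weighted
    # by a*(1-a)**k, which decays exponentially -- hence the name.
    y = np.empty(len(x))
    acc = 0.0
    for n, xn in enumerate(x):
        acc = (1.0 - a) * acc + a * xn
        y[n] = acc
    return y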