in [DSP]

From: PhilipOrr on 22 Jul 2010 12:31

> Any truly random noise, i.e. photodetector, ADC, or circuit noise, will
> double in power when you do the subtraction. But if the thing you're
> measuring isn't moving too fast, the measurements will be highly
> correlated, which means that when you do the subtraction the _amplitude_
> will double -- which means that the power will go up by a factor of
> four. So your signal-to-noise ratio should improve.
>
> This is what you do by averaging: if your noise is a white process and
> you add up N samples, the noise power will go up by N (i.e. the expected
> noise amplitude goes up by sqrt(N)). But if your signal isn't moving
> significantly over those N samples, the _signal_ power goes up by N^2.
> Then you divide the sum by N, and you're left with the original signal
> and 1/N of the noise power.
>
> --
> Tim Wescott

Thanks Tim - I needed that distinction explained. In fact that explains a
lot about what I am seeing.
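[Editor's note: Tim's averaging argument is easy to check numerically. A minimal Python sketch; the function name, signal level, noise level, and trial count are all arbitrary choices for illustration, not anything from the thread.]

```python
import math
import random

def residual_noise_after_averaging(n_avg, n_trials=2000,
                                   signal=1.0, noise_std=0.5):
    """Monte-Carlo check of Tim's argument: averaging n_avg samples of
    a (non-moving) signal plus white noise leaves the signal intact and
    shrinks the residual noise amplitude by ~sqrt(n_avg)."""
    random.seed(42)
    estimates = []
    for _ in range(n_trials):
        samples = [signal + random.gauss(0.0, noise_std)
                   for _ in range(n_avg)]
        estimates.append(sum(samples) / n_avg)   # divide the sum by N
    mean = sum(estimates) / n_trials
    var = sum((e - mean) ** 2 for e in estimates) / n_trials
    return math.sqrt(var)   # residual noise amplitude

print(residual_noise_after_averaging(1))    # ~0.5 (the raw noise level)
print(residual_noise_after_averaging(16))   # ~0.125, i.e. 0.5/sqrt(16)
```

With N = 16 the noise amplitude drops by a factor of four, consistent with the 1/N-in-power bookkeeping above.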
From: robert bristow-johnson on 22 Jul 2010 13:26

On Jul 22, 12:02 pm, "PhilipOrr" <philip.orr (a)n_o_s_p_a_m.eee.strath.ac.uk> wrote:
> > in other words, your interleaved samples represent the signal (or its
> > negative) at the "interleaved" times of sampling. right? then,
> > before subtracting, you need to advance one sample (let's say it's the
> > even-indexed sample) *ahead* 1/2 sample and the other sample (the odd
> > one) behind 1/2 sample, so that they represent simultaneous sampling
> > before you subtract one from the other.
> >
> > also, as Tim pointed out, this should take care of any DC bias in your
> > ADC. if the error components are uncorrelated, their power will add
> > (double) and their voltages will increase by sqrt(2). but the
> > legitimate component of your signal (what you want) will double and
> > you will get *slightly* (3 dB) better signal-to-noise ratio.
> >
> > it's not a bad way to do things before the days of sigma-delta
> > converters. i did this myself (a medical instrument with a very slow
> > sampling rate and 12-bit converter) in 1979. have never done anything
> > like it since.
> >
> > r b-j
>
> As for the ADC, I'm putting the photodetector straight into a National
> Instruments PXI analogue input. The processing (de-interleaving,
> subtraction, FFT) is then done in LabVIEW.
>
> At the moment I am not shifting any samples like you say. My understanding
> of DSP is limited, as I'm sure you have noticed. What I am doing is
> constantly sampling but using shift registers to store sample A, then get
> sample B and subtract.
>
> i.e.
>
> Sample - Store as A
> Sample - Store as B -> Subtract B-A
> Sample - Store as A
> Sample - Store as B -> Subtract B-A
>
> (Differential output at fs/2).
>
> or
>
> Sample - Store as A
> Sample - Store as B -> Subtract B-A
> Sample - Store as A -> Subtract B-A
> Sample - Store as B -> Subtract B-A
>
> (Differential output at fs).
>
> As it stands, I don't even know which of the above is more technically
> sound.
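[Editor's note: the two subtraction schemes Philip describes can be sketched as follows. A Python illustration; the function names are ours, and the demo input assumes a chopped constant signal with a made-up DC offset.]

```python
def difference_half_rate(x):
    """Scheme 1: consume samples in pairs (A, B) and emit B - A once
    per pair, so the differential output rate is fs/2."""
    return [x[i + 1] - x[i] for i in range(0, len(x) - 1, 2)]

def difference_full_rate(x):
    """Scheme 2: A is refreshed on even samples, B on odd samples, and
    B - A is emitted after every new sample, so the output rate is fs."""
    out = []
    a = b = None
    for i, s in enumerate(x):
        if i % 2 == 0:
            a = s
        else:
            b = s
        if a is not None and b is not None:
            out.append(b - a)
    return out

# demo: the chopper alternates +S and -S, plus a DC offset of 0.25;
# B - A recovers -2*S in every output and the DC offset cancels
x = [(1.0 if i % 2 == 0 else -1.0) + 0.25 for i in range(8)]
```

Both schemes cancel the DC term; scheme 2 simply emits a (sign-alternating) difference at every sample instead of every other one.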
> I have been trying both.

probably the latter is better. i can't think a lot about this right now,
but i have a question for you: can you tolerate a delay of, say, 8.5
samples for both? you can apply a 16-tap FIR to get a delay of 7.5
samples on A, then when B comes in apply the same FIR, but with the taps
reversed, to get a delay of 8.5 samples on B. but at that time A will
also be delayed 8.5 samples (7.5 plus its half-sample head start) and you
can subtract them coherently. so what delay can you tolerate? and what
is the expected bandwidth of this signal (in comparison to the sampling
rate)? and what computation can you afford to do?

r b-j
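[Editor's note: the fractional-delay trick r b-j describes can be sketched with a windowed-sinc design. One caveat: reversing an N-tap FIR with delay d yields delay (N-1)-d, so the 7.5/8.5 pairing works out exactly with 17 taps rather than 16. This is a hypothetical sketch, not r b-j's actual design.]

```python
import math

def frac_delay_fir(num_taps, delay):
    """Windowed-sinc fractional-delay FIR (Hamming window).  The taps
    sample sinc(n - delay); reversing an N-tap filter with delay d
    gives a filter with delay (N - 1) - d samples."""
    taps = []
    for n in range(num_taps):
        x = n - delay
        sinc = 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (num_taps - 1))
        taps.append(sinc * window)
    return taps

h_a = frac_delay_fir(17, 7.5)   # delay stream A by 7.5 samples
h_b = list(reversed(h_a))       # the same taps reversed: 8.5 samples
```

Only one set of coefficients needs to be stored; the B channel just runs them in the opposite order, and the half-sample sampling offset between the streams is absorbed into the one-sample difference in filter delay.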
From: glen herrmannsfeldt on 22 Jul 2010 14:04

Mark <makolber (a)yahoo.com> wrote: (snip)

> Read up on stereo FM multiplexing. It is an analog system, but the L
> and R channels are "sampled" or interleaved. The ADC QUANTIZES the
> signals. The AA filter is needed when a signal is SAMPLED, and it
> sounds like in your system that happens when the 2 signals are
> "interleaved" into one.

It is usually described as (L+R) in the baseband and (L-R) (or is it
R-L?) on a 38kHz AM-SC subcarrier. With the appropriate amplitude for
the subcarrier, and appropriate band-limiting for the modulating signal,
the result is exactly as you say. (There is a factor of two that I
usually get wrong in trying to explain it, but, yes, that is the way it
works.)

It is necessary to go through the math to figure out what the modulation
index of the result is. If you see it as alternating between L and R,
you can easily see that the amplitude never gets higher than the larger
of L and R. (Not counting the 19kHz 10% pilot signal.) Otherwise, you
might suspect that at some time the two would add such that the result
could be much larger.

> I agree with the others about doing the differential summing in the
> analog domain and pass the result through ONE AA filter and one A/D.

-- glen
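[Editor's note: glen's observation that the composite never exceeds the larger of L and R (pilot aside) can be checked directly. A sketch with made-up constant L and R values; `stereo_mpx` is our own illustrative function.]

```python
import math

def stereo_mpx(l, r, t, f_sub=38000.0):
    """FM stereo composite, pilot omitted: (L+R)/2 in the baseband plus
    (L-R)/2 DSB-SC on the 38 kHz subcarrier.  The composite is linear in
    the subcarrier, so its extremes (cos = +/-1) are exactly L and R."""
    return 0.5 * (l + r) + 0.5 * (l - r) * math.cos(2.0 * math.pi * f_sub * t)

# constant L and R: sweep time and confirm the peak never exceeds
# the larger of |L| and |R|
L, R = 0.9, -0.4
peak = max(abs(stereo_mpx(L, R, k / 1e6)) for k in range(1000))
```

Since the composite interpolates linearly between L and R as the subcarrier swings, the peak equals max(|L|, |R|), just as glen's "alternating between L and R" picture suggests.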
From: glen herrmannsfeldt on 22 Jul 2010 14:12

Tim Wescott <tim (a)seemywebsite.com> wrote: (snip)

> One of the top-ten mistakes that I see in digital systems design is some
> variation of the following statement: "I am sampling, therefore I need
> an anti-alias filter".
>
> You don't, necessarily. See
> http://www.wescottdesign.com/articles/Sampling/sampling.html for details.

I used to wonder about this, as in some cases it isn't easy to see. I
have learned, though, that digital SLRs have an optical anti-alias
filter in front of the CCD sensor. I don't know what one actually looks
like, but they do have one. For cheaper cameras it might be that the
lens resolution is low enough that it isn't needed. (That is, the lens
itself is an anti-alias filter.) (It seems that they don't always filter
appropriately for the low-resolution LCD display on the back of the
camera, though.)

In the analog television camera days, using a scanning electron beam
reading the signal off a silicon sensor, the size of the electron beam
was the anti-alias filter. Well, it won't be as sharp as you might want,
maybe a Gaussian distribution, but it will filter out much of the higher
spatial frequencies.

One that I still wonder about is the filtering needed to display an HDTV
picture with a different number of scan lines than the display's native
resolution. It would seem that one should do an appropriate resampling,
though it likely takes too long with the available hardware.

-- glen
From: Fred Marshall on 22 Jul 2010 16:20
Well, I need a picture / or perhaps a model. Here's what I get from
reading all these posts:

1 analog signal > 1 commutator/inverter/switch/chopper @ 500Hz : 1
channel to 2 > 2 A/D conversions @ 500Hz each > 2 discrete sequences

Is that right?

Jerry and others asked "where is the noise coming from?". That's a key
point, because if it's in the analog signal that's one thing, and if
it's introduced after the commutator/inverter that's another. Also,
others have mentioned the bandwidth of this noise. This is important
because the samples are interleaved / taken at different times. Ditto
for the signal itself.

How about this for a model:

S+Ns > commutator/inverter > S+Ns+Nw > A/D > S+Ns+Nw+Nc

and ignoring the two sequences generated for now:

Ns is inherent in the signal.
Nw is due to switching or is introduced in the switching stage/cabling.
Nc is due to A/D conversion quantization, etc.

We can view each sequence as a decimated version of the original signal,
and one can apply signs and "add" or "subtract" accordingly. Here I'll
not invert the signs and will subtract to get a difference. We have:

S1(n)   + N1s(n)   + N1w(n)   + N1c(n)
S2(n+1) + N2s(n+1) + N2w(n+1) + N2c(n+1)
S1(n+2) + N1s(n+2) + N1w(n+2) + N1c(n+2)
etc.

And we state that S2 == -S1 and N2s == -N1s, and nothing yet about N.w
or N.c. Subtracting:

[S1(n) - S2(n+1)] + [N1s(n) - N2s(n+1)] + [N1w(n) - N2w(n+1)] + [N1c(n) - N2c(n+1)]

So it should be clear enough:

- At low frequencies S1 - S2 ~ 2Sx, a 6dB increase.
- At low frequencies N1s - N2s ~ 2Nx, a 6dB increase.
- At low frequencies in N1w - N2w, common-mode noise is reduced -
  perhaps to near zero, as Jerry points out. Random noise in N1w - N2w
  is increased by ~3dB.
- At low frequencies in N1c - N2c, common-mode noise is reduced -
  perhaps to near zero. Random noise in N1c - N2c is increased by ~3dB,
  which is where the quantization noise is.

Bottom line is that some noises are reduced and others are increased.
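[Editor's note: Fred's model can be simulated to show the split between common-mode and uncorrelated noise. A sketch; the function name and all noise levels are arbitrary.]

```python
import random

def chopped_difference(n_pairs, s, common_std, random_std, seed=1):
    """Simulate Fred's model: the chopper alternates +S, -S.  'Common'
    noise takes the same value on both samples of a pair (e.g. a slow
    drift or DC offset) and cancels in B - A, while uncorrelated noise
    is independent on each sample and adds in power (doubles)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_pairs):
        nc = rng.gauss(0.0, common_std)            # shared by A and B
        a = +s + nc + rng.gauss(0.0, random_std)   # even sample
        b = -s + nc + rng.gauss(0.0, random_std)   # odd sample
        diffs.append(b - a)                        # -> -2S + random part
    return diffs
```

The mean of the differences comes out near -2S regardless of the common-mode noise level (it cancels), while the variance comes out near 2 * random_std**2 (the uncorrelated noise power doubles), matching Fred's bookkeeping.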
So the performance depends a lot on where the noise is introduced - as
Jerry and, I think, others have mentioned. The original SNR is never
improved, only made worse going through any system - that is, without
filtering.

Common-mode noise after the inversions can be reduced as long as there
is reasonably low phase shift from one sample to the next. I believe the
temporal measure here is 1msec and not 2msec. So maybe there'd be OK
reduction up to around 100Hz?? Above that frequency the phase
differences will reduce the effectiveness of differencing.

Fred
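[Editor's note: Fred's frequency estimate can be sanity-checked. For a common-mode tone at frequency f, differencing two samples taken dt seconds apart leaves a residual amplitude of |e^(j*2*pi*f*dt) - 1| = 2*|sin(pi*f*dt)|. A sketch assuming dt = 1 ms, per Fred's 1 msec figure.]

```python
import math

def cm_residual(f_hz, dt_s=1e-3):
    """Residual common-mode amplitude after differencing two samples
    taken dt_s seconds apart: |e^(j*2*pi*f*dt) - 1| = 2*|sin(pi*f*dt)|.
    Small at low frequencies, approaching 2 (no rejection) as f*dt -> 1/2."""
    return 2.0 * abs(math.sin(math.pi * f_hz * dt_s))

print(cm_residual(10.0))    # ~0.063, about 24 dB of rejection
print(cm_residual(100.0))   # ~0.618, only about 4 dB
```

So with a 1 ms spacing the common-mode rejection is strong well below 100 Hz and has largely fallen off by 100 Hz, consistent with Fred's estimate of where differencing stops helping.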