From: glen herrmannsfeldt on
Tauno Voipio <tauno.voipio(a)notused.fi.invalid> wrote:
> On 18.6.10 2:04 , gpezzella wrote:

>> I will try to explain better and if there are errors,
>> please correct me.

>> My goal is to acquire a very low frequency, low voltage
>> signal (5 Hz-50 Hz, 1 mV-2 mV) with a 10-bit ADC.

> EKG/ECG/EEG signal?

> Please note that the built-in A/D converters of small processors
> (like ATTiny) are far too noisy for this kind of work. If you have
> biological signals, you need a proper pre-amplifier, and you would
> be much better off with a separate A/D converter.

You might be able to use the statistical technique of
signal averaging. If you make many measurements of a signal
that has a random error (noise) component and average those
values, you decrease the noise by a factor of sqrt(N)
(where N is the number of points averaged).

It is a little more complicated in the case of a time-varying
signal, but it can still be done. This relies on either the
signal changing much more slowly than the A/D conversion time
or its being periodic, so that you can make repeated
measurements.
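
A minimal sketch of the idea in C (untested, and read_adc() is
just a placeholder name for whatever 10-bit conversion routine
your part provides):

#include <stdint.h>

extern uint16_t read_adc(void);  /* hypothetical 10-bit read, 0..1023 */

/* Sum 64 conversions and shift right by 3: dividing by 8 rather
 * than 64 keeps three of the averaged-in bits, giving a 13-bit
 * result (0..8184). 64 samples cut random noise by sqrt(64) = 8,
 * i.e. the three bits gained. */
uint16_t read_avg_13bit(void)
{
    uint32_t sum = 0;
    uint8_t i;

    for (i = 0; i < 64; i++)
        sum += read_adc();

    return (uint16_t)(sum >> 3);
}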

-- glen

From: glen herrmannsfeldt on
Jerry Avins <jya(a)ieee.org> wrote:
(big snip)

> > 1) My "Virtual Sample Rate" is 152 HZ and hence Filter f-cut = [0 -
> > 0.5] * 152

> What is virtual sample rate? The reference you cite below deals with
> windowed-sinc filters, not resolution or accuracy.
(snip)

> This isn't going to work. Other conditions being met, the precision
> gained by averaging increases with the square root of the number of
> measurements. To increase the precision eight times (three bits' worth)
> you need to average 64 (8^2) measurements.

As I wrote just before reading this post: averaging does work
for reducing the effects of random noise, which in many cases
will be present. With slow signals and a fast A/D it shouldn't
be hard to do.

> Even at that rate, it probably won't work on your processor.
> To get 13-bit results from a 10-bit ADC, the converter needs
> to slice accurately at 13-bit thresholds.

Well, now it is the systematic error question.

There is the still-used technique of using a smaller A/D converter,
sending the result through a D/A converter, doing an analog
subtraction, and then doing an A/D conversion on the difference.
That does depend on the first A/D's thresholds being accurate.

I believe, though, that you don't need 13-bit-accurate
thresholds if you know the (inaccurate) thresholds to 13 bits.
That is, you correct for the systematic error in the appropriately
averaged (and dithered) result.
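
A sketch of that correction in C (the calibration table is
hypothetical -- it would be filled in once against an accurate
reference and stored in ROM):

#include <stdint.h>

extern uint16_t read_adc(void);          /* hypothetical 10-bit read */

/* Assumed calibration data: for each 10-bit output code, the
 * 13-bit value that code was actually measured to represent. */
extern const uint16_t cal_table[1024];

/* Map each raw code through the table, then average 64 corrected
 * values so the dither interpolates between the known (but
 * unevenly spaced) thresholds. */
uint16_t read_corrected_13bit(void)
{
    uint32_t sum = 0;
    uint8_t i;

    for (i = 0; i < 64; i++)
        sum += cal_table[read_adc() & 0x03FF];

    return (uint16_t)(sum >> 6);
}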

Not exactly the same, but this reminds me of how I understand
image sensors are used. As it is difficult to make all the
pixels in a large sensor (such as in a digital camera) exactly
the same, the manufacturer instead supplies a table (ROM) giving
the systematic error in each pixel, and the resulting image is
corrected.

> Any converter on a processor chip is most unlikely to be
> that good. If the spec sheet doesn't say it is, it isn't.
> There are other considerations, but one thing at a time.

-- glen
From: Tim Wescott on
On 06/18/2010 12:29 PM, glen herrmannsfeldt wrote:
> Jerry Avins<jya(a)ieee.org> wrote:
> (big snip)
>
>>> 1) My "Virtual Sample Rate" is 152 HZ and hence Filter f-cut = [0 -
>>> 0.5] * 152
>
>> What is virtual sample rate? The reference you cite below deals with
>> windowed-sinc filters, not resolution or accuracy.
> (snip)
>
>> This isn't going to work. Other conditions being met, the precision
>> gained by averaging increases with the square root of the number of
>> measurements. To increase the precision eight times (three bits worth)
>> you need to average 64 (8^2) measurements.
>
> As I just wrote before reading this post. That does work for
> reducing the effects of random noise. In many cases that will
> be a problem. With slow signals and fast A/D it shouldn't be
> hard to do.

With a 10-bit ADC you won't necessarily get enough noise to linearize
the ADC response -- you may need to add a dither signal to the input
so that averaging can actually resolve below the quantization steps.
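
Something along these lines, as a sketch only -- the spare
PWM/DAC channel (set_dither_out) and the resistor network summing
roughly one LSB peak-to-peak of dither into the ADC input are
assumptions, not something every small part has:

#include <stdint.h>

extern uint16_t read_adc(void);            /* hypothetical 10-bit read */
extern void set_dither_out(uint8_t code);  /* hypothetical dither output */

/* 8-bit maximal-length Galois LFSR for the dither sequence. */
static uint8_t next_dither(void)
{
    static uint8_t lfsr = 0xA5;
    lfsr = (uint8_t)((lfsr >> 1) ^ ((-(lfsr & 1)) & 0xB8));
    return lfsr;
}

/* Non-subtractive dither plus averaging: each conversion sees a
 * fresh pseudo-random offset, so codes near a threshold toggle
 * and the average resolves below one raw LSB. AC-couple the
 * dither or subtract its mean so it doesn't bias the result. */
uint16_t read_dithered_13bit(void)
{
    uint32_t sum = 0;
    uint8_t i;

    for (i = 0; i < 64; i++) {
        set_dither_out(next_dither());
        sum += read_adc();
    }
    return (uint16_t)(sum >> 3);
}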

>> Even at that rate, it probably won't work on your processor.
>> To get 13-bit results from a 10-bit ADC, the converter needs
>> to slice accurately at 13-bit thresholds.
>
> Well, now it is the systematic error question.
>
> There is the still-used technique of using a smaller A/D converter,
> sending the result through a D/A converter, doing an analog
> subtraction, and then doing an A/D conversion on the difference.
> That does depend on the first A/D's thresholds being accurate.
>
> I believe, though, that you don't need 13-bit-accurate
> thresholds if you know the (inaccurate) thresholds to 13 bits.
> That is, you correct for the systematic error in the appropriately
> averaged (and dithered) result.

True, but after you get your ADC calibrated today and at room
temperature, who's to say if it'll have the same thresholds tomorrow, or
at a different temperature?

> Not exactly the same, but this reminds me of how I understand
> image sensors are used. As it is difficult to make all the
> pixels in a large sensor (such as in a digital camera) exactly
> the same, the manufacturer instead supplies a table (ROM) giving
> the systematic error in each pixel, and the resulting image is
> corrected.

With the focal plane arrays used in infra-red imaging the correction is
for the gain and offset of each pixel. Pro video cameras do this, too,
but pro video camera makers don't like to fess up to it.
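
In code the fix is one multiply-accumulate per pixel -- a sketch
with made-up table names, assuming 12-bit raw data and a Q8.8
fixed-point gain:

#include <stdint.h>

#define NPIX 4096  /* illustrative pixel count */

/* Hypothetical factory calibration stored in ROM:
 * corrected = (raw - offset) * gain, gain in Q8.8. */
extern const int16_t  pix_offset[NPIX];
extern const uint16_t pix_gain_q88[NPIX];

void flat_field(const uint16_t *raw, uint16_t *out)
{
    uint32_t i;

    for (i = 0; i < NPIX; i++) {
        int32_t v = (int32_t)raw[i] - pix_offset[i];
        if (v < 0)
            v = 0;  /* clamp below the black level */
        /* 12-bit v times 16-bit gain fits comfortably in 32 bits */
        out[i] = (uint16_t)(((uint32_t)v * pix_gain_q88[i]) >> 8);
    }
}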

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
From: Jerry Avins on
On 6/18/2010 3:29 PM, glen herrmannsfeldt wrote:
> Jerry Avins<jya(a)ieee.org> wrote:
> (big snip)
>
>>> 1) My "Virtual Sample Rate" is 152 HZ and hence Filter f-cut = [0 -
>>> 0.5] * 152
>
>> What is virtual sample rate? The reference you cite below deals with
>> windowed-sinc filters, not resolution or accuracy.
> (snip)
>
>> This isn't going to work. Other conditions being met, the precision
>> gained by averaging increases with the square root of the number of
>> measurements. To increase the precision eight times (three bits worth)
>> you need to average 64 (8^2) measurements.
>
> As I just wrote before reading this post. That does work for
> reducing the effects of random noise. In many cases that will
> be a problem. With slow signals and fast A/D it shouldn't be
> hard to do.
>
>> Even at that rate, it probably won't work on your processor.
>> To get 13-bit results from a 10-bit ADC, the converter needs
>> to slice accurately at 13-bit thresholds.
>
> Well, now it is the systematic error question.
>
> There is the still-used technique of using a smaller A/D converter,
> sending the result through a D/A converter, doing an analog
> subtraction, and then doing an A/D conversion on the difference.
> That does depend on the first A/D's thresholds being accurate.
>
> I believe, though, that you don't need 13-bit-accurate
> thresholds if you know the (inaccurate) thresholds to 13 bits.
> That is, you correct for the systematic error in the appropriately
> averaged (and dithered) result.
>
> Not exactly the same, but this reminds me of how I understand
> image sensors are used. As it is difficult to make all the
> pixels in a large sensor (such as in a digital camera) exactly
> the same, the manufacturer instead supplies a table (ROM) giving
> the systematic error in each pixel, and the resulting image is
> corrected.
>
>> Any converter on a processor chip is most unlikely to be
>> that good. If the spec sheet doesn't say it is, it isn't.
>> There are other considerations, but one thing at a time.

I've been away for a long weekend, and I had a disturbing thought about
the whole process. Thirteen bits of resolution is not a necessity for
the purpose, so I have to assume that the OP wants them for added
sensitivity. If his signal is so small that the ADC calls every sample
zero, it doesn't matter how many samples are averaged. A preamplifier
with a gain of a few hundred is likely what is needed.
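
To put numbers on it: with, say, a 5 V reference, one LSB of a
10-bit converter is 5 V / 1024, about 4.9 mV, so a 1-2 mV signal
never crosses even the first threshold. A gain of 500 turns 2 mV
into 1 V -- roughly 200 LSBs -- which averaging can then usefully
refine.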

Jerry
--
Engineering is the art of making what you want from things you can get.