From: Les Cargill on
robert bristow-johnson wrote:
> On Mar 10, 8:03 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>> robert bristow-johnson wrote:
>>> On Mar 10, 6:05 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>>>> Jerry Avins wrote:
>>> ...
>>>>> Positive/negative symmetry in a transfer function precludes even
>>>>> harmonics,
>>> i would say that even symmetry causes even harmonics and odd symmetry
>>> generates only odd harmonics.
>>> not sure what you meant, Jerry, by positive/negative symmetry. (as
>>> opposed to "same sign"? okay, you meant odd.)
>>>> I am mostly experimenting. And rather than buy parts and
>>>> assembling circuits, for some reason trying to do it DSP seemed
>>>> appealing.
>>>> I may also wander off and try the DFT approach.
>>> i wouldn't recommend it.
>>> On Mar 10, 6:06 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>>>> I also found that if you run a long enough string of 'em,
>>>> they generate a sync pulse. Talk about generating an
>>>> identity the hard way...
>>> do you mean
>>> { 1/N for 1<=n<=N
>>> a[n] = {
>>> { 0 otherwise?
>>> and then you input
>>> x(t) = (A-1)/2 + ((A+1/2)*cos(w0*t) ?
>>> let A >= 1.
>>> that's what you mean?
>>> r b-j
>> Yarg... "sinc", not "sync".
>>
>> I used the T(a,sample) = cos(a*sample) form. If
>> you apply V = T(1,V) + T(2,V)... T(n,V) where
>> V is initially a sine wave, it produces a sinc function.
>> The complexity of this is O(n*M), (where M is the number
>> of samples) so it's very inefficient, but it's interesting.
>
> Les,
>
> do you know what it's like to be a teeny bit rigorous? whatever you
> wrote above makes no sense at all. do you mean to say
>
> Y = T(1,V) + T(2,V)... T(n,V) ?
>
> and if V is a sine wave, then T(a,V)=cos(a*V) has nothing to do with
> Tchebyshev, maybe Bessel, but not Tchebyshev.
>

There was a typo - that should have read Tn(x) = cos(n * arccos(x))
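
A quick sanity check, if anyone wants one - a little numpy sketch (the
names are mine, nothing standard) showing Tn turning a sinusoid into
its n-th harmonic:

import numpy as np

def cheby_T(n, x):
    # Tn(x) = cos(n * arccos(x)), valid for -1 <= x <= 1
    return np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.cos(2 * np.pi * 3.0 * t)            # a sinusoid at 3 Hz
y = cheby_T(5, x)                          # should come out as cos(2*pi*15*t)
print(np.max(np.abs(y - np.cos(2 * np.pi * 15.0 * t))))   # ~0 (roundoff)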

> i dunno how you do it, but i actually can't make *anything* work,
> unless i'm rigorous. because reality requires it to be right. the
> DSP or computer that you program doesn't say "oh yeah, i know what you
> mean...", it only does what you code it to do.
>
> so Vlad, maybe you were right.

I don't sit around doing Taylor series before breakfast, if that's what
you mean.

>
> r b-j

--
Les Cargill
From: robert bristow-johnson on
On Mar 11, 7:10 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
> robert bristow-johnson wrote:
> > On Mar 10, 8:03 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
> >> robert bristow-johnson wrote:
> >>> On Mar 10, 6:05 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
> >>>> Jerry Avins wrote:
> >>> ...
> >>>>> Positive/negative symmetry in a transfer function precludes even
> >>>>> harmonics,
> >>> i would say that even symmetry causes even harmonics and odd symmetry
> >>> generates only odd harmonics.
> >>> not sure what you meant, Jerry, by positive/negative symmetry.  (as
> >>> opposed to "same sign"?  okay, you meant odd.)
> >>>> I am mostly experimenting. And rather than buy parts and
> >>>> assembling circuits, for some reason trying to do it DSP seemed
> >>>> appealing.
> >>>> I may also wander off and try the DFT approach.
> >>> i wouldn't recommend it.
> >>> On Mar 10, 6:06 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
> >>>> I also found that if you run a long enough string of 'em,
> >>>> they generate a sync pulse. Talk about generating an
> >>>> identity the hard way...
> >>> do you mean
> >>>          { 1/N    for   1<=n<=N
> >>>   a[n] = {
> >>>          { 0      otherwise?
> >>> and then you input
> >>>   x(t) = (A-1)/2 + ((A+1/2)*cos(w0*t)   ?
> >>> let A >= 1.
> >>> that's what you mean?
> >>> r b-j
> >> Yarg... "sinc", not "sync".
>
> >> I used the T(a,sample) = cos(a*sample) form. If
> >> you apply V = T(1,V) + T(2,V)... T(n,V) where
> >> V is initially a sine wave, it produces a sinc function.
> >> The complexity of this is O(n*M), (where M is the number
> >> of samples) so it's very inefficient, but it's interesting.
>
> > Les,
>
> > do you know what it's like to be a teeny bit rigorous?  whatever you
> > wrote above makes no sense at all.  do you mean to say
>
> >     Y = T(1,V) + T(2,V)... T(n,V) ?
>
> > and if V is a sine wave, then T(a,V)=cos(a*V) has nothing to do with
> > Tchebyshev, maybe Bessel, but not Tchebyshev.
>
> There was a typo - that should have read Tn(x) = cos(n * arccos(x))
>
> > i dunno how you do it, but i actually can't make *anything* work,
> > unless i'm rigorous.  because reality requires it to be right.  the
> > DSP or computer that you program doesn't say "oh yeah, i know what you
> > mean...", it only does what you code it to do.
>
....
> I don't sit around doing Taylor series before breakfast, if that's what
> you mean.

it's not what i mean, but it doesn't hurt to do Taylor series before
breakfast, unless one is already late for work.

it's more about communicating *clearly* an idea, rather than making
the people reading your copy *guess* at what you mean. we know that,
when programming, computers don't guess at what the programmer means
but do only what the program says.

saying "If you apply V = T(1,V) + T(2,V)... T(n,V) where V is
initially a sine wave, it produces a sinc function" is pretty
confusing for those of us reading it (until we figger out it just
can't be right, like having "V" on both sides. also, the sinc()
function is not periodic, but driving a mapping function with a
sinusoid *is* periodic.

i dunno if it's true for sure, but now that i *think* i understand
what you're talking about, i can posit it as:

let

sinc(x) = sin(pi*x)/(pi*x)

       +inf
v(t) = SUM{ sinc( N*f0*(t - k/f0) ) }
       k=-inf

that's a *periodic* sinc() function with period 1/f0 (f0 is the
fundamental).


let x(t) = cos(2*pi*f0*t) (a sinusoid with f0 as its frequency)


Is it true (or is it not true) that

       N-1
v(t) = SUM{ 1/N * T[n]( x(t) ) }
       n=0

where T[n](x) = cos( n*arccos(x) ). (the scaling of 1/N is necessary
so that v(0) = 1 in both cases.)
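
a brute-force numerical comparison (just a sketch, assuming numpy, with
the infinite sinc sum truncated to a few hundred terms; i haven't
actually run this) would be something like:

import numpy as np

f0 = 1.0
N  = 8
t  = np.linspace(-1.0, 1.0, 2001)

# left-hand side: the periodic sinc, with the k = -inf..+inf sum truncated
# (np.sinc is the sin(pi*x)/(pi*x) version, same as defined above)
v_sinc = sum(np.sinc(N * f0 * (t - k / f0)) for k in range(-200, 201))

# right-hand side: the 1/N-scaled Tchebyshev sum driven by a sinusoid
x = np.cos(2 * np.pi * f0 * t)
v_cheb = sum(np.cos(n * np.arccos(np.clip(x, -1.0, 1.0))) / N
             for n in range(0, N))

print(np.max(np.abs(v_sinc - v_cheb)))    # small only if the identity holds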

can someone (including Les) say definitively if that is true or not?
maybe it's true if the "N" in both equations are not the same N but
somehow related to each other.

i would be interested in knowing whether this is true or not. i can
kinda see it in the frequency domain, but i am worried that the DC
component of v(t) is twice the magnitude of the other frequency
components. but the problem statement needs to be clear, just so that
others have some idea what one is even talking about.

r b-j
From: Les Cargill on
robert bristow-johnson wrote:
> On Mar 11, 7:10 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>> robert bristow-johnson wrote:
>>> On Mar 10, 8:03 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>>>> robert bristow-johnson wrote:
>>>>> On Mar 10, 6:05 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>>>>>> Jerry Avins wrote:
>>>>> ...
>>>>>>> Positive/negative symmetry in a transfer function precludes even
>>>>>>> harmonics,
>>>>> i would say that even symmetry causes even harmonics and odd symmetry
>>>>> generates only odd harmonics.
>>>>> not sure what you meant, Jerry, by positive/negative symmetry. (as
>>>>> opposed to "same sign"? okay, you meant odd.)
>>>>>> I am mostly experimenting. And rather than buy parts and
>>>>>> assembling circuits, for some reason trying to do it DSP seemed
>>>>>> appealing.
>>>>>> I may also wander off and try the DFT approach.
>>>>> i wouldn't recommend it.
>>>>> On Mar 10, 6:06 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>>>>>> I also found that if you run a long enough string of 'em,
>>>>>> they generate a sync pulse. Talk about generating an
>>>>>> identity the hard way...
>>>>> do you mean
>>>>> { 1/N for 1<=n<=N
>>>>> a[n] = {
>>>>> { 0 otherwise?
>>>>> and then you input
>>>>> x(t) = (A-1)/2 + ((A+1/2)*cos(w0*t) ?
>>>>> let A >= 1.
>>>>> that's what you mean?
>>>>> r b-j
>>>> Yarg... "sinc", not "sync".
>>>> I used the T(a,sample) = cos(a*sample) form. If
>>>> you apply V = T(1,V) + T(2,V)... T(n,V) where
>>>> V is initially a sine wave, it produces a sinc function.
>>>> The complexity of this is O(n*M), (where M is the number
>>>> of samples) so it's very inefficient, but it's interesting.
>>> Les,
>>> do you know what it's like to be a teeny bit rigorous? whatever you
>>> wrote above makes no sense at all. do you mean to say
>>> Y = T(1,V) + T(2,V)... T(n,V) ?
>>> and if V is a sine wave, then T(a,V)=cos(a*V) has nothing to do with
>>> Tchebyshev, maybe Bessel, but not Tchebyshev.
>> There was a typo - that should have read Tn(x) = cos(n * arccos(x))
>>
>>> i dunno how you do it, but i actually can't make *anything* work,
>>> unless i'm rigorous. because reality requires it to be right. the
>>> DSP or computer that you program doesn't say "oh yeah, i know what you
>>> mean...", it only does what you code it to do.
> ...
>> I don't sit around doing Taylor series before breakfast, if that's what
>> you mean.
>
> it's not what i mean, but it doesn't hurt to do Taylor series before
> breakfast, unless one is already late for work.
>
> it's more about communicating *clearly* an idea, rather than making
> the people reading your copy *guess* at what you mean. we know that,
> when programming, computers don't guess at what the programmer means
> but do only what the program says.
>
> saying "If you apply V = T(1,V) + T(2,V)... T(n,V) where V is
> initially a sine wave, it produces a sinc function" is pretty
> confusing for those of us reading it (until we figger out it just
> can't be right, like having "V" on both sides. also, the sinc()
> function is not periodic, but driving a mapping function with a
> sinusoid *is* periodic.
>
> i dunno if it's true for sure, but now that i *think* i understand
> what you're talking about, i can posit it as:
>
> let
>
> sinc(x) = sin(pi*x)/(pi*x)
>
> +inf
> v(t) = SUM{ sinc( N*f0*(t - k/f0) ) }
> k=-inf
>
> that's a *periodic* sinc() function with period 1/f0 (f0 is the
> fundamental).
>
>
> let x(t) = cos(2*pi*f0*t) (a sinusoid with f0 as it's frequency)
>
>
> Is it true (or is it not true) that
>
> N-1
> v(t) = SUM{ 1/N * T[n]( x(t) ) }
> n=0
>
> where T[n](x) = cos( n*arccos(x) ). (the scaling of 1/N is necessary
> so that v(0) = 1 in both cases.)
>
> can someone (including Les) say definitively if that is true or not?

Bless you Robert, but no, I cannot say. I tried several
variations; all came to nought. It's been too long,
and I am rusty. Somebody more in practice than I could
probably do it; I just noticed a pattern, which is
not the same as making a statement.

Y'know, though - going through all those trig identities reaffirms
something for me. But I could not make a cogent chess game of it. I do
other things. You can't go home again. This is ok.

To Vlad: you betcha I'm a lamer. Oh yeah. Euler be praised. I
had to make a living. If it were in your domain of expertise, I'd
support you to the Nth.

Humans network. That is the truth of it. This is a good thing. And
Robert Bristow networks better than most. It is sad that he
has such sorry clay as me to work with.

> maybe it's true if the "N" in both equations are not the same N but
> somehow related to each other.
>

If they need be related, I expect they could be made so.

> i would be interested in knowing that this is true or not. i can
> kinda see it in the frequency domain, but i am worried that the DC
> component of v(t) is twice the magnitude of the other frequency
> components.

I don't fully understand the system, but I had formed the illusion
that excluding v(0) pretty much eliminated DC. But phase and me
have a very tenuous understanding.

> but the problem statement needs to be clear, just so that
> others have some idea what one is even talking about.
>

That is nice when you can do it. My pattern matcher works overtime.
I am sorry if this has been a problem. In real life, it's
mostly not a problem; errors are treated as puns and have
a great deal of social benefit. I am glad that I just have
to make things work from definitions and don't have to bet
my identity on everything I say.



> r b-j

--
Les Cargill
From: robert bristow-johnson on
On Mar 11, 10:11 pm, Les Cargill <lcargil...(a)comcast.net> wrote:

> Humans network. That is the truth of it. This is a good thing. And
> Robert Bristow networks better than most. It is sad that he
> has such sorry clay as me to work with.

listen, i was picking on you because you said one thing of substance
that was intriguing (run a sinusoid through Tchebyshev polynomials and
you get a sinc() function) and then the math (which is s'posed to
elucidate) that you followed up with made me waste a half hour trying
to figure out just what you were saying. i use math to spell things
out when words just won't do.

Vlad can be sorta raunchy at times, but i have gotten used to him.
and i don't think you're a lamer.

> > maybe it's true if the "N" in both equations are not the same N but
> > somehow related to each other.
>
> If they need be related, I expect they could be made so.
>
> > i would be interested in knowing that this is true or not.  i can
> > kinda see it in the frequency domain, but i am worried that the DC
> > component of v(t) is twice the magnitude of the other frequency
> > components.
>
> I don't fully understand the system, but I had formed the illusion
> that excluding v(0) pretty much eliminated DC. But phase and me
> have a very tenuous understanding.

here is my understanding. i can only equate the two time-domain
expressions because i trust that the Fourier Transform is a
"bijective" or invertible operation. that is, if the Fourier
Transforms of x(t) and y(t) are equal, then i trust that x(t) and y(t)
are equal.

so here's how it goes:

we know that the periodic "dirac comb" function (with period 1/f0) in
the time domain transforms to another dirac comb in the frequency
domain, but i'm not gonna prove it.

              +inf
d(t) = 1/f0 * SUM{ delta(t - n/f0) }
              n=-inf


has for a Fourier Transform:


       +inf
D(f) = SUM{ delta(f - k*f0) }
       k=-inf


f0 is the fundamental frequency of this periodic function. in the
frequency domain, there is a spike (a dirac impulse) of height=1
spaced apart from its neighbor by f0.
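
if anyone wants to convince themselves of that comb <-> comb fact
without a proof, the discrete-time analogue is easy to poke at with an
FFT. a little numpy sketch (names are mine, nothing standard):

import numpy as np

M = 64                        # samples between impulses (the "period")
P = 8                         # number of periods in the record
d = np.zeros(M * P)
d[::M] = 1.0                  # a discrete impulse train
D = np.fft.fft(d)
# energy shows up only at bins 0, P, 2P, ... : a comb in frequency too
print(np.nonzero(np.abs(D) > 1e-9)[0])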

now we run that impulse train, d(t), through an ideal brickwall low-
pass filter with frequency response:


H(f) = 1/(2N) * rect( f/((2N+1)*f0) )

where

          { 1     |u| < 1/2
rect(u) = {
          { 0     |u| > 1/2


the rect() function is centered at u=0 and has a height of 1 and a
width of 1. rect(1/2) is usually defined as 1/2.

H(f) has gain of 1 in the passband and the passband width is (2N+1)*f0
(encompassing both positive and negative f). that means there are
2N+1 spikes in the frequency domain that are passed (with gain 1/(2N))
and all the rest are killed. one dirac spike at DC, N spikes in
positive frequency and N spikes in negative frequency.

the output spectrum is

                              N
Y(f) = H(f) * D(f) = 1/(2N) * SUM{ delta(f - k*f0) }
                              k=-N

the impulse response of that filter is:

h(t) = 1/(2N) * (2N+1)*f0 * sinc((2N+1)*f0*t)


so the time domain output is:


              +inf
y(t) = 1/f0 * SUM{ h(t - n/f0) }
              n=-inf


                               +inf
     = 1/(2N*f0) * (2N+1)*f0 * SUM{ sinc((2N+1)*f0*(t - n/f0)) }
                               n=-inf


                     +inf
     = (2N+1)/(2N) * SUM{ sinc((2N+1)*(f0*t - n)) }
                     n=-inf


so there are your sinc functions.

taking another look at the spectrum of the output:


                N
Y(f) = 1/(2N) * SUM{ delta(f - k*f0) }
                k=-N

                       N
     = delta(f)/(2N) + SUM{ (delta(f-k*f0) + delta(f+k*f0))/(2N) }
                       k=1

converting that spectrum back to a time-domain function (each pair
delta(f-k*f0) + delta(f+k*f0) inverse-transforms to 2*cos(2*pi*k*f0*t),
so with the 1/(2N) scaling each harmonic gets a coefficient of 1/N)
gets:

                N
y(t) = 1/(2N) + SUM{ 1/N * cos(2*pi*k*f0*t) }
                k=1


which, by the Tchebyshev identity cos(k*u) = T[k](cos(u)), is the same as


                N
y(t) = 1/(2N) + SUM{ 1/N * T[k](cos(2*pi*f0*t)) }
                k=1



where T[k](x) = cos(k * arccos(x))

because of the DC term, y(0) = 1 + 1/(2N). it appears that when t=0
the sinc() functions have to add to 1 (which gets scaled by the same
1+1/(2N)).
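
and a numerical spot-check of that final identity (just a sketch,
assuming numpy, with the infinite sinc-train sum truncated) could go
like:

import numpy as np

f0 = 1.0
N  = 6
t  = np.linspace(-1.5, 1.5, 3001)          # a few periods, includes t = 0

# sinc-train form: (2N+1)/(2N) * SUM_n sinc((2N+1)*(f0*t - n)), truncated
y_sinc = (2.0*N + 1) / (2.0*N) * sum(np.sinc((2*N + 1) * (f0 * t - n))
                                     for n in range(-300, 301))

# Tchebyshev form: 1/(2N) + SUM_{k=1..N} (1/N) * T[k](cos(2*pi*f0*t))
x = np.cos(2 * np.pi * f0 * t)
y_cheb = 1.0/(2*N) + sum(np.cos(k * np.arccos(np.clip(x, -1.0, 1.0))) / N
                         for k in range(1, N + 1))

print(np.max(np.abs(y_sinc - y_cheb)))     # ~0, up to the truncation error
print(y_cheb[np.argmin(np.abs(t))])        # should come out near 1 + 1/(2N)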

i realize that the frequency-domain look at Y(f) would be the same if
the width of the brickwall filter was anything strictly bigger than
(2*N)*f0 and strictly smaller than (2*N+2)*f0, which would change the
scaling inside the sinc() functions accordingly.

this is the only way i know how to look at this. i'd be happy if Clay
or Vlad or Jerry or Dale or Robert O or anyone would check it over.

r b-j
From: Les Cargill on
robert bristow-johnson wrote:
> On Mar 11, 10:11 pm, Les Cargill <lcargil...(a)comcast.net> wrote:
>
>> Humans network. That is the truth of it. This is a good thing. And
>> Robert Bristow networks better than most. It is sad that he
>> has such sorry clay as me to work with.
>
> listen, i was picking on you for saying one thing of substance and
> that was intriguing (run sinusoid through Tchebyshev polynomials and
> you get a sinc() function) and then the math (which is s'pose to
> illucidate) that you followed up with made me waste a half hour trying
> to figure out just what you were saying. i use math to spell things
> out when words just won;t do.
>

Well, I apologize. I was actually online with Comcast at the same time
(my cable box has lost its mind) and was distracted. We finally
got an answer - the cable box's FLASH went out. K'phooey. And I
had a window full of 'C' code up, messing with that.

I really am chagrined that it cost you half an hour. Sorry about
that. <rends garment; heaps ashes on head>....

:)

> Vlad can be sorta raunchy at times, but i have gotten used to him.
> and i don't think you're a lamer.
>

Well, thank you. When it comes to DSP, being called a "lamer" isn't
that inappropriate. I am pretty early in the learning curve. For
explanatory purposes, I could not exactly say *why* I was investigating
this. It was partly as Vlad said - to see what it sounds like - but
also something else. But I really don't know why. I'm not set
to solve a specific problem.

I kind of expected to find a slightly bored presentation of
how to do this on the Web, but nobody wants to *produce*
distortion, outside of NDA-intensive climes like Line6. Because
if you can *produce* distortion, you can invert it and feed it back...

And yes, I am Guitarded:
http://home.texoma.net/~flhh/guitarded.jpg
(weird - Texoma.net was my first ISP - wonder what
Larry Vaden is up to these days...)

>>> maybe it's true if the "N" in both equations are not the same N but
>>> somehow related to each other.
>> If they need be related, I expect they could be made so.
>>
>>> i would be interested in knowing that this is true or not. i can
>>> kinda see it in the frequency domain, but i am worried that the DC
>>> component of v(t) is twice the magnitude of the other frequency
>>> components.
>> I don't fully understand the system, but I had formed the illusion
>> that excluding v(0) pretty much eliminated DC. But phase and me
>> have a very tenuous understanding.
>
> here is my understanding. i can only equate the two time-domain
> expressions because i trust that the Fourier Transform is a
> "bijective" or invertible operation. that is, if the Fourier
> Transforms of x(t) and y(t) are equal, then i trust that x(t) and y(t)
> are equal.
>

Aye! For any given implementation, stable output is
assumed.

> so here's how it goes:
>
> we know that the periodic "dirac comb" function (with period 1/f0) in
> the time domain is another dirac comb in the frequency domain, but i'm
> not gonna prove it.
>
> +inf
> d(t) = 1/f0 * SUM{ delta(t - n/f0) }
> n=-inf
>
>
> has for a Fourier Transform:
>
>
> +inf
> D(f) = SUM{ delta(f - k*f0) }
> k=-inf
>
>
> f0 is the fundamental frequency of this periodic function. in the
> frequency domain, there is a spike (a dirac impulse) of height=1
> spaced apart from its neighbor by f0.
>
> now we run that impulse train, d(t), through an ideal brickwall low-
> pass filter with frequency response:
>
>
> H(f) = 1/(2N) * rect( f/((2N+1)*f0) )
>
> where
>
> { 1 |u| < 1/2
> rect(u) = {
> { 0 |u| > 1/2
>
>
> the rect() function is centered at u=0 has height of 1 and width of
> 1. rect(1/2) usually is defined as 1/2.
>
> H(f) has gain of 1 in the passband and the passband width is (2N+1)*f0
> (encompassing both positive and negative f). that means there are 2N
> +1 spikes in the frequency domain that are passed (with gain 1/(2N))
> and all the rest are killed. one dirac spike at DC, N spikes in
> positive frequency and N spikes in negative frequency.
>
> the output spectrum is
>
> N
> Y(f) = H(f) * D(f) = 1/(2N) * SUM{ delta(f - k*f0) }
> k=-N
>
> the impulse response of that filter is:
>
> h(t) = 1/(2N) * (2N+1)*f0 * sinc((2N+1)*f0*t)
>
>

Sorry, but what is the basis for this? It's a deconvolution, but
what's the name of the mechanism that justifies this transform?

I am, uh, not up to deconvolution as an art form just yet.

> so the time domain output is:
>
>
> +inf
> y(t) = 1/f0 * SUM{ h(t - n/f0) }
> n=-inf
>
>

How does this lead to this (below)? Oh, substitution. Geez....
h(t) = 1/(2N) * (2N+1)*f0 * sinc((2N+1)*f0*t)

<facepalm>

> +inf
> = 1/(2N*f0) * (2N+1)*f0 * SUM{ sinc((2N+1)*f0*(t - n/f0)) }
> n=-inf
>
>
> +inf
> = (2N+1)/(2N) * SUM{ sinc((2N+1)*(f0*t - n)) }
> n=-inf
>
>
> so there are your sinc functions.
>

Yarg. Well, I hope you can get a theorem out of it. I
was unable to follow all steps. Some I caught, others,
not so much.

I do have a contact who may be able to find resources to
vet this as a theorem, assuming it has not been stumbled
upon before. I'd find it rather incredible that it
hasn't been found. It looks useful, too. Possibly
computationally prohibitive, but...

I hope that does not sound extremely stupid. But not
so much that I won't say it...

> taking another look at the spectrum of the output:
>
>
> N
> Y(f) = 1/(2N) * SUM{ delta(f - k*f0) }
> k=-N
>
> N
> = delta(f)/(2N) + SUM{ (delta(f-k*f0) + delta(f+k*f0))/(2N) }
> k=1
>
> converting that spectrum back to a time-domain function gets:
>
> N
> y(t) = 1/(2N) + SUM{ 1/N * cos(2*pi*k*f0*t) }
> k=1
>
>
>
> N
> y(t) = 1/(2N) + SUM{ 1/N * T[k](cos(2*pi*f0*t)) }
> k=1
>
>
>
> where T[k](x) = cos(k * arccos(x))
>
> because of the DC term, y(0) = 1 + 1/(2N). it appears that when t=0
> that the sinc() functions have to add to 1 (which gets scaled by the
> same 1+1/(2N)).
>

As a practical matter, I have simply avoided y(0). More hackerism
there. Doctor, it hurts when I do that...

> i realize that the frequency-domain look at Y(f) would be the same if
> the width of the brickwall filter was anything strictly bigger than
> (2*N)*f0 and strictly smaller than (2*N+2)*f0 which would change the
> scaling inside the sinc() functions accordingly.
>

From what I saw, the sinc functions were of roughly the same amplitude.

> this is the only way i know how to look at this. i'd be happy if Clay
> or Vlad or Jerry or Dale or Robert O or anyone would check it over.
>
> r b-j

Robert, thanks *very* much. This is more than slightly amazing. I
look forward to additions.

--
Les Cargill