From: dvsarwate on
On May 31, 12:08 pm, spop...(a)speedymail.org (Steve Pope) asked:
>
>  in 2-MSK, are the two tones orthogonal or not?

Yes, the two tones, at frequencies f_c + 1/(4T) and
f_c - 1/(4T) are orthogonal over each T-second bit
interval even though this violates the usual shibboleth
that tones are orthogonal over any interval whose length
is an integer multiple of the inverse of the frequency
difference. Here, the frequency difference is 1/(2T)
whose inverse is 2T, and so the shibboleth requires
intervals of length 2mT where m is an integer.

The reasons that the MSK tones still manage to be
orthogonal over T-second intervals are

(i) the start and end points are fixed as the endpoints
of each bit signaling interval.

(ii) the phases of the signals are carefully controlled.

More generally, cos(2 pi (f_c + 1/(4T)) t + theta)
and cos(2 pi (f_c - 1/(4T)) t + phi) are *not* orthogonal
over the interval (tau, tau + T) of length T (too short)
but they are orthogonal over (tau, tau + 2T) or, more
generally, over (tau, tau + 2mT) where m denotes an integer.
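[A numerical sanity check of these claims, not part of the original post. The sketch below (Python/NumPy) picks f_c = 5/T, an arbitrary choice that makes the carrier a multiple of 1/(2T) so the double-frequency term also vanishes over each bit interval:]

```python
import numpy as np

def integrate(y, t):
    # trapezoidal rule, written out so it works across NumPy versions
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

T = 1.0                               # bit interval
fc = 5.0 / T                          # carrier chosen as a multiple of 1/(2T)
f1, f2 = fc + 1/(4*T), fc - 1/(4*T)   # the two MSK tones, spaced 1/(2T)

# Orthogonal over a single bit interval (0, T) when both phases are zero:
t = np.linspace(0.0, T, 200001)
ip_T = integrate(np.cos(2*np.pi*f1*t) * np.cos(2*np.pi*f2*t), t)

# With arbitrary phases, length T is too short, but length 2T works:
tau, theta, phi = 0.37*T, 1.1, -0.4
tA = np.linspace(tau, tau + T, 200001)
tB = np.linspace(tau, tau + 2*T, 400001)
ip_shortT = integrate(np.cos(2*np.pi*f1*tA + theta) * np.cos(2*np.pi*f2*tA + phi), tA)
ip_2T     = integrate(np.cos(2*np.pi*f1*tB + theta) * np.cos(2*np.pi*f2*tB + phi), tB)

print(ip_T)        # ~0
print(ip_shortT)   # clearly nonzero
print(ip_2T)       # ~0
```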

--Dilip Sarwate




From: cfy30 on
Thanks all for the inputs!

It seems to me now
1. If x and y are both deterministic, periodic signals, orthogonality is
the integration of x*y from 0 to T while correlation is the integration of
x*y from 0 to n*T. The only difference between orthogonality and
correlation is the integration length.
2. If x and y are both random variables, there is no orthogonality, since T
cannot be defined, but correlation can still be found since T is arbitrary.

Consider an ideal receiver with random data n(t) as input, and ideal I and
Q down-converters, cos(omega*t) and sin(omega*t), respectively. The I and Q
outputs, namely n(t)*cos(omega*t) and n(t)*sin(omega*t), are
uncorrelated, and orthogonality cannot be defined. Is this aligned with
everyone's understanding?


cfy30


From: dvsarwate on
On May 31, 10:23 pm, "cfy30" <cfy30(a)n_o_s_p_a_m.yahoo.com> wrote:
> Thanks all for the inputs!
>
> It seems to me now
> 1. If x and y are both deterministic, periodic signals, orthogonality is
> the integration of x*y from 0 to T while correlation is the integration of
> x*y from 0 to n*T. The only difference between orthogonality and
> correlation is the integration length.

No.


> 2. If x and y are both random variables, there is no orthogonality, since T
> cannot be defined, but correlation can still be found since T is arbitrary.
>

No.


> Consider an ideal receiver with random data n(t) as input, and ideal I and
> Q down-converters, cos(omega*t) and sin(omega*t), respectively. The I and Q
> outputs, namely n(t)*cos(omega*t) and n(t)*sin(omega*t), are
> uncorrelated, and orthogonality cannot be defined. Is this aligned with
> everyone's understanding?

It is not aligned with my understanding. Some others
will undoubtedly agree with the above interpretation.

--Dilip Sarwate

From: Frank on
My two cents, feel free to disagree and correct, or just complain about
going a little off-topic :)


To really answer the original question, it's important to understand the
intuitive as well as the mathematical concepts. Just understanding that a
definition corresponds to an equation is not very illuminating when it
comes to trying to use the concepts. With that introduction, and in the
interest of further complicating everything, here we go:


Intuitive
---------

In communications, the important property of orthogonal signals is that
they may be added together and their individual weights be uniquely and
unambiguously determined from the sum alone, e.g. if x(t) and y(t) are
orthogonal functions then we may sum these together as:

z(t) = a x(t) + b y(t)

and determine the values of both a and b from knowledge of z(t) alone.
The same is true of any pair/set of orthogonal signals/functions. This is
a basis for digital communication systems. a and b may be determined from
z(t) by the appropriate operation, which would typically be a correlation
(see the mathematical part further down).

The simplest examples on paper involve just sine and cosine, i.e.

z(t) = a sin(wt) + b cos(wt)

but in practice an actual transmitted signal cannot be a signal with
infinite time support. It is for this reason that shaping such as raised
cosine shaping is used. e.g. if r(t) is a shaping function (e.g. raised
cosine), then our orthogonal functions may be x(t) = r(t)sin(wt) and
y(t) = r(t)cos(wt), and

z(t) = a r(t)sin(wt) + b r(t)cos(wt)


Continuing this example, the raised cosine shape corresponds to a finite
time support signal (it's truncated, I know) which may be imposed on sine
and cosine without losing the orthogonal property (it's approximate
because of the truncation, but still). Note that the raised cosine also
has lots of other nice properties of course (sensitivity to receiver
timing, controllable bandwidth expansion, etc.) but this discussion is
about orthogonality. OFDM transmission is easily understood by this
reasoning too, replacing r(t) with a simple rectangular window.
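[An illustration of this paragraph, not part of the original post. The sketch below uses a Hann-type raised-cosine window in time as r(t) -- an assumed stand-in for the shaping Frank describes -- and checks that the shaped carriers stay orthogonal and that a and b can be recovered by correlation:]

```python
import numpy as np

T, k = 1.0, 8                        # symbol interval; carrier cycles per symbol
t = np.linspace(0.0, T, 100001)
w = 2*np.pi*k/T
r = 0.5*(1 - np.cos(2*np.pi*t/T))    # raised-cosine (Hann) time window, for illustration

def integrate(y):
    # trapezoidal rule over the symbol interval
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

x = r*np.sin(w*t)                    # shaped "sine" carrier
y = r*np.cos(w*t)                    # shaped "cosine" carrier
print(integrate(x*y))                # ~0: shaping preserved orthogonality

a, b = 0.7, -1.3
z = a*x + b*y
a_hat = integrate(z*x) / integrate(x*x)   # correlate against x, then normalize
b_hat = integrate(z*y) / integrate(y*y)   # correlate against y, then normalize
print(a_hat, b_hat)                  # ~0.7, ~-1.3
```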




Mathematical
------------

Mathematically, it's both simpler and more complicated, and for this
reason I'm just going to state some facts without detail.

Two functions are orthogonal when their inner product is zero (subject
to Hilbert space conditions, etc., etc.). Typically for functions, the
inner product is defined as a correlation (from -infinity to +infinity),
and thus orthogonality and zero correlation are equivalent.

e.g. we form z(t) from orthogonal functions x(t) and y(t), and denote
the inner product by <., .>:

z(t) = a x(t) + b y(t)

We can uniquely determine the values of a and b by taking the following
inner products:

<z(t), x(t)> = a <x(t), x(t)> + b <y(t), x(t)> = a
<z(t), y(t)> = a <x(t), y(t)> + b <y(t), y(t)> = b

or equivalently (directly written using correlation and assuming that's
the definition of the inner product we're using):

+infinity
integrate (z(t) x(t)) = a
-infinity

+infinity
integrate (z(t) y(t)) = b
-infinity

(okay there's an assumption of orthonormality there too, sue me)
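[A numerical check of those two inner-product equations, not part of the original post. The basis functions below are scaled by sqrt(2/T) precisely so that the orthonormality assumption holds over (0, T):]

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100001)
w = 2*np.pi*3/T                      # 3 carrier cycles in (0, T)

def ip(f, g):
    # inner product as a correlation over the signals' support
    y = f*g
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

x = np.sqrt(2/T)*np.sin(w*t)         # scaled so <x,x> = <y,y> = 1 (orthonormal)
y = np.sqrt(2/T)*np.cos(w*t)

a, b = 2.5, -0.8
z = a*x + b*y
print(ip(z, x))                      # ~2.5 = a, directly
print(ip(z, y))                      # ~-0.8 = b, directly
```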


Note that there is no proof in the sense of the original question.


Frank
From: illywhacker on
On Jun 2, 1:54 pm, Frank <frank.snow...(a)gmail.com> wrote:
> e.g. we form z(t) from orthogonal functions x(t) and y(t), and denote
> the inner product by <., .>
>
> z(t) = a x(t) + b y(t)
>
> we can uniquely determine the values of a and b by taking the
> following
> inner products
>
> <z(t), x(t)> = a <x(t), x(t)> + b <y(t), x(t)> = a
> <z(t), y(t)> = a <x(t), y(t)> + b <y(t), y(t)> = b

If you know what x and y are, then taking the inner product of z with
any two linearly independent functions will give you two simultaneous
equations for a and b, thereby uniquely determining a and b provided
that x and y are also linearly independent. It has nothing to do with
x and y being orthogonal.
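[A numerical illustration of this point, not part of the original post. With x(t) = t and y(t) = t^2 -- linearly independent but clearly not orthogonal -- the two inner products still pin down a and b once you solve the resulting 2x2 (Gram) system:]

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)

def ip(f, g):
    # inner product as correlation over (0, 1)
    y = f*g
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

x, y = t, t**2                       # linearly independent, NOT orthogonal
a, b = 1.7, -2.2
z = a*x + b*y

# Two inner products give two simultaneous equations in a and b:
#   <z,x> = a <x,x> + b <y,x>
#   <z,y> = a <x,y> + b <y,y>
G = np.array([[ip(x, x), ip(y, x)],
              [ip(x, y), ip(y, y)]])
rhs = np.array([ip(z, x), ip(z, y)])
a_hat, b_hat = np.linalg.solve(G, rhs)
print(a_hat, b_hat)                  # ~1.7, ~-2.2
```

With orthogonal x and y the Gram matrix G is diagonal, which is why the single-correlation recovery in the earlier posts works without solving a system.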

illywhacker