From: Ant_Magma on
It's solved!! Thanks Martin for the tip, the model is now working.

I have another series of questions, hope you don't mind....

I've now included the Puncture and Insert Zero blocks in my model to
obtain a better code rate of 3/4. However, this has led to another
problem.

As I mentioned before, my Viterbi decoder is set to Hard decision.
Hard decision works on 0s and 1s. When I include the puncture pair,
I again get a BER of 0.5. However, if I change the decision type in my
Viterbi decoder to Unquantized and add a Unipolar to Bipolar Converter
(since the unquantized Viterbi works on -1 and +1) before the Insert
Zero block, it works perfectly.

I was wondering, is there any way to work around the problem, i.e.
without using the bipolar converter?

2nd Q:

I read somewhere in the Simulink help that for a 3/4 code rate the
optimum puncture vector is [110110].' Is that correct?
Based on your experience, is there an optimum parameter for
interleaving, i.e. rows and columns? (I plan to add interleaving after
I've figured out the puncturing.)

I've noticed that in the Simulink demos (both HiperLAN2 and WLAN
802.11a) which involve OFDM, interleaving is done twice, using the
Matrix Interleaver and the General Block Interleaver. Is that just a
choice or preference, or is there a specific reason for it? And if so,
how do I choose the Elements for the General Block Interleaver?

Sorry for the barrage of questions....really appreciate you guys' help...

From: cb135 on
See Below.

Ant_Magma wrote:
> It's solved!! Thanks Martin for the tip, the model is now working.
>

Good to hear it.

> I have another series of questions, hope you don't mind....
>
> I've now included the Puncture and Insert Zero blocks in my model to
> obtain a better code rate of 3/4. However, this has led to another
> problem.
>
> As I mentioned before, my Viterbi decoder is set to Hard decision.
> Hard decision works on 0s and 1s. When I include the puncture pair,
> I again get a BER of 0.5. However, if I change the decision type in my
> Viterbi decoder to Unquantized and add a Unipolar to Bipolar Converter
> (since the unquantized Viterbi works on -1 and +1) before the Insert
> Zero block, it works perfectly.
>
> I was wondering, is there any way to work around the problem, i.e.
> without using the bipolar converter?
>

I think if you step back for a second and think about what it is you
are supplying to the Viterbi decoder, then all will become clear. You
say that the Viterbi decoder is based on hard-decision decoding and,
rightly, you are supplying hard decisions as its input. The problem
comes (in your case) when you puncture the rate k/n code to get a rate
k'/n' code. At the receiver (in your case, the Viterbi decoder) you
want to present the decoder with the original rate k/n code so that it
can estimate the sequence correctly. You do this by inserting zeros
into the (rate k'/n') code and then feeding this to the decoder.
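
To make the transmitter side concrete, here is a rough MATLAB sketch of
the puncturing step. The constraint-length-7 [171 133] generators and
the explicit [1;1;0;1;1;0] pattern (the [110110].' from your second
question) are only assumptions for illustration; substitute whatever
your model actually uses.

trellis = poly2trellis(7, [171 133]);    % assumed K=7, rate-1/2 mother code
msg     = randi([0 1], 300, 1);          % random information bits
coded   = convenc(msg, trellis);         % rate-1/2 output: 600 coded bits

% Keep 4 of every 6 coded bits -> overall rate 3/4
puncPat = logical(repmat([1;1;0;1;1;0], numel(coded)/6, 1));
tx      = coded(puncPat);                % 400 bits actually transmitted

% The bits at the '0' positions are never sent, so the receiver has to
% account for them somehow before Viterbi decoding.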

Now, when you insert zeros, think about what it is you're telling the
Viterbi decoder. The original bits corresponding to the positions that
you punctured in the transmitter are now lost to the world...you have
no information about them at the receiver. Therefore, you want to tell
the Viterbi decoder that for those particular bits you have no
information, i.e. you don't know whether each one is a binary one or
zero. To do this, you assign a value of 0.5 (halfway between binary one
and zero) to the bits you need to reinsert. However, if you now insert
values of 0.5 instead of zeros into the decoder, you are really forcing
the decoder to use "soft" decisions, because your Viterbi will now have
to "do something" with the values of 0.5 that it sees on its input!
What it does with these values depends on how the decoder is
programmed!
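
As a sketch of that idea (reusing the made-up puncPat, coded and tx
from the snippet above), the receiver-side insertion for hard values in
{0,1} would look something like:

rxHard          = tx;                          % hard decisions in {0,1}
filled          = 0.5 * ones(numel(coded), 1); % 0.5 = "don't know" for {0,1} inputs
filled(puncPat) = rxHard;                      % received bits back in their positions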

As an aside, do you see what happens if you insert zeros when using the
hard-decision values of 1 and 0 as the input? What are you telling the
decoder about those 'lost' bits?

Okay, the second method you describe is really the correct way of
proceeding. You are now forcing the decoder to process a signal with
values in the range +/-1, where +1 = binary 1 and -1 = binary 0 (or
whatever your mapping happens to be!). In this case, for the positions
where you have no information about what was transmitted (due to the
puncturing you performed in the transmitter), you supply the decoder
with no information (A ZERO) for those bits and let the decoder
estimate what you sent!
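
Putting the whole chain together in plain MATLAB (rather than Simulink)
might look like the sketch below. It is only a noiseless sanity check,
it reuses the made-up trellis, msg, coded and puncPat from above, and
it assumes vitdec's 'unquant' convention that +1 is the most confident
logical 0 and -1 the most confident logical 1; check the sign against
your own mapping.

bipolar         = 1 - 2*coded(puncPat);    % bit 0 -> +1, bit 1 -> -1
depunc          = zeros(numel(coded), 1);  % 0 = "no information" for bipolar inputs
depunc(puncPat) = bipolar;                 % reinsert the received soft values
decoded         = vitdec(depunc, trellis, 96, 'trunc', 'unquant');
numErrors       = sum(decoded ~= msg)      % expect 0 with no noise, if the pattern suits the code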

What I'm saying is that there isn't really a workaround: at some point
you either process 'soft' values by default, or the decoder will just
make a hard estimate of the bit value for the zero-insertion positions.

I hope that is clear?



> 2nd Q:
>
> I read somewhere in the Simulink help that for a 3/4 code rate the
> optimum puncture vector is [110110].' Is that correct?

It depends on the constraint length of your encoder and the mother code
rate that you're dealing with. I would be very surprised if the
(default) puncture mask hard-wired into Simulink were not an optimum one.

> Based on your experience, is there an optimum parameter for
> interleaving, i.e. rows and columns? (I plan to add interleaving after
> I've figured out the puncturing.)
>

Interleaving is very much an art form, tailored to your system
parameters (interleaver length, target BER, etc.). I would recommend
that you just apply a random interleaver for now; you can tweak the
specs later on.
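
For example, a random (de)interleaver needs nothing more than a shared
permutation; a minimal MATLAB sketch (again reusing the made-up tx
vector from above, with randperm supplying the permutation):

N                   = numel(tx);         % interleaver length (here, one punctured block)
perm                = randperm(N).';     % one fixed permutation shared by both ends
interleaved         = tx(perm);          % scramble the bit order before the channel
deinterleaved       = zeros(N, 1);
deinterleaved(perm) = interleaved;       % the receiver undoes the permutation
isequal(deinterleaved, tx)               % sanity check: should be true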


> I've noticed that in the Simulink demos (both HiperLAN2 and WLAN
> 802.11a) which involve OFDM, interleaving is done twice, using the
> Matrix Interleaver and the General Block Interleaver. Is that just a
> choice or preference, or is there a specific reason for it? And if so,
> how do I choose the Elements for the General Block Interleaver?
>

Not sure about the last one, sorry!


> Sorry for the barrage of questions....really appreciate you guys' help...

From: Ant_Magma on
The first method you mentioned is soft-decision Viterbi, correct?

From: cb135 on
The first method was really just an example of what is required to
provide the decoder with the correct information for the missing bits.
If you provide the decoder with a value of 0.5, from the input range
{0,1}, then you are effectively providing 'soft' bit decisions, aren't
you?

What I'm really trying to say is that you need to decide what your
maximum positive value (equivalent to binary one) and your maximum
negative value (equivalent to binary zero) are, and then, for the bits
missing due to puncturing, you insert a value that is smack bang in the
middle of those two extremes (assuming, of course, that you don't have
any prior information about the missing bits!). Why do you choose a
value smack bang in the middle? Well, because I'm assuming that the
transmitted bits are equiprobable, and therefore you won't have any
knowledge of the values of the missing punctured bits at the receiver.


For example, if you have only hard bit decisions at the input, then you
either convert the binary stream of ones and zeros into a bipolar
signal and then perform zero insertion, or you leave the input data as
a string of ones and zeros and perform 0.5 insertion. Both methods work
equally well; you just have to work out which input the decoder you're
implementing can handle....no more, no less!
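
In other words, the inserted value is just the midpoint of whatever
alphabet the decoder expects; purely for illustration:

erasure_unipolar = (0 + 1)/2;     %  {0,1}  inputs -> insert 0.5
erasure_bipolar  = (-1 + 1)/2;    % {-1,+1} inputs -> insert 0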

cheers.


Ant_Magma wrote:
> The first method you mentioned is soft-decision Viterbi, correct?