From: Skybuck Flying on
Ok people,

I keep coming across different ways in source code for creating a bit
mask for a certain number of bits, and it's kinda cracking me up! =D

Therefore, to have some fun, it's time to create a thread dedicated to
creating bitmasks... how many ways are there?

So far I have come across these methods:

1. (My own way:) Mask := not word(65535 shl BitCount); // not 1111000 = 0000111

2. Mask := (1 shl BitCount) - 1; // 10000 - 1 = 01111 ;) :)

3. Mask := ($FFFF shl BitCount) xor $FFFF; // 1111000 xor 1111111 = 0000111
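
Just to show they all produce the same result, here is a small C
rendering of the three methods (my own translation of the Pascal above;
16-bit masks assumed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    for (unsigned bitCount = 0; bitCount <= 15; bitCount++) {
        uint16_t m1 = (uint16_t)~(0xFFFFu << bitCount);            /* 1: shift, then invert */
        uint16_t m2 = (uint16_t)((1u << bitCount) - 1u);           /* 2: power of two minus one */
        uint16_t m3 = (uint16_t)((0xFFFFu << bitCount) ^ 0xFFFFu); /* 3: shift, then xor with all ones */
        printf("%2u bits: %04X %04X %04X\n", bitCount, m1, m2, m3);
    }
    return 0;
}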

I also wonder which one would be fastest, since processors might
execute instructions in different ALUs and such...
Maybe processors have special "boolean/logic" units and "arithmetic"
units.

To what category would shl belong? (Maybe a "bit logic unit"?)

Anyway, if you know any other ways of calculating such bitmasks, please
add them to this thread for the fun of it! ;) =D

Bye,
Skybuck =D

From: alanglloyd on
If you're dealing with meaningful bits then use a set of an enumerated
type.

type
  TBitFlags = (bfBit0, bfBit1, bfBit2, bfBit3,
               bfBit4, bfBit5, bfBit6, bfBit7);
  TBitFlagSet = set of TBitFlags;

const
  BitMask : TBitFlagSet = [bfBit0..bfBit3];
  AllBits : TBitFlagSet = [bfBit0..bfBit7];
  TwoBits : TBitFlagSet = [bfBit2, bfBit5];

You can give your TBitFlags definition meaningful names and get
clarity in your code.

Alan Lloyd
From: Skybuck Flying on
Sets in Pascal are easy to use, but they are very slow.

Bye,
Skybuck.


From: James Harris on
On 24 May, 17:23, "Skybuck Flying" <IntoTheFut...(a)hotmail.com> wrote:
> Ok people,
>
> I keep coming across different ways in source code for creating a bit
> mask for a certain number of bits, and it's kinda cracking me up! =D
>
> Therefore to have some fun, it's time to create a thread dedicated to
> creating bitmasks... how many ways are there ?
>
> So far I have come across these methods:
>
> 1. (My own way:) Mask := not word(65535 shl BitCount); // not 1111000 = 0000111
>
> 2. Mask := (1 shl BitCount) - 1; // 10000 - 1 = 01111 ;) :)
>
> 3. Mask := ($FFFF shl BitCount) xor $FFFF; // 1111000 xor 1111111 = 0000111
>
> I also wonder which one would be fastest, since processors might
> execute instructions in different ALUs and such...
> Maybe processors have special "boolean/logic" units and "arithmetic"
> units.

"Bitmasks" are just bit patterns used in bitwise logic such as 0b1100.
You mean masks for the lowest N bits of a value, i.e. a specific type
of bitmask.

Your option 2 is very good! As well as being fast it works regardless
of word size, something the other two fail to do.

As for speed, any of these should be much faster than many
alternatives since they use only simple operations which any self-
respecting CPU will have in its armoury. Your option 2 should compile
on x86-32 to something like

mov eax, 1
shl eax, cl
dec eax

Note there's no range checking on the number of bits to shift (in the
cl register). IIRC if too many are specified the processor will mask
the value keeping just the lower five bits rather than maxing it out.
For example, if you specified a shift of 63 (0b0011_1111) it would
shift 31 (0b0001_1111) places. If that's not what's wanted, a test for
range would be needed. This also applies to options 1 and 3, and to any
solution where the shift amount is not a constant.

BTW, your options 1 and 3 are basically the same as each other. An xor
with a mask of all 1s has the same effect as a not operation. Assuming
32-bit arithmetic, not your 16-bit, they should become something like

mov eax, -1
shl eax, cl
not eax

A word about the limits of these code fragments. It looks like both
pieces of code will work with a shift amount in the range 0 to 31.
Anything 32 or above (if the count is not meant to be masked down by
the hardware) would need a little bit of extra code.
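
Something like the following would do in C, assuming a count of 32 or
more should simply give an all-ones 32-bit mask (the function name is
just illustrative):

#include <stdint.h>

/* Mask for the lowest n bits of a 32-bit value. Guards against n >= 32,
   where a plain shift would be masked by the hardware (and is undefined
   behaviour in C anyway). */
static uint32_t low_bits_mask(unsigned n)
{
    if (n >= 32)
        return 0xFFFFFFFFu;    /* saturate: all 32 bits set */
    return (1u << n) - 1u;     /* option 2: (1 shl n) - 1 */
}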

The object code for your option 2 should be a little bit shorter than
code for the other two.

James
From: MitchAlsup on
I use a lot of code like this (all buried in #define macros):

(~((~0)<<lnFieldWidth)<<lnFieldOffset)

((variable >> lnFieldOffset) & (~((~0)<<lnFieldWidth)))

~0 is the easiest portable way to get all ones.
Up-shifting is the portable way to get all ones in the significant
digits and zeros in the bottom bits.
Constant propagators are excellent at converting almost all of this
into constant bit patterns.
Works in 2s-complement, 1s-complement, and, with some limitations,
signed-magnitude.
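
Wrapped up as the kind of #define macros mentioned above (these
particular names are only illustrative):

/* Build a mask of 'width' ones; place it at 'offset'; extract a field. */
#define FIELD_MASK(width)             (~((~0u) << (width)))
#define FIELD_AT(width, offset)       (FIELD_MASK(width) << (offset))
#define FIELD_GET(var, width, offset) (((var) >> (offset)) & FIELD_MASK(width))

(~0u rather than ~0 here just keeps the shifts on an unsigned type.)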

To me: 65535 is a poor way to write this constant; (~((~0)<<16))
informs the reader that the size of the field is 16 bits and that it is
covered in ones (~0s).
To me: it is better to write (~((~0)<<lnSize)) than to write
((1<<lnSize)-1).

It is easy to define log-base-two constants and get field sizes and
masks from them, but very difficult to have a field mask and get back
to the log-base-two constant field size. Thus, I would write:

# define lnPageSize (12)
# define PageSize (1<<lnPageSize)
# define PageMask (~((~0)<<lnPageSize))
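
A quick sketch of how those defines get used (the address value and
variable names are only an example):

#include <stdio.h>

#define lnPageSize (12)
#define PageSize   (1 << lnPageSize)
#define PageMask   (~((~0) << lnPageSize))

int main(void)
{
    unsigned addr = 0x12345;               /* some address, for illustration */
    unsigned page = addr >> lnPageSize;    /* page number */
    unsigned off  = addr & PageMask;       /* offset within the page */
    printf("page %u, offset %u, page size %d\n", page, off, PageSize);
    return 0;
}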

In fact I once wrote a pre-preprocessor to find "# define ln" and
automagically generate the other # defines: Size, Mask, and ... from
the log-base-two definition. Thus if I changed the size of a field, a
recompilation would have everything back where it should be. In
addition, these log-base-two terms can be added and subtracted if one
has to resort to actual bit fields.

struct <blah> {
    unsigned int second : lnPageSize,
                 first  : lnPageSize,
                 offset : 32 - lnPageSize*2;
} <blah>;

Mitch