From: Ben Bacarisse on
"bart.c" <bartc(a)freeuk.com> writes:

> "Peter Nilsson" <airia(a)acay.com.au> wrote in message
> news:621bcbad-9ffc-41a2-981d-2d632c3d4cd1(a)z15g2000prh.googlegroups.com...
>> "Joe" <J...(a)NoSpammers.Com> wrote:
>>> > What is it that you are really trying to accomplish?
>>>
>>> I just want to be absolutely sure that variables which I
>>> want to be N bits wide will always be compiled into N-bit
>>> wide variables, independent of platform.
>>
>> Then you're not writing maximally portable code. All you need
>> are variables that are _at least_ N bits wide.
>>
>>> No reason to have 16-bit signed values suddenly being
>>> stored in 32-bit variables.
>>
>> Why bother writing 'ANSI C' code if you're going to
>> exclude the plethora of implementations that don't offer
>> precise 16-bit wide integers?
>
> In:
>
> unsigned short a;
> int b=123456;
> a=b;
>
> 'a' may end up as 57920 on some machines, and likely 123456 on
> others. It seems reasonable to be able to request 'a' to be exactly
> 16 bits on any machine, whether or not that is natural for the
> architecture.

In a rather limited way, you can: you can use a 16-bit wide bit-field.
I suppose a conforming C compiler may be permitted to refuse to allocate
such a bit-field but that seems to stretch the letter of the law beyond
what is reasonable.

There are a lot of things you can't do with bit-fields, but they do
provide automatically masked and promoted integer arithmetic. On
balance, though, I'd write the code to use "at least 16 bits" and put
the masks in myself.
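
A minimal sketch of that approach (the helper name and the use of
C99's uint_least16_t are illustrative, not from the post):

#include <stdint.h>

/* Keep values in an "at least 16 bits" type and mask by hand, so
   arithmetic wraps modulo 2^16 on every platform. */
static uint_least16_t add_u16(uint_least16_t a, uint_least16_t b)
{
    return (uint_least16_t)((a + b) & 0xFFFFu);
}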

--
Ben.
From: bart.c on

"Ben Bacarisse" <ben.usenet(a)bsb.me.uk> wrote in message
news:0.fafda0c28fab87afab6f.20100603125850BST.87eigotjid.fsf(a)bsb.me.uk...
> "bart.c" <bartc(a)freeuk.com> writes:

>> unsigned short a;
>> int b=123456;
>> a=b;
>>
>> 'a' may end up as 57920 on some machines, and likely 123456 on
>> others. It seems reasonable to be able to request 'a' to be exactly
>> 16 bits on any machine, whether or not that is natural for the
>> architecture.
>
> In a rather limited way, you can: you can use a 16-bit wide bit-field.
> I suppose a conforming C compiler may be permitted to refuse to allocate
> such a bit-field but that seems to stretch the letter of the law beyond
> what is reasonable.
>
> There are a lot of things you can't do with bit-fields, but they do
> provide automatically masked and promoted integer arithmetic. On
> balance, though, I'd write the code to use "at least 16 bits" and put
> the masks in myself.

But then, on a machine where C shorts actually are 16 bits, you have to
rely on the compiler to remove the unnecessary masks.

Also, where the intention is to have arrays of such values, bit-fields
may not work, and simple masking is insufficient: you need shifts and
masks instead, which would be silly on a machine that directly supports
16-bit values.
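
For concreteness, a sketch of the shift-and-mask accessors such a
packed array would need (names and layout are illustrative, not from
the post):

#include <stddef.h>
#include <limits.h>

/* Number of 16-bit slots per storage word. */
#define PER_WORD (sizeof(unsigned long) * CHAR_BIT / 16)

/* Read the i-th packed 16-bit value. */
static unsigned get16(const unsigned long *a, size_t i)
{
    return (unsigned)((a[i / PER_WORD] >> (16 * (i % PER_WORD))) & 0xFFFFu);
}

/* Overwrite the i-th packed 16-bit value. */
static void put16(unsigned long *a, size_t i, unsigned v)
{
    unsigned long sh = 16 * (i % PER_WORD);
    unsigned long m  = 0xFFFFul << sh;
    a[i / PER_WORD] = (a[i / PER_WORD] & ~m)
                    | (((unsigned long)v & 0xFFFFul) << sh);
}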

--
Bartc

From: Dann Corbit on
In article <2qmdnQNJioRrFZrRnZ2dnUVZ8nmdnZ2d(a)bt.com>,
rjh(a)see.sig.invalid says...
>
> bart.c wrote:
> >
> <snip>
>
> > It seems reasonable to be able to request 'a' to be exactly 16 bits
> > on any machine, whether or not that is natural for the architecture.
>
> I'm not entirely convinced that that /is/ a reasonable request. It is
> perfectly reasonable to ask for it to be *at least* 16 bits.
>
> How, precisely, would you implement your exactly-16-bit type on a
> machine whose natural word size is 64?

typedef struct Integer16 { signed value:16; } Integer16;

> Such a machine does exist, and C
> implementations for it have to jump through all kinds of crazy hoops to
> give programmers the 8-bit char they expect. I would not like to be the
> one to tell the compiler team "well done lads, but now there's this
> bloke on Usenet who wants 16-bit short ints..."

I think it works everywhere.
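
A quick test of that claim (my own sketch, reusing the 123456 example
from earlier in the thread):

#include <stdio.h>

typedef struct Integer16 { signed value:16; } Integer16;

int main(void)
{
    Integer16 a;
    int b = 123456;
    a.value = b;   /* converted to the 16-bit field; for a signed
                      field an out-of-range conversion is
                      implementation-defined, commonly two's
                      complement truncation: 57920 - 65536 = -7616 */
    printf("%d\n", a.value);
    return 0;
}
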
From: Dann Corbit on
In article <SDNNn.1596$jL2.533(a)hurricane>, bartc(a)freeuk.com says...
>
> "Ben Bacarisse" <ben.usenet(a)bsb.me.uk> wrote in message
> news:0.fafda0c28fab87afab6f.20100603125850BST.87eigotjid.fsf(a)bsb.me.uk...
> > "bart.c" <bartc(a)freeuk.com> writes:
>
> >> unsigned short a;
> >> int b=123456;
> >> a=b;
> >>
> >> 'a' may end up as 57920 on some machines, and likely 123456 on
> >> others. It seems reasonable to be able to request 'a' to be exactly
> >> 16 bits on any machine, whether or not that is natural for the
> >> architecture.
> >
> > In a rather limited way, you can: you can use a 16-bit wide bit-field.
> > I suppose a conforming C compiler may be permitted to refuse to allocate
> > such a bit-field but that seems to stretch the letter of the law beyond
> > what is reasonable.
> >
> > There are a lot of things you can't do with bit-fields, but they do
> > provide automatically masked and promoted integer arithmetic. On
> > balance, though, I'd write the code to use "at least 16 bits" and put
> > the masks in myself.
>
> But then, on a machine where C shorts actually are 16 bits, you have to
> rely on the compiler to remove the unnecessary masks.
>
> Also, where the intention is to have arrays of such values, bit-fields
> may not work, and simple masking is insufficient: you need shifts and
> masks instead, which would be silly on a machine that directly supports
> 16-bit values.

Portability may have a cost, in other words. There is no other way that
is fully portable, since the exact-width types of C99 are optional.

The O.P.'s goal seems to be some kind of extreme portability. If you
need exact widths, and you need them to work everywhere, then you need
bit-fields.

I also doubt that bit-field manipulation will be the bottleneck in any
important operation.

So I think it comes down to these choices:
1. Total portability with bit-fields (some cost in efficiency and
convenience: arrays, addresses, etc.; see the sketch below).
2. Total speed via separate functions for each hardware system (huge
cost in debugging and maintenance).
3. Careful analysis to ensure that the native integral sizes perform
the operations in a satisfactory manner (huge design effort needed).

I don't know which option is best.
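
To make option 1's costs concrete, a sketch (mine, not from the post)
of what does and does not work with the bit-field wrapper:

#include <stdio.h>

struct u16 { unsigned v:16; };

int main(void)
{
    struct u16 arr[100];      /* arrays of the wrapper struct work,
                                 though elements may be padded out
                                 to a full word */
    arr[0].v = 0xFFFF;
    arr[0].v = arr[0].v + 1;  /* unsigned bit-field wraps to 0 */
    printf("%u\n", (unsigned)arr[0].v);
    /* unsigned *p = &arr[0].v;  error: the address of a bit-field
                                 cannot be taken */
    return 0;
}
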
From: Dann Corbit on
In article <9aadnRACk_1rMZrRnZ2dnUVZ8gydnZ2d(a)bt.com>,
rjh(a)see.sig.invalid says...
>
> Dann Corbit wrote:
> > In article <2qmdnQNJioRrFZrRnZ2dnUVZ8nmdnZ2d(a)bt.com>,
> > rjh(a)see.sig.invalid says...
> >> bart.c wrote:
> >> <snip>
> >>
> >>> It seems reasonable to be able to request 'a' to be exactly 16 bits
> >>> on any machine, whether or not that is natural for the architecture.
> >> I'm not entirely convinced that that /is/ a reasonable request. It is
> >> perfectly reasonable to ask for it to be *at least* 16 bits.
> >>
> >> How, precisely, would you implement your exactly-16-bit type on a
> >> machine whose natural word size is 64?
> >
> > typedef struct Integer16 { signed value:16; } Integer16 ;
>
> That would make even simple addition rather tedious:
>
> Integer16 x = { 6 }; /* braces required */
> Integer16 y = { 42 };
> Integer16 z;
>
> z.value = x.value + y.value;

I didn't say it was pretty.
C++ could tidy up the appearance, but under the covers it would be doing
exactly the same thing.
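
In plain C, a small helper can hide most of that tedium (the function
name is mine, not from the thread):

static Integer16 add16(Integer16 a, Integer16 b)
{
    Integer16 r;
    r.value = a.value + b.value;  /* converted back to the 16-bit
                                     field on store; implementation-
                                     defined if the sum overflows a
                                     signed field */
    return r;
}

/* usage: z = add16(x, y); */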