From: Dann Corbit on
In article <4c058d33$0$283$14726298(a)news.sunsite.dk>, Joe(a)NoSpammers.Com
says...
>
> > What is it that you are really trying to accomplish?
>
> I just want to be absolutely sure that variables which I want to
> be N bit wide always will be compiled into N bit wide variables
> independent of platform. No reason to have 16-bit signed values
> suddenly being stored in 32-bit variables.

I think that probably you are more interested in portability of function
than portability of storage.

> I want to implement a fixed-point, digital filter which is portable.
> The filter must work in the same way no matter which platform it is compiled
> on as long
> as the platform comes with an ANSI C compliant compiler.
>
> I am using the typedefs in stdint.h.

Keep in mind that the exact-width typedefs in stdint.h (int16_t and
friends) are optional, and that stdint.h itself is only guaranteed to be
present for C99 compilers.
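One way to cope with that (a sketch, assuming a C99 <stdint.h>; the
typedef name `filt16` is mine, not anything standard) is to prefer the
exact-width type when it exists and fall back to the mandatory
at-least-16-bit one:

```c
/* Sketch: prefer the optional exact-width type, fall back to the
   mandatory at-least-16-bit one.  INT16_MAX is only defined when
   int16_t exists, so it doubles as a compile-time feature test. */
#include <stdint.h>
#include <limits.h>

#ifdef INT16_MAX
typedef int16_t filt16;        /* exactly 16 bits, no padding */
#else
typedef int_least16_t filt16;  /* at least 16 bits; always present in C99 */
#endif

/* width check, callable from a test harness */
int filt16_wide_enough(void) { return sizeof(filt16) * CHAR_BIT >= 16; }
```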

The use of bit fields will probably suit your needs.

To read and write them you will need a bit-I/O method of some kind, or
convert to text.
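A minimal sketch of the bit-field route (the struct and function names
are just for illustration): an unsigned 16-bit field wraps modulo 2^16
on assignment no matter how wide the host's int is. Note that narrowing
into a *signed* bit-field is implementation-defined, so unsigned is the
portable choice; reinterpret the sign yourself if you need it.

```c
/* Sketch: an unsigned 16-bit bit-field gives mod-2^16 storage on
   assignment, regardless of the width of the underlying int. */
struct s16 { unsigned int v : 16; };

unsigned int store16(long x) {
    struct s16 s;
    s.v = (unsigned int)x;   /* reduced modulo 65536 */
    return s.v;
}
```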
From: Peter Nilsson on
"Joe" <J...(a)NoSpammers.Com> wrote:
> > What is it that you are really trying to accomplish?
>
> I just want to be absolutely sure that variables which I
> want to be N bit wide always will be compiled into N bit
> wide variables independent of platform.

Then you're not writing maximally portable code. All you need
are variables that are _at least_ N bits wide.

> No reason to have 16-bit signed values suddenly being
> stored in 32-bit variables.

Why bother writing 'ANSI C' code if you're going to
exclude the plethora of implementations that don't offer
precise 16-bit wide integers?

> I want to implement a fixed-point, digital filter which
> is portable.

Fine, but you don't need a precise width type to do that.
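For instance (a sketch, not the only way): you can get exact 16-bit
*behaviour* out of the mandatory at-least-16-bit types just by masking,
so the filter arithmetic comes out the same everywhere:

```c
#include <stdint.h>

/* uint_least16_t always exists in C99; masking with 0xFFFF gives
   mod-2^16 arithmetic on every platform, however wide the type is. */
uint_least16_t add16(uint_least16_t a, uint_least16_t b) {
    return (uint_least16_t)((a + b) & 0xFFFFu);
}

uint_least16_t mul16(uint_least16_t a, uint_least16_t b) {
    /* widen first so the product cannot overflow a signed int */
    return (uint_least16_t)(((uint_fast32_t)a * b) & 0xFFFFu);
}
```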

--
Peter
From: bart.c on

"Peter Nilsson" <airia(a)acay.com.au> wrote in message
news:621bcbad-9ffc-41a2-981d-2d632c3d4cd1(a)z15g2000prh.googlegroups.com...
> "Joe" <J...(a)NoSpammers.Com> wrote:
>> > What is it that you are really trying to accomplish?
>>
>> I just want to be absolutely sure that variables which I
>> want to be N bit wide always will be compiled into N bit
>> wide variables independent of platform.
>
> Then you're not writing maximally portable code. All you need
> are variables that are _at least_ N bits wide.
>
>> No reason to have 16-bit signed values suddenly being
>> stored in 32-bit variables.
>
> Why bother writing 'ANSI C' code if you're going to
> exclude the plethora of implementations that don't offer
> precise 16-bit wide integers?

In:

unsigned short a;
int b=123456;
a=b;

'a' may end up as 57920 on some machines, and likely 123456 on others.
It seems reasonable to be able to request 'a' to be exactly 16 bits on
any machine, whether that is natural for the architecture or not.

--
Bartc

From: Pascal J. Bourguignon on
"bart.c" <bartc(a)freeuk.com> writes:

> "Peter Nilsson" <airia(a)acay.com.au> wrote in message
> news:621bcbad-9ffc-41a2-981d-2d632c3d4cd1(a)z15g2000prh.googlegroups.com...
>> "Joe" <J...(a)NoSpammers.Com> wrote:
>>> > What is it that you are really trying to accomplish?
>>>
>>> I just want to be absolutely sure that variables which I
>>> want to be N bit wide always will be compiled into N bit
>>> wide variables independent of platform.
>>
>> Then you're not writing maximally portable code. All you need
>> are variables that are _at least_ N bits wide.
>>
>>> No reason to have 16-bit signed values suddenly being
>>> stored in 32-bit variables.
>>
>> Why bother writing 'ANSI C' code if you're going to
>> exclude the plethora of implementations that don't offer
>> precise 16-bit wide integers?
>
> In:
>
> unsigned short a;
> int b=123456;
> a=b;
>
> 'a' may end up as 57920 on some machines, and likely 123456 on
> others. It seems reasonable to be able to request 'a' to be exactly
> 16 bits on any machine, whether that is natural for the architecture
> or not.

You can indeed request it:

a=0xffff & b;

but you cannot expect that unsigned short is such a request.
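Extending that masking idea to signed values (a sketch; the helper name
is mine): reduce mod 2^16, then fold the top half of the range down to
get the two's-complement reading, using only well-defined unsigned
arithmetic rather than implementation-defined narrowing:

```c
/* Fold an arbitrary long into the signed 16-bit two's-complement
   range [-32768, 32767]. */
long to_int16(long b) {
    unsigned long u = (unsigned long)b & 0xFFFFul;
    return (u >= 0x8000ul) ? (long)u - 0x10000L : (long)u;
}
```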

I hope you know the difference between = and ≥.


--
__Pascal Bourguignon__ http://www.informatimago.com/
From: bart.c on

"Richard Heathfield" <rjh(a)see.sig.invalid> wrote in message
news:2qmdnQNJioRrFZrRnZ2dnUVZ8nmdnZ2d(a)bt.com...
> bart.c wrote:
>>
> <snip>
>
>> It seems reasonable to be able to request 'a' to be exactly 16 bits on
>> any machine, whether that is natural for the architecture or not.
>
> I'm not entirely convinced that that /is/ a reasonable request. It is
> perfectly reasonable to ask for it to be *at least* 16 bits.

I think asking for so many bits is reasonable. Requesting a specific range
(such as 32 .. 95) less so, unless that is a feature of the type system
(thinking Pascal and Ada here).

> How, precisely, would you implement your exactly-16-bit type on a machine
> whose natural word size is 64?

(Presumably, where the addressing is also in 64-bits words.)

It isn't really difficult. My first machine had 36-bit words, and I
routinely used 18-bit data sizes.

And I'm implementing a language at the moment (for a byte-addressed machine,
but similar challenges) where any bit-width can be specified:

[]bit:3 x # dynamic array of 3-bit-wide unsigned ints

This is a bit harder than a regular byte-array (especially getting pointers
and slices to work properly), but nothing to write home about.
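For what it's worth, the same packing can be sketched in C (hypothetical
helpers, byte-addressed machine assumed; the caller must leave one spare
trailing byte in the buffer, since an element can straddle a byte
boundary and the code always loads a 16-bit window):

```c
#include <stdint.h>
#include <stddef.h>

/* read the i-th 3-bit unsigned element of a packed byte buffer */
unsigned get3(const uint8_t *buf, size_t i) {
    size_t bit = i * 3;
    unsigned w = buf[bit / 8] | ((unsigned)buf[bit / 8 + 1] << 8);
    return (w >> (bit % 8)) & 0x7u;
}

/* write the i-th 3-bit unsigned element */
void set3(uint8_t *buf, size_t i, unsigned v) {
    size_t bit = i * 3;
    unsigned w = buf[bit / 8] | ((unsigned)buf[bit / 8 + 1] << 8);
    w = (w & ~(0x7u << (bit % 8))) | ((v & 0x7u) << (bit % 8));
    buf[bit / 8]     = (uint8_t)w;
    buf[bit / 8 + 1] = (uint8_t)(w >> 8);
}
```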

> Such a machine does exist, and C implementations for it have to jump
> through all kinds of crazy hoops to give programmers the 8-bit char they
> expect. I would not like to be the one to tell the compiler team "well
> done lads, but now there's this bloke on Usenet who wants 16-bit short
> ints..."

They should have done it properly; then there would have been support for
8-, 16- and 32-bit widths with little extra effort.

If an application needs a huge array of numbers that fit into 16 bits
but not into 8, then, no matter how much memory is available, a 16-bit
array will only ever use a quarter of the memory required by a 64-bit
array (and the reduced memory traffic will likely compensate for the
shifting and masking overheads).
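The arithmetic is easy to check (assuming the optional exact-width types
exist, as they do on mainstream byte-addressed machines):

```c
#include <stddef.h>
#include <stdint.h>

/* storage needed for n samples at each width */
size_t bytes_as_int16(size_t n) { return n * sizeof(int16_t); }
size_t bytes_as_int64(size_t n) { return n * sizeof(int64_t); }
```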

--
Bartc