From: Thomas Womack on
In article <a6d4ec20-9052-4003-a3c7-486885d791a4(a)q12g2000yqj.googlegroups.com>,
MitchAlsup <MitchAlsup(a)aol.com> wrote:
># define sat_add(a,b) (((tmp = (a)+(b)), (tmp > SAT_MAX ? SAT_MAX :
>(tmp < SAT_MIN ? SAT_MIN : tmp))))

And what type is 'tmp'?

Tom
From: nmm1 on
In article <2010Jun22.201655(a)mips.complang.tuwien.ac.at>,
Anton Ertl <anton(a)mips.complang.tuwien.ac.at> wrote:
>
>>This is exactly like the 1970s, when many people used to say that
>>the compilers should preserve the details of the IBM System/360
>>arithmetic, because almost all systems were like that and people
>>relied on it.
>
>Byte addressed. Yup.
>8-bit bytes. Yup.
>2s-complement signed integers. Yup.
>
>Yes, they were right, current general-purpose systems are like that.

8 unused bits in every pointer that can be used for flags. Yup.
Architected, application-level interrupt handling. Yup.
Integer overflow generates an interrupt. Yup.
Hex. base floating-point and truncation. Yup.
EBCDIC, built into the hardware architecture. Yup.
Genuinely asynchronous, application-controllable I/O. Yup.

Or perhaps Nope. Not all systems were like that, then, and not all
systems are what you are assuming, now.


Regards,
Nick Maclaren.
From: nmm1 on
In article <88csh7FauU2(a)mid.individual.net>,
Andrew Reilly <areilly---(a)bigpond.net.au> wrote:
>
>> A language
>> specification isn't much use if it specifies only the most trivial
>> aspects and leaves the rest undefined.
>
>A language specification isn't much use if it leaves so much undefined
>that it doesn't provide semantics to express reasonable algorithms within
>its nominal remit (i.e., as a low-level systems language, good for
>building libraries and functionality from the metal up.)

I fully agree. One of the things that I regret (with hindsight) is
voting for C90 - I am pretty sure that if I had been proactive against
it, we could have got the standard voted down. Now, what would have
happened then, I can't guess ....

>I admit that the overflow() idea was daft, but defining operations for
>signed arithmetic that actually captures what the hardware does doesn't
>seem unreasonable, or too inimical to optimisation. You certainly
>haven't made a convincing argument along those lines.

The former, no, but you are wrong with the latter. The point is that
you can't do any significant code rearrangement if you want to either
'capture what the hardware does' or produce deterministic results.
That shows up much more clearly with IEEE 754, but the same applies
to integers once you do anything non-trivial or have (say) a two's
complement model with an overflow flag (i.e. like IEEE 754).
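Nick's point about rearrangement can be made concrete with 32-bit ints: for a = INT32_MAX, b = 1, c = -1, evaluating (a + b) + c overflows on the first addition while a + (b + c) never leaves range, so a compiler obliged to preserve a trap or an overflow flag cannot reassociate. A minimal overflow test (my sketch, not anything proposed in the thread), done in unsigned arithmetic where wrap-around is defined:

```c
#include <assert.h>
#include <stdint.h>

/* Detect whether the signed 32-bit addition x + y would overflow.
 * The sum is computed in uint32_t (wrap-around is defined there);
 * signed overflow occurs iff both operands have the same sign and
 * the wrapped result has the opposite sign.  The conversion back to
 * int32_t assumes the usual two's-complement wrapping behaviour. */
static int add_overflows_i32(int32_t x, int32_t y)
{
    uint32_t r = (uint32_t)x + (uint32_t)y;
    return ((x >= 0) == (y >= 0)) && ((((int32_t)r >= 0)) != (x >= 0));
}
```

With this, add_overflows_i32(INT32_MAX, 1) is true but add_overflows_i32(INT32_MAX, -1) is false, which is exactly the order-sensitivity that blocks reassociation.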

A much better approach is to have a library that provides such
support because, even in programs that need reliable overflow
detection, you do not want the overheads on 90% of the integer
operations.
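One way such a library entry point might look (a hypothetical sketch, not Nick's actual proposal): the checked operation is an explicit call, so the 90% of additions that cannot overflow pay nothing.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Checked signed addition: returns false instead of overflowing.
 * The range test is done before the add, so the (undefined in C)
 * signed overflow never actually occurs. */
static bool checked_add(int a, int b, int *sum)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;               /* would overflow */
    *sum = a + b;
    return true;
}
```

Modern GCC and Clang provide __builtin_add_overflow for the same job, and C23 standardised ckd_add in <stdckdint.h>, which is roughly this interface.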

>Saying "signed integer overflow is just wrong" is unhelpful: it happens,
>it's what the hardware does, and some reasonable algorithms want to know
>about it when it happens.

Agreed.


Regards,
Nick Maclaren.
From: Terje Mathisen on
Thomas Womack wrote:
> In article<a6d4ec20-9052-4003-a3c7-486885d791a4(a)q12g2000yqj.googlegroups.com>,
> MitchAlsup<MitchAlsup(a)aol.com> wrote:
>> # define sat_add(a,b) (((tmp = (a)+(b)), (tmp > SAT_MAX ? SAT_MAX :
>> (tmp < SAT_MIN ? SAT_MIN : tmp))))
>
> And what type is 'tmp'?

Any signed type with at least one more bit of precision than a or b?
:-)

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
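Terje's answer can be written out as a function rather than a macro, making the type of 'tmp' explicit (a sketch under the assumption that a and b are 32-bit and SAT_MAX/SAT_MIN are the int32_t limits):

```c
#include <assert.h>
#include <stdint.h>

/* Saturating 32-bit add: 'tmp' is a 64-bit signed type, i.e. Terje's
 * "at least one more bit of precision than a or b", so the raw sum
 * itself can never overflow. */
static int32_t sat_add32(int32_t a, int32_t b)
{
    int64_t tmp = (int64_t)a + (int64_t)b;
    if (tmp > INT32_MAX) return INT32_MAX;
    if (tmp < INT32_MIN) return INT32_MIN;
    return (int32_t)tmp;
}
```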
From: Andrew Reilly on
On Wed, 23 Jun 2010 10:28:56 +0200, Terje Mathisen wrote:

> Thomas Womack wrote:
>> In article<a6d4ec20-9052-4003-a3c7-486885d791a4(a)q12g2000yqj.googlegroups.com>,
>> MitchAlsup<MitchAlsup(a)aol.com> wrote:
>>> # define sat_add(a,b) (((tmp = (a)+(b)), (tmp > SAT_MAX ? SAT_MAX :
>>> (tmp < SAT_MIN ? SAT_MIN : tmp))))
>>
>> And what type is 'tmp'?
>
> Any signed type with at least one more bit of precision than a or b? :-)

I think that I mentioned using extended precision in my first post. Not
always possible or useful. Not all C compilers for 32-bit processors can
do useful "long long", although most can, and the number that can't is
decreasing. This doesn't help if a and b are already the largest type
available. Not all of the processors that I (help to) support are
32-bit, either: some 16, some 24, some 64, some word-addressed.

Why is it so impossible to imagine that signed integer arithmetic might
overflow? Especially in languages that don't have lisp-style graceful
bignum-fallback, which is just about everything. That's weird 1984-style
nu-think.
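When a and b already are the largest type available, the wider-tmp trick is out, but saturation is still possible by doing the add in the corresponding unsigned type, where wrap-around is defined, and recovering the overflow from the signs. A sketch for 64-bit (assuming the usual two's-complement conversion back to signed):

```c
#include <assert.h>
#include <stdint.h>

/* Saturating add with no wider type available.  Overflow occurred
 * iff the wrapped result r differs in sign from both a and b, which
 * is what the XOR/AND test checks; on overflow the result saturates
 * in the direction of the operands' shared sign. */
static int64_t sat_add64(int64_t a, int64_t b)
{
    uint64_t ur = (uint64_t)a + (uint64_t)b;
    int64_t r = (int64_t)ur;        /* two's-complement wrap assumed */
    if (((a ^ r) & (b ^ r)) < 0)
        return a < 0 ? INT64_MIN : INT64_MAX;
    return r;
}
```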

Cheers,

--
Andrew