From: MitchAlsup on
On Jun 22, 3:33 am, Andrew Reilly <areilly...(a)bigpond.net.au> wrote:
> Unfortunately, that's not an answer I can make any use of.  Signal
> processing with fixed point arithmetic generally requires maximising SNR
> (left-aligning values to the greatest extent possible) in algorithms that
> often have unlikely extreme-range conditions.  Generally, it is
> preferable to clip than wrap-around, which is why all processors designed
> for the purpose have saturating signed addition modes (or in-register
> headroom and saturating store modes).  While that makes for neat per-
> processor optimised code, it's nice to have a vanilla-C fallback that
> does the right thing, as a reference.  That is now considerably uglier
> than it needs to be/used to be.

Why not use the SSE instructions that directly provide saturating
arithmetic?
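
For instance, with the SSE2 intrinsics -- an untested sketch; the
function name and framing are illustrative, and it assumes 16-bit
samples, a count that is a multiple of 8, and 16-byte-aligned
pointers:

#include <emmintrin.h>

void sat_add16(short *dst, const short *a, const short *b, int n)
{
    int i;
    /* _mm_adds_epi16 adds eight signed 16-bit lanes at a time
       with saturation, clipping results to [-32768, 32767]. */
    for (i = 0; i < n; i += 8) {
        __m128i va = _mm_load_si128((const __m128i *)(a + i));
        __m128i vb = _mm_load_si128((const __m128i *)(b + i));
        _mm_store_si128((__m128i *)(dst + i),
                        _mm_adds_epi16(va, vb));
    }
}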

Mitch
From: MitchAlsup on
On Jun 22, 6:47 am, Andrew Reilly <areilly...(a)bigpond.net.au> wrote:
> No: I want the 2's complement, fixed-point integers to wrap, just like
> the hardware does.

Note: C guarantees this only when using 'unsigned' arithmetic.
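
To get that wrap on signed operands, the usual trick is to do the
arithmetic in unsigned -- a sketch; the conversion back to int is
implementation-defined rather than undefined, and the result assumes
a two's-complement machine:

int wrap_add(int a, int b)
{
    /* Unsigned overflow wraps by definition; the cast back to
       int is implementation-defined, not undefined. */
    return (int)((unsigned)a + (unsigned)b);
}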

>  I don't mind an interrupt on overflow, so long as I
> can install an "ignore" handler (which isn't really in the scope of a
> language even vaguely like C).

Note: taking an interrupt to a null user-level handler costs about
1000 cycles; taking an interrupt to a kernel-level handler is at
least 200 cycles. Thus, you need to find a way to do this that is
not exception-model based and still takes no room in the
instruction set--good luck.
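
One non-exception route is to spell the clipping out in plain C
and hope the compiler maps it onto whatever the hardware offers --
an untested sketch:

#include <limits.h>

int sat_add(int a, int b)
{
    /* Range-check before adding, so no expression here can
       overflow: clip to INT_MAX / INT_MIN instead of wrapping
       or trapping. */
    if (a > 0 && b > INT_MAX - a) return INT_MAX;
    if (a < 0 && b < INT_MIN - a) return INT_MIN;
    return a + b;
}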

> Here's a comp.arch tangent: I believe that processor architects only
> design and optimise for the requirements of the bulk of the "important"
> code base.  If C compilers actively *prevent* the detection of signed
> integer overflow, then application code will find ways to avoid depending
> on being able to.  How long before new processors just don't bother
> including the functionality?

This is an accurate description of where we find ourselves today.

In the RISC generation (circa 1980) there were no benchmarks
dependent on overflow accuracy, utility, or specificity. We found
ourselves in a position where, if we lost a benchmark by 1% but did
every other thing customers wanted, we would sell no chips. Thus,
paring to the bone was de rigueur for a couple of decades.
This paring to the bone resulted in some idiosyncratic instructions
like EXT and EXTU on the 88K, which perform a right shift and a
field mask simultaneously. If no bits were set above bit 4 (a zero
width field), the instruction behaved like an ordinary right shift
(arithmetic for EXT, logical for EXTU). So one could access the
full functionality like:

field = carrier >> ((width<<5)|offset);

All of the overflow detection at the language level would prevent
access to the actual power of the underlying instruction itself.
This was just great for accessing bit fields, and for writing
machine-level simulators for architectures. {And there were
similarly easy ways of putting a field back.}
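
In portable C the extraction itself looks something like this -- a
sketch of the semantics only, not the 88K encoding; it assumes
0 <= width < 32:

unsigned extu(unsigned carrier, unsigned width, unsigned offset)
{
    /* Shift the field down, then mask to 'width' bits; width == 0
       degenerates to a plain right shift, as with EXT/EXTU. */
    unsigned shifted = carrier >> offset;
    return width ? shifted & ((1u << width) - 1u) : shifted;
}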

If you want architectures (not microarchitectures) that accurately
detect signed and unsigned overflows, underflows, and other
arithmetic anomalies (neg-max), figure out how to write benchmarks
that are totally dependent on performing these things accurately to
a precise specification. Then get it through a performance
standards committee. Then wait three decades for the benchmark to
penetrate the design process.

But I think the actual problem is not overflow and underflow per
se, but that the (access) computations are not bounds checked.
Although there certainly are situations where overflow/underflow
cause problems, overflow and underflow are a subset of the bounds
checking that (some argue) should be taking place on all/most
accesses. Some like overflow and underflow detection because it is
cheap--so cheap that it has passed out of being available in any
consistent form. Bounds checking was unavoidable in the
'descriptor' machines of the late 1960s.

Mitch
From: MitchAlsup on
On Jun 22, 7:34 am, n...(a)cam.ac.uk wrote:
> From the viewpoint of a high-level language, that is insane behaviour.
> And, for better or worse, ISO C attempts to be a high-level language.

This is one of those "for the worse" results.
C is and was supposed to be a portable assembler.

{My opinion}

Mitch
From: MitchAlsup on
On Jun 22, 9:35 am, Andy 'Krazy' Glew <ag-n...(a)patten-glew.net> wrote:
> I have vague hopes that the pendulum is swinging the other way, but I admit that I got tired of waiting and left.

Architecturally, what we have now is what we have. If/when that
does not match up with the needs of software, software has the
tools to add the functionality needed to build robust applications.

It is way too expensive to introduce a new and incompatible
architecture into the world of general-purpose computing. Thus, the
first sentence above will remain true for a considerable time into
the future. Sad but true.

Mitch
From: nmm1 on
In article <4C20CA32.9000809(a)patten-glew.net>,
Andy 'Krazy' Glew <ag-news(a)patten-glew.net> wrote:
>
> Plus, there were workarounds like checking if x+y < x which could
> substitute for INTO, with more useful semantics. But C has now
> made these officially not work, although de facto they usually
> still do.

Sigh. No, that is not right. They have NEVER worked in any high-
level language before Java - in particular, they worked only with
some implementations of K&R C and have never worked in standard C.
And the very reason that standard C leaves them undefined is
precisely that they didn't work even in the 1980s.
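
Concretely -- a sketch of the two cases:

/* Well defined: unsigned arithmetic wraps, so this reliably
   detects carry-out of the addition. */
int wrapped(unsigned x, unsigned y)
{
    return x + y < x;
}

/* Undefined if a + b overflows: a conforming optimiser may fold
   this to "return b < 0", silently removing the check. */
int overflowed(int a, int b)
{
    return a + b < a;
}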

Yes, OF COURSE, you see problems on current systems only when you
enable significant optimisation - but that's been true of most such
constraints for at least 50 years - yes, 50. The reason that Java
Grande flopped, big time, is precisely because Java forbids any
optimisations that might change the results. Do you REALLY want
a C with the performance of Java?

This is exactly like the 1970s, when many people used to say that
the compilers should preserve the details of the IBM System/360
arithmetic, because almost all systems were like that and people
relied on it. We have rotated through the concept of architecture
neutrality in languages back to where we were 35 years ago. 'Tain't
progress - it's regress :-(


Regards,
Nick Maclaren.