From: Tim Bradshaw on
On 2009-12-05 21:34:52 +0000, Spiros Bousbouras <spibou(a)gmail.com> said:

> And that's the problem. I'm asking for fast fixnums plus arrays as
> large as the system can support plus standard conformance. I don't
> think that's a lot to ask for but I can't get it.

Of course you can. Just use an implementation whose fixnums agree with
what the platform you are using supports.
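
Checking is a one-liner at the REPL:

;; magnitude bits of a fixnum, not counting the sign bit
(integer-length most-positive-fixnum)
;; => 60 on 64-bit ClozureCL, for instance (cf. the table downthread)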

From: Tim Bradshaw on
On 2009-12-05 21:27:22 +0000, Spiros Bousbouras <spibou(a)gmail.com> said:

> No, it's not necessarily a failure of the implementation, because an
> implementation is also responsible for ensuring that the fixnums it
> provides are as fast as possible for the architecture on which the
> implementation is running. So for example if an implementation is
> running on a 32-bit computer then fixnums likely have to ([1]) be at
> most 32 bits. So there are at most 31 bits to represent positive
> numbers. This means that you cannot create an array with more than
> 2**31 elements even if the elements are just bits in which case the
> array would comfortably fit in memory.

And that's just fine, because an array that large will exhaust the
address space of the system, except possibly for bit-arrays (2**31 bits
is only 256MB).

If you really want to create bit-arrays that large, then you either have
a really smart compiler or you don't care much about performance - it
is generally much, much better to create arrays of larger objects and
then roll your own bitwise access.
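
Something like this (an untested sketch, with made-up names):

;; Store the bits in a vector of 8-bit bytes and split each bit index
;; into a byte index and a bit offset yourself.
(defun make-bits (nbits)
  (make-array (ceiling nbits 8)
              :element-type '(unsigned-byte 8)
              :initial-element 0))

(defun bit-ref (bits i)
  ;; byte index is i/8, bit offset is i mod 8
  (ldb (byte 1 (logand i 7))
       (aref bits (ash i -3))))

With suitable type declarations a decent compiler can open-code the
AREF and the LDB, which is where the performance comes from.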

From: Tim Bradshaw on
On 2009-12-06 16:41:15 +0000, Barry Margolin <barmar(a)alum.mit.edu> said:

> Bit arrays already have inherent inefficiency. Few CPUs support bit
> array indexing natively, so it has to be done by splitting the index
> into a byte or word index and offset, then shift/mask to get that bit
> (there might be a machine instruction for this). Requiring the first
> step to use bignums would make things even worse.

As I hinted in an earlier post, this is quite right. I had an
application where the "natural" representation was large bitvectors
(large as in would-just-fit-in-4G, so not huge by today's standards,
but large enough). It became very rapidly apparent that if I wanted
performance not to suck, I needed to use vectors of bytes or larger
objects and do the bit operations myself. (I suspect 32-bit objects
would be the safe compromise - comfortably smaller than a fixnum on a
64-bit implementation, but large enough to reduce the number of
fetches).
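
For instance (untested, names invented for the example):

;; The 32-bit-word variant: the word index is i/32, which stays
;; comfortably fixnum-sized on a 64-bit implementation.
(defun wbit (bits i)
  (declare (type (simple-array (unsigned-byte 32) (*)) bits)
           (type fixnum i))
  (ldb (byte 1 (logand i 31)) (aref bits (ash i -5))))

(defun (setf wbit) (value bits i)
  (declare (type (simple-array (unsigned-byte 32) (*)) bits)
           (type fixnum i) (type bit value))
  (setf (ldb (byte 1 (logand i 31)) (aref bits (ash i -5))) value))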

From: Thomas A. Russ on
Kaz Kylheku <kkylheku(a)gmail.com> writes:

> On 2009-12-04, Thomas A. Russ <tar(a)sevak.isi.edu> wrote:
> > BTW, just for reference, here are the fixnum sizes for a number of lisps
> > that I happen to have easy access to.
> >
> > Mac OS X:
> > bits implementation
> > ---- --------------
> > 29 LispWorks Personal 5.1 (32-bit)
> > 60 ClozureCL x8664 (64-bit)
> > 24 CLISP 2.47
>
> This can't be right. CLISP's most-positive-fixnum is 16.7 million, which
> means that the magnitude has 24 bits.

That's what the table says, doesn't it?

> But remember that fixnum is signed; CLISP also has a
> most-negative-fixnum that is -16.7 million.
>
> Might you be neglecting to count the sign bit?

I used

(integer-length most-positive-fixnum)

to generate the sizes.
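
That doesn't count the sign bit, as you point out. So for CLISP:

(integer-length most-positive-fixnum)        ; => 24
(1+ (integer-length most-positive-fixnum))   ; => 25, with the sign bit
most-negative-fixnum                         ; => -16777216, i.e. -(expt 2 24)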



--
Thomas A. Russ, USC/Information Sciences Institute
From: Thomas A. Russ on
Kaz Kylheku <kkylheku(a)gmail.com> writes:

> On 2009-12-04, Thomas A. Russ <tar(a)sevak.isi.edu> wrote:
> >
> > Well, my guess is that it's related to how one often uses a loop
> > to access the elements of an array. Typically, you would iterate starting
> > at 0 and ending when the index is >= the dimension.
>
> Thus, the dimension can be max-positive-bignum!

Doh!
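
Right: the index itself never reaches the dimension, so only values up
to (1- dimension) need to be fixnums. Schematically (the names here are
made up):

;; i takes the values 0 .. dimension-1; the dimension itself appears
;; only on the right-hand side of the end test.
(do ((i 0 (1+ i)))
    ((>= i dimension))
  (process (aref v i)))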

--
Thomas A. Russ, USC/Information Sciences Institute