From: Pascal J. Bourguignon on
Tamas K Papp <tkpapp(a)gmail.com> writes:

> On Wed, 16 Dec 2009 18:04:14 +0000, Kaz Kylheku wrote:
>
>> Without cycles, you can't express graph structures, so an entire branch
>> of computer science goes out the window. Many useful algorithms
>
> I disagree. Graphs can be represented, for example, in a matrix
> (a[i,j] is nonzero iff i->j, etc). You can use this to reconstruct a
> lookup table, basically emulating pointers.
>
> Which, of course, would be a pretty pointless thing to do -- but
> the OP's questions have been pretty pointless so far, so maybe this
> would be a harmonious outcome :-)

Without mutation, arrays become an O(N) affair; there's no sharing of
slots among arrays... So building such a matrix will be rather costly.
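
The matrix representation Tamas describes is easy enough to sketch,
with node indices standing in for pointers (the function names below
are made up for illustration):

```lisp
;; Directed graph as an adjacency matrix: a[i,j] is 1 iff there is an
;; edge i->j.  Immutable sharing aside, this is the idea as stated.
(defun make-graph (n edges)
  "Return an NxN bit matrix for the dotted-pair edge list EDGES."
  (let ((a (make-array (list n n) :element-type 'bit :initial-element 0)))
    (loop for (i . j) in edges
          do (setf (aref a i j) 1))
    a))

(defun neighbors (a i)
  "List the successors of node I -- a lookup that emulates pointers."
  (loop for j below (array-dimension a 1)
        when (plusp (aref a i j))
          collect j))
```

So (neighbors (make-graph 3 '((0 . 1) (0 . 2) (2 . 0))) 0) gives
(1 2) -- a pointerless, cycle-capable graph, at the cost of the
copying issue noted above.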

--
__Pascal Bourguignon__
http://www.informatimago.com
From: Pascal J. Bourguignon on
Pillsy <pillsbury(a)gmail.com> writes:

> On Dec 16, 1:04 pm, Kaz Kylheku <kkylh...(a)gmail.com> wrote:
>
>> On 2009-12-16, Bennu Strelitzia <bennu.strelit...(a)gmail.com> wrote:
>
>> > i.e. reference counting could do the trick for an efficient minimal
>> > implementation if there were no referential cycles, I hate the real
>
>> Reference counting is not efficient. Except in some highly contrived
>> benchmarks that don't reflect real-world use, it is slower than garbage
>> collection. Even simple mark-and-sweep can beat reference counting in
>> realistic situations.
>
> The following is a very naive observation.
>
> The one thing it seems like reference counting could allow you to do
> is immediate recycling of space. For example, if I want to sum two
> vectors, using
>
> (map 'vector #'+ a-big-vector another-big-vector)
>
> I'll get a third big vector that has to be consed. If I know that A-
> BIG-VECTOR isn't being used by anything else, I can recycle it using
> MAP-INTO instead, but in general I have no idea. However, if there
> were a reference count on those big vectors, the implementation could
> make the decision for me at run-time. I'm primarily thinking of using
> using reference counting as a way of augmenting garbage collection,
> not as a replacement for it; indeed, you could limit it to only work
> with some data types or objects, like big simple arrays of floats or
> characters.
>
> There's no way this is a new idea, and it's entirely possible I'm
> overlooking some trade-offs that keep it from being worthwhile.

Basically, you're overlooking the performance of generational GC.
Allocating even a large number of temporary structures is basically
free, both at allocation and at collection time, since only the
objects that are kept cost the garbage collector any work.
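
Pillsy's in-place recycling can still be sketched directly, for what
it's worth; the vector names are taken from his example and the sizes
are arbitrary:

```lisp
;; MAP allocates a fresh result vector; MAP-INTO overwrites an
;; existing one.  The in-place version is only safe when the caller
;; knows nothing else references the destination -- which is exactly
;; the fact a reference count would establish at run time.
(let* ((a-big-vector       (make-array 1000 :initial-element 1.0))
       (another-big-vector (make-array 1000 :initial-element 2.0))
       ;; Fresh allocation: a third big vector is consed.
       (fresh (map 'vector #'+ a-big-vector another-big-vector)))
  ;; Recycling: A-BIG-VECTOR is reused to hold the sums.
  (map-into a-big-vector #'+ a-big-vector another-big-vector)
  (values (aref fresh 0) (aref a-big-vector 0)))
```

Both returned values are 3.0; the point above is that with a
generational collector the consing in the MAP case is already cheap
enough that the run-time bookkeeping rarely pays for itself.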

--
__Pascal Bourguignon__
http://www.informatimago.com
From: George Neuner on
On Wed, 16 Dec 2009 11:29:01 -0800 (PST), Pillsy <pillsbury(a)gmail.com>
wrote:

>On Dec 16, 1:04 pm, Kaz Kylheku <kkylh...(a)gmail.com> wrote:
>
>> On 2009-12-16, Bennu Strelitzia <bennu.strelit...(a)gmail.com> wrote:
>
>> > i.e. reference counting could do the trick for an efficient minimal
>> > implementation if there were no referential cycles, I hate the real
>
>> Reference counting is not efficient. Except in some highly contrived
>> benchmarks that don't reflect real-world use, it is slower than garbage
>> collection. Even simple mark-and-sweep can beat reference counting in
>> realistic situations.
>
>The following is a very naive observation.
>
>The one thing it seems like reference counting could allow you to do
>is immediate recycling of space. For example, if I want to sum two
>vectors, using
>
>(map 'vector #'+ a-big-vector another-big-vector)
>
>I'll get a third big vector that has to be consed. If I know that A-
>BIG-VECTOR isn't being used by anything else, I can recycle it using
>MAP-INTO instead, but in general I have no idea. However, if there
>were a reference count on those big vectors, the implementation could
>make the decision for me at run-time. I'm primarily thinking of using
>using reference counting as a way of augmenting garbage collection,
>not as a replacement for it; indeed, you could limit it to only work
>with some data types or objects, like big simple arrays of floats or
>characters.
>
>There's no way this is a new idea, and it's entirely possible I'm
>overlooking some trade-offs that keep it from being worthwhile.

Refcounters (even lazy ones) eventually have to scan all the garbage
blocks. Scavengers (semi-space, generational, etc.) scan only live
data. Because they move/compact live data at each collection,
scavengers can use sequential "bump" allocation which is just a
pointer or index increment and limit check rather than block
splitting/coalescing and list manipulation. The combination of
near-free allocation and ignoring garbage blocks means that temporary
allocations which don't trigger a collection are essentially free.
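
A toy model of that bump allocation, assuming a nursery represented as
a preallocated simple-vector (NURSERY and BUMP-ALLOCATE are
illustrative names, not any implementation's actual internals):

```lisp
;; Allocation from the young-object region is just an index increment
;; plus a limit check -- no free lists, no splitting or coalescing.
(defstruct nursery
  (store (make-array 4096) :type simple-vector)
  (free 0 :type fixnum))

(defun bump-allocate (nursery n)
  "Reserve N slots; return their start index, or NIL when the nursery
is full and a collection (scavenge of live data) is needed."
  (let ((start (nursery-free nursery)))
    (if (<= (+ start n) (length (nursery-store nursery)))
        (progn (incf (nursery-free nursery) n)
               start)
        nil)))
```

After a scavenge copies the live objects out, resetting FREE to 0
reclaims every dead temporary at once, without ever touching it.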

Refcounters have a slight advantage overall in data locality, and also
an advantage in low-memory conditions. However, scavengers typically
size young-object regions to fit completely within the CPU cache,
making for very fast incremental collections.

Generational systems are designed around the assumption that temporary
objects die quickly. The longer data survives, the less frequently it
will be subjected to collection.

George
From: Barry Margolin on
In article <8763864kad.fsf(a)informatimago.com>,
pjb(a)informatimago.com (Pascal J. Bourguignon) wrote:

> Tamas K Papp <tkpapp(a)gmail.com> writes:
>
> > On Wed, 16 Dec 2009 18:04:14 +0000, Kaz Kylheku wrote:
> >
> >> Without cycles, you can't express graph structures, so an entire branch
> >> of computer science goes out the window. Many useful algorithms
> >
> > I disagree. Graphs can be represented, for example, in a matrix
> > (a[i,j] is nonzero iff i->j, etc). You can use this to reconstruct a
> > lookup table, basically emulating pointers.
> >
> > Which, of course, would be a pretty pointless thing to do -- but
> > the OP's questions have been pretty pointless so far, so maybe this
> > would be a harmonious outcome :-)
>
> Without mutation, arrays become an O(N) affair; there's no sharing of
> slots among arrays... So building such a matrix will be rather costly.

From http://www.haskell.org/tutorial/arrays.html:

Obviously, a naive implementation of such an array semantics would be
intolerably inefficient, either requiring a new copy of an array for
each incremental redefinition, or taking linear time for array lookup;
thus, serious attempts at using this approach employ sophisticated
static analysis and clever run-time devices to avoid excessive copying.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Rahul Jain on
pjb(a)informatimago.com (Pascal J. Bourguignon) writes:

> Without mutation, arrays become an O(N) affair; there's no sharing of
> slots among arrays... So building such a matrix will be rather costly.

O(log N) is pretty trivial to achieve... just use a tree.
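
For instance, a purely functional "array" stored as a binary tree of
conses gives O(log N) lookup and O(log N) update, with every untouched
subtree shared between versions (these function names are made up for
illustration):

```lisp
;; Functional array of size N as a balanced tree of nested conses.
;; Leaves are the elements themselves.
(defun tree-ref (tree n i)
  "Element I of a size-N functional array."
  (if (= n 1)
      tree
      (let ((half (ceiling n 2)))
        (if (< i half)
            (tree-ref (car tree) half i)
            (tree-ref (cdr tree) (- n half) (- i half))))))

(defun tree-update (tree n i value)
  "Return a new version with element I replaced; the old version and
all untouched subtrees survive unchanged."
  (if (= n 1)
      value
      (let ((half (ceiling n 2)))
        (if (< i half)
            (cons (tree-update (car tree) half i value) (cdr tree))
            (cons (car tree)
                  (tree-update (cdr tree) (- n half) (- i half) value))))))
```

With the 4-element tree ((A . B) . (C . D)), (tree-ref tree 4 2)
returns C, and (tree-update tree 4 2 'X) returns a new tree that still
shares the (A . B) subtree with the original -- so the adjacency
"matrix" above costs O(log N) per edge, not O(N).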

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist