From: Carl Banks on
On Nov 6, 9:28 am, "Alf P. Steinbach" <al...(a)start.no> wrote:
> * Rami Chowdhury:
>
> > On Fri, 06 Nov 2009 08:54:53 -0800, Alf P. Steinbach <al...(a)start.no>
> > wrote:
>
> >> But wow. That's pretty hare-brained: dynamic allocation for every
> >> stored value outside the cache range, needless extra indirection for
> >> every operation.
>
> > Perhaps I'm not understanding this thread at all but how is dynamic
> > allocation hare-brained, and what's the 'needless extra indirection'?
>
> Dynamic allocation isn't hare-brained, but doing it for every stored integer
> value outside a very small range is, because dynamic allocation is (relatively
> speaking, in the context of integer operations) very costly even with a
> (relatively speaking, in the context of general dynamic allocation) very
> efficient small-objects allocator - here talking order(s) of magnitude.


Python made a design trade-off: it chose a simpler implementation and
uniform object semantics, at a cost in speed. C# made a
different trade-off, choosing a more complex implementation and a
language with two starkly different object semantic behaviors, so as
to allow better performance.

You don't have to like the decision Python made, but I don't think
it's fair to call a deliberate design trade-off hare-brained.


Carl Banks
From: Rami Chowdhury on
On Fri, 06 Nov 2009 09:28:08 -0800, Alf P. Steinbach <alfps(a)start.no>
wrote:

> * Rami Chowdhury:
>> On Fri, 06 Nov 2009 08:54:53 -0800, Alf P. Steinbach <alfps(a)start.no>
>> wrote:
>>
>>> But wow. That's pretty hare-brained: dynamic allocation for every
>>> stored value outside the cache range, needless extra indirection for
>>> every operation.
>>>
>> Perhaps I'm not understanding this thread at all but how is dynamic
>> allocation hare-brained, and what's the 'needless extra indirection'?
>
> Dynamic allocation isn't hare-brained, but doing it for every stored
> integer value outside a very small range is, because dynamic allocation
> is (relatively speaking, in the context of integer operations) very
> costly even with a (relatively speaking, in the context of general
> dynamic allocation) very efficient small-objects allocator - here
> talking order(s) of magnitude.

Well, sure, it may seem that way. But how large a cache would you want to
preallocate? I can't see the average Python program needing to use the
integers from -10000 to 10000, for instance. In my (admittedly limited)
experience Python programs typically deal with rather more complex objects
than plain integers.
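For what it's worth, CPython's actual cache is far smaller than that: as an
implementation detail (not a language guarantee), current CPython preallocates
the integers from -5 through 256, and arithmetic results outside that range get
a fresh heap object each time. You can observe this from Python itself:

```python
# CPython preallocates small ints (currently -5..256) as an
# implementation detail; results outside that range are freshly
# allocated heap objects.

n = 1000
a = n + 1          # 1001: outside the cache, a new object
b = n + 1          # 1001 again: another new object
print(a == b)      # True  -- equal values
print(a is b)      # False on CPython -- two distinct heap objects

x = 100
print((x + 0) is (x - 0))   # True on CPython -- both results come
                            # from the small-int cache
```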

> int intValueOf( Object const& o )
> {
>     if( o.type_id != int_type_id ) { throw TypeError(); }
>     return static_cast<IntType*>( o.p )->value;  // Extra indirection
> }

If a large cache were created and maintained, would it not be equally
indirect to check for the presence of a value in the cache, and return
that value if it's present?

> creating that value then involves a dynamic allocation.

Creating which value, sorry -- the type object?


--
Rami Chowdhury
"Never attribute to malice that which can be attributed to stupidity" --
Hanlon's Razor
408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)
From: Alf P. Steinbach on
* Carl Banks:
> On Nov 6, 9:28 am, "Alf P. Steinbach" <al...(a)start.no> wrote:
>> * Rami Chowdhury:
>>
>>> On Fri, 06 Nov 2009 08:54:53 -0800, Alf P. Steinbach <al...(a)start.no>
>>> wrote:
>>>> But wow. That's pretty hare-brained: dynamic allocation for every
>>>> stored value outside the cache range, needless extra indirection for
>>>> every operation.
>>> Perhaps I'm not understanding this thread at all but how is dynamic
>>> allocation hare-brained, and what's the 'needless extra indirection'?
>> Dynamic allocation isn't hare-brained, but doing it for every stored integer
>> value outside a very small range is, because dynamic allocation is (relatively
>> speaking, in the context of integer operations) very costly even with a
>> (relatively speaking, in the context of general dynamic allocation) very
>> efficient small-objects allocator - here talking order(s) of magnitude.
>
>
> Python made a design trade-off, it chose a simpler implementation

Note that the object implementation's complexity doesn't have to affect any
other code, since it's trivial to provide abstract accessors (even macros);
i.e., this isn't part of a trade-off unless the original developer(s) had
limited resources -- and if so then it wasn't a trade-off at the language
design level but a trade-off of getting things done then and there.


> and uniform object semantic behavior,

Also note that the script-language-level semantics of objects are /unaffected/
by the implementation, except for speed; i.e., this isn't part of a trade-off
either. ;-)


> at a cost of speed.

In summary, as I see it the trade-off, if any, couldn't be what you describe,
but there could have been a different kind of getting-it-done trade-off.

It is usually better to have Something Usable than to wait forever (or too
long) for the Perfect... ;-)

Or, it could be that things just evolved, constrained by frozen earlier
decisions. That's the main reason for the many quirks in C++. Not unlikely that
it's also that way for Python.


> C# made a
> different trade-off, choosing a more complex implementation, a
> language with two starkly different object semantic behaviors, so as
> to allow better performance.

Don't know about the implementation of C#, but whatever it is, if it's bad in
some respect then that has nothing to do with Python.


> You don't have to like the decision Python made, but I don't think
> it's fair to call a deliberate design trade-off hare-brained.

OK. :-)


Cheers,

- Alf
From: Alf P. Steinbach on
* Rami Chowdhury:
> On Fri, 06 Nov 2009 09:28:08 -0800, Alf P. Steinbach <alfps(a)start.no>
> wrote:
>
>> * Rami Chowdhury:
>>> On Fri, 06 Nov 2009 08:54:53 -0800, Alf P. Steinbach <alfps(a)start.no>
>>> wrote:
>>>
>>>> But wow. That's pretty hare-brained: dynamic allocation for every
>>>> stored value outside the cache range, needless extra indirection for
>>>> every operation.
>>>>
>>> Perhaps I'm not understanding this thread at all but how is dynamic
>>> allocation hare-brained, and what's the 'needless extra indirection'?
>>
>> Dynamic allocation isn't hare-brained, but doing it for every stored
>> integer value outside a very small range is, because dynamic
>> allocation is (relatively speaking, in the context of integer
>> operations) very costly even with a (relatively speaking, in the
>> context of general dynamic allocation) very efficient small-objects
>> allocator - here talking order(s) of magnitude.
>
> Well, sure, it may seem that way. But how large a cache would you want
> to preallocate? I can't see the average Python program needing to use
> the integers from -10000 to 10000, for instance. In my (admittedly
> limited) experience Python programs typically deal with rather more
> complex objects than plain integers.

Uhm, you've misunderstood or failed to understand something basic, but what? The
CPython implementation uses a cache to alleviate performance problems. A
tagged scheme (the usual approach elsewhere, e.g. Windows' Variant) doesn't need
any cache and can't benefit from one, since there an integer's value is
directly available in any variable that logically holds an int. In short, a
cache for integer values is meaningless for the tagged scheme.
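As a toy sketch of that contrast (hypothetical names; Python objects merely
stand in for raw memory slots, this is not CPython's actual layout): in the
boxed scheme the variable references a heap object that holds the value, while
in the tagged scheme the type tag and the payload sit side by side in the
variable slot itself, so reading an int needs no second pointer hop and no
cache of preallocated objects.

```python
# Illustration only: classes simulate memory layouts, not real CPython.

class BoxedInt:
    """Boxed (CPython-style) scheme: the value lives in a separate
    heap object that the variable points to."""
    def __init__(self, value):
        self.value = value              # value behind a reference

class TaggedSlot:
    """Tagged (Variant-style) scheme: tag and payload are stored
    inline in the 'variable' slot itself."""
    INT_TAG = 1
    def __init__(self, value):
        self.tag = TaggedSlot.INT_TAG   # type tag, inline
        self.payload = value            # value, inline

def boxed_int_value_of(ref):
    # Follow the reference, then read the field: the "extra
    # indirection" of the quoted C++ snippet.
    return ref.value

def int_value_of(slot):
    # Check the tag, then read the payload directly: no second hop,
    # and nothing was heap-allocated per int, so no cache is needed.
    if slot.tag != TaggedSlot.INT_TAG:
        raise TypeError("not an int")
    return slot.payload

print(int_value_of(TaggedSlot(42)))     # 42
print(boxed_int_value_of(BoxedInt(7)))  # 7
```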


>> int intValueOf( Object const& o )
>> {
>>     if( o.type_id != int_type_id ) { throw TypeError(); }
>>     return static_cast<IntType*>( o.p )->value;  // Extra indirection
>> }
>
> If a large cache were created and maintained, would it not be equally
> indirect to check for the presence of a value in the cache, and return
> that value if it's present?

Again, that's meaningless. See above.


>> creating that value then involves a dynamic allocation.
>
> Creating which value, sorry -- the type object?

Well, it's an out-of-context quote, but it was about creating the value object
that a variable points to in the current CPython implementation.

I'm sure that more information about tagged variant schemes is available on the
net.

E.g. Wikipedia.


Cheers & hth.,

- Alf
From: Mel on
Alf P. Steinbach wrote:
> Note that the object implementation's complexity doesn't have to affect
> any other code, since it's trivial to provide abstract accessors (even
> macros); i.e., this isn't part of a trade-off unless the original
> developer(s) had limited resources -- and if so then it wasn't a
> trade-off at the language design level but a trade-off of getting things
> done then and there.

But remember what got us in here: your belief (which followed from your
assumptions) that computing `is` required testing the object types. You
might optimize out the "extra indirection" to get an object's value, but
you'd need the "extra indirection" anyway to find out what type it was
before you could use it.
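Mel's first point is observable from the language level: `is` compares object
identity alone (in CPython, effectively the object addresses), never consulting
types or values. A small sketch:

```python
x = [1, 2, 3]
y = x            # a second name for the same object
z = list(x)      # an equal but distinct object

print(x is y)    # True  -- same identity
print(x is z)    # False -- different objects, despite equal values
print(x == z)    # True  -- equal values

# In CPython, `is` amounts to exactly this identity comparison:
print(id(x) == id(y))   # True
```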

Mel.