From: Steven D'Aprano on 2 May 2010 06:08

On Sun, 02 May 2010 04:04:11 -0400, Terry Reedy wrote:

> On 5/2/2010 1:05 AM, Alf P. Steinbach wrote:
>> On 02.05.2010 06:06, * Aahz:
>>> and sometimes they rebind the original target to the same object.
>>
>> At the Python level that seems to be an undetectable null-operation.
>
> If you try t=(1,2,3); t[1]+=3, it very much matters that a rebind
> occurs.
>
>> Granted one could see something going on in a machine code or byte
>> code debugger. But making that distinction (doing nothing versus
>> self-assignment) at the Python level seems, to me, to be meaningless.
>
> Please do not confuse things. Augmented *assignment* must be
> understood as assignment. Failure to do so leads (and has led)
> newbies into confusion, and puzzled posts on this list.

I think that if you think *only* about Python's standard namespaces,
self-assignment is more or less a no-op. I can't think of any way that
x = x could do anything other than use CPU cycles, *if* you limit
yourself to the standard global or function local namespaces.

But if you think about custom namespace types, and item assignment
(e.g. the example you gave with a tuple), the situation becomes
different. Here's a nice example, using Python 3.1:

>>> class B(A):  # class A defined elsewhere -- see below.
...     x = 1
...     x = x
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in B
  File "<stdin>", line 4, in __setitem__
TypeError: can't rebind constants

Thus proving that self-assignment is not necessarily a no-op. How did
I make that work?
It takes a custom dict and a bit of metaclass magic:

class ConstantNamespace(dict):
    def __setitem__(self, key, value):
        if key in self:
            raise TypeError("can't rebind constants")
        super(ConstantNamespace, self).__setitem__(key, value)

class WriteOnceClass(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        return ConstantNamespace()
    def __new__(cls, name, bases, classdict):
        return type.__new__(cls, name, bases, classdict)

class A(metaclass=WriteOnceClass):
    pass

--
Steven
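[The write-once machinery above can be exercised end to end; a minimal runnable sketch using only the names from Steven's post, with the failing rebind wrapped in try/except so the whole thing runs as a script:]

```python
class ConstantNamespace(dict):
    """Dict that refuses to overwrite an existing key."""
    def __setitem__(self, key, value):
        if key in self:
            raise TypeError("can't rebind constants")
        super().__setitem__(key, value)

class WriteOnceClass(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        # This mapping is the namespace the class body executes in,
        # so every assignment in the body goes through __setitem__.
        return ConstantNamespace()

class A(metaclass=WriteOnceClass):
    pass

# First binding of x succeeds; the self-assignment x = x is a second
# __setitem__('x', ...) call and raises inside the class body.
try:
    class B(A):
        x = 1
        x = x
except TypeError as e:
    print(e)  # can't rebind constants
```

The key point is that `__prepare__` lets the metaclass substitute the dict used while the class body runs, which is what makes even `x = x` observable at the Python level.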
From: Albert van der Horst on 2 May 2010 08:12

In article <mailman.2429.1272646255.23598.python-list(a)python.org>,
Jean-Michel Pichavant <jeanmichel(a)sequans.com> wrote:
>Jabapyth wrote:
>> At least a few times a day I wish python had the following shortcut
>> syntax:
>>
>> vbl.=func(args)
>>
>> this would be equivalent to
>>
>> vbl = vbl.func(args)
>>
>> example:
>>
>> foo = "Hello world"
>> foo.=split(" ")
>> print foo
>> # ['Hello', 'world']
>>
>> and I guess you could generalize this to
>>
>> vbl.=[some text]
>> #
>> vbl = vbl.[some text]
>>
>> e.g.
>>
>> temp.=children[0]
>> # temp = temp.children[0]
>>
>> thoughts?
>>
>Useless if you use meaningful names for your variables & attributes.
>
>It may happen that one object attribute refers to an object of the
>same type, but it is quite rare that both can share the same name
>anyway.
>
>Possible use cases:
>
>1/
>car = Car()
>car = car.wheel  # ???
>
>2/
>wheel = Car()  # ???
>wheel = wheel.wheel  # ???
>
>3/
>currentCar = Car()
>currentCar = currentCar.nextCar
>
>The syntax you propose will be applicable to very few assignments
>(use case 3). I'm not sure it's worth it.

Note how related this is to the requirement to have an __radd__
operator. It amounts to the argument that a op= b requires that a and
b have somewhat "similar" "type", or that the "type" of a doesn't
really change as a result of the operation. This is IMHO an argument
against the .= pseudo-operator.

>
>JM

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert(a)spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
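[Albert's point, that augmented assignment can quietly change the "type" of the name, and that this is where __radd__ enters the picture, is easy to demonstrate; a small sketch:]

```python
# Augmented assignment rebinds the name, so the result's type follows
# the operation, not the original binding.
x = 3           # int
x += 0.5        # int.__add__ returns NotImplemented; float.__radd__ handles it
print(type(x))  # <class 'float'>

# The proposed vbl.=method(...) would permit the same kind of silent
# type change:
foo = "Hello world"   # str
foo = foo.split(" ")  # what foo.=split(" ") would desugar to
print(foo)            # ['Hello', 'world']
print(type(foo))      # <class 'list'>
```

In both cases the name ends up bound to an object of a different type than it started with, which is the "dissimilar type" situation Albert argues the `.=` pseudo-operator would encourage.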
From: Alf P. Steinbach on 3 May 2010 00:37

* Terry Reedy:
> * Alf P. Steinbach:
>> * Aahz:
>>> and sometimes they rebind the original target to the same object.
>>
>> At the Python level that seems to be an undetectable null-operation.
>
> If you try t=(1,2,3); t[1]+=3, it very much matters that a rebind
> occurs.

Testing:

<test lang="py3">
>>> t = ([], [], [])
>>> t
([], [], [])
>>> t[0] += ["blah"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> t
(['blah'], [], [])
>>> _
</test>

Yep, it matters.

Is this change-but-raise-exception a bug?

I seem to have a knack for running into bugs. :-)

>> Granted one could see something going on in a machine code or byte
>> code debugger. But making that distinction (doing nothing versus
>> self-assignment) at the Python level seems, to me, to be meaningless.
>
> Please do not confuse things. Augmented *assignment* must be
> understood as assignment. Failure to do so leads (and has led)
> newbies into confusion, and puzzled posts on this list.

OK. But I think it would be less confusing, less breaking of
expectations, if, for the example above, += reduced to the
functionality of extend(), with no exception.

Cheers, & thanks,

- Alf
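[The behaviour Alf would prefer is already available by mutating the list in place instead of using +=; a minimal sketch contrasting the two forms:]

```python
t = ([], [], [])

# In-place mutation: no store back into the tuple, so no TypeError.
t[0].extend(["blah"])
print(t)  # (['blah'], [], [])

# Augmented assignment: list.__iadd__ extends the list in place, but
# the trailing t[0] = ... store into the tuple raises.
try:
    t[0] += ["spam"]
except TypeError as e:
    print(e)  # 'tuple' object does not support item assignment
print(t)  # (['blah', 'spam'], [], [])  -- mutated despite the exception
```

The second half reproduces Alf's change-but-raise-exception observation: the mutation has already happened by the time the failing assignment is attempted.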
From: Steven D'Aprano on 3 May 2010 03:29

On Mon, 03 May 2010 06:37:49 +0200, Alf P. Steinbach wrote:

> * Terry Reedy:
>> * Alf P. Steinbach:
>>> * Aahz:
>>>> and sometimes they rebind the original target to the same object.
>>>
>>> At the Python level that seems to be an undetectable null-operation.
>>
>> If you try t=(1,2,3); t[1]+=3, it very much matters that a rebind
>> occurs.
>
> Testing:
>
> <test lang="py3">
> >>> t = ([], [], [])
> >>> t
> ([], [], [])
> >>> t[0] += ["blah"]
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> >>> t
> (['blah'], [], [])
> >>> _
> </test>
>
> Yep, it matters.
>
> Is this change-but-raise-exception a bug?
>
> I seem to have a knack for running into bugs. :-)

No, I don't believe so -- I believe that it is behaving exactly as
advertised. But it is absolutely a gotcha. Consider:

>>> class K(object):
...     def __init__(self, value=0):
...         self.value = value
...     def __add__(self, other):
...         self.value = self.value + other
...         return self
...     def __str__(self):
...         return "%s" % self.value
...     __repr__ = __str__
...
>>> x = K(42)
>>> x + 5
47
>>> t = (None, x)
>>> t
(None, 47)
>>>
>>> t[1] + 3
50
>>> t
(None, 50)
>>> t[1] += 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> t
(None, 51)

Unintuitive, yes. Possibly a bad design, maybe. Surprising,
absolutely. But not a bug, as it's working exactly as promised. += is
conceptually two steps: perform an addition, and perform an assignment
afterward. That addition is sometimes performed in-place, but
regardless of whether it is or not, the assignment is always
attempted.

--
Steven
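[Steven's "conceptually two steps" description can be checked against the compiled bytecode with the standard dis module; a sketch. The exact opcode names for the addition step vary across CPython versions (INPLACE_ADD in older releases, BINARY_OP more recently), but the trailing store is always emitted:]

```python
import dis

# Augmented subscript assignment compiles to read / add / write.
def f(t):
    t[1] += 1

ops = [ins.opname for ins in dis.Bytecode(f)]
print(ops)  # exact listing varies by CPython version

# The final step is always a store back into the container; this is
# the assignment that fails when t is a tuple.
assert "STORE_SUBSCR" in ops
```

The compiler always emits the store, unconditionally, which is why the addition can succeed in place and the statement still raise.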
From: Peter Otten on 3 May 2010 05:34
Alf P. Steinbach wrote:

> <test lang="py3">
> >>> t = ([], [], [])
> >>> t
> ([], [], [])
> >>> t[0] += ["blah"]
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> >>> t
> (['blah'], [], [])
> >>> _
> </test>
>
> Yep, it matters.
>
> Is this change-but-raise-exception a bug?

No.

a[0] += b

translates to

a.__setitem__(0, a.__getitem__(0).__iadd__(b))

assuming a[0] has an __iadd__() method. It should be obvious that only
the last operation, the outer a.__setitem__(...), will fail here.

A possible fix might be a changed order of evaluation:

_internal_set = a.__setitem__
_internal_set(0, a.__getitem__(0).__iadd__(b))

I don't know if there are arguments against this other than increased
compiler complexity.

Or one could argue that a += b should have been implemented as

a = a + b

or

a = a.__add__(b)

which is currently used as the fallback when there is no __iadd__()
method and which gives a more intuitive behaviour at the cost of a
greater overhead. But it's a little late for that discussion, for that
language.

Peter
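[Peter's translation can be verified empirically with containers that log their special-method calls; a small sketch (the Logging* class names are invented for illustration):]

```python
calls = []

class LoggingList(list):
    """List that records when its in-place add runs."""
    def __iadd__(self, other):
        calls.append("__iadd__")
        return super().__iadd__(other)

class LoggingTuple(tuple):
    """Tuple that records item reads; item writes stay unsupported."""
    def __getitem__(self, index):
        calls.append("__getitem__")
        return super().__getitem__(index)

t = LoggingTuple((LoggingList(), LoggingList()))
try:
    t[0] += ["blah"]
except TypeError:
    calls.append("store failed")

print(calls)  # ['__getitem__', '__iadd__', 'store failed']
```

Both `__getitem__` and `__iadd__` ran, and the list was mutated, before the final store into the tuple failed, exactly matching the order in `a.__setitem__(0, a.__getitem__(0).__iadd__(b))`.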