From: Peter Otten on
wheres pythonmonks wrote:

> Funny... just spent some time with timeit:
> I wonder why I am passing in strings if the callback overhead is so
> light...
> More funny: it looks like inline (not passed in) lambdas can cause
> python to be more efficient!
>>>> import random
>>>> d = [ (['A','B'][random.randint(0,1)],x,random.gauss(0,1)) for x in xrange(0,1000000) ]
>>>> def A1(): j = [ lambda t: (t[2]*t[1],t[2]**2+5) for t in d ]
>>>> def A2(): j = [ (t[2]*t[1],t[2]**2+5) for t in d ]
>>>> def A3(l): j = [ l(t) for t in d]
>>>> import timeit
>>>> timeit.timeit('A1()','from __main__ import A1,d',number=10);
> 2.2185971572472454
>>>> timeit.timeit('A2()','from __main__ import A2,d',number=10);
> 7.2615454749912942
>>>> timeit.timeit('A3(lambda t: (t[2]*t[1],t[2]**2+5))','from __main__ import A3,d',number=10);
> 9.4334241349350947
> So: the in-line lambda is a possible speed improvement, the in-line tuple
> is slow, and the passed-in callback is slowest of all?
> Is this possibly right?
> Hopefully someone can spot the bug?

A1() makes a lot of lambdas but doesn't invoke them. Once that is fixed, A1()
is indeed slower than A2():

>>> from timeit import timeit
>>> import random
>>> d = [(random.choice("AB"), x, random.gauss(0, 1)) for x in xrange(1000000)]
>>> def A1(d=d): return [(lambda t: (t[2]*t[1],t[2]**2+5))(t) for t in d]
>>> def A2(d=d): return [(t[2]*t[1], t[2]**2+5) for t in d]
>>> assert A1() == A2()
>>> timeit("A1()", "from __main__ import A1", number=10)
>>> timeit("A2()", "from __main__ import A2", number=10)
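For reference, here is a self-contained version of Peter's corrected comparison. It is a sketch, not the original measurement: the list is shrunk to 10000 elements and seeded so it runs quickly and repeatably, and it uses `range` (Python 3) rather than `xrange`:

```python
import random
from timeit import timeit

random.seed(0)  # repeatable data
d = [(random.choice("AB"), x, random.gauss(0, 1)) for x in range(10000)]

def A1():
    # creates and calls a fresh lambda for every element: per-call overhead
    return [(lambda t: (t[2] * t[1], t[2] ** 2 + 5))(t) for t in d]

def A2():
    # evaluates the tuple expression inline: no function calls at all
    return [(t[2] * t[1], t[2] ** 2 + 5) for t in d]

assert A1() == A2()  # same results, so the timing comparison is fair
print("A1 (lambda): %.3fs" % timeit(A1, number=10))
print("A2 (inline): %.3fs" % timeit(A2, number=10))
```

With the lambdas actually invoked, A1 pays one function call per element and loses to A2, matching Peter's numbers above.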

From: Dave Angel on
Duncan Booth wrote:
> <snip>
> Consider languages where you can easily write a swap function (or any other
> function that updates its arguments). e.g. consider C or C#.
> For C your function must take pointers to the variables, so when you call
> swap you have to make this explicit by taking the address of each variable:
> foo(&x, &y);
> For C# the function takes references to the variables. Again you have to
> also make this explicit at the point of call by prefixing the argument with
> 'ref':
> foo(ref x, ref y);
> Python is really no different: if you want a function to rebind its
> arguments you have to make that explicit at the point of call. The only
> difference is that in Python you make the rebinding explicit by assigning
> to the names:
> x, y = foo(x, y)
I don't disagree with the overall point, but C has references (or at
least C++ does, I don't think I've written any pure C code since 1992).
If the function is declared to take references, the caller doesn't do a
thing differently.

void foo(int &a, int &b);

is called by foo(x, y), and the function may indeed swap the arguments.

If I recall correctly, Pascal is the same way. The called function
declares byref, and the caller doesn't do anything differently.
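The same contrast, written out in Python terms (the function names here are made up for illustration):

```python
def swap_attempt(a, b):
    # rebinds the *local* names only; the caller's x and y are untouched
    a, b = b, a

def swap_by_return(a, b):
    # the Python idiom: hand the new values back and rebind at the call site
    return b, a

x, y = 1, 2
swap_attempt(x, y)
assert (x, y) == (1, 2)        # nothing happened: no by-reference semantics

x, y = swap_by_return(x, y)    # the rebinding is explicit at the call site
assert (x, y) == (2, 1)
```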


From: Dave Angel on
wheres pythonmonks wrote:
> Funny... just spent some time with timeit:
> I wonder why I am passing in strings if the callback overhead is so light...
> More funny: it looks like inline (not passed in) lambdas can cause
> python to be more efficient!
>>>> import random
>>>> d = [ (['A','B'][random.randint(0,1)],x,random.gauss(0,1)) for x in xrange(0,1000000) ]
>>>> def A1(): j = [ lambda t: (t[2]*t[1],t[2]**2+5) for t in d ]
>>>> def A2(): j = [ (t[2]*t[1],t[2]**2+5) for t in d ]
But A1() gives a different result. It builds a list of function
objects. It doesn't actually do any of those multiplies. In fact, I
don't even think it would get the same answers if you then looped
through it, calling the functions it stored.
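To illustrate the suspicion about stored-up lambdas: when lambdas created in a loop close over the loop variable (rather than taking it as a parameter), they all see its final value by the time they are called:

```python
d = [1, 2, 3]

# Each lambda closes over the variable t itself, not its value at creation
# time, so after the loop every one of them sees t's final value.
funcs = [lambda: t * 10 for t in d]
assert [f() for f in funcs] == [30, 30, 30]   # not [10, 20, 30]!

# The usual fix: freeze the current value as a default argument.
funcs_fixed = [lambda t=t: t * 10 for t in d]
assert [f() for f in funcs_fixed] == [10, 20, 30]
```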


From: Stephen Hansen on
On 7/23/10 2:05 AM, Steven D'Aprano wrote:
> On Thu, 22 Jul 2010 21:23:05 -0700, Stephen Hansen wrote:
>> On 7/22/10 7:47 PM, wheres pythonmonks wrote:
> [...]
>>> The truth is that I don't intend to use these approaches in anything
>>> serious. However, I've been known to do some metaprogramming from time
>>> to time.
>> Depending on how you define "metaprogramming", Python is pretty
>> notoriously ill-suited towards the task (more, it's basically considered
>> a feature that it doesn't let you go there) -- or, it allows you to do vast
>> amounts of stuff with its few dark magic hooks. I've never had a
>> satisfying definition of metaprogramming that more than 50% of any group
>> agree with, so I'm not sure which you're looking for. :)
> I disagree strongly at your characterisation that Python is notorious for
> being ill-suited towards metaprogramming. I'd say the complete opposite
> -- what is considered dark and scary metaprogramming tasks in other
> languages is considered too ordinary to even mention in Python.

I rather think you missed my point entirely.

'Depending on how you define "metaprogramming"'

The 'Depending' was the most important part of that sentence. You go on
to talk about runtime modification of classes and the like. That, Python
lets you do quite readily, and it lets you do crazy amounts of it without
any significant effort.

That's one definition of metaprogramming. That one Python does well.

The other involves things like macros, or things where you basically
write a new sub-language in the language itself to achieve some commonly
desired task more efficiently (or just more succinctly). That's another
definition of metaprogramming: one where it's not so much structures
(classes, etc.) which are modified at runtime (which Python lets you do
readily), but the syntax or semantics of the language itself. That,
Python isn't game for.
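The first kind really is trivial in Python. A minimal sketch of patching a class at runtime (`Greeter` and `shout` are invented names for illustration):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# Add a method to the class after an instance already exists;
# attribute lookup goes through the class, so g sees it immediately.
def shout(self):
    return self.greet().upper() + "!"

Greeter.shout = shout
assert g.shout() == "HELLO!"
```

No metaclasses, no hooks: assigning to a class attribute is ordinary Python.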

> [...]
>> But! What it doesn't let you do is get clever with syntax and write a
>> sort of "simplified" or domain specific language to achieve certain
>> sorts of repetitive tasks quickly. You always end up writing normal
>> Python that looks like all other Python.
> Exactly... 90% of the time that you think you want a DSL, Python beat you
> to it.

Yet that is a branch of what is considered metaprogramming which the OP
seems to be asking for, and one that we do not offer any real support
for. I was making this distinction.


Stephen Hansen
... Also: Ixokai
... Mail: me+list/python (AT) ixokai (DOT) io
... Blog:

From: Thomas Jollans on
On 07/23/2010 12:34 AM, wheres pythonmonks wrote:
> 2. Is there a better way to look up by id? I'm not very familiar with
> sys.exc_info, but creating the id->name hash each time seems like
> overkill.

I just had the most horrendous idea. Really, looking up objects by ID,
or even swapping two objects, isn't that difficult if you do some C
black magic. Don't try this in front of the kids.

I wrote a little module, called "hell":

Python 3.1.2 (release31-maint, Jul 8 2010, 09:18:08)
[GCC 4.4.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from hell import swap, getptr
>>> dir
<built-in function dir>
>>> len
<built-in function len>
>>> swap(dir, len)
>>> dir
<built-in function len>
>>> len
<built-in function dir>
>>> a = "this was a"
>>> b = "this was b, hell yeah"
>>> (a, b, id(a), id(b))
('this was a', 'this was b, hell yeah', 32417752, 32418144)
>>> tpl = (a, b, id(a), id(b))
>>> tpl
('this was a', 'this was b, hell yeah', 32417752, 32418144)
>>> swap(a, b)
>>> tpl
('this was b, hell yeah', 'this was a', 32417752, 32418144)
>>> getptr(32417752)
'this was b, hell yeah'
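As an aside, CPython's ctypes module can reproduce the getptr half of this from pure Python, no extension required. It is equally unsafe and CPython-only (it relies on id() being the object's address):

```python
import ctypes

x = "hello from a real object"
addr = id(x)  # in CPython, id() is the object's memory address

# Reinterpret the integer address as an object pointer, like getptr() does.
# If the address is stale or wrong, this crashes the interpreter.
same = ctypes.cast(addr, ctypes.py_object).value
assert same is x
```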

The code is below. Use it under the terms of the WTFPL version 2.

-- Thomas

PS: all of this is a very BAD IDEA. But interesting, in a way.

################# setup.py ############
from distutils.core import setup, Extension

hellmodule = Extension('hell',
                       sources = ['hellmodule.c'])

setup (name = 'hellmodule',
       version = '0.1-apocalypse',
       description = 'Functions from hell. Never ever use.',
       ext_modules = [hellmodule])

################# hellmodule.c ##############
#include <Python.h>
#include <string.h>

static PyObject *swap(PyObject *self, PyObject *args);
static PyObject *getptr(PyObject *self, PyObject *args);

static PyMethodDef hell_methods[] = {
    {"swap", &swap, METH_VARARGS, ""},
    {"getptr", &getptr, METH_VARARGS, ""},
    {NULL, NULL, 0, NULL}                /* sentinel */
};

static struct PyModuleDef hell_module = {
    PyModuleDef_HEAD_INIT,
    "hell",                              /* module name */
    "functions from hell. never use.",   /* doc */
    -1,                                  /* per-interpreter state size */
    hell_methods
};

PyMODINIT_FUNC
PyInit_hell(void)
{
    return PyModule_Create(&hell_module);
}

static PyObject *
swap(PyObject *self, PyObject *args)
{
    PyObject *obj1, *obj2;
    Py_ssize_t len;
    PyObject *temp;

    if (!PyArg_ParseTuple(args, "OO", &obj1, &obj2))
        return NULL;

    len = obj1->ob_type->tp_basicsize;
    if (obj2->ob_type->tp_basicsize != len) {
        PyErr_SetString(PyExc_TypeError,
                        "types have different sizes (incompatible)");
        return NULL;
    }

    /* swap the raw object memory, then restore each location's original
       refcount so existing references stay balanced */
    temp = PyMem_Malloc(len);
    if (temp == NULL)
        return PyErr_NoMemory();
    memcpy(temp, obj1, len);
    memcpy(obj1, obj2, len);
    memcpy(obj2, temp, len);
    obj2->ob_refcnt = obj1->ob_refcnt;
    obj1->ob_refcnt = temp->ob_refcnt;
    PyMem_Free(temp);

    Py_RETURN_NONE;
}

static PyObject *
getptr(PyObject *self, PyObject *args)
{
    unsigned long lId;
    PyObject *retv;

    if (!PyArg_ParseTuple(args, "k", &lId))
        return NULL;

    /* reinterpret the integer as an object pointer -- wildly unsafe */
    retv = (PyObject *) lId;
    Py_INCREF(retv);
    return retv;
}