From: MRAB on 3 Aug 2010 20:24
Baz Walter wrote:
> On 03/08/10 21:24, MRAB wrote:
>>>> And, BTW, none of your examples pass a UTF-8 bytestring to
>>>> re.findall: all those string literals starting with the 'u' prefix
>>>> are Unicode strings!
>>> not sure what you mean by this: if the string was encoded as utf8,
>>> '\w' still wouldn't match any of the non-ascii characters.
>> Strings with the 'u' prefix are Unicode strings, not bytestrings.
>> They don't have an encoding.
> well, they do if they are given one, as i suggested!
> to be explicit, if the local encoding is 'utf8', none of the following
> will get a hit:
> (1) re.findall(r'\w', '\xe5 \xe6 \xe7', re.L)
This passes, for example, 0xE5 to the C library function isalnum() to
check whether it's alphanumeric. Apparently it's returning false when
the locale is set to UTF-8.
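[For illustration, a Python 3 sketch of the same distinction; Python 3 makes the bytes/str split explicit, so the behaviour described above can be seen directly:]

```python
import re

# A str pattern uses Unicode rules by default in Python 3, so
# U+00E5 (a-ring) is a word character and \w matches it:
print(re.findall(r'\w', '\xe5 \xe6 \xe7'))   # ['å', 'æ', 'ç']

# With a bytes pattern, \w defaults to ASCII-only, so the lone
# byte 0xE5 is not treated as alphanumeric:
print(re.findall(rb'\w', b'\xe5 \xe6 \xe7'))  # []
```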
> (2) re.findall(r'\w', u'\xe5 \xe6 \xe7'.encode('utf8'), re.L)
u'\xe5' is encoded to '\xc3\xa5'. Both 0xC3 and 0xA5 are passed to the C
library function isalnum() to check whether they're alphanumeric.
Apparently it's returning false for both when the locale is set to
UTF-8.
> (3) re.findall(r'\w', u'\xe5 \xe6 \xe7', re.L)
Same as (1) above.
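[A Python 3 sketch of why (2) can never match: UTF-8 turns each of these characters into a two-byte sequence, and neither byte of a pair is itself alphanumeric:]

```python
import re

s = '\xe5 \xe6 \xe7'           # 'å æ ç' as a Unicode string
b = s.encode('utf-8')          # b'\xc3\xa5 \xc3\xa6 \xc3\xa7'

# Scanning the encoded bytes one at a time, \w never sees a
# byte that counts as a word character:
print(re.findall(rb'\w', b))   # []
```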
> so i still don't know what you meant about passing a 'UTF-8
> bytestring' in your first comment :)
> only (3) could feasibly get a hit - and then only if the re module was
> smart enough to fall back to re.UNICODE for utf8 (and any other
> encodings of unicode it might know about).
LOCALE was really intended for all those 1-byte-per-character character
sets like CP1252. Trying to implement regex when different characters
occupy different numbers of bytes is, well, challenging! :-)
>> 2. LOCALE: bytestring with characters in the current locale (but only
>> 1 byte per character). Characters are categorised according to the
>> underlying C library; for example, 'a' is a letter if isalpha('a')
>> returns true.
> this is actually what my question was about. i suspected something
> like this might be the case, but i can't actually see it stated
> anywhere in the docs. maybe it's just me, but 'current locale' doesn't
> naturally imply 'only 8-bit encodings'. i would have thought it
> implied 'whatever encoding is discovered on the local system' - and
> these days, that's very commonly utf8.
> is there actually a use case for it working the way it currently does?
> it seems just broken to have it depending so heavily on implementation
> details
As I said, it's for old-style 1-byte-per-character character sets. If
you have UTF-8, then you can decode to Unicode.
Is it broken? Well, it works well enough for its intended use. Could the
re module work with bytes which represent characters in an arbitrary
encoding? Would you like to have a go at implementing it? I wouldn't...
It would be easier to just decode to Unicode and work with that.
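[The decode-first approach in Python 3, assuming for the sketch that the bytes arrived in a 1-byte encoding such as cp1252:]

```python
import re

raw = b'\xe5 \xe6 \xe7'          # bytes in a 1-byte encoding (cp1252 assumed)
text = raw.decode('cp1252')      # decode at the boundary -> 'å æ ç'

# Once you're working with Unicode, \w behaves predictably:
print(re.findall(r'\w', text))   # ['å', 'æ', 'ç']
```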
>> 3. UNICODE (default in Python 3): Unicode string.
> i've just read the python3 re docs, and they do now make an explicit
> distinction between matching bytes (with the new re.ASCII flag) and
> matching textual characters (i.e. unicode, the default). the re.LOCALE
flag is still there, and there are now warnings about its
> unreliability - but it still doesn't state that it can only work
> properly if the local encoding is 8-bit.
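[The Python 3 distinction in action; re.ASCII restricts \w, \b, \d and \s to ASCII, while the default is full Unicode matching:]

```python
import re

text = 'abc \xe5\xe6\xe7'

# Default: Unicode matching, so å/æ/ç count as word characters
print(re.findall(r'\w+', text))             # ['abc', 'åæç']

# re.ASCII: only [a-zA-Z0-9_] match \w
print(re.findall(r'\w+', text, re.ASCII))   # ['abc']
```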
The recommendation for text is to use UTF-8 externally (input, output
and storage in files) and Unicode internally when processing.
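[That workflow sketched in Python 3, using a temporary file as a stand-in for external storage: UTF-8 at the boundaries, Unicode strings for processing:]

```python
import os
import re
import tempfile

text = '\xe5 \xe6 \xe7'

# Write out as UTF-8 (the external representation) ...
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w', encoding='utf-8') as f:
    f.write(text)

# ... read back in, decoding at the boundary; all processing
# then happens on Unicode strings, where \w works as expected:
with open(path, encoding='utf-8') as f:
    data = f.read()
print(re.findall(r'\w', data))   # ['å', 'æ', 'ç']
os.remove(path)
```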