Python provides a set of builtin codecs which are written in C for speed. All of these codecs are directly usable via the following functions.
Many of the following APIs take two arguments, encoding and errors. These have the same semantics as the corresponding arguments of the builtin unicode() constructor.
Setting encoding to NULL causes the default encoding to be used, which is ASCII. File system calls should use Py_FileSystemDefaultEncoding as the encoding for file names. This variable should be treated as read-only: on some systems it will be a pointer to a static string, on others it will change at run-time, e.g. when the application invokes setlocale.
Error handling is set by errors, which may also be set to NULL, meaning to use the default handling defined for the codec. Default error handling for all builtin codecs is "strict" (ValueError is raised).
The codecs all use a similar interface; only the deviations from the generic APIs below are documented for simplicity.
These are the generic codec APIs:
PyObject* PyUnicode_Decode(const char *s, int size, const char *encoding, const char *errors)
PyObject* PyUnicode_Encode(const Py_UNICODE *s, int size, const char *encoding, const char *errors)
PyObject* PyUnicode_AsEncodedString(PyObject *unicode, const char *encoding, const char *errors)
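As an illustration, here is a minimal sketch (not part of the original reference; roundtrip is a hypothetical helper name, and the interpreter is assumed to be initialized) that decodes a byte buffer and encodes it back using the generic APIs:

    #include <Python.h>

    /* Hypothetical helper: decode size bytes of s in the given encoding,
     * then encode the result back to a string object. */
    static PyObject *
    roundtrip(const char *s, int size, const char *encoding)
    {
        PyObject *u, *bytes;

        /* Passing NULL for errors selects the codec's default
         * handling, i.e. "strict". */
        u = PyUnicode_Decode(s, size, encoding, NULL);
        if (u == NULL)
            return NULL;                 /* the codec raised an exception */

        bytes = PyUnicode_AsEncodedString(u, encoding, NULL);
        Py_DECREF(u);
        return bytes;                    /* NULL on encoding error */
    }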
These are the UTF-8 codec APIs:
PyObject* PyUnicode_DecodeUTF8(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeUTF8(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsUTF8String(PyObject *unicode)
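For example (a sketch with made-up data, assuming an initialized interpreter), decoding a UTF-8 buffer and re-encoding it:

    #include <string.h>
    #include <Python.h>

    void utf8_demo(void)
    {
        const char *data = "caf\xc3\xa9";     /* "café" encoded as UTF-8 */
        PyObject *u, *bytes;

        u = PyUnicode_DecodeUTF8(data, (int)strlen(data), "strict");
        if (u == NULL)
            return;                           /* exception set by the codec */

        bytes = PyUnicode_AsUTF8String(u);    /* back to a UTF-8 string object */
        Py_XDECREF(bytes);
        Py_DECREF(u);
    }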
These are the UTF-16 codec APIs:
PyObject* PyUnicode_DecodeUTF16(const char *s, int size, const char *errors, int *byteorder)
If byteorder is non-NULL, the decoder starts decoding using the given byte order:
*byteorder == -1: little endian
*byteorder == 0: native order
*byteorder == 1: big endian
and then switches according to all byte order marks (BOM) it finds in the input data. BOMs are not copied into the resulting Unicode string. After completion, *byteorder is set to the current byte order at the end of input data.
If byteorder is NULL, the codec starts in native order mode.
Returns NULL if an exception was raised by the codec.
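A minimal sketch (decode_utf16_buffer is a hypothetical name) of decoding with BOM detection:

    #include <Python.h>

    PyObject *
    decode_utf16_buffer(const char *s, int size)
    {
        int byteorder = 0;   /* start in native order; a BOM overrides it */
        PyObject *u;

        u = PyUnicode_DecodeUTF16(s, size, "strict", &byteorder);
        /* On success, byteorder has been updated to the byte order in
         * effect at the end of the input data. */
        return u;            /* NULL if the codec raised an exception */
    }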
PyObject* PyUnicode_EncodeUTF16(const Py_UNICODE *s, int size, const char *errors, int byteorder)
If byteorder is not 0, output is written according to the following byte order:

byteorder == -1: little endian
byteorder == 0: native byte order (writes a BOM mark)
byteorder == 1: big endian
If byteorder is 0, the output string will always start with the Unicode BOM mark (U+FEFF). In the other two modes, no BOM mark is prepended.
Note that Py_UNICODE data is interpreted as UTF-16 reduced to UCS-2. This trick makes it possible to add full UTF-16 capabilities at a later point without compromising the APIs.
Returns NULL if an exception was raised by the codec.
PyObject* PyUnicode_AsUTF16String(PyObject *unicode)
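For instance (a sketch; encode_utf16_native is a hypothetical name), requesting native order with a BOM:

    #include <Python.h>

    PyObject *
    encode_utf16_native(const Py_UNICODE *s, int size)
    {
        /* byteorder 0: native byte order with a BOM (U+FEFF) prepended;
         * pass -1 or 1 to force a byte order without a BOM. */
        return PyUnicode_EncodeUTF16(s, size, "strict", 0);
    }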
These are the "Unicode Escape" codec APIs:
PyObject* PyUnicode_DecodeUnicodeEscape(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeUnicodeEscape(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
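As a sketch (made-up input; assumes an initialized interpreter), a round trip through the Unicode Escape codec:

    #include <string.h>
    #include <Python.h>

    void escape_demo(void)
    {
        const char *src = "\\u20ac";   /* the six characters \u20ac */
        PyObject *u, *bytes;

        /* Decodes to a one-character Unicode string, U+20AC. */
        u = PyUnicode_DecodeUnicodeEscape(src, (int)strlen(src), "strict");
        if (u == NULL)
            return;

        /* Re-encoding produces an ASCII string containing \u20ac. */
        bytes = PyUnicode_AsUnicodeEscapeString(u);
        Py_XDECREF(bytes);
        Py_DECREF(u);
    }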
These are the "Raw Unicode Escape" codec APIs:
PyObject* PyUnicode_DecodeRawUnicodeEscape(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeRawUnicodeEscape(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
These are the Latin-1 codec APIs. Latin-1 corresponds to the first 256 Unicode ordinals, and only these are accepted by the codecs during encoding.
PyObject* PyUnicode_DecodeLatin1(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeLatin1(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsLatin1String(PyObject *unicode)
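A sketch (to_latin1 is a hypothetical helper) that falls back to the "replace" error handler when strict Latin-1 encoding fails:

    #include <Python.h>

    PyObject *
    to_latin1(PyObject *unicode)
    {
        /* Default "strict" handling: fails for ordinals above 255. */
        PyObject *bytes = PyUnicode_AsLatin1String(unicode);

        if (bytes == NULL && PyErr_ExceptionMatches(PyExc_UnicodeError)) {
            PyErr_Clear();
            /* Retry with "replace", which substitutes '?' for
             * unencodable characters. */
            bytes = PyUnicode_EncodeLatin1(PyUnicode_AS_UNICODE(unicode),
                                           PyUnicode_GET_SIZE(unicode),
                                           "replace");
        }
        return bytes;
    }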
These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.
PyObject* PyUnicode_DecodeASCII(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeASCII(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsASCIIString(PyObject *unicode)
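Similarly, a sketch (ascii_only is a hypothetical name) that strips all non-ASCII characters by choosing the "ignore" error handler:

    #include <Python.h>

    PyObject *
    ascii_only(PyObject *unicode)
    {
        /* "ignore" drops characters the codec cannot encode instead of
         * raising an error as "strict" would. */
        return PyUnicode_EncodeASCII(PyUnicode_AS_UNICODE(unicode),
                                     PyUnicode_GET_SIZE(unicode),
                                     "ignore");
    }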
These are the mapping codec APIs:
This codec is special in that it can be used to implement many different codecs (and this is in fact how most of the standard codecs included in the encodings package were obtained). The codec uses mapping objects to encode and decode characters.
Decoding mappings must map single string characters to single Unicode characters, integers (which are then interpreted as Unicode ordinals) or None (meaning "undefined mapping" and causing an error).
Encoding mappings must map single Unicode characters to single string characters, integers (which are then interpreted as Latin-1 ordinals) or None (meaning "undefined mapping" and causing an error).
The mapping objects provided need only support the __getitem__ mapping interface.
If a character lookup fails with a LookupError, the character is copied as-is, meaning that its ordinal value will be interpreted as a Unicode or Latin-1 ordinal, respectively. Because of this, mappings only need to contain those entries which map characters to different code points.
PyObject* PyUnicode_DecodeCharmap(const char *s, int size, PyObject *mapping, const char *errors)
PyObject* PyUnicode_EncodeCharmap(const Py_UNICODE *s, int size, PyObject *mapping, const char *errors)
PyObject* PyUnicode_AsCharmapString(PyObject *unicode, PyObject *mapping)
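As a sketch (decode_with_euro is a hypothetical name; the integer-keyed dictionary follows the decoding_map convention used by the modules in the encodings package), decoding with a one-entry custom mapping:

    #include <Python.h>

    PyObject *
    decode_with_euro(const char *s, int size)
    {
        PyObject *mapping, *key, *value, *u;

        /* Map byte 0x80 to U+20AC (EURO SIGN); all other bytes fail
         * the lookup with a LookupError and are copied as-is. */
        mapping = PyDict_New();
        if (mapping == NULL)
            return NULL;
        key = PyInt_FromLong(0x80);
        value = PyInt_FromLong(0x20AC);
        if (key == NULL || value == NULL ||
            PyDict_SetItem(mapping, key, value) < 0) {
            Py_XDECREF(key);
            Py_XDECREF(value);
            Py_DECREF(mapping);
            return NULL;
        }
        Py_DECREF(key);
        Py_DECREF(value);

        u = PyUnicode_DecodeCharmap(s, size, mapping, "strict");
        Py_DECREF(mapping);
        return u;
    }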
The following codec API is special in that it maps Unicode to Unicode.
PyObject* PyUnicode_TranslateCharmap(const Py_UNICODE *s, int size, PyObject *table, const char *errors)
The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of the character).
Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause a LookupError) are left untouched and copied as-is.
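For example (strip_hyphens is a hypothetical name), deleting one character by mapping its ordinal to None:

    #include <Python.h>

    PyObject *
    strip_hyphens(const Py_UNICODE *s, int size)
    {
        PyObject *table, *key, *u;

        table = PyDict_New();
        if (table == NULL)
            return NULL;
        /* Ordinal of '-' maps to None, i.e. the character is deleted;
         * all other ordinals miss the table and are copied as-is. */
        key = PyInt_FromLong((long)'-');
        if (key == NULL || PyDict_SetItem(table, key, Py_None) < 0) {
            Py_XDECREF(key);
            Py_DECREF(table);
            return NULL;
        }
        Py_DECREF(key);

        u = PyUnicode_TranslateCharmap(s, size, table, "strict");
        Py_DECREF(table);
        return u;
    }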
These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.
PyObject* PyUnicode_DecodeMBCS(const char *s, int size, const char *errors)
PyObject* PyUnicode_EncodeMBCS(const Py_UNICODE *s, int size, const char *errors)
PyObject* PyUnicode_AsMBCSString(PyObject *unicode)
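A sketch (to_ansi is a hypothetical name; the MS_WINDOWS guard reflects the fact that these functions only exist on Windows builds):

    #include <Python.h>

    #ifdef MS_WINDOWS
    /* Convert a Unicode object to the current ANSI code page. */
    PyObject *
    to_ansi(PyObject *unicode)
    {
        return PyUnicode_AsMBCSString(unicode);
    }
    #endif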