The largest and most far-reaching changes in Python 2.2 are to Python's model of objects and classes. The changes should be backward compatible, so it's likely that your code will continue to run unchanged, but the changes provide some amazing new capabilities. Before beginning this, the longest and most complicated section of this article, I'll provide an overview of the changes and offer some comments.
A long time ago I wrote a Web page (http://www.amk.ca/python/writing/warts.html) listing flaws in Python's design. One of the most significant flaws was that it's impossible to subclass Python types implemented in C. In particular, it's not possible to subclass built-in types, so you can't just subclass, say, lists in order to add a single useful method to them. The UserList module provides a class that supports all of the methods of lists and that can be subclassed further, but there's lots of C code that expects a regular Python list and won't accept a UserList instance.
Python 2.2 fixes this, and in the process adds some exciting new capabilities. A brief summary:

- You can subclass built-in types such as lists and even integers, and your subclasses should work in every place that requires the original type.
- It's now possible to define static and class methods, in addition to the instance methods available in previous versions of Python.
- It's also possible to automatically call methods on accessing or setting an instance attribute by using a new mechanism called properties. Many uses of __getattr__ can be rewritten to use properties instead, making the resulting code simpler and faster. As a small side benefit, attributes can now have docstrings, too.
- The list of legal attributes for an instance can be limited to a particular set using slots, making it possible to safeguard against typos and perhaps make more optimizations possible in future versions of Python.
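As a quick taste of subclassing a built-in type (a minimal sketch of my own; the Stack name and its shove() method are invented purely for illustration):

class Stack(list):
    """A list subclass with one extra convenience method."""
    def shove(self, item):
        # Just delegates to the inherited append(); the point is that
        # a Stack is a real list and can be passed to C code expecting one.
        self.append(item)

s = Stack([1, 2])
s.shove(3)
print s                    # prints [1, 2, 3]
print isinstance(s, list)  # prints 1 (true)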
Some users have voiced concern about all these changes. Sure, they say, the new features are neat and lend themselves to all sorts of tricks that weren't possible in previous versions of Python, but they also make the language more complicated. Some people have said that they've always recommended Python for its simplicity, and feel that its simplicity is being lost.
Personally, I think there's no need to worry. Many of the new features are quite esoteric, and you can write a lot of Python code without ever needing to be aware of them. Writing a simple class is no more difficult than it ever was, so you don't need to bother learning or teaching the new features unless they're actually needed. Some very complicated tasks that were previously only possible from C will now be possible in pure Python, and to my mind that's all for the better.
I'm not going to attempt to cover every single corner case and small change that were required to make the new features work. Instead this section will paint only the broad strokes. See section 2.5, ``Related Links'', for further sources of information about Python 2.2's new object model.
First, you should know that Python 2.2 really has two kinds of classes: classic or old-style classes, and new-style classes. The old-style class model is exactly the same as the class model in earlier versions of Python. All the new features described in this section apply only to new-style classes. This divergence isn't intended to last forever; eventually old-style classes will be dropped, possibly in Python 3.0.
So how do you define a new-style class? You do it by subclassing an existing new-style class. Most of Python's built-in types, such as integers, lists, dictionaries, and even files, are new-style classes now. A new-style class named object, the base class for all built-in types, has also been added, so if no built-in type is suitable, you can just subclass object:
class C(object):
    def __init__ (self):
        ...
    ...
This means that class statements that don't have any base classes are always classic classes in Python 2.2. (Actually you can also change this by setting a module-level variable named __metaclass__ -- see PEP 253 for the details -- but it's easier to just subclass object.)
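As a quick illustration of that parenthetical remark (my own sketch, not from the article), setting __metaclass__ = type at the top of a module makes the bare class statements that follow it new-style:

__metaclass__ = type          # module-level default metaclass

class C:                      # no explicit base class, but new-style anyway
    pass

print issubclass(C, object)   # prints 1 (true)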
The type objects for the built-in types are available as built-ins, named using a clever trick. Python has always had built-in functions named int(), float(), and str(). In 2.2, they aren't functions any more, but type objects that behave as factories when called.
>>> int
<type 'int'>
>>> int('123')
123
To make the set of types complete, new type objects such as dict and file have been added. Here's a more interesting example, adding a lock() method to file objects:
class LockableFile(file):
    def lock (self, operation, length=0, start=0, whence=0):
        import fcntl
        return fcntl.lockf(self.fileno(), operation,
                           length, start, whence)
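Using it would look something like this (a hypothetical sketch; the file name is invented, and the locking constants come from the standard fcntl module):

import fcntl

f = LockableFile('/tmp/lockdemo.txt', 'w')
f.lock(fcntl.LOCK_EX)      # take an exclusive lock on the whole file
f.write('some data\n')
f.lock(fcntl.LOCK_UN)      # release the lock
f.close()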
The now-obsolete posixfile module contained a class that emulated all of a file object's methods and also added a lock() method, but this class couldn't be passed to internal functions that expected a built-in file, something which is possible with our new LockableFile.
In previous versions of Python, there was no consistent way to discover what attributes and methods were supported by an object. There were some informal conventions, such as defining __members__ and __methods__ attributes that were lists of names, but often the author of an extension type or a class wouldn't bother to define them. You could fall back on inspecting the __dict__ of an object, but when class inheritance or an arbitrary __getattr__ hook was in use this could still be inaccurate.
The one big idea underlying the new class model is that an API for describing the attributes of an object using descriptors has been formalized. Descriptors specify the value of an attribute, stating whether it's a method or a field. With the descriptor API, static methods and class methods become possible, as well as more exotic constructs.
Attribute descriptors are objects that live inside class objects, and have a few attributes of their own:

- __name__ is the attribute's name.
- __doc__ is the attribute's docstring.
- __get__(object) is a method that retrieves the attribute value from object.
- __set__(object, value) sets the attribute on object to value.
- __delete__(object, value) deletes the value attribute of object.
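To make the protocol concrete, here's a toy descriptor of my own (not from the article) that prints a message on every read; note that the real __get__ signature also receives the owning class:

class Verbose(object):
    """Toy descriptor: stores one value and announces every read."""
    def __init__(self, name, value):
        self.__name__ = name
        self.value = value
    def __get__(self, obj, cls=None):
        print 'reading', self.__name__
        return self.value
    def __set__(self, obj, value):
        # For simplicity the value lives on the descriptor itself,
        # so it's shared by all instances of the owning class.
        self.value = value

class C(object):
    x = Verbose('x', 42)

obj = C()
print obj.x     # prints 'reading x', then 42
obj.x = 10
print obj.x     # prints 'reading x', then 10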
For example, when you write obj.x, the steps that Python actually performs are:
descriptor = obj.__class__.x
descriptor.__get__(obj)
For methods, descriptor.__get__ returns a temporary object that's callable, and wraps up the instance and the method to be called on it. This is also why static methods and class methods are now possible; they have descriptors that wrap up just the method, or the method and the class. As a brief explanation of these new kinds of methods, static methods aren't passed the instance, and therefore resemble regular functions. Class methods are passed the class of the object, but not the object itself. Static and class methods are defined like this:
class C(object):
    def f(arg1, arg2):
        ...
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        ...
    g = classmethod(g)
The staticmethod() function takes the function f, and returns it wrapped up in a descriptor so it can be stored in the class object. You might expect there to be special syntax for creating such methods (def static f(), defstatic f(), or something like that) but no such syntax has been defined yet; that's been left for future versions of Python.
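Calling the resulting methods looks like this (a self-contained sketch of my own, with trivial bodies filled in so that it runs):

class C(object):
    def f(arg1, arg2):
        # No 'self': a static method sees only its explicit arguments.
        return arg1 + arg2
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        # A class method receives the class, not the instance.
        return cls.__name__, arg1 + arg2
    g = classmethod(g)

print C.f(1, 2)      # prints 3
print C().g(3, 4)    # prints ('C', 7); cls is C even when called on an instance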
More new features, such as slots and properties, are also implemented as new kinds of descriptors, and it's not difficult to write a descriptor class that does something novel. For example, it would be possible to write a descriptor class that made it possible to write Eiffel-style preconditions and postconditions for a method. A class that used this feature might be defined like this:
from eiffel import eiffelmethod

class C(object):
    def f(self, arg1, arg2):
        # The actual function
        ...
    def pre_f(self):
        # Check preconditions
        ...
    def post_f(self):
        # Check postconditions
        ...

    f = eiffelmethod(f, pre_f, post_f)
Note that a person using the new eiffelmethod() doesn't have to understand anything about descriptors. This is why I think the new features don't increase the basic complexity of the language. There will be a few wizards who need to know about it in order to write eiffelmethod() or the ZODB or whatever, but most users will just write code on top of the resulting libraries and ignore the implementation details.
Multiple inheritance has also been made more useful through changing the rules under which names are resolved. Consider this set of classes (diagram taken from PEP 253 by Guido van Rossum):
      class A:
        ^ ^  def save(self): ...
       /   \
      /     \
     /       \
    /         \
class B     class C:
    ^         ^  def save(self): ...
     \       /
      \     /
       \   /
        \ /
      class D
The lookup rule for classic classes is simple but not very smart; the base classes are searched depth-first, going from left to right. A reference to D.save will search the classes D, B, and then A, where save() would be found and returned. C.save() would never be found at all. This is bad, because if C's save() method is saving some internal state specific to C, not calling it will result in that state never getting saved.
New-style classes follow a different algorithm that's a bit more complicated to explain, but which does the right thing in this situation:

1. List all the base classes, following the classic lookup rule and include a class multiple times if it's visited repeatedly. In the above example, the list of visited classes is [D, B, A, C, A].
2. Scan the list for duplicated classes. If any are found, remove all but one occurrence, leaving the last one in the list. In the above example, the list becomes [D, B, C, A] after dropping duplicates.
Following this rule, referring to D.save() will return C.save(), which is the behaviour we're after. This lookup rule is the same as the one followed by Common Lisp. A new built-in function, super(), provides a way to get at a class's superclasses without having to reimplement Python's algorithm. The most commonly used form will be super(class, obj), which returns a bound superclass object (not the actual class object). This form will be used in methods to call a method in the superclass; for example, D's save() method would look like this:
class D (B, C):
    def save (self):
        # Call superclass .save()
        super(D, self).save()
        # Save D's private information here
        ...
super() can also return unbound superclass objects when called as super(class) or super(class1, class2), but this probably won't often be useful.
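To see the new rule and super() in action, the diamond above can be written out as new-style classes (a sketch of mine; the print statements just show which save() methods run, and __mro__ exposes the lookup order):

class A(object):
    def save(self):
        print 'A.save'

class B(A):
    pass

class C(A):
    def save(self):
        print 'C.save'
        super(C, self).save()

class D(B, C):
    def save(self):
        print 'D.save'
        super(D, self).save()

D().save()          # prints D.save, C.save, A.save -- C.save is not skipped
print D.__mro__     # the lookup order: D, B, C, A, object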
A fair number of sophisticated Python classes define hooks for attribute access using __getattr__; most commonly this is done for convenience, to make code more readable by automatically mapping an attribute access such as obj.parent into a method call such as obj.get_parent(). Python 2.2 adds some new ways of controlling attribute access.
First, __getattr__(attr_name) is still supported by new-style classes, and nothing about it has changed. As before, it will be called when an attempt is made to access obj.foo and no attribute named "foo" is found in the instance's dictionary.
New-style classes also support a new method, __getattribute__(attr_name). The difference between the two methods is that __getattribute__ is always called whenever any attribute is accessed, while the old __getattr__ is only called if "foo" isn't found in the instance's dictionary.
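A short sketch of my own makes the difference visible:

class WithGetattr(object):
    color = 'red'
    def __getattr__(self, name):
        # Only reached for attributes that aren't found normally.
        return 'made up: ' + name

class WithGetattribute(object):
    color = 'red'
    def __getattribute__(self, name):
        # Reached for every attribute access, even existing ones.
        print 'looking up', name
        return object.__getattribute__(self, name)

o = WithGetattr()
print o.color      # prints 'red'; __getattr__ was never called
print o.missing    # prints 'made up: missing'

n = WithGetattribute()
print n.color      # prints 'looking up color', then 'red'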
However, Python 2.2's support for properties will often be a simpler way to trap attribute references. Writing a __getattr__ method is complicated because, to avoid recursion, you can't use regular attribute accesses inside it, and instead have to mess around with the contents of __dict__. __getattr__ methods also end up being called by Python when it checks for other methods such as __repr__ or __coerce__, and so have to be written with this in mind. Finally, calling a function on every attribute access results in a sizable performance loss.
property is a new built-in type that packages up three functions that get, set, or delete an attribute, and a docstring. For example, if you want to define a size attribute that's computed, but also settable, you could write:
class C(object):
    def get_size (self):
        result = ... computation ...
        return result
    def set_size (self, size):
        ... compute something based on the size
        and set internal state appropriately ...

    # Define a property.  The 'delete this attribute'
    # method is defined as None, so the attribute
    # can't be deleted.
    size = property(get_size, set_size,
                    None,
                    "Storage size of this instance")
That is certainly clearer and easier to write than a pair of __getattr__/__setattr__ methods that check for the size attribute and handle it specially while retrieving all other attributes from the instance's __dict__. Accesses to size are also the only ones which have to perform the work of calling a function, so references to other attributes run at their usual speed.
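Since the class above is only pseudocode, here's a small runnable variant (my own example; the names are invented) where size is reported in kilobytes but backed by a byte count:

class Storage(object):
    def __init__(self, nbytes=0):
        self._nbytes = nbytes
    def get_size(self):
        # Computed on access: convert the stored byte count to kilobytes.
        return self._nbytes / 1024.0
    def set_size(self, kbytes):
        # Setting the property updates the underlying byte count.
        self._nbytes = int(kbytes * 1024)
    size = property(get_size, set_size, None,
                    "Storage size of this instance, in kilobytes")

s = Storage(2048)
print s.size       # prints 2.0
s.size = 3
print s._nbytes    # prints 3072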
Finally, it's possible to constrain the list of attributes that can be referenced on an object using the new __slots__ class attribute. Python objects are usually very dynamic; at any time it's possible to define a new attribute on an instance by just doing obj.new_attr = 1. This is flexible and convenient, but this flexibility can also lead to bugs, as when you meant to write obj.template = 'a' but made a typo and wrote obj.templtae by accident.
A new-style class can define a class attribute named __slots__ to constrain the list of legal attribute names. An example will make this clear:
>>> class C(object):
...     __slots__ = ('template', 'name')
...
>>> obj = C()
>>> print obj.template
None
>>> obj.template = 'Test'
>>> print obj.template
Test
>>> obj.templtae = None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'C' object has no attribute 'templtae'
Note how you get an AttributeError on the attempt to assign to an attribute not listed in __slots__.
This section has just been a quick overview of the new features, giving enough of an explanation to start you programming, but many details have been simplified or ignored. Where should you go to get a more complete picture?
http://www.python.org/2.2/descrintro.html is a lengthy tutorial introduction to the descriptor features, written by Guido van Rossum. If my description has whetted your appetite, go read this tutorial next, because it goes into much more detail about the new features while still remaining quite easy to read.
Next, there are two relevant PEPs, PEP 252 and PEP 253. PEP 252 is titled "Making Types Look More Like Classes", and covers the descriptor API. PEP 253 is titled "Subtyping Built-in Types", and describes the changes to type objects that make it possible to subtype built-in objects. PEP 253 is the more complicated PEP of the two, and at a few points the necessary explanations of types and meta-types may cause your head to explode. Both PEPs were written and implemented by Guido van Rossum, with substantial assistance from the rest of the Zope Corp. team.
Finally, there's the ultimate authority: the source code. Most of the machinery for the type handling is in Objects/typeobject.c, but you should only resort to it after all other avenues have been exhausted, including posting a question to python-list or python-dev.