Long ago, I had a few gripes about the python type system, but at that
time they were more theoretical than practical. All of the new Class
capabilities now make that complaint more practical:
The python type system is too narrow for builtin types, and too broad
for classes and instances ( where *everything* is <type 'class'> or
<type 'instance'> ).
For example: with numeric types, we are often not interested in their
actual type - that's why there is type coercion. We just want to check
that an argument is, in fact, a numeric type of some sort. For builtin
types, we can check:
"if type( arg ) in ( type(0), type( 0L ), type( 0.0 ) ) :"
But this does not extend to user-written classes like Complex or Rat.
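To make the gap concrete, here is a sketch in later-Python syntax ( the
'Rat' class and its methods are hypothetical, just enough to behave
numerically ):

```python
# A minimal user-written rational class -- hypothetical, for illustration.
class Rat:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __add__(self, other):
        # Add two rationals without reducing -- enough to look "numeric".
        return Rat(self.num * other.den + other.num * self.den,
                   self.den * other.den)

r = Rat(1, 2)
# The builtin-type membership test rejects it, even though Rat supports
# the numeric operations:
print(type(r) in (type(0), type(0.0)))      # prints: False
```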
One approach would be to create a new convention: all numeric
classes - that is, all classes that define the standard numeric methods
( __add__, __sub__, ..., __neg__, ..., __int__, __long__, __float__ ),
and where the semantics of those methods are something that would be
considered numeric ( NOT '+' for string concatenation, for example ) -
define an attribute "__isnumeric__". A function 'isnumeric'
would check for either builtin numeric types or class instances with
__isnumeric__ defined. This would probably be a builtin, but a python
version is:
--------------begin-----------
#!/usr/local/bin/python
#
# the version kludge is because I have msdos version 0.9.8 at home,
# unix version 0.9.7b installed on Sun + AIX, and 0.9.9 experimentally
# in use under AIX.
#
import string
import sys

attr = '_isnumeric_'    # '__isnumeric__' is Read Only

ver = string.split( sys.version )[0]
if ( ver >= '0.9.9' ):
    def isnumeric(obj):
        if type(obj) in ( type(0), type(0L), type(0.0) ):
            return 1
        elif hasattr( obj, attr ):
            return getattr( obj, attr )
        else:
            return 0
else:
    def isnumeric(obj):
        if type(obj) in ( type(0), type(0L), type(0.0) ):
            return 1
        try:
            return getattr( obj, attr )
        except ( NameError, TypeError ):
            return 0

def number( obj ):
    if isnumeric( obj ):
        print `obj` + ' is a number.'
    else:
        print `obj` + ' is NOT a number.'

def test():
    class TestNumb:
        _isnumeric_ = 0
        def init(self):
            self._isnumeric_ = 1
            return self
    N = TestNumb().init()
    number( 1 )
    number( 1L )
    number( 1000.0 )
    number( 'this string' )
    number( type( 0L ) )
    number( TestNumb )
    number( N )

test()
-------------end-------------
I'm not sure that I digested and understood all of the previous discussion
between Jaap and Guido about instance vs class attributes. From looking
at some of the 0.9.9 changes, it looks as if some of the semantics may
have been changed in the light of that discussion. I may be wrong.
But this brings up the question of whether the __isnumeric__ attribute
should be a class attribute or an instance attribute. It is the instance
which is numeric, NOT the class - but it doesn't look quite right to
create a new instance attribute ( with the identical value ) for each
instance. I suppose the solution is to make it a function/method,
which returns 0 or 1, rather than a value attribute. Then it is in
the class namespace but responds as an instance method.
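A minimal sketch of that method-based convention ( the class 'Complex'
here is hypothetical, and the spelling follows the '_isnumeric_' kludge
from the script above ):

```python
# Hypothetical numeric class: the flag lives in the class namespace as a
# method, so no per-instance attribute has to be duplicated.
class Complex:
    def __init__(self, re, im):
        self.re, self.im = re, im

    def _isnumeric_(self):
        # One definition in the class; every instance answers through it.
        return 1

c = Complex(1, 2)
print(c._isnumeric_())      # prints: 1
```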
The same conventions could be followed for '__issequence__' and
'__ismapping__'. ( It is the ability to override operators like
'+' or '[indx]' or '[i:j]' for user defined types that makes
the limits of type() more obvious. )
If the typing of builtins is too narrow, then the typing
of user instances is too broad. Everything is <type 'class'>
or <type 'instance'>. ( I guess I don't really have a problem
with <type 'class'>, but <type 'instance'> is not very useful. )
One idea would be to make 'type' of user instances programmable,
just like 'repr'. If I want two classes to be "equivalent", then
I just define their type strings to be identical.
How fixed is the notion that type(thing) is a single value?
Perhaps type should be a 2-tuple of ( most-restrictive-type, least-
restrictive-type )
type( 0 ) ==> ( <type 'int'>, <type 'numeric'> )
type( 1.0 ) ==> ( <type 'float'> , <type 'numeric'> )
type( [] ) ==> ( <type 'list' >, <type 'sequence' > )
type( (1,2) ) ==> ( <type 'tuple' >, <type 'sequence' > )
Since equality comparison works on tuples, "type(arg) == type( [] )"
would still work. But "type(type(None))" would no longer be
<type 'type'>; it would become <type 'tuple'> instead.
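As a sketch of what such a 2-tuple type might look like ( written in
later-Python syntax; the name 'typepair' and the family names are my
assumptions, not an existing API ):

```python
def typepair(obj):
    # Map each builtin type to its proposed "family"; anything unknown
    # just repeats its own name as the least-restrictive type.
    families = {int: 'numeric', float: 'numeric',
                list: 'sequence', tuple: 'sequence'}
    t = type(obj)
    return (t.__name__, families.get(t, t.__name__))

print(typepair(0))                       # prints: ('int', 'numeric')
print(typepair([]))                      # prints: ('list', 'sequence')
# Tuple equality keeps the old comparison idiom working:
print(typepair([1, 2]) == typepair([]))  # prints: True
```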
Also: when you try to fit user instances into this scheme, they are:
(extremely) generically: instances
specifically: instances of some Class
(moderately) generically: possibly of some type "family"
( numeric, sequence, iosource, iosink, ... )
Maybe there should be functions 'type()' and 'typefamily()' ?
Where typefamily() => number | sequence | mapping, for builtins,
and whatever __typefamily__(self) returns for user defined
instances, or type(self) if undefined.
More ambitious perhaps, might be an entire type hierarchy, with
base class types automatically added to the list:
"if type(thing) in typeset( object ): "
where typeset(obj) yields a tuple built from the object's class and
base class type strings ( something more specific than <type 'instance'> ),
OR a builtin tuple ( for example:
( <type 'int'>, <type 'number'> ) )
( Actually - that doesn't sound so bad. It requires one
more builtin function + one more attribute for both
builtin and user defined objects, it doesn't break
anything that uses 'type()', and it's general enough
to make "type inheritance" usable. What do you think? )
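A rough cut at that 'typeset', built from the class plus its base
classes ( sketched in later-Python syntax, where every instance carries
its class and bases; 'Number' and 'Rational' are hypothetical, and names
stand in for the type strings ):

```python
def typeset(obj):
    # Tuple of type names from most specific (the object's own class)
    # to most generic, walking the class and its base classes.
    return tuple(c.__name__ for c in type(obj).__mro__)

class Number: pass              # hypothetical base "family" class
class Rational(Number): pass    # hypothetical numeric class

print(typeset(Rational()))              # prints: ('Rational', 'Number', 'object')
print('Number' in typeset(Rational()))  # prints: True
```

The membership test is then the analogue of the
"if type(thing) in typeset( object ):" idiom above, comparing type
strings rather than type objects, with inheritance handled for free.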
What sort of approach would make the best fit with python?
-Steve Majewski (804-982-0831) <sdm7g@Virginia.EDU>
-Univ. of Virginia Department of Molecular Physiology and Biological Physics