Could you elaborate on this? All I get from this answer is "it's
always been done that way." Which does not really address the issue
of design. I ask the question because the string and tuple semantics
are different from other types for reasons unexplained (to me,
anyway). Not to be a language purist, but when I find myself
searching the library reference for the table appropriate to the type
I have chosen, I wonder: why can't there be only one table? Why all
the exceptions?
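To give one concrete example of the kind of divergence I mean (I am
assuming here that mutability is a representative case, and the names
below are made up):

    words = ["a", "b", "c"]
    words[0] = "z"          # lists allow item assignment
    text = "abc"
    try:
        text[0] = "z"       # strings do not; this raises TypeError
    except TypeError:
        pass
    point = (1, 2)
    try:
        point[0] = 3        # neither do tuples
    except TypeError:
        pass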
High speed is not a really big issue for me: if I really needed speed,
I would use a compiled language. With Python, I can very quickly
create small and medium-sized programs. Having to refer to reference
books slows me down. When I feel nostalgic for reference books, I use
Perl.
Concerning the question "Why must 'self' be declared and used
explicitly in method definitions and calls?" and what it reveals about
my C++ background: Actually, my experience has been with an
object-oriented dialect of Pascal (used by Apple before they dumped
it) and SCOOPS in
Scheme. I have only recently begun to learn C++ (as another
marketable skill), and I think it sucks.
Sure, I can see where an explicitly defined 'self' is useful. It's
just a pain to type over and over again. I continually find that most
of my semantic errors arise because I forgot to precede some variable
or method with "self.".
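Here is a tiny sketch of the kind of slip I mean (the class and the
names are invented just for illustration):

    class Counter:
        def __init__(self):
            self.count = 0

        def bump(self):
            count = self.count + 1   # forgot "self." -- binds a local instead
            # What I meant was:
            # self.count = self.count + 1

    c = Counter()
    c.bump()
    print(c.count)                   # still 0, not 1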
This ties in with my other question (which was not as clear as I
intended):
> 7) More about contexts: having to fully specify variable contexts
> really bugs me....Because: I prefer a language that does the
> work for me. Why isn't there a predefined context search order?
By "does the work for me", I mean: when I type a variable name
without specifying a context, the runtime environment looks through
a set of symbol tables to find it. The current implementation looks
through just one: the local table. I propose that if this lookup
fails, that other tables be searched.
Example: in a function, I reference variable "v". The runtime tries
to find "i" in the local function table. Failing to find "v", it next
searches the symbol table for the object instance (if appropriate).
Then: class table, module table, and finally some global table.
Result: I don't have to specify, or remember the details of, the
context. The runtime is doing the work for me.
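To make the proposal concrete, here is a rough sketch of the search
order I am imagining, written as plain Python over a chain of
dictionaries (the names are invented; the real interpreter would do
this internally):

    def lookup(name, local, instance, klass, module, builtin):
        # Proposed order: local first, then instance, class, module,
        # and finally some global/builtin table.
        for table in (local, instance, klass, module, builtin):
            if name in table:
                return table[name]
        raise NameError(name)

    # e.g. "v" is found without any prefix at all:
    print(lookup("v", {"v": 1}, {}, {}, {}, {}))   # -> 1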
Of course, this will lead to a slower program and a set of semantic
errors that are the converse of the ones I currently make. But,
hey...overall, it's less work for me. And if I want speed, then I
could simply specify the full context. Which is what I would have to
do anyway to create a new variable somewhere (the declaration
problem).
Finally, I'll offer a look inside my mind, to explain what goes on
when I write OO code: Suppose that I'm writing a method. To me, this
is like scribbling on some paper that I will store in my desk. That
paper is the method. My desk is the class. And when I'm writing that
method, the method is executing -- it's executing in my head. I *am*
the method. And when I use variables, I use variables from local and
neighboring contexts; just like when I reach for a pen, I first look
on my desk. I think, "I want pen." And this means to me: the pen in
the cup on my desk. Not "my_office.desk.cup.pen". Just "pen". A
longer context is appropriate when I want something from someone
else's desk, but only as much context as required to get the thing.
To overspecify is to be mired in unnecessary information, because
everything is relative to the current location (or context, or
method).
The preceding directed fantasy was brought to you by my current
interest in psychology. Hope you language designers can use that.
And who knows? Someday they may not be called "programs", but
"directed fantasies".
-- Craig Lawson claw@rahul.net