Date: 2003-07-29
Version: 1.5
Web site: http://www.python.org/
This is a work in progress. Please feel free to ask questions and/or provide answers; send comments on the FAQ to webmaster@python.org.
Sections still in need of proofreading and updating: 5 6 7 8 9 10 11 12 13 14 15 16.
Python is an interpreted, interactive, object-oriented programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many brands of UNIX, on the Mac, and on PCs under MS-DOS, Windows, Windows NT, and OS/2.
To find out more, start with the Introduction to Python topic guide.
Here's a very brief summary of what started it all, written by Guido van Rossum:
I had extensive experience with implementing an interpreted language in the ABC group at CWI, and from working with this group I had learned a lot about language design. This is the origin of many Python features, including the use of indentation for statement grouping and the inclusion of very-high-level data types (although the details are all different in Python).
I had a number of gripes about the ABC language, but also liked many of its features. It was impossible to extend the ABC language (or its implementation) to remedy my complaints -- in fact its lack of extensibility was one of its biggest problems. I had some experience with using Modula-2+ and talked with the designers of Modula-3 and read the Modula-3 report. Modula-3 is the origin of the syntax and semantics used for exceptions, and some other Python features.
I was working in the Amoeba distributed operating system group at CWI. We needed a better way to do system administration than by writing either C programs or Bourne shell scripts, since Amoeba had its own system call interface which wasn't easily accessible from the Bourne shell. My experience with error handling in Amoeba made me acutely aware of the importance of exceptions as a programming language feature.
It occurred to me that a scripting language with a syntax like ABC but with access to the Amoeba system calls would fill the need. I realized that it would be foolish to write an Amoeba-specific language, so I decided that I needed a language that was generally extensible.
During the 1989 Christmas holidays, I had a lot of time on my hands, so I decided to give it a try. During the next year, while still mostly working on it in my own time, Python was used in the Amoeba project with increasing success, and the feedback from colleagues made me add many early improvements.
In February 1991, after just over a year of development, I decided to post to USENET. The rest is in the Misc/HISTORY file.
Python is used in many situations where a great deal of dynamism, ease of use, power, and flexibility are required.
For tasks such as manipulating the operating system or processing text, Python is easier to use and is roughly as fast as just about any language. This makes Python good for many system administration type tasks, for CGI programming and other application areas that manipulate text and strings and such.
When augmented with standard extensions (such as PIL, COM, Numeric, oracledb, kjbuckets, tkinter, win32api, etc.) or special purpose extensions that you write yourself (perhaps using helper tools such as SWIG, or using object protocols such as ILU/CORBA or COM) Python becomes a very convenient "glue" or "steering" language that helps make heterogeneous collections of unrelated software packages work together. For example, by combining Numeric with oracledb you can help your SQL database do statistical analysis, or even Fourier transforms. One of the features that makes Python excel in the "glue language" role is Python's simple, usable, and powerful C language runtime API. Several commercial computer games have used Python to implement artificial intelligence or game logic, running Python code that controls speed-critical components written in C/C++.
Python's support for several different graphical user interfaces means that you can write a prototype interface in Python and then either translate the prototype into C/C++/Java/Objective-C or, if you find the Python version is fast enough, just continue using the Python version.
Because Python code is easy to read and language features such as garbage collection and high-level data types make it easy to write Python programs, it's also a great language for learning programming concepts.
Python versions are numbered A.B.C or A.B. A is the major version number -- it is only incremented for really major changes in the language. B is the minor version number, incremented for less earth-shattering changes. C is the micro-level -- it is incremented for each bugfix release. See PEP 6 for more information about bugfix releases.
Not all releases are bugfix releases. In the run-up to a new major release, a series of development releases are made, denoted as alpha, beta, or release candidate. Alphas are early releases in which interfaces aren't yet finalized; it's not unexpected to see an interface change between two alpha releases. Betas are more stable, preserving existing interfaces but possibly adding new modules, and release candidates are frozen, making no changes except as needed to fix critical bugs.
Alpha, beta and release candidate versions have an additional suffix. The suffix for an alpha version is "aN" for some small number N, the suffix for a beta version is "bN" for some small number N, and the suffix for a release candidate version is "cN" for some small number N. In other words, all versions labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled 2.0cN, and those precede 2.0.
You may also find version numbers with a "+" suffix, e.g. "2.2+". These are unreleased versions, built directly from the CVS trunk.
See also the documentation for sys.version, sys.hexversion, and sys.version_info.
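For example, here is a sketch of what these report in an interactive session (the exact values depend on your build):

    >>> import sys
    >>> sys.version_info    # (major, minor, micro, releaselevel, serial)
    (2, 2, 3, 'final', 0)
    >>> sys.version[:5]
    '2.2.3'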
Not really. You can do anything you want with the source, as long as you leave the copyrights in, and display those copyrights in any documentation about Python that you produce. Also, don't use the author's institute's name in publicity without prior written permission, and don't hold them responsible for anything (read the actual copyright for a precise legal wording).
If you honor the copyright rules, it's OK to use Python for commercial use, to sell copies of Python in source or binary form (modified or unmodified), or to sell products that enhance Python or incorporate Python (or part of it) in some form. We would still like to know about all commercial use of Python, of course.
The latest Python source distribution is always available from python.org, at http://www.python.org/download/. The latest development sources can be obtained via anonymous CVS from SourceForge, at http://www.sf.net/projects/python.
The source distribution is a gzipped tar file containing the complete C source, LaTeX documentation, Python library modules, example programs, and several useful pieces of freely distributable software. This will compile and run out of the box on most UNIX platforms.
Older versions of Python are also available from python.org.
All documentation is available on-line, starting at http://www.python.org/doc.
The LaTeX source for the documentation is part of the source distribution. If you don't have LaTeX, the latest Python documentation set is available, in various formats such as PostScript and HTML, by anonymous FTP - visit the above URL for links to the current versions.
There are numerous tutorials and books available. Consult the Introduction to Python topic guide to find information for beginning Python programmers.
Consult the list of python.org mirrors at http://www.python.org/doc/Mirrors.html.
There is a newsgroup, comp.lang.python, and a mailing list, python-list. The newsgroup and mailing list are gatewayed into each other -- if you can read news it's unnecessary to subscribe to the mailing list. comp.lang.python is high-traffic, receiving hundreds of postings every day, and Usenet readers are often more able to cope with this volume.
Announcements of new software releases and events can be found in comp.lang.python.announce, a low-traffic moderated list that receives about five postings per day. It's available as the python-announce mailing list.
More info about the newsgroup and mailing list, and about other lists, can be found at http://www.python.org/psa/MailingLists.html.
Archives of the newsgroup are kept by Google Groups and accessible through the "Python newsgroup search" web page, http://www.python.org/search/search_news.html. This page also contains pointers to other archival collections.
There are several; you can find links to some of them collected at http://www.python.org/doc/Hints.html#intros.
It's probably best to reference your favorite book about Python.
The very first article about Python, written by Python's author, is now very out of date and shouldn't be referenced:
Guido van Rossum and Jelke de Boer, "Interactively Testing Remote Servers Using the Python Programming Language", CWI Quarterly, Volume 4, Issue 4 (December 1991), Amsterdam, pp 283-303.
Yes, there are many, and more are being published. See the python.org Wiki at http://www.python.org/cgi-bin/moinmoin/PythonBooks for a list.
You can also search online bookstores for "Python" (and filter out the Monty Python references; or perhaps search for "Python" and "language").
On Unix, the first choice is Emacs/XEmacs. There's an elaborate mode for editing Python code, which is available in the Python source distribution (Misc/python-mode.el). It's also bundled with XEmacs (we're still working on legal details to make it possible to bundle it with FSF Emacs). And it has its own web page at http://www.python.org/emacs/python-mode/index.html.
There are many other choices for Unix, Windows and Macintosh. http://www.python.org/editors/ has a list.
If you are using XEmacs 19.14 or later, any XEmacs 20, FSF Emacs 19.34 or any Emacs 20, font-lock should work automatically for you if you are using the latest python-mode.el.
If you are using an older version of XEmacs or Emacs you will need to put this in your .emacs file:
    (defun my-python-mode-hook ()
      (setq font-lock-keywords python-font-lock-keywords)
      (font-lock-mode 1))

    (add-hook 'python-mode-hook 'my-python-mode-hook)
It's currently in Amsterdam, graciously hosted by XS4ALL (http://www.xs4all.nl). Thanks to Thomas Wouters for his work in arranging python.org's hosting.
All releases, including alphas, betas and release candidates, are announced on comp.lang.python and comp.lang.python.announce newsgroups. All announcements also appear on the Python home page, at http://www.python.org; an RSS feed of news is available.
You can also access the development version of Python through CVS. See http://sourceforge.net/cvs/?group_id=5470 for details. If you're not familiar with CVS, documents such as http://linux.oreillynet.com/pub/a/linux/2002/01/03/cvs_intro.html provide an introduction.
To report a bug or submit a patch, please use the relevant service from the Python project at SourceForge.
Bugs: http://sourceforge.net/tracker/?group_id=5470&atid=105470
Patches: http://sourceforge.net/tracker/?group_id=5470&atid=305470
If you have a SourceForge account, please log in before submitting your bug report; this will make it easier for us to contact you regarding your report in the event we have follow-up questions. It will also enable SourceForge to send you update information as we act on your bug. If you do not have a SourceForge account, please consider leaving your name and email address as part of the report.
For more information on how Python is developed, consult the Python Developer's Guide.
Apart from being a computer scientist, I'm also a fan of "Monty Python's Flying Circus" (a BBC comedy series from the seventies, in the unlikely case you didn't know). It occurred to me one day that I needed a name that was short, unique, and slightly mysterious. And I happened to be reading some scripts from the series at the time... So I decided to call my language Python.
No, but it helps. :)
Very stable. New, stable releases have been coming out roughly every 6 to 12 months since 1991, and this seems likely to continue.
With the introduction of retrospective "bugfix" releases the stability of existing releases is being improved. Bugfix releases, indicated by a third component of the version number (e.g. 2.1.3, 2.2.2), are managed for stability, only containing fixes for known problems and guaranteeing that interfaces will remain the same.
The 2.2 release, currently at bugfix release 2.2.3, is the most stable platform at this point in time.
Certainly thousands, and quite probably tens of thousands of users. More are seeing the light each day. The comp.lang.python newsgroup is very active, but overall there is no accurate estimate of the number of subscribers or Python users.
Jacek Artymiak has created a Python Users Counter; you can see the current count by visiting http://www.wszechnica.safenet.pl/cgi-bin/checkpythonuserscounter.py (this will not increment the counter; use the link there if you haven't added yourself already). Most Python users appear not to have registered themselves.
See http://www.python.org/psa/Users.html for a list of projects that use Python. Consulting the proceedings for past Python conferences will reveal contributions from many different companies and organizations.
High-profile Python projects include the Mailman mailing list manager and the Zope application server. Several Linux distributions, most notably Red Hat, have written part or all of their installer and system administration software in Python.
See http://www.python.org/peps for the Python Enhancement Proposals (PEPs). PEPs are design documents describing a suggested new feature for Python, providing a concise technical specification and a rationale. PEP 1 explains the PEP process and PEP format; read it first if you want to submit a PEP.
New developments are discussed on the python-dev mailing list.
In general, no. There are already millions of lines of Python code around the world, so any change in the language that invalidates more than a very small fraction of existing programs has to be frowned upon. Even if you can provide a conversion program, there still is the problem of updating all documentation; many books have been written about Python, and we don't want to invalidate them all at a single stroke.
Providing a gradual upgrade path is the only way if a feature has to be changed. See http://www.python.org/peps/pep-0005.html for the procedure for introducing backward-incompatible changes while minimizing disruption for users.
The Python Software Foundation is an independent non-profit organization that holds the copyright on Python versions 2.1 and newer. The PSF's mission is to advance open source technology related to the Python programming language and to publicize the use of Python. The PSF's home page is at http://www.python.org/psf/.
Donations to the PSF are tax-exempt in the US. If you use Python and find it helpful, please contribute via the PSF donation page.
As of January 2001 no major problems have been reported and Y2K compliance seems to be a non-issue.

Python does very few date calculations, and for those it does perform it relies on the C library functions. Python generally represents times either as seconds since 1970 or as a (year, month, day, ...) tuple where the year is expressed with four digits, which makes Y2K bugs unlikely. So as long as your C library is okay, Python should be okay. Of course, it's possible that a particular application written in Python makes assumptions about 2-digit years.
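For instance, a quick interactive check of the tuple representation (the output shown here is only illustrative):

    >>> import time
    >>> time.localtime(time.time())   # (year, month, day, hour, minute,
    ...                               #  second, weekday, yearday, isdst)
    (2003, 7, 29, 12, 0, 0, 1, 210, 1)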
Because Python is available free of charge, there are no absolute guarantees. If there are unforeseen problems, liability is the user's problem rather than the developers', and there is nobody you can sue for damages. The Python copyright notice contains the following disclaimer:
4. PSF is making Python 2.3 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
The good news is that if you encounter a problem, you have full source available to track it down and fix it. This is one advantage of an open source programming environment.
Yes. If you want to discuss Python's use in education, you may be interested in joining the edu-sig mailing list. See http://www.python.org/sigs/edu-sig.
It is still common to start students with a procedural, statically typed language such as Pascal, C, or a subset of C++ or Java. Students may be better served by learning Python as their first language. Python has a very simple and consistent syntax and a large standard library and, most importantly, using Python in a beginning programming course permits students to concentrate on important programming skills such as problem decomposition and data type design.
With Python, students can be quickly introduced to basic concepts such as loops and procedures. They can probably even work with user-defined objects in their very first course; for example, they could implement a tree structure as nested Python lists. For a student who has never programmed before, using a statically typed language seems unnatural. It presents additional complexity that the student must master and slows the pace of the course. The students are trying to learn to think like a computer, decompose problems, design consistent interfaces, and encapsulate data. While learning to use a statically typed language is important in the long term, it is not necessarily the best topic to address in the students' first programming course.
Many other aspects of Python make it a good first language. Like Java, Python has a large standard library, so students can be assigned programming projects that actually do something very early in the course. Assignments aren't restricted to the standard four-function calculator and check-balancing programs. By using the standard library, students can gain the satisfaction of working on realistic applications as they learn the fundamentals of programming. Using the standard library also teaches students about code reuse.
Python's interactive interpreter also enables students to test language features while they're programming. They can keep a window with the interpreter running while they enter their programs' source in another window. If they can't remember the methods for a list, they can do something like this:
    >>> L = []
    >>> dir(L)
    ['append', 'count', 'extend', 'index', 'insert', 'pop',
     'remove', 'reverse', 'sort']
    >>> help(L.append)
    Help on built-in function append:

    append(...)
        L.append(object) -- append object to end
    >>> L.append(1)
    >>> L
    [1]
With the interpreter, documentation is never far from the student as he's programming.
There are also good IDEs for Python. IDLE is a cross-platform IDE for Python that is written in Python using Tkinter. PythonWin is a Windows-specific IDE. Emacs users will be happy to know that there is a very good Python mode for Emacs. All of these programming environments provide syntax highlighting, auto-indenting, and access to the interactive interpreter while coding. For more information about IDEs, see the question about editors earlier in this FAQ.
If your department is currently using Pascal because it was designed to be a teaching language, then you'll be happy to know that Guido van Rossum designed Python to be simple to teach to everyone but powerful enough to implement real world applications. Python makes a good language for first time programmers because that was one of Python's design goals. There are papers at http://www.python.org/doc/essays on the Python website by Python's creator explaining his objectives for the language. "Computer Programming for Everybody" is the text of a funding proposal that describes a vision for using Python for teaching. If you're seriously considering Python as a language for your school, Guido van Rossum may be able to correspond with you about how the language would fit in your curriculum.
While Python jobs may not be as prevalent as C/C++/Java jobs, teachers should not worry about teaching students critical job skills in their first course. The skills that win students a job are those they learn in their senior classes and internships. Their first programming courses are there to lay a solid foundation in programming fundamentals. The primary question in choosing the language for such a course should be which language permits the students to learn this material without hindering or limiting them.
Another argument for Python is that there are many tasks for which something like C++ is overkill. That's where languages like Python, Perl, Tcl, and Visual Basic thrive. It's critical for students to know something about these languages. (Every employer for whom I've worked used at least one such language.) Of the languages listed above, Python probably makes the best language in a programming curriculum since its syntax is simple, consistent, and not unlike other languages (C/C++/Java) that are probably in the curriculum. By starting students with Python, a department simultaneously lays the foundations for other programming courses and introduces students to the type of language that is often used as a "glue" language. As an added bonus, Python can be used to interface with Microsoft's COM components (thanks to Mark Hammond). There is also Jython, a Java implementation of the Python interpreter, that can be used to connect Java components.
If you currently start students with Pascal or C/C++ or Java, you may be worried they will have trouble learning a statically typed language after starting with Python. I think that this fear most often stems from the fact that the teacher started with a statically typed language, and we tend to like to teach others in the same way we were taught. In reality, the transition from Python to one of these other languages is quite simple.
To motivate a statically typed language such as C++, begin the course by explaining that unlike Python, their first language, C++ is compiled to a machine dependent executable. Explain that the point is to make a very fast executable. To permit the compiler to make optimizations, programmers must help it by specifying the "types" of variables. By restricting each variable to a specific type, the compiler can reduce the book-keeping it has to do to permit dynamic types. The compiler also has to resolve references at compile time. Thus, the language gains speed by sacrificing some of Python's dynamic features. Then again, the C++ compiler provides type safety and catches many bugs at compile time instead of run time (a critical consideration for many commercial applications). C++ is also designed for very large programs where one may want to guarantee that others don't touch an object's implementation. C++ provides very strong language features to separate an object's implementation from its interface. Explain why this separation is a good thing.
The first day of a C++ course could then be a whirlwind introduction to what C++ requires and provides. The point here is that after a semester or two of Python, students are hopefully competent programmers. They know how to handle loops and write procedures. They've also worked with objects, thought about the benefits of consistent interfaces, and used the technique of subclassing to specialize behavior. Thus, a whirlwind introduction to C++ could show them how objects and subclassing look in C++. The potentially difficult concepts of object-oriented design were taught without the additional obstacles presented by a language such as C++ or Java. When learning one of these languages, the students would already understand the "road map." They understand objects; they would just be learning how objects fit in a statically typed language. Language requirements and compiler errors that seem unnatural to beginning programmers make sense in this new context. Many students will find it helpful to be able to write a fast prototype of their algorithms in Python. Thus, they can test and debug their ideas before they attempt to write the code in the new language, saving the effort of working with C++ types for when they've discovered a working solution for their assignments. When they get annoyed with the rigidity of types, they'll be happy to learn about containers and templates to regain some of the lost flexibility Python afforded them. Students may also gain an appreciation for the fact that no language is best for every task. They'll see that C++ is faster, but they'll know that they can gain flexibility and development speed with Python when execution speed isn't critical.
If you have any concerns that weren't addressed here, please discuss them on the Edu-SIG. We'd love to hear about it if you choose Python for your course.
Yes. You can run it after building with make test, or you can run it manually with this command at the Python prompt:
import test.autotest
The test set doesn't test all features of Python, but it goes a long way to confirm that Python is actually working.
NOTE: if "make test" fails, don't just mail the output to the newsgroup -- this doesn't give enough information to debug the problem. Instead, find out which test fails, and run that test manually from an interactive interpreter. For example, if "make test" reports that test_spam fails, try this interactively:
import test.test_spam
This generally produces more verbose output which is helpful in debugging the problem. If you find a bug in Python or the libraries, or in the tests, please report this in the Python bug tracker.
The test set makes occasional unwarranted assumptions about the semantics of C floating point operations. Until someone donates a better floating point test set, you will have to comment out the offending floating point tests and execute similar tests manually.
Try the following:
    hostname$ python
    >>> import _tkinter
    >>> import Tkinter
    >>> Tkinter._test()
This should pop up a window with two buttons, one "Click me" and one "Quit".
If the first statement (import _tkinter) fails, your Python installation probably has not been configured to support Tcl/Tk. On Unix, if you have installed Tcl/Tk, you have to rebuild Python after editing the Modules/Setup file to enable the _tkinter module and to set the TKPATH environment variable.
It is also possible to get complaints about Tcl/Tk version number mismatches or missing TCL_LIBRARY or TK_LIBRARY environment variables. These have to do with Tcl/Tk installation problems.
A common problem is to have installed versions of tcl.h and tk.h that don't match the installed version of the Tcl/Tk libraries; this usually results in linker errors or, when using dynamic loading, complaints about missing symbols during loading the shared library.
It is generally necessary to run make clean after a configuration change.
On some systems (e.g. Sun), if the target already exists in the source directory, it is created there instead of in the build directory. This is usually because you have previously built without VPATH. Try running make clobber in the source directory.
Check the Misc/ directory in the Python source distribution for special build instructions for your platform.
If you can't find any relevant instructions, please submit the details to the Python bug tracker and we'll look into it. Please provide as many details as possible. In particular, if you don't tell us what type of computer and what operating system (and version) you are using it will be difficult for us to figure out what is the matter. If you have compilation output logs, please use file uploads -- don't paste everything in the message box.
In many cases, we won't have access to the same hardware or operating system version, so please log in with a SourceForge account. Anonymous bug reports are rarely helpful because the reporter often can't be contacted if there are further questions. With an account, you will also receive updates as we act on your report.
Most likely, there's a version mismatch between the Tcl/Tk header files (tcl.h and tk.h) and the Tcl/Tk libraries you are using (e.g. the "-ltk8.4" and "-ltcl8.4" arguments for _tkinter in the Setup file).
Most likely, all test compilations run by the configure script are failing for some reason or another. Have a look in config.log to see what could be the reason.
Static type object initializers in extension modules may cause compiles to fail with an error message like "initializer not a constant". Fredrik Lundh explains:
This shows up when building DLL under MSVC. There's two ways to address this: either compile the module as C++, or change your code to something like:
    statichere PyTypeObject bstreamtype = {
        PyObject_HEAD_INIT(NULL)    /* must be set by init function */
        0,
        "bstream",
        sizeof(bstreamobject),
        ...

    void
    initbstream()
    {
        /* Patch object type */
        bstreamtype.ob_type = &PyType_Type;
        Py_InitModule("bstream", functions);
        ...
    }
On Unix, if you have enabled the readline module (i.e. if Emacs-style command line editing and bash-style history work for you), you can add completion by importing the standard library module rlcompleter. When completing a simple identifier, it completes keywords, built-ins and globals in __main__; when completing NAME.NAME..., it evaluates (!) the expression up to the last dot and completes its attributes.
This way, you can do "import string", type "string.", hit the completion key twice, and see the list of names defined by the string module.
Tip: to use the tab key as the completion key, call:
readline.parse_and_bind("tab: complete")
You can put this in a ~/.pythonrc file, and set the PYTHONSTARTUP environment variable to ~/.pythonrc. This will cause the completion to be enabled whenever you run Python interactively.
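A minimal ~/.pythonrc along these lines (the file name is your choice; PYTHONSTARTUP just has to point at it):

    # ~/.pythonrc -- enable tab completion in the interactive interpreter
    import readline
    import rlcompleter
    readline.parse_and_bind("tab: complete")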
Use "make clobber" instead.
Use "make clean" to reduce the size of the source/build directory after you're happy with your build and installation. If you have already tried to build python and you'd like to start over, you should use "make clobber". It does a "make clean" and also removes files such as the partially built Python library from a previous build.
If you have been using the profile module, and have properly calibrated a copy of the module as described in the documentation for the profiler:
http://www.python.org/doc/current/lib/profile-calibration.html
then it is possible that the regression test "test___all__" will fail if you run the regression test manually rather than using "make test" in the Python source directory. This will happen if you have set your PYTHONPATH environment variable to include the directory containing your calibrated profile module. You have probably calibrated the profiler using an older version of the profile module which does not define the __all__ value, added to the module as of Python 2.1.
The problem can be fixed by removing the old calibrated version of the profile module and using the latest version to do a fresh calibration. In general, you will need to re-calibrate for each version of Python anyway, since the performance characteristics can change in subtle ways that impact profiling.
This linker error occurs on Solaris if you attempt to build an extension module which incorporates position-dependent (non-PIC) code. A common source of problems is that a static library (.a file), such as libreadline.a or libcrypto.a is linked with the extension module. The error specifically occurs when using gcc as the compiler, but /usr/ccs/bin/ld as the linker.
The following solutions and work-arounds are known:
Options 3 and 4 are not recommended, since the ability to share code across processes is lost.
Yes.
The pdb module is a simple but adequate console-mode debugger for Python. It is part of the standard Python library, and is documented in the Library Reference Manual. You can also write your own debugger by using the code for pdb as an example.
The IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle), includes a graphical debugger. There is documentation for the IDLE debugger at http://www.python.org/idle/doc/idle2.html#Debugger
PythonWin is a Python IDE that includes a GUI debugger based on pdb. The Pythonwin debugger colors breakpoints and has quite a few cool features, such as debugging non-Pythonwin programs. A reference can be found at http://www.python.org/ftp/python/pythonwin/pwindex.html. More recent versions of PythonWin are available as a part of the ActivePython distribution (see http://www.activestate.com/Products/ActivePython/index.html).
Pydb is a version of the standard Python debugger pdb, modified for use with DDD (Data Display Debugger), a popular graphical debugger front end. Pydb can be found at http://packages.debian.org/unstable/devel/pydb.html and DDD can be found at http://www.gnu.org/software/ddd.
There are a number of commercial Python IDEs that include graphical debuggers. They include:
- Wing IDE (http://wingide.com)
- Komodo IDE (http://www.activestate.com/Products/Komodo)
That's a tough one, in general. There are many tricks to speed up Python code; I would consider rewriting parts in C only as a last resort. One thing to notice is that function and (especially) method calls are rather expensive; if you have designed a purely OO interface with lots of tiny functions that don't do much more than get or set an instance variable or call another method, you might consider using a more direct way, e.g. directly accessing instance variables. Also see the standard module "profile" (described in the Library Reference manual) which makes it possible to find out where your program is spending most of its time (if you have some patience -- the profiling itself can slow your program down by an order of magnitude).
Remember that many standard optimization heuristics you may know from other programming experience may well apply to Python. For example it may be faster to send output to output devices using larger writes rather than smaller ones in order to avoid the overhead of kernel system calls. Thus CGI scripts that write all output in "one shot" may be faster than those that write lots of small pieces of output.
Also, be sure to use Python's core features where appropriate. For example, slicing allows programs to chop up lists and other sequence objects in a single tick of the interpreter's mainloop using highly optimized C implementations. Thus to get the same effect as:
    L2 = []
    for i in range(3):
        L2.append(L1[i])
it is much shorter and far faster to use
L2 = list(L1[:3]) # "list" is redundant if L1 is a list.
Note that the map() function, particularly when used with built-in methods or built-in functions, can be a convenient accelerator. For example, to pair the elements of two lists together:
    >>> map(None, [1,2,3], [4,5,6])
    [(1, 4), (2, 5), (3, 6)]
or to compute a number of sines:
    >>> import math
    >>> map(math.sin, (1,2,3,4))
    [0.841470984808, 0.909297426826, 0.14112000806, -0.756802495308]
The map operation completes very quickly in such cases.
Other examples include the join and split methods of string objects. For example, if s1..s7 are large (10K+) strings then "".join([s1,s2,s3,s4,s5,s6,s7]) may be far faster than the more obvious s1+s2+s3+s4+s5+s6+s7, since the "summation" will compute many subexpressions, whereas join does all the copying in one pass. For manipulating strings, also consider the regular expression libraries and the "substitution" operations String % tuple and String % dictionary. Also be sure to use the list.sort() builtin method to do sorting, and see FAQs 4.51 and 4.59 for examples of moderately advanced usage -- list.sort beats other techniques for sorting in all but the most extreme circumstances.
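As a small illustration of the "%" substitution operations (the values here are made up):

    >>> "%s costs %d cents" % ("spam", 70)        # String % tuple
    'spam costs 70 cents'
    >>> "%(item)s costs %(price)d cents" % {"item": "spam", "price": 70}
    'spam costs 70 cents'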
Another common trick is to "push loops into functions or methods." For example, suppose you have a program that runs slowly and you use the profiler (profile.run) to determine that a Python function ff() is being called lots of times. If you notice that ff():
    def ff(x):
        ...do something with x computing result...
        return result
tends to be called in loops like:
list = map(ff, oldlist)
or:
    for x in sequence:
        value = ff(x)
        ...do something with value...
then you can often eliminate function call overhead by rewriting ff() to:
    def ffseq(seq):
        resultseq = []
        for x in seq:
            ...do something with x computing result...
            resultseq.append(result)
        return resultseq
and rewrite the two examples to list = ffseq(oldlist) and to:
    for value in ffseq(sequence):
        ...do something with value...
Single calls to ff(x) translate to ffseq([x])[0] with little penalty. Of course this technique is not always appropriate and there are other variants, which you can figure out.
You can gain some performance by explicitly storing the results of a function or method lookup into a local variable. A loop like:
    for key in token:
        dict[key] = dict.get(key, 0) + 1
resolves dict.get every iteration. If the method isn't going to change, a slightly faster implementation is:
    dict_get = dict.get   # look up the method once
    for key in token:
        dict[key] = dict_get(key, 0) + 1
Default arguments can be used to determine values once, when the function is defined, instead of at run time. This can only be done for functions or objects which will not be changed during program execution, such as replacing
    def degree_sin(deg):
        return math.sin(deg * math.pi / 180.0)
with
    def degree_sin(deg, factor=math.pi/180.0, sin=math.sin):
        return sin(deg * factor)
Because this trick uses default arguments for terms which should not be changed, it should only be used when you are not concerned with presenting a possibly confusing API to your users.
Don't bother applying these optimization tricks until you know you need them, after profiling has indicated that a particular function is the heavily executed hot spot in the code. Optimizations almost always make the code less clear, and you shouldn't pay the costs of reduced clarity (increased development time, greater likelihood of bugs) unless the resulting performance benefit is worth it.
For an anecdote related to optimization, see http://www.python.org/doc/essays/list2str.html.
Yes. PyChecker is a static analysis tool for finding bugs in Python source code as well as warning about code complexity and style.
You can get PyChecker from http://pychecker.sf.net.
Did you do something like this?
    x = 1   # make a global

    def f():
        print x   # try to print the global
        ...
        for j in range(100):
            if q > 3:
                x = 4
Any variable assigned in a function is local to that function, unless it is specifically declared global. Since a value is bound to x as the last statement of the function body, the compiler assumes that x is local. Consequently the "print x" attempts to print an uninitialized local variable and will trigger a NameError.
In such cases the solution is to insert an explicit global declaration at the start of the function, making it:
    def f():
        global x
        print x   # try to print the global
        ...
        for j in range(100):
            if q > 3:
                x = 4
In this case, all references to x are interpreted as references to the x from the module namespace.
In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a new value anywhere within the function's body, it is implicitly local, and you need to explicitly declare it as 'global' if you want the global binding.
Though a bit surprising at first, a moment's consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global were required for all global references, you'd be using global all the time. E.g., you'd have to declare as global every reference to a builtin function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side-effects.
First, the standard modules are great. Use them! The standard Python library is large and varied. Using modules can save you time and effort and will reduce the maintenance cost of your code. Other programmers are finding and fixing bugs in the standard Python modules, so they're likely to be of better quality. Coworkers may also be familiar with the modules that you use, reducing the amount of time it takes them to understand your code.
The rest of this answer is largely a matter of personal preference, but here are some observations from comp.lang.python; thanks to all who responded.
In general, don't use from modulename import *. Doing so clutters the importer's namespace. Some people avoid this idiom even with the few modules that were designed to be imported in this manner, such as Tkinter, threading, and wxPython.
Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.
Move imports into a local scope, such as at the top of a function definition, only if there are many import statements and you're trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive (because of the one time initialization of the module) but that loading a module multiple times is virtually free (a couple of dictionary lookups). Even if the module name has gone out of scope, the module is probably available in sys.modules. Thus, there isn't really anything wrong with putting no imports at the module level (if they aren't needed) and putting all of the imports at the function level.
It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:
Circular imports are fine where both modules use the "import <module>" form of import. They fail when the 2nd module wants to grab a name out of the first ("from module import name") and the import is at the top level. That's because names in the 1st are not yet available, because the first module is busy importing the 2nd.
In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.
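A sketch of that fix, using two hypothetical modules a and b that import each other:

    # a.py (hypothetical)
    import b                    # safe: the plain "import <module>" form

    def use_helper():
        # Deferred "from" import: by the time this function is called,
        # b has finished initializing, so the name is available.
        from b import helper
        return helper()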
It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.
If only instances of a specific class use a module, then it is reasonable to import the module in the class's __init__ method and then assign the module to an instance variable so that the module is always available (via that instance variable) during the life of the object. Note that to delay an import until the class is instantiated, the import must be inside a method. Putting the import inside the class but outside of any method still causes the import to occur when the module is initialized.
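A sketch of that pattern (the module and names here are only illustrative):

    class SpamFinder:
        def __init__(self):
            import re            # imported when the class is first instantiated
            self._re = re        # keep the module on the instance
        def found(self, pattern, text):
            return self._re.search(pattern, text) is not None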
Collect the arguments using the * and ** specifiers in the function's parameter list; this gives you the positional arguments as a tuple and the keyword arguments as a dictionary. You can then pass these arguments when calling another function by using * and **:
    def f(x, *tup, **kwargs):
        ...
        kwargs['width'] = '14.3c'
        ...
        g(x, *tup, **kwargs)
In the unlikely case that you care about compatibility with Python versions older than 2.0, use 'apply':
    def f(x, *tup, **kwargs):
        ...
        kwargs['width'] = '14.3c'
        ...
        apply(g, (x,) + tup, kwargs)
The thing to remember is that arguments are passed by assignment in Python. Since assignment just creates references to objects, there's no alias between an argument name in the caller and callee, and so no call-by-reference per se. But you can simulate it in a number of ways:
By returning a tuple, holding the final values of arguments:
    def func2(a, b):
        a = 'new-value'      # a and b are local names
        b = b + 1            # assigned to new objects
        return a, b          # return new values

    x, y = 'old-value', 99
    x, y = func2(x, y)
    print x, y               # output: new-value 100
By using global variables; but you probably shouldn't :-)
By passing a mutable (changeable in-place) object:
    def func1(a):
        a[0] = 'new-value'   # 'a' references a mutable list
        a[1] = a[1] + 1      # changes a shared object

    args = ['old-value', 99]
    func1(args)
    print args[0], args[1]   # output: new-value 100
By passing in a dictionary that gets mutated:
    def func3(args):
        args['a'] = 'new-value'     # args is a mutable dictionary
        args['b'] = args['b'] + 1   # change it in-place

    args = {'a': 'old-value', 'b': 99}
    func3(args)
    print args['a'], args['b']
Or bundle up values in a class instance:
    class callByRef:
        def __init__(self, **args):
            for (key, value) in args.items():
                setattr(self, key, value)

    def func4(args):
        args.a = 'new-value'    # args is a mutable callByRef
        args.b = args.b + 1     # change object in-place

    args = callByRef(a='old-value', b=99)
    func4(args)
    print args.a, args.b
But there's probably no good reason to get this complicated :-).
Choice 1 is probably the best style in most cases.
You have two choices: you can use default arguments and override them or you can use "callable objects." For example suppose you wanted to define linear(a,b) which returns a function f where f(x) computes the value a*x+b. Using default arguments:
    def linear(a, b):
        def result(x, a=a, b=b):
            return a*x + b
        return result
Or using callable objects:
    class linear:
        def __init__(self, a, b):
            self.a, self.b = a, b
        def __call__(self, x):
            return self.a * x + self.b
In both cases:
taxes = linear(0.3,2)
gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.
The defaults strategy has the disadvantage that the default arguments could be accidentally or maliciously overridden. The callable objects approach has the disadvantage that it is a bit slower and a bit longer. Note however that a collection of callables can share their signature via inheritance, e.g.:
    class exponential(linear):
        # __init__ inherited
        def __call__(self, x):
            return self.a * (x ** self.b)
On comp.lang.python, zenin@bawdycaste.org points out that an object can encapsulate state for several methods in order to emulate the "closure" concept from functional programming languages, for example:
    class counter:
        value = 0
        def set(self, x):
            self.value = x
        def up(self):
            self.value = self.value + 1
        def down(self):
            self.value = self.value - 1

    count = counter()
    inc, dec, reset = count.up, count.down, count.set
Here inc, dec and reset act like "functions which share the same closure containing the variable count.value" (if you like that way of thinking).
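Continuing the example above:

    >>> inc(); inc(); dec()
    >>> count.value
    1
    >>> reset(0)
    >>> count.value
    0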
Try copy.copy() or copy.deepcopy() for the general case. Not all objects can be copied, but most can.
Dictionaries have a copy method. Sequences can be copied by slicing:
new_l = l[:]
This depends on the object type.
For an instance x of a user-defined class, instance attributes are found in the dictionary x.__dict__, and methods and attributes defined by its class are found in x.__class__.__bases__[i].__dict__ (for i in range(len(x.__class__.__bases__))). You'll have to walk the tree of base classes to find all class methods and attributes.
Many, but not all, built-in types define a list of their method names in x.__methods__, and if they have data attributes, their names may be found in x.__members__. However, this is only a convention.
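For instance, a quick sketch for a user-defined class (dictionary key order may vary):

    >>> class C:
    ...     def meth(self): pass
    ...
    >>> x = C()
    >>> x.attr = 1
    >>> x.__dict__
    {'attr': 1}
    >>> x.__class__.__dict__.keys()
    ['__doc__', 'meth', '__module__']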
For more information, read the source of the standard (but undocumented) module newdir.
Yes, Guido has written the "Python Style Guide". See http://www.python.org/doc/essays/styleguide.html
Generally speaking, it can't, because objects don't really have names. The assignment statement does not store the assigned value in the name; it merely creates a binding of a name to a value. The same is true of def and class statements, but in that case the value is a callable. Consider the following code:
    >>> class A:
    ...     pass
    ...
    >>> B = A
    >>> a = B()
    >>> b = a
    >>> print b
    <__main__.A instance at 016D07CC>
    >>> print a
    <__main__.A instance at 016D07CC>
Arguably the class has a name: even though it is bound to two names and invoked through the name B, the created instance is still reported as an instance of class A. However, it is impossible to say whether the instance's name is a or b, since both names are bound to the same value.
Generally speaking it should not be necessary for your code to "know the names" of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.
Not directly. In many cases you can mimic a?b:c with "a and b or c", but there's a flaw: if b is zero (or empty, or None -- anything that tests false) then c will be selected instead. In many cases you can prove by looking at the code that this can't happen (e.g. because b is a constant or has a type that can never be false), but in general this can be a problem.
Tim Peters (who wishes it was Steve Majewski) suggested the following solution: (a and [b] or [c])[0]. Because [b] is a singleton list it is never false, so the wrong path is never taken; then applying [0] to the whole thing gets the b or c that you really wanted. Ugly, but it gets you there in the rare cases where it is really inconvenient to rewrite your code using 'if'.
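To see both the flaw and the fix in action:

    >>> a, b, c = 1, 0, "oops"
    >>> a and b or c              # wrong: b tests false, so c is chosen
    'oops'
    >>> (a and [b] or [c])[0]     # right: [b] is a non-empty list, always true
    0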
As a last resort it is possible to implement the "?:" operator as a function:
    def q(cond, on_true, on_false):
        from inspect import isfunction
        if cond:
            if not isfunction(on_true):
                return on_true
            else:
                return apply(on_true)
        else:
            if not isfunction(on_false):
                return on_false
            else:
                return apply(on_false)
In most cases you'll pass b and c directly: q(a,b,c). To avoid evaluating b or c when they shouldn't be, encapsulate them within a lambda function, e.g.: q(a,lambda: b, lambda: c).
It has been asked why Python has no if-then-else expression, since most languages have one; it is a frequently requested feature.
There are several possible answers: just as many languages do just fine without one; it can easily lead to less readable code; no sufficiently "Pythonic" syntax has been discovered; a search of the standard library found remarkably few places where using an if-then-else expression would make the code more understandable.
Nevertheless, in an effort to decide once and for all whether an if-then-else expression should be added to the language, PEP 308 (http://www.python.org/peps/pep-0308.html) has been put forward, proposing a specific syntax. The community can now vote on this issue. XXX update
Yes. See the following three examples, due to Ulf Bartelt:
    # Primes < 1000
    print filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,
    map(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))

    # First 10 Fibonacci numbers
    print map(lambda x,f=lambda x,f:(x<=1) or (f(x-1,f)+f(x-2,f)):
    f(x,f), range(10))

    # Mandelbrot set
    print (lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+y,map(lambda y,
    Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,
    Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,
    i=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y
    >=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(
    64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy
    ))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24)
    #    \___ ___    \___ ___    |   |   |__ lines on screen
    #        V           V       |   |______ columns on screen
    #        |           |       |__________ maximum of "iterations"
    #        |           |__________________ range on y axis
    #        |______________________________ range on x axis
Don't try this at home, kids!
When a statement suite (as opposed to an expression) is compiled by compile(), exec or execfile(), it must end in a newline. In some cases, when the source ends in an indented block it appears that at least two newlines are required. XXX fixed now?
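A small illustration; since the behaviour has varied between versions, appending newlines is the safe habit:

    code = "if 1:\n    print 'hello'"
    exec code + "\n\n"    # the trailing newlines make the suite safe to compile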
To specify a number in octal, precede the octal value with a zero. For example, to set the variable "a" to the octal value "10" (8 in decimal), type:
>>> a = 010
To verify that this works, you can type "a" and hit enter while in the interpreter, which will cause Python to spit out the current value of "a" in decimal:
>>> a 8
Hexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase "x". Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter:
    >>> a = 0xa5
    >>> a
    165
    >>> b = 0XB2
    >>> b
    178
For integers, use the built-in int() function, e.g. int('144') == 144. Similarly, long() converts from string to long integer, e.g. long('144') == 144L; and float() to floating-point, e.g. float('144') == 144.0.
Note that these are restricted to decimal interpretation, so that int('0144') == 144 and int('0x144') raises ValueError. As of Python 2.0, int() takes the base to convert from as a second optional argument, so int('0x144', 16) == 324.
For greater flexibility, or before Python 1.5, import the module string and use the string.atoi() function for integers, string.atol() for long integers, or string.atof() for floating-point. E.g., string.atoi('100', 16) == string.atoi('0x100', 0) == 256. See the library reference manual section for the string module for more details.
While you could use the built-in function eval() instead of any of those, this is not recommended, because someone could pass you a Python expression that might have unwanted side effects (like reformatting your disk). It also has the effect of interpreting numbers as Python expressions, so that e.g. eval('09') gives a syntax error since Python regards numbers starting with '0' as octal (base 8).
To convert, e.g., the number 144 to the string '144', use the built-in function repr() or the backquote notation (these are equivalent). If you want a hexadecimal or octal representation, use the built-in functions hex() or oct(), respectively. For fancy formatting, use the % operator on strings, just like C printf formats, e.g. "%04d" % 144 yields '0144' and "%.3f" % (1/3.0) yields '0.333'. See the library reference manual for details.
Strings are immutable (see question 6.2) so you cannot modify a string directly. If you need an object with this ability, try converting the string to a list or take a look at the array module.
>>> s = "Hello, world" >>> a = list(s) >>> print a ['H', 'e', 'l', 'l', 'o', ',', ' ', 'w', 'o', 'r', 'l', 'd'] >>> a[7:] = list("there!") >>> import string >>> print string.join(a, '') 'Hello, there!' >>> import array >>> a = array.array('c', s) >>> print a array('c', 'Hello, world') >>> a[0] = 'y' ; print a array('c', 'yello world') >>> a.tostring() 'yello, world'
There are various techniques:
Use a dictionary pre-loaded with strings and functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct:
    def a():
        pass

    def b():
        pass

    dispatch = {'go': a, 'stop': b}   # note lack of parens for funcs

    dispatch[get_input()]()           # note trailing parens to call function
Use the built-in function getattr():
    import foo
    getattr(foo, 'bar')()
Note that getattr() works on any object, including classes, class instances, modules, and so on.
This is used in several places in the standard library, like this:
    class Foo:
        def do_foo(self):
            ...
        def do_bar(self):
            ...

    f = getattr(foo_instance, 'do_' + opname)
    f()
Use locals() or eval() to resolve the function name:
    def myFunc():
        print "hello"

    fname = "myFunc"

    f = locals()[fname]
    f()

    f = eval(fname)
    f()
Note: Using eval() can be dangerous. If you don't have absolute control over the contents of the string, all sorts of things could happen...
There are two partial substitutes. If you want to remove all trailing whitespace, use the rstrip() method of string objects. Otherwise, if there is only one line in the string S, use S.splitlines()[0].
    #!/usr/bin/python
    import re, os, StringIO

    lines = StringIO.StringIO(
        "The Python Programming Language\r\n"
        "The Python Programming Language \r \r \r\r\n"
        "The\rProgramming\rLanguage\r\n"
        "The\rProgramming\rLanguage\r\r\r\r\n"
        "The\r\rProgramming\r\rLanguage\r\r\r\r\n"
    )

    # dos: \r\n, unix: \n, mac: \r, others: unknown
    ln = re.compile("(?:[\r]?\n|\r)$")
    # os.linesep does not work if someone ftps (in binary mode) a dos/mac
    # text file to your unix box:
    # ln = re.compile(os.linesep + "$")

    while 1:
        s = lines.readline()
        if not s:
            break
        print "1.(%s)" % `s.rstrip()`
        print "2.(%s)" % `ln.sub("", s, 1)`
        print "3.(%s)" % `s.splitlines()[0]`
        print "4.(%s)" % `s.splitlines()`
        print
    lines.close()
Not as such.
For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using string.split(), and to convert decimal strings to numeric values using int(), long() or float(). (Python's int() is 32-bit and its long() is arbitrary precision.) string.split supports an optional "sep" parameter which is useful if the line uses something other than whitespace as a delimiter.
For more complicated input parsing, regular expressions (see module re) are better suited and more powerful than C's sscanf().
There's a contributed module that emulates sscanf(), by Steve Clift; see contrib/Misc/sscanfmodule.c of the ftp site:
http://www.python.org/ftp/python/contrib-09-Dec-1999/Misc
The function tuple(seq) converts any sequence into a tuple with the same items in the same order. For example, tuple([1, 2, 3]) yields (1, 2, 3) and tuple('abc') yields ('a', 'b', 'c'). If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren't sure that an object is already a tuple.
The function list(seq) converts any sequence into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.
Python sequences are indexed with positive and negative numbers. For positive indices, 0 is the first index, 1 is the second, and so forth. For negative indices, -1 is the last index, -2 is the penultimate (next to last) index, and so forth. Think of seq[-n] as the same as seq[len(seq)-n].
Using negative indices can be very convenient. For example, if the string Line ends in a newline, then Line[:-1] is all of Line except the newline.
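For instance, in an interpreter session (the string here is arbitrary):

    >>> Line = "spam\n"
    >>> Line[-1]
    '\n'
    >>> Line[:-1]
    'spam'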
XXX update Sadly, the list built-in method L.insert does not observe negative indices. This could be considered a mistake, but since existing programs depend on it, it may stay around forever. For negative indices, L.insert inserts at the start of the list. To get "proper" negative index behaviour, use L[n:n] = [x] in place of the insert method.
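A quick sketch of the slice-assignment workaround (the values here are arbitrary):

    >>> L = [1, 2, 3]
    >>> n = -1
    >>> L[n:n] = [99]    # insert 99 just before the last element
    >>> L
    [1, 2, 99, 3]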
If it is a list, the fastest solution is
    list.reverse()
    try:
        for x in list:
            "do something with x"
    finally:
        list.reverse()
This has the disadvantage that while you are in the loop, the list is temporarily reversed. If you don't like this, you can make a copy. This appears expensive but is actually faster than other solutions:
    rev = list[:]
    rev.reverse()
    for x in rev:
        <do something with x>
If it's not a list, a more general but slower solution is:
    for i in range(len(sequence)-1, -1, -1):
        x = sequence[i]
        <do something with x>
A more elegant solution, is to define a class which acts as a sequence and yields the elements in reverse order (solution due to Steve Majewski):
    class Rev:
        def __init__(self, seq):
            self.forw = seq
        def __len__(self):
            return len(self.forw)
        def __getitem__(self, i):
            return self.forw[-(i + 1)]
You can now simply write:
    for x in Rev(list):
        <do something with x>
Unfortunately, this solution is the slowest of all, due to the method call overhead...
See the Python Cookbook for a long discussion of many cool ways:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
Generally, if you don't mind reordering the list:
    if List:
        List.sort()
        last = List[-1]
        for i in range(len(List)-2, -1, -1):
            if last == List[i]:
                del List[i]
            else:
                last = List[i]
If all elements of the list may be used as dictionary keys (i.e., they are all hashable), this is often faster:
    d = {}
    for x in List:
        d[x] = x
    List = d.values()
Also, for extremely large lists you might consider alternatives faster than the first one. The second one is pretty good whenever it can be used.
["this", 1, "is", "an", "array"]
Lists are arrays in the C or Pascal sense of the word (see question 6.16). The array module also provides methods for creating arrays of fixed types with compact representations (but they are slower to index than lists). Also note that the Numeric extension and others define array-like structures with various characteristics as well.
To get Lisp-like lists, emulate cons cells
lisp_list = ("like", ("this", ("example", None) ) )
using tuples (or lists, if you want mutability). Here the analogue of lisp car is lisp_list[0] and the analogue of cdr is lisp_list[1]. Only do this if you're sure you really need to (it's usually a lot slower than using Python lists).
Think of Python lists as mutable heterogeneous arrays of Python objects (say that 10 times fast :) ).
You probably tried to make a multidimensional array like this:
A = [[None] * 2] * 3
This makes a list containing 3 references to the same list of length two. Changes to one row will show in all rows, which is probably not what you want. The following works much better:
    A = [None] * 3
    for i in range(3):
        A[i] = [None] * 2
This generates a list containing 3 different lists of length two.
If you feel weird, you can also do it in the following way:
    w, h = 2, 3
    A = map(lambda i, w=w: [None] * w, range(h))
In Python 2.0 and later, the above can be spelled using a list comprehension:
    w, h = 2, 3
    A = [ [None] * w for i in range(h) ]
Get fancy!
    def method_map(objects, method, arguments):
        """method_map([a,b], "flog", (1,2)) gives [a.flog(1,2), b.flog(1,2)]"""
        nobjects = len(objects)
        methods = map(getattr, objects, [method] * nobjects)
        return map(apply, methods, [arguments] * nobjects)
It's generally a good idea to get to know the mysteries of map and apply and getattr and the other dynamic features of Python.
In general, dictionaries store their keys in an unpredictable order, so the display order of a dictionary's elements will be similarly unpredictable. (See Question 6.12 to understand why this is so.)
This can be frustrating if you want to save a printable version to a file, make some changes and then compare it with some other printed dictionary. If you have such needs you can subclass UserDict.UserDict to create a SortedDict class that prints itself in a predictable order. Here's one simpleminded implementation of such a class:
    import UserDict, string

    class SortedDict(UserDict.UserDict):
        def __repr__(self):
            result = []
            append = result.append
            keys = self.data.keys()
            keys.sort()
            for k in keys:
                append("%s: %s" % (`k`, `self.data[k]`))
            return "{%s}" % string.join(result, ", ")
        __str__ = __repr__
This will work for many common situations you might encounter, though it's far from a perfect solution. (It won't have any effect on the pprint module, and it does not transparently handle values that are or contain dictionaries.)
Yes, and in Python you only have to write it once:
    def st(List, Metric):
        def pairing(element, M=Metric):
            return (M(element), element)
        paired = map(pairing, List)
        paired.sort()
        return map(stripit, paired)

    def stripit(pair):
        return pair[1]
This technique, attributed to Randal Schwartz, sorts the elements of a list by a metric which maps each element to its "sort value". For example, if L is a list of strings, then
    import string

    Usorted = st(L, string.upper)

    def intfield(s):
        return string.atoi(string.strip(s[10:15]))

    Isorted = st(L, intfield)
Usorted gives the elements of L sorted as if they were upper case, and Isorted gives the elements of L sorted by the integer values that appear in the string slices from position 10 up to (but not including) position 15. In Python 2.0 and later this can be done more naturally with list comprehensions:
    tmp1 = [ (x.upper(), x) for x in L ]    # Schwartzian transform
    tmp1.sort()
    Usorted = [ x[1] for x in tmp1 ]

    tmp2 = [ (int(s[10:15]), s) for s in L ]    # Schwartzian transform
    tmp2.sort()
    Isorted = [ x[1] for x in tmp2 ]
Note that Isorted may also be computed by
    def Icmp(s1, s2):
        return cmp(intfield(s1), intfield(s2))

    Isorted = L[:]
    Isorted.sort(Icmp)
but since this method computes intfield many times for each element of L, it is slower than the Schwartzian Transform.
You can sort lists of tuples.
>>> list1 = ["what", "I'm", "sorting", "by"] >>> list2 = ["something", "else", "to", "sort"] >>> pairs = map(None, list1, list2) >>> pairs [('what', 'something'), ("I'm", 'else'), ('sorting', 'to'), ('by', 'sort')] >>> pairs.sort() >>> pairs [("I'm", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')] >>> result = pairs[:] >>> for i in xrange(len(result)): result[i] = result[i][1] ... >>> result ['else', 'sort', 'to', 'something']
And if you didn't understand the question, please see the example above ;c). Note that "I'm" sorts before "by" because uppercase "I" comes before lowercase "b" in the ASCII order. Also see 4.51.
In Python 2.0 and later this can be done like this:
>>> list1 = ["what", "I'm", "sorting", "by"] >>> list2 = ["something", "else", "to", "sort"] >>> pairs = zip(list1, list2) >>> pairs [('what', 'something'), ("I'm", 'else'), ('sorting', 'to'), ('by', 'sort')] >>> pairs.sort() >>> result = [ x[1] for x in pairs ] >>> result ['else', 'sort', 'to', 'something']
[Followup]
Someone asked, why not this for the last steps:
    result = []
    for p in pairs:
        result.append(p[1])
This is much more legible. However, a quick test shows that it is almost twice as slow for long lists. Why? First of all, the append() operation has to reallocate memory, and while it uses some tricks to avoid doing that each time, it still has to do it occasionally, and apparently that costs quite a bit. Second, the expression "result.append" requires an extra attribute lookup. The attribute lookup could be done away with by rewriting as follows:
    result = []
    append = result.append
    for p in pairs:
        append(p[1])
which gains back some speed, but is still considerably slower than the original solution, and hardly less convoluted.
A class is the particular object type created by executing a class statement. Class objects are used as templates, to create instance objects, which embody both the data structure (attributes) and program routines (methods) specific to a datatype.
A class can be based on one or more other classes, called its base class(es). It then inherits the attributes and methods of its base classes. This allows an object model to be successively refined by inheritance.
The term "classic class" is used to refer to the original class implementation in Python. One problem with classic classes is their inability to use the built-in data types (such as list and dictionary) as base classes. Starting with Python 2.2 an attempt is in progress to unify user-defined classes and built-in types. It is now possible to declare classes that inherit from built-in types.
A method is a function that you normally call as x.name(arguments...) for some object x. The term is used for methods of classes and class instances as well as for methods of built-in objects. (The latter have a completely different implementation and only share the way their calls look in Python code.) Methods of classes (and class instances) are defined as functions inside the class definition.
Self is merely a conventional name for the first argument of a method -- i.e. a function defined inside a class definition. A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c).
An unbound method is a method defined in a class that is not yet bound to an instance. You get an unbound method if you ask for a class attribute that happens to be a function. You get a bound method if you ask for an instance attribute. A bound method knows which instance it belongs to and calling it supplies the instance automatically; an unbound method only knows which class it wants for its first argument (a derived class is also OK). Calling an unbound method doesn't "magically" derive the first argument from the context -- you have to provide it explicitly.
Trivia note regarding bound methods: each reference to a bound method of a particular object creates a bound method object. If you have two such references (a = inst.meth; b = inst.meth), they will compare equal (a == b) but are not the same (a is not b).
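A minimal interpreter sketch of this (in Python versions before the boolean type, the comparisons display as 1 and 0):

    >>> class C:
    ...     def meth(self): pass
    ...
    >>> inst = C()
    >>> a = inst.meth
    >>> b = inst.meth
    >>> a == b    # same instance and same function, so they compare equal
    1
    >>> a is b    # but each attribute access created a new method object
    0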
If you are developing the classes from scratch it might be better to program in a more proper object-oriented style -- instead of doing a different thing based on class membership, why not use a method and define the method differently in different classes?
However, there are some legitimate situations where you need to test for class membership.
In Python 1.5, you can use the built-in function isinstance(obj, cls).
The following approaches can be used with earlier Python versions:
An unobvious method is to raise the object as an exception and to try to catch the exception with the class you're testing for:
    def is_instance_of(the_instance, the_class):
        try:
            raise the_instance
        except the_class:
            return 1
        except:
            return 0
This technique can also be used to distinguish "subclassness" from a collection of classes:
    try:
        raise the_instance
    except Audible:
        the_instance.play(largo)
    except Visual:
        the_instance.display(gaudy)
    except Olfactory:
        sniff(the_instance)
    except:
        raise ValueError, "dunno what to do with this!"
This uses the fact that exception catching tests for class or subclass membership.
A different approach is to test for the presence of a class attribute that is presumably unique for the given class. For instance:
    class MyClass:
        ThisIsMyClass = 1
        ...

    def is_a_MyClass(the_instance):
        return hasattr(the_instance, 'ThisIsMyClass')
This version is easier to inline, and probably faster (inlined it is definitely faster). The disadvantage is that someone else could cheat:
    class IntruderClass:
        ThisIsMyClass = 1    # Masquerade as MyClass
        ...
but this may be seen as a feature (anyway, there are plenty of other ways to cheat in Python). Another disadvantage is that the class must be prepared for the membership test. If you do not "control the source code" for the class it may not be advisable to modify the class to support testability.
Delegation refers to an object-oriented technique that Python programmers may implement with particular ease. Consider the following:
    from string import upper

    class UpperOut:
        def __init__(self, outfile):
            self.__outfile = outfile
        def write(self, str):
            self.__outfile.write(upper(str))
        def __getattr__(self, name):
            return getattr(self.__outfile, name)
Here the UpperOut class redefines the write method to convert the argument string to upper case before calling the underlying self.__outfile.write method, but all other methods are delegated to the underlying self.__outfile object. The delegation is accomplished via the "magic" __getattr__ method. Please see the language reference for more information on the use of this method.
Note that for more general cases delegation can get trickier. Particularly when attributes must be set as well as gotten, the class must define a __setattr__ method too, and it must do so carefully.
The basic implementation of __setattr__ is roughly equivalent to the following:
    class X:
        ...
        def __setattr__(self, name, value):
            self.__dict__[name] = value
        ...
Most __setattr__ implementations must modify self.__dict__ to store local state for self without causing an infinite recursion.
If your class definition starts with "class Derived(Base): ..." then you can call method meth defined in Base (or one of Base's base classes) as Base.meth(self, arguments...). Here, Base.meth is an unbound method (see previous question).
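A minimal sketch, with hypothetical class names:

    class Base:
        def meth(self):
            print "Base.meth called"

    class Derived(Base):
        def meth(self):
            Base.meth(self)    # unbound method: pass self explicitly
            print "Derived.meth called"

    Derived().meth()    # prints both messages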
DON'T DO THIS. REALLY. I MEAN IT. It appears that you could call self.__class__.__bases__[0].meth(self, arguments...), but this fails as soon as another class is derived from yours: for instances of that doubly-derived class, self.__class__.__bases__[0] is your class, not its base class -- so (assuming you are doing this from within Derived.meth) you would start a recursive call.
Often when you want to do this you are forgetting that classes are first class in Python. You can "point to" the class you want to delegate an operation to either at the instance or at the subclass level. For example if you want to use a "glorp" operation of a superclass you can point to the right superclass to use.
    class subclass(superclass1, superclass2, superclass3):
        delegate_glorp = superclass2
        ...
        def glorp(self, arg1, arg2):
            ... subclass specific stuff ...
            self.delegate_glorp.glorp(self, arg1, arg2)
        ...

    class subsubclass(subclass):
        delegate_glorp = superclass3
        ...
Note, however, that setting delegate_glorp to subclass in subsubclass would cause an infinite recursion on subclass.delegate_glorp. Careful! Maybe you are getting too fancy for your own good; consider simplifying the design.
You could define an alias for the base class, assign the real base class to it before your class definition, and use the alias throughout your class. Then all you have to change is the value assigned to the alias. Incidentally, this trick is also handy if you want to decide dynamically (e.g. depending on availability of resources) which base class to use. Example:
    BaseAlias = <real base class>

    class Derived(BaseAlias):
        def meth(self):
            BaseAlias.meth(self)
            ...
[Tim Peters, tim_one@email.msn.com]
Static data (in the sense of C++ or Java) is easy; static methods (again in the sense of C++ or Java) are not supported directly.
STATIC DATA
For example,
    class C:
        count = 0    # number of times C.__init__ called

        def __init__(self):
            C.count = C.count + 1

        def getcount(self):
            return C.count    # or return self.count
c.count also refers to C.count for any c such that isinstance(c, C) holds, unless overridden by c itself or by some class on the base-class search path from c.__class__ back to C.
Caution: within a method of C,
self.count = 42
creates a new and unrelated instance variable named "count" in self's own dict. So rebinding of a class-static data name needs the
C.count = 314
form whether inside a method or not.
STATIC METHODS
Static methods (as opposed to static data) are unnatural in Python, because
C.getcount
returns an unbound method object, which can't be invoked without supplying an instance of C as the first argument.
The intended way to get the effect of a static method is via a module-level function:
def getcount(): return C.count
If your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation.
Several tortured schemes for faking static methods can be found by searching DejaNews. Most people feel such cures are worse than the disease. Perhaps the least obnoxious is due to Pekka Pessi (mailto:ppessi@hut.fi):
    # helper class to disguise function objects
    class _static:
        def __init__(self, f):
            self.__call__ = f

    class C:
        count = 0

        def __init__(self):
            C.count = C.count + 1

        def getcount():
            return C.count
        getcount = _static(getcount)

        def sum(x, y):
            return x + y
        sum = _static(sum)

    C(); C()
    c = C()
    print C.getcount()    # prints 3
    print c.getcount()    # prints 3
    print C.sum(27, 15)   # prints 42
(This actually applies to all methods, but somehow the question usually comes up first in the context of constructors.)
Where in C++ you'd write
    class C {
        C() { cout << "No arguments\n"; }
        C(int i) { cout << "Argument is " << i << "\n"; }
    }
in Python you have to write a single constructor that catches all cases using default arguments. For example:
    class C:
        def __init__(self, i=None):
            if i is None:
                print "No arguments"
            else:
                print "Argument is", i
This is not entirely equivalent, but close enough in practice.
You could also try a variable-length argument list, e.g.
    def __init__(self, *args):
        ...
The same approach works for all method definitions.
Variables with double leading underscore are "mangled" to provide a simple but effective way to define class private variables. See the chapter "New in Release 1.4" in the Python Tutorial. XXX update
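A minimal sketch of the mangling (class and attribute names here are arbitrary):

    class A:
        __spam = 1    # stored under the mangled name _A__spam

    a = A()
    print a._A__spam  # prints 1; mangling is privacy by convention, not enforcement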
There are several possible reasons for this.
The del statement does not necessarily call __del__ -- it simply decrements the object's reference count, and if this reaches zero __del__ is called.
If your data structures contain circular links (e.g. a tree where each child has a parent pointer and each parent has a list of children) the reference counts will never go back to zero. You'll have to define an explicit close() method which removes those pointers. Please don't ever call __del__ directly -- __del__ should call close() and close() should make sure that it can be called more than once for the same object.
If the object has ever been a local variable (or argument, which is really the same thing) to a function that caught an exception in an except clause, chances are that a reference to the object still exists in that function's stack frame as contained in the stack trace. Normally, deleting (better: assigning None to) sys.exc_traceback will take care of this. If a stack was printed for an unhandled exception in an interactive interpreter, delete sys.last_traceback instead.
There is code that deletes all objects when the interpreter exits, but it is not called if your Python has been configured to support threads (because other threads may still be active). You can define your own cleanup function using sys.exitfunc (see question 4.4).
Finally, if your __del__ method raises an exception, a warning message is printed to sys.stderr.
Starting with Python 2.0, a garbage collector periodically reclaims the space used by most cycles with no external references. (See the "gc" module documentation for details.) There are, however, pathological cases where it can be expected to fail. Moreover, the garbage collector runs some time after the last reference to your data structure vanishes, so your __del__ method may be called at an inconvenient and random time -- annoying if you're trying to reproduce a problem. Worse, the order in which objects' __del__ methods are executed is arbitrary.
Another way to avoid cyclical references is to use the "weakref" module, which allows you to point to objects without incrementing their reference count. Tree data structures, for instance, should use weak references for their parent and sibling pointers (if they need them!).
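A minimal sketch, assuming Python 2.1 or later (the Node class is hypothetical):

    import weakref

    class Node:
        def __init__(self, parent=None):
            self.children = []
            if parent is not None:
                # a weak reference does not keep the parent alive
                self.parent = weakref.ref(parent)
                parent.children.append(self)

    root = Node()
    child = Node(root)
    print child.parent() is root    # call the weakref to dereference it; prints 1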
Python does not keep track of all instances of a class (or of a built-in type).
You can program the class's constructor to keep track of all instances, but unless you're very clever, this has the disadvantage that the instances never get deleted, because your list of all instances keeps a reference to them.
(The trick is to regularly inspect the reference counts of the instances you've retained, and if the reference count is below a certain level, remove it from the list. Determining that level is tricky -- it's definitely larger than 1.)
QUESTION:
I have a module and I wish to generate a .pyc file. How do I do it? Everything I read says that generation of a .pyc file is "automatic", but I'm not getting anywhere.
ANSWER:
When a module is imported for the first time (or when the source is more recent than the current compiled file) a .pyc file containing the compiled code should be created in the same directory as the .py file.
One reason that a .pyc file may not be created is permissions problems with the directory. This can happen, for example, if you develop as one user but run as another, such as if you are testing with a web server.
However, in most cases, that's not the problem.
Creation of a .pyc file is "automatic" if you are importing a module and Python has the ability (permissions, free space, etc...) to write the compiled module back to the directory. But note that running Python on a top level script is not considered an import and so no .pyc will be created automatically. For example, if you have a top-level module abc.py that imports another module xyz.py, when you run abc, xyz.pyc will be created since xyz is imported, but no abc.pyc file will be created since abc isn't imported.
If you need to create abc.pyc -- that is, to create a .pyc file for a module that is not imported -- you can. (Look up the py_compile and compileall modules in the Library Reference.)
You can manually compile any module using the "py_compile" module. One way is to use the compile() function in that module interactively:
    >>> import py_compile
    >>> py_compile.compile('abc.py')
This will write the .pyc to the same location as abc.py (or you can override that with the optional parameter cfile).
You can also automatically compile all files in a directory or directories using the "compileall" module, which can also be run straight from the command line.
You can do it from the shell (or DOS) prompt by entering:
python compile.py abc.py
or
python compile.py *
Or you can write a script to do it on a list of filenames that you enter.
    import sys
    from py_compile import compile

    if len(sys.argv) <= 1:
        sys.exit(1)
    for file in sys.argv[1:]:
        compile(file)
ACKNOWLEDGMENTS:
Steve Holden, David Bolen, Rich Somerfield, Oleg Broytmann, Steve Ferg
A module can find out its own module name by looking at the (predefined) global variable __name__. If this has the value '__main__' you are running as a script.
See the previous question. E.g. if you put the following on the last line of your module, main() is called only when your module is running as a script:
if __name__ == '__main__': main()
Suppose you have the following modules:
foo.py:
    from bar import bar_var
    foo_var = 1
bar.py:
    from foo import foo_var
    bar_var = 2
The problem is that the above is processed by the interpreter thus:
    main imports foo
    Empty globals for foo are created
    foo is compiled and starts executing
    foo imports bar
    Empty globals for bar are created
    bar is compiled and starts executing
    bar imports foo (which is a no-op since there already is a module named foo)
    bar.foo_var = foo.foo_var ...
The last step fails, because Python isn't done with interpreting foo yet and the global symbol dict for foo is still empty.
The same thing happens when you use "import foo" and then try to access "foo.foo_var" in global code.
There are (at least) three possible workarounds for this problem.
Guido van Rossum recommends avoiding all uses of "from <module> import ..." (so everything from an imported module is referenced as <module>.<name>) and placing all code inside functions. Initializations of global variables and class variables should use constants or built-in functions only.
Jim Roskind suggests the following order in each module:
    exports (globals, functions, and classes that don't need imported base classes)
    import statements
    active code (including globals that are initialized from imported values)
Python's author doesn't like this approach much because the imports appear in a strange place, but has to admit that it works.
Matthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place.
These solutions are not mutually exclusive.
Try
__import__('x.y.z').y.z
For more realistic situations, you may have to do something like
    m = __import__(s)
    for i in string.split(s, ".")[1:]:
        m = getattr(m, i)
For reasons of efficiency as well as consistency, Python only reads the module file the first time a module is imported. (Otherwise a program consisting of many modules, each of which imports the same basic module, would read the basic module over and over again.) To force rereading of a changed module, do this:
    import modname
    reload(modname)
Warning: this technique is not 100% fool-proof. In particular, modules containing statements like
from modname import some_objects
will continue to work with the old version of the imported objects.
This is probably an optional module (written in C!) which hasn't been configured on your system. This especially happens with modules like "Tkinter", "stdwin", "gl", "Xt" or "Xm". For Tkinter, STDWIN and many other modules, see Modules/Setup.in for info on how to add these modules to your Python, if it is possible at all. Sometimes you will have to ftp and build another package first (e.g. Tcl and Tk for Tkinter). Sometimes the module only works on specific platforms (e.g. gl only works on SGI machines).
NOTE: if the complaint is about "Tkinter" (upper case T) and you have already configured module "tkinter" (lower case t), the solution is not to rename tkinter to Tkinter or vice versa. There is probably something wrong with your module search path. Check out the value of sys.path.
For X-related modules (Xt and Xm) you will have to do more work: they are currently not part of the standard Python distribution. You will have to ftp the Extensions tar file, i.e. ftp://ftp.python.org/pub/python/src/X-extension.tar.gz and follow the instructions there.
Depending on what platform(s) you are aiming at, there are several.
Currently supported solutions:
Cross-platform:
Tk:
There's a neat object-oriented interface to the Tcl/Tk widget set, called Tkinter. It is part of the standard Python distribution and well-supported -- all you need to do is build and install Tcl/Tk and enable the _tkinter module and the TKPATH definition in Modules/Setup when building Python. This is probably the easiest to install and use, and the most complete widget set. It is also very likely that in the future the standard Python GUI API will be based on or at least look very much like the Tkinter interface. For more info about Tk, including pointers to the source, see the Tcl/Tk home page at http://www.scriptics.com. Tcl/Tk is now fully portable to the Mac and Windows platforms (NT and 95 only); you need Python 1.4beta3 or later and Tk 4.1patch1 or later.
wxWindows:
There's an interface to wxWindows called wxPython. wxWindows is a portable GUI class library written in C++. It supports GTK, Motif, MS-Windows and Mac as targets. Ports to other platforms are being contemplated or have already had some work done on them. wxWindows preserves the look and feel of the underlying graphics toolkit, and there is quite a rich widget set and collection of GDI classes. See the wxWindows page at http://www.wxwindows.org for more details. wxPython is a python extension module that wraps many of the wxWindows C++ classes, and is quickly gaining popularity amongst Python developers. You can get wxPython as part of the source or CVS distribution of wxWindows, or directly from its home page at http://alldunn.com/wxPython.
Gtk+:
PyGtk bindings for the Gtk+ Toolkit by James Henstridge exist; see ftp://ftp.daa.com.au/pub/james/python. Note that there are two incompatible bindings. If you are using Gtk+ 1.2.x you should get the 0.6.x PyGtk bindings from
ftp://ftp.gtk.org/pub/python/v1.2
If you plan to use Gtk+ 2.0 with Python (highly recommended if you are just starting with Gtk), get the most recent distribution from
ftp://ftp.gtk.org/pub/python/v2.0
If you are adventurous, you can also check out the source from the Gnome CVS repository. Set your CVS directory to :pserver:anonymous@anoncvs.gnome.org:/cvs/gnome and check the gnome-python module out from the repository.
Other:
There are also bindings available for the Qt toolkit (PyQt), and for KDE (PyKDE); see http://www.thekompany.com/projects/pykde.
For OpenGL bindings, see PyOpenGL.
Platform specific:
The Mac port by Jack Jansen has a rich and ever-growing set of modules that support the native Mac toolbox calls. See the documentation that comes with the Mac port. See ftp://ftp.python.org/pub/python/mac.
Pythonwin by Mark Hammond includes an interface to the Microsoft Foundation Classes and a Python programming environment using it that's written mostly in Python. See http://www.python.org/windows.
There's an object-oriented GUI based on the Microsoft Foundation Classes model called WPY, supported by Jim Ahlstrom. Programs written in WPY run unchanged and with native look and feel on Windows NT/95, Windows 3.1 (using win32s), and on Unix (using Tk). Source and binaries for Windows and Linux are available in ftp://ftp.python.org/pub/python/wpy.
Obsolete or minority solutions:
There's an interface to X11, including the Athena and Motif widget sets (and a few individual widgets, like Mosaic's HTML widget and SGI's GL widget) available from ftp://ftp.python.org/pub/python/src/X-extension.tar.gz. Support by Sjoerd Mullender sjoerd@cwi.nl.
On top of the X11 interface there's the vpApp toolkit by Per Spilling, now also maintained by Sjoerd Mullender sjoerd@cwi.nl. See ftp://ftp.cwi.nl/pub/sjoerd/vpApp.tar.gz.
For SGI IRIX only, there are unsupported interfaces to the complete GL (Graphics Library -- low level but very good 3D capabilities) as well as to FORMS (a buttons-and-sliders-etc package built on top of GL by Mark Overmars -- ftp'able from ftp://ftp.cs.ruu.nl/pub/SGI/FORMS). This is probably also becoming obsolete, as OpenGL takes over (see above).
There is an interface to WAFE, a Tcl interface to the X11 Motif and Athena widget sets. WAFE is at http://www.wu-wien.ac.at/wafe/wafe.html.
Freeze is a tool to create stand-alone applications (see 4.28).
When freezing Tkinter applications, the applications will not be truly stand-alone, as the application will still need the tcl and tk libraries.
One solution is to ship the application with the tcl and tk libraries, and point to them at run-time using the TCL_LIBRARY and TK_LIBRARY environment variables.
To get truly stand-alone applications, the Tcl scripts that form the library have to be integrated into the application as well. One tool supporting that is SAM (stand-alone modules), which is part of the Tix distribution (http://tix.mne.com). Build Tix with SAM enabled, perform the appropriate call to Tclsam_init etc inside Python's Modules/tkappinit.c, and link with libtclsam and libtksam (you might include the Tix libraries as well).
Yes, and you don't even need threads! But you'll have to restructure your I/O code a bit. Tk has the equivalent of Xt's XtAddInput() call, which allows you to register a callback function which will be called from the Tk mainloop when I/O is possible on a file descriptor. Here's what you need:
    from Tkinter import tkinter
    tkinter.createfilehandler(file, mask, callback)
The file may be a Python file or socket object (actually, anything with a fileno() method), or an integer file descriptor. The mask is one of the constants tkinter.READABLE or tkinter.WRITABLE. The callback is called as follows:
callback(file, mask)
You must unregister the callback when you're done, using
tkinter.deletefilehandler(file)
Note: since you don't know how many bytes are available for reading, you can't use the Python file object's read or readline methods, since these will insist on reading a predefined number of bytes. For sockets, the recv() or recvfrom() methods will work fine; for other files, use os.read(file.fileno(), maxbytecount).
An oft-heard complaint is that event handlers bound to events with the bind() method don't get handled even when the appropriate key is pressed.
The most common cause is that the widget to which the binding applies doesn't have "keyboard focus". Check out the Tk documentation for the focus command. Usually a widget is given the keyboard focus by clicking in it (but not for labels; see the takefocus option).
Use os.remove(filename) or os.unlink(filename); for documentation, see the posix section of the library manual. They are the same; unlink() is simply the Unix name for this function. In earlier versions of Python, only os.unlink() was available.
To remove a directory, use os.rmdir(); use os.mkdir() to create one.
To rename a file, use os.rename().
To truncate a file, open it using f = open(filename, "r+"), and use f.truncate(offset); offset defaults to the current seek position. (The "r+" mode opens the file for reading and writing.) There's also os.ftruncate(fd, offset) for files opened with os.open() -- for advanced Unix hacks only.
The shutil module also contains a number of functions to work on files including copyfile, copytree, and rmtree amongst others.
There's the shutil module which contains a copyfile() function that implements a copy loop; it isn't good enough for the Macintosh, though: it doesn't copy the resource fork and Finder info.
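For example (hypothetical file names):

    import shutil
    shutil.copyfile("data.txt", "data.bak")        # copy a single file
    shutil.copytree("myproject", "myproject.bak")  # recursively copy a directory tree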
For complex data formats, it's best to use the struct module. It's documented in the library reference. It allows you to take a string read from a file containing binary data (usually numbers) and convert it to Python objects, and vice versa.
For example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:
    import struct

    f = open(filename, "rb")    # Open in binary mode for portability
    s = f.read(8)
    x, y, z = struct.unpack(">hhl", s)
The '>' in the format string forces big-endian data; the letter 'h' reads one "short integer" (2 bytes), and 'l' reads one "long integer" (4 bytes) from the string.
For data that is more regular (e.g. a homogeneous list of ints or floats), you can also use the array module, also documented in the library reference.
We presume for the purposes of this question you are interested in standalone testing, rather than testing your components inside a testing framework. The best-known testing framework for Python is the PyUnit module, maintained at
http://pyunit.sourceforge.net
For standalone testing, it helps to write the program so that it may be easily tested by using good modular design. In particular your program should have almost all functionality encapsulated in either functions or class methods -- and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.
The "global main logic" of your program may be as simple as
if __name__=="__main__": main_logic()
at the bottom of the main module of your program.
Once your program is organized as a tractable collection of functions and class behaviours you should write test functions that exercise the behaviours. A test suite can be associated with each module which automates a sequence of tests. This sounds like a lot of work, but since Python is so terse and flexible it's surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the "production code", since this makes it easy to find bugs and even design flaws earlier.
"Support modules" that are not intended to be the main module of a program may include a "test script interpretation" which invokes a self test of the module.
if __name__ == "__main__": self_test()
Even programs that interact with complex external interfaces may be tested when the external interfaces are unavailable by using "fake" interfaces implemented in Python. For an example of a "fake" interface, the following class defines (part of) a "fake" file interface:
    testdata = "just a random sequence of characters"

    class FakeInputFile:
        data = testdata
        position = 0
        closed = 0

        def read(self, n=None):
            self.testclosed()
            p = self.position
            if n is None:
                result = self.data[p:]
            else:
                result = self.data[p:p+n]
            self.position = p + len(result)
            return result

        def seek(self, n, m=0):
            self.testclosed()
            last = len(self.data)
            p = self.position
            if m == 0:
                final = n
            elif m == 1:
                final = n + p
            elif m == 2:
                final = last + n
            else:
                raise ValueError, "bad m"
            if final < 0:
                raise IOError, "negative seek"
            self.position = final

        def isatty(self):
            return 0

        def tell(self):
            return self.position

        def close(self):
            self.closed = 1

        def testclosed(self):
            if self.closed:
                raise IOError, "file closed"
Try f=FakeInputFile() and test out its operations.
Use gendoc, by Daniel Larsson. See
http://starship.python.net/crew/danilo
It can create HTML from the doc strings in your Python source code.
You need to do two things: the script file's mode must be executable (include the 'x' bit), and the first line must begin with #! followed by the pathname for the Python interpreter.
The first is done by executing 'chmod +x scriptfile' or perhaps 'chmod 755 scriptfile'.
The second can be done in a number of ways. The most straightforward way is to write
#!/usr/local/bin/python
as the very first line of your file - or whatever the pathname is where the python interpreter is installed on your platform.
If you would like the script to be independent of where the python interpreter lives, you can use the "env" program. On almost all platforms, the following will work, assuming the python interpreter is in a directory on the user's $PATH:
#! /usr/bin/env python
Note -- don't do this for CGI scripts. The $PATH variable for CGI scripts is often very minimal, so you need to use the actual absolute pathname of the interpreter.
Occasionally, a user's environment is so full that the /usr/bin/env program fails; or there's no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):
    #! /bin/sh
    """:"
    exec python $0 ${1+"$@"}
    """
The disadvantage is that this defines the script's __doc__ string. However, you can fix that by adding
__doc__ = """...Whatever..."""
The standard library module "random" implements a random number generator. Usage is simple:
    import random
    random.random()
This returns a random floating point number in the range [0, 1).
There are also many other specialized generators in this module, such as
    randrange(a, b)            chooses an integer in the range [a, b)
    uniform(a, b)              chooses a floating point number in the range [a, b)
    normalvariate(mean, sdev)  samples the normal (Gaussian) distribution
Some higher-level functions operate on sequences directly, such as
    choice(S)   chooses a random element from a given sequence
    shuffle(L)  shuffles a list in-place, i.e. permutes it randomly
There's also a class, Random, which you can instantiate to create independent multiple random number generators.
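For example, a minimal sketch (the seed value is arbitrary):

    import random

    gen1 = random.Random()    # two independent generators
    gen2 = random.Random()
    gen1.seed(1234)
    gen2.seed(1234)
    print gen1.random() == gen2.random()    # same seed, same stream: prints 1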
All this is documented in the library reference manual. Note that the module "whrandom" is obsolete.
There's a Windows serial communication module (for communication over RS 232 serial ports) at
ftp://ftp.python.org/pub/python/contrib/sio-151.zip http://www.python.org/ftp/python/contrib/sio-151.zip
For DOS, try Hans Nowak's Python-DX, which supports this, at:
http://www.cuci.nl/~hnowak
For Unix, see a usenet post by Mitch Chapman:
http://groups.google.com/groups?selm=34A04430.CF9@ohioee.com
For Win32, POSIX(Linux, BSD, *), Jython, Chris':
http://pyserial.sourceforge.net
The standard library module smtplib does this. Here's a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.
    import sys, smtplib

    fromaddr = raw_input("From: ")
    toaddrs = raw_input("To: ").split(',')
    print "Enter message, end with ^D:"
    msg = ''
    while 1:
        line = sys.stdin.readline()
        if not line:
            break
        msg = msg + line

    # The actual mail send
    server = smtplib.SMTP('localhost')
    server.sendmail(fromaddr, toaddrs, msg)
    server.quit()
If the local host doesn't have an SMTP listener, you need to find one. The simple method is to ask the user. Alternately, you can use the DNS system to find the mail gateway(s) responsible for the source address.
A Unix-only alternative uses sendmail. The location of the sendmail program varies between systems; sometimes it is /usr/lib/sendmail, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here's some sample code:
SENDMAIL = "/usr/sbin/sendmail" # sendmail location import os p = os.popen("%s -t -i" % SENDMAIL, "w") p.write("To: `cary@ratatosk.org <mailto:cary@ratatosk.org>`_\n") p.write("Subject: test\n") p.write("\n") # blank line separating headers from body p.write("Some text\n") p.write("some more text\n") sts = p.close() if sts != 0: print "Sendmail exit status", sts
The select module is widely known to help with asynchronous I/O on sockets once they are connected. However, it is less than common knowledge how to avoid blocking on the initial connect() call. Jeremy Hylton has the following advice (slightly edited):
To prevent the TCP connect from blocking, you can set the socket to non-blocking mode. Then when you do the connect(), you will either connect immediately (unlikely) or get an exception that contains the errno. errno.EINPROGRESS indicates that the connection is in progress, but hasn't finished yet. Different OSes will return different errnos, so you're going to have to check. I can tell you that different versions of Solaris return different errno values.
In Python 1.5 and later, you can use connect_ex() to avoid creating an exception. It will just return the errno value.
To poll, you can call connect_ex() again later -- 0 or errno.EISCONN indicate that you're connected -- or you can pass this socket to select (checking to see if it is writeable).
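A minimal sketch of this pattern, with an arbitrary host and port and no error handling (remember that the exact errno values are OS-dependent):

    import socket, select, errno

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(0)                            # non-blocking mode
    err = s.connect_ex(('www.python.org', 80))  # no exception; returns an errno
    if err and err != errno.EINPROGRESS:
        print "connect failed:", err
    else:
        # wait until the socket becomes writable, then poll again
        r, w, e = select.select([], [s], [], 10.0)
        err = s.connect_ex(('www.python.org', 80))
        if err == 0 or err == errno.EISCONN:
            print "connected"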
The library module "pickle" now solves this in a very general way (though you still can't store things like open files, sockets or windows), and the library module "shelve" uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects. For possibly better performance also look for the latest version of the relatively recent cPickle module.
A more awkward way of doing things is to use pickle's little sister, marshal. The marshal module provides very fast ways to store noncircular basic Python types to files and strings, and back again. Although marshal does not do fancy things like store instances or handle shared references properly, it does run extremely fast. For example loading a half megabyte of data may take less than a third of a second (on some machines). This often beats doing something more complex and general such as using gdbm with pickle/shelve.
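A minimal sketch of marshal at work (the file name is arbitrary):

    import marshal

    data = {'spam': 1, 'eggs': [2, 3, 4]}
    f = open('data.marshal', 'wb')
    marshal.dump(data, f)      # write basic Python objects to the file...
    f.close()

    f = open('data.marshal', 'rb')
    print marshal.load(f)      # ...and read them back
    f.close()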
For Windows, see question 8.2. Here is an answer for Unix (see also 4.94).
There are several solutions; some involve using curses, which is a pretty big thing to learn. Here's a solution without curses, due to Andrew Kuchling (adapted from code to do a PGP-style randomness pool):
    import termios, sys, os

    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] = new[3] & ~termios.ICANON & ~termios.ECHO
    new[6][termios.VMIN] = 1
    new[6][termios.VTIME] = 0
    termios.tcsetattr(fd, termios.TCSANOW, new)
    s = ''    # We'll save the characters typed and add them to the pool.
    try:
        while 1:
            c = os.read(fd, 1)
            print "Got character", `c`
            s = s + c
    finally:
        termios.tcsetattr(fd, termios.TCSAFLUSH, old)
You need the termios module for any of this to work, and I've only tried it on Linux, though it should work elsewhere. It turns off stdin's echoing and disables canonical mode, and then reads one character at a time from stdin.
There are several solutions; some involve using curses, which is a pretty big thing to learn. Here's a solution without curses. (see also 4.74, for Windows, see question 8.2)
    import termios, fcntl, sys, os

    fd = sys.stdin.fileno()

    oldterm = termios.tcgetattr(fd)
    newattr = termios.tcgetattr(fd)
    newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSANOW, newattr)

    oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

    try:
        while 1:
            try:
                c = sys.stdin.read(1)
                print "Got character", `c`
            except IOError:
                pass
    finally:
        termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
        fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)
You need the termios and the fcntl module for any of this to work, and I've only tried it on Linux, though it should work elsewhere.
In this code, characters are read and printed one at a time.
termios.tcsetattr() turns off stdin's echoing and disables canonical mode. fcntl.fcntl() is used to obtain stdin's file descriptor flags and modify them for non-blocking mode. Since reading stdin when it is empty results in an IOError, this error is caught and ignored.
The standard Python source distribution comes with a curses module in the Modules/ subdirectory, though it's not compiled by default (note that this is not available in the Windows distribution -- there is no curses module for Windows).
In Python versions before 2.0 the module only supported plain curses; you couldn't use ncurses features like colors with it (though it would link with ncurses).
In Python 2.0, the curses module has been greatly extended, starting from Oliver Andrich's enhanced version, to provide many additional functions from ncurses and SYSV curses, such as colour, alternative character set support, pads, and mouse support. This means the module is no longer compatible with operating systems that only have BSD curses, but there don't seem to be any currently maintained OSes that fall into this category.
For Python 2.0: The new atexit module provides a register function that is similar to C's onexit. See the Library Reference for details. For 2.0 you should not assign to sys.exitfunc!
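For example (goodbye() is a hypothetical cleanup function):

    import atexit

    def goodbye():
        print "Cleaning up before exit..."

    atexit.register(goodbye)    # goodbye() runs automatically at interpreter exit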
Yes! See the Database Topic Guide at http://www.python.org/topics/database for details.
os.read() is a low-level function which takes a file descriptor (a small integer). os.popen() creates a high-level file object -- the same type used for sys.std{in,out,err} and returned by the builtin open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n).
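For example, on a Unix-like system (the command is arbitrary):

    import os

    p = os.popen("ls", "r")   # p is a file object, not a file descriptor
    data = p.read(100)        # so use its read() method, not os.read()
    p.close()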
Even though there are Python compilers being developed, you probably don't need a real compiler if all you want is a stand-alone program. There are several solutions to that.
One is to use the freeze tool, which is included in the Python source tree as Tools/freeze. It converts Python byte code to C arrays. Using a C compiler, you can embed all your modules into a new program, which is then linked with the standard Python modules.
It works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.
(Hint: the freeze program only works if your script's filename ends in ".py".)
There are several utilities which may be helpful. The first is Gordon McMillan's installer at
http://www.mcmillan-inc.com/install1.html
which works on Windows, Linux and at least some forms of Unix.
Another is Thomas Heller's py2exe (Windows only) at
http://starship.python.net/crew/theller/py2exe
A third is Christian Tismer's SQFREEZE (http://starship.python.net/crew/pirx) which appends the byte code to a specially-prepared Python interpreter, which will find the byte code in the executable.
A fourth is Fredrik Lundh's Squeeze (http://www.pythonware.com/products/python/squeeze).
See the chapters titled "Internet Protocols and Support" and "Internet Data Handling" in the Library Reference Manual. Python is full of good things which will help you build server-side and client-side web systems.
A summary of available frameworks is maintained by Paul Boddie at
http://thor.prohosting.com/~pboddie/Python/web_modules.html
Cameron Laird maintains a useful set of pages about Python web technologies at
http://starbase.neosoft.com/~claird/comp.lang.python/web_python.html
There was a web browser written in Python, called Grail -- see http://sourceforge.net/project/grail. This project has been terminated; http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/grail/grail/README gives more details.
Use the standard popen2 module. For example:
    import popen2
    fromchild, tochild = popen2.popen2("command")
    tochild.write("input\n")
    tochild.flush()
    output = fromchild.readline()
Warning: in general, it is unwise to do this, because you can easily cause a deadlock where your process is blocked waiting for output from the child, while the child is blocked waiting for input from you. This can be caused because the parent expects the child to output more text than it does, or it can be caused by data being stuck in stdio buffers due to lack of flushing. The Python parent can of course explicitly flush the data it sends to the child before it reads any output, but if the child is a naive C program it can easily have been written to never explicitly flush its output, even if it is interactive, since flushing is normally automatic.
Note that a deadlock is also possible if you use popen3 to read stdout and stderr. If one of the two is too large for the internal buffer (increasing the buffersize does not help) and you read() the other one first, there is a deadlock, too.
Note on a bug in popen2: unless your program calls wait() or waitpid(), finished child processes are never removed, and eventually calls to popen2 will fail because of a limit on the number of child processes. Calling os.waitpid with the os.WNOHANG option can prevent this; a good place to insert such a call would be before calling popen2 again.
Another way to produce a deadlock: Call a wait() and there is still more output from the program than what fits into the internal buffers.
In many cases, all you really need is to run some data through a command and get the result back. Unless the data is infinite in size, the easiest (and often the most efficient!) way to do this is to write it to a temporary file and run the command with that temporary file as input. The standard module tempfile exports a function mktemp() which generates unique temporary file names.
    import tempfile
    import os

    class Popen3:
        """
        This is a deadlock-safe version of popen that returns
        an object with errorlevel, out (a string) and err (a string).
        (capturestderr may not work under windows.)
        Example: print Popen3('grep spam','\n\nhere spam\n\n').out
        """
        def __init__(self, command, input=None, capturestderr=None):
            outfile = tempfile.mktemp()
            command = "( %s ) > %s" % (command, outfile)
            if input:
                infile = tempfile.mktemp()
                open(infile, "w").write(input)
                command = command + " <" + infile
            if capturestderr:
                errfile = tempfile.mktemp()
                command = command + " 2>" + errfile
            self.errorlevel = os.system(command) >> 8
            self.out = open(outfile, "r").read()
            os.remove(outfile)
            if input:
                os.remove(infile)
            if capturestderr:
                self.err = open(errfile, "r").read()
                os.remove(errfile)
Note that many interactive programs (e.g. vi) don't work well with pipes substituted for standard input and output. You will have to use pseudo ttys ("ptys") instead of pipes. There is some undocumented code to use these in the library module pty.py -- I'm afraid you're on your own here.
A different answer is a Python interface to Don Libes' "expect" library. A Python extension that interfaces to expect is called "expy" and available from http://expectpy.sourceforge.net.
A pure Python solution that works like expect is pexpect of Noah Spurrier. A beta version is available from http://pexpect.sourceforge.net
The most common problem is that the signal handler is declared with the wrong argument list. It is called as
handler(signum, frame)
so it should be declared with two arguments:
def handler(signum, frame): ...
I would like to retrieve web pages that are the result of POSTing a form. Is there existing code that would let me do this easily?
Yes. Here's a simple example that uses httplib.
    #!/usr/local/bin/python
    import httplib, sys

    ### build the query string
    qs = "First=Josephine&MI=Q&Last=Public"

    ### connect and send the server a path
    httpobj = httplib.HTTP('www.some-server.out-there', 80)
    httpobj.putrequest('POST', '/cgi-bin/some-cgi-script')

    ### now generate the rest of the HTTP headers...
    httpobj.putheader('Accept', '*/*')
    httpobj.putheader('Connection', 'Keep-Alive')
    httpobj.putheader('Content-type', 'application/x-www-form-urlencoded')
    httpobj.putheader('Content-length', '%d' % len(qs))
    httpobj.endheaders()
    httpobj.send(qs)

    ### find out what the server said in response...
    reply, msg, hdrs = httpobj.getreply()
    if reply != 200:
        sys.stdout.write(httpobj.getfile().read())
Note that in general for "url encoded posts" (the default) query strings must be "quoted" to, for example, change equals signs and spaces to an encoded form when they occur in name or value. Use urllib.quote to perform this quoting. For example to send name="Guy Steele, Jr.":
    >>> from urllib import quote
    >>> x = quote("Guy Steele, Jr.")
    >>> x
    'Guy%20Steele,%20Jr.'
    >>> query_string = "name=" + x
    >>> query_string
    'name=Guy%20Steele,%20Jr.'
Databases opened for write access with the bsddb module (and often by the anydbm module, since it will preferentially use bsddb) must explicitly be closed using the close method of the database. The underlying libdb package caches database contents which need to be converted to on-disk form and written, unlike regular open files which already have the on-disk bits in the kernel's write buffer, where they can just be dumped by the kernel when the program exits.
If you have initialized a new bsddb database but not written anything to it before the program crashes, you will often wind up with a zero-length file and encounter an exception the next time the file is opened.
If you can't find a source file for a module, it may be a builtin or dynamically loaded module implemented in C, C++ or another compiled language. In this case you may not have the source file, or it may be something like mathmodule.c stored somewhere in a C source directory (not on the Python path).
Fredrik Lundh (fredrik@pythonware.com) explains (on the python-list):
There are (at least) three kinds of modules in Python: 1) modules written in Python (.py); 2) modules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc); 3) modules written in C and linked with the interpreter; to get a list of these, type:
import sys
print sys.builtin_module_names
Use the binary option. We'd like to make that the default, but it would break backward compatibility:
largeString = 'z' * (100 * 1024)
myPickle = cPickle.dumps(largeString, 1)
Don't panic! Your data are probably intact. The most frequent cause for the error is that you tried to open an earlier Berkeley DB file with a later version of the Berkeley DB library.
Many Linux systems now have all three versions of Berkeley DB available. If you are migrating from version 1 to a newer version use db_dump185 to dump a plain text version of the database. If you are migrating from version 2 to version 3 use db2_dump to create a plain text version of the database. In either case, use db_load to create a new native database for the latest version installed on your computer. If you have version 3 of Berkeley DB installed, you should be able to use db2_load to create a native version 2 database.
You should probably move away from Berkeley DB version 1 files because the hash file code contains known bugs that can corrupt your data.
Python file objects are a high-level layer of abstraction on top of C streams, which in turn are a medium-level layer of abstraction on top of (among other things) low-level C file descriptors.
For most file objects f you create in Python via the builtin "open" function, f.close() marks the Python file object as being closed from Python's point of view, and also arranges to close the underlying C stream. This happens automatically too, in f's destructor, when f becomes garbage.
But stdin, stdout and stderr are treated specially by Python, because of the special status also given to them by C: doing
sys.stdout.close() # ditto for stdin and stderr
marks the Python-level file object as being closed, but does not close the associated C stream (provided sys.stdout is still bound to its default value, which is the stream C also calls "stdout").
To close the underlying C stream for one of these three, you should first be sure that's what you really want to do (e.g., you may confuse the heck out of extension modules trying to do I/O). If it is, use os.close:
os.close(0)   # close C's stdin stream
os.close(1)   # close C's stdout stream
os.close(2)   # close C's stderr stream
Check out HTMLgen written by Robin Friedrich. It's a class library of objects corresponding to all the HTML 3.2 markup tags. It's used when you are writing in Python and wish to synthesize HTML pages for generating a web or for CGI forms, etc.
It can be found in the FTP contrib area on python.org or on the Starship. Use the search engines there to locate the latest version.
It might also be useful to consider DocumentTemplate, which offers clear separation between Python code and HTML code. DocumentTemplate is part of the Bobo objects publishing system (http://www.digicool.com/releases) but can of course be used independently!
Please note that there is no way to take advantage of multiprocessor hardware using the Python thread model. The interpreter uses a global interpreter lock (GIL), which does not allow multiple threads to be concurrently active.
If you write a simple test program like this:
import thread

def run(name, n):
    for i in range(n):
        print name, i

for i in range(10):
    thread.start_new(run, (i, 100))
none of the threads seem to run! The reason is that as soon as the main thread exits, all threads are killed.
A simple fix is to add a sleep to the end of the program, sufficiently long for all threads to finish:
import thread, time

def run(name, n):
    for i in range(n):
        print name, i

for i in range(10):
    thread.start_new(run, (i, 100))

time.sleep(10)  # <----------------------------!
But now (on many platforms) the threads don't run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn't start a new thread until the previous thread is blocked.
A simple fix is to add a tiny sleep to the start of the run function:
import thread, time

def run(name, n):
    time.sleep(0.001)  # <---------------------!
    for i in range(n):
        print name, i

for i in range(10):
    thread.start_new(run, (i, 100))

time.sleep(10)
Some more hints:
Instead of using a time.sleep() call at the end, it's better to use some kind of semaphore mechanism. One idea is to use the Queue module to create a queue object, let each thread append a token to the queue when it finishes, and let the main thread read as many tokens from the queue as there are threads; see the sketch below.
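Here is a minimal sketch of that idea (the function and variable names are illustrative):

import thread
from Queue import Queue

def run(name, n, done):
    for i in range(n):
        print name, i
    done.put(name)           # drop a token in the queue when finished

done = Queue()
for i in range(10):
    thread.start_new(run, (i, 100, done))

for i in range(10):          # reading one token per thread blocks the
    done.get()               # main thread until every worker has finished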
Use the threading module instead of the thread module. It has been part of Python since version 1.5.1. It takes care of all these details, and has many other nice features too!
[adapted from c.l.py responses by Gordon McMillan & GvR]
A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions (how frequently it offers to switch can be set via sys.setcheckinterval). Each bytecode instruction-- and all the C implementation code reached from it --is therefore atomic.
In theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of builtin data types (ints, lists, dicts, etc) that "look atomic" really are.
For example, these are atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):
L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()
These aren't:
i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1
Note: operations that replace other objects may invoke those other objects' __del__ method when their reference count reaches zero, and that can affect things. This is especially true for the mass updates to dictionaries and lists. When in doubt, use a mutex!
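For instance, a minimal sketch of guarding one of the non-atomic updates above with a lock (the names are illustrative):

import threading

counter = 0
counter_lock = threading.Lock()

def increment():
    global counter
    counter_lock.acquire()
    try:
        counter = counter + 1   # "i = i+1" is not atomic, so hold the lock
    finally:
        counter_lock.release()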
The Global Interpreter Lock (GIL) is often seen as a hindrance to Python's deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.
Back in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set ("free threading") that removed the GIL, replacing it with fine-grained locking. Unfortunately, even on Windows (where locks are very efficient) this ran ordinary Python code about twice as slow as the interpreter using the GIL. On Linux the performance loss was even worse (pthread locks aren't as efficient).
Since then, the idea of getting rid of the GIL has occasionally come up but nobody has found a way to deal with the expected slowdown; Greg's free threading patch set has not been kept up-to-date for later Python versions.
This doesn't mean that you can't make good use of Python on multi-CPU machines! You just have to be creative with dividing the work up between multiple processes rather than multiple threads.
It has been suggested that the GIL should be a per-interpreter-state lock rather than truly global; interpreters then wouldn't be able to share objects. Unfortunately, this isn't likely to happen either.
It would be a tremendous amount of work, because many object implementations currently have global state. E.g. small ints and small strings are cached; these caches would have to be moved to the interpreter state. Other object types have their own free list; these free lists would have to be moved to the interpreter state. And so on.
And I doubt that it can even be done in finite time, because the same problem exists for 3rd party extensions. It is likely that 3rd party extensions are being written at a faster rate than you can convert them to store all their global state in the interpreter state.
And finally, once you have multiple interpreters not sharing any state, what have you gained over running each interpreter in a separate process?
Yes, you can create built-in modules containing functions, variables, exceptions and even new types in C. This is explained in the document "Extending and Embedding the Python Interpreter" (http://www.python.org/doc/current/ext/ext.html). Also read the chapter on dynamic loading.
There's more information on this in each of the Python books: Programming Python, Internet Programming with Python, and Das Python-Buch (in German).
Yes, using the C-compatibility features found in C++. Basically you place extern "C" { ... } around the Python include files and put extern "C" before each function that is going to be called by the Python interpreter. Global or static C++ objects with constructors are probably not a good idea.
The highest-level function to do this is PyRun_SimpleString() which takes a single string argument which is executed in the context of module __main__ and returns 0 for success and -1 when an exception occurred (including SyntaxError). If you want more control, use PyRun_String(); see the source for PyRun_SimpleString() in Python/pythonrun.c.
Call the function PyRun_String() from the previous question with the start symbol eval_input (Py_eval_input starting with 1.5a1); it parses an expression, evaluates it and returns its value.
That depends on the object's type. If it's a tuple, PyTuple_Size(o) returns its length and PyTuple_GetItem(o, i) returns its i'th item; lists have the similar functions PyList_Size(o) and PyList_GetItem(o, i). For strings, PyString_Size(o) returns its length and PyString_AsString(o) a pointer to its value (note that Python strings may contain null bytes so strlen() is not safe). To test which type an object is, first make sure it isn't NULL, and then use PyString_Check(o), PyTuple_Check(o), PyList_Check(o), etc.
There is also a high-level API to Python objects which is provided by the so-called 'abstract' interface -- read Include/abstract.h for further details. It allows for example interfacing with any kind of Python sequence (e.g. lists and tuples) using calls like PySequence_Length(), PySequence_GetItem(), etc.) as well as many other useful protocols.
You can't. Use t = PyTuple_New(n) instead, and fill it with objects using PyTuple_SetItem(t, i, o) -- note that this "eats" a reference count of o. Similar for lists with PyList_New(n) and PyList_SetItem(l, i, o). Note that you must set all the tuple items to some value before you pass the tuple to Python code -- PyTuple_New(n) initializes them to NULL, which isn't a valid Python value.
The PyObject_CallMethod() function can be used to call an arbitrary method of an object. The parameters are the object, the name of the method to call, a format string like that used with Py_BuildValue(), and the argument values:
PyObject *
PyObject_CallMethod(PyObject *object, char *method_name,
                    char *arg_format, ...);
This works for any object that has methods -- whether built-in or user-defined. You are responsible for eventually DECREF'ing the return value.
To call, e.g., a file object's "seek" method with arguments 10, 0 (assuming the file object pointer is "f"):
res = PyObject_CallMethod(f, "seek", "(ii)", 10, 0);
if (res == NULL) {
        ... an exception occurred ...
}
else {
        Py_DECREF(res);
}
Note that since PyObject_CallObject() always wants a tuple for the argument list, to call a function without arguments, pass "()" for the format, and to call a function with one argument, surround the argument in parentheses, e.g. "(i)".
(Due to Mark Hammond):
In Python code, define an object that supports the "write()" method. Redirect sys.stdout and sys.stderr to this object. Call print_error, or just allow the standard traceback mechanism to work. Then, the output will go wherever your write() method sends it.
The easiest way to do this is to use the StringIO class in the standard library.
Sample code and use for catching stdout:
>>> class StdoutCatcher:
...     def __init__(self):
...         self.data = ''
...     def write(self, stuff):
...         self.data = self.data + stuff
...
>>> import sys
>>> sys.stdout = StdoutCatcher()
>>> print 'foo'
>>> print 'hello world!'
>>> sys.stderr.write(sys.stdout.data)
foo
hello world!
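Alternatively, here is a minimal sketch using the StringIO class mentioned above:

import sys
from StringIO import StringIO

old_stdout = sys.stdout
sys.stdout = StringIO()            # everything printed is now captured
print 'foo'
print 'hello world!'
captured = sys.stdout.getvalue()   # retrieve the captured text
sys.stdout = old_stdout            # restore the real stdout
sys.stderr.write(captured)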
You can get a pointer to the module object as follows:
module = PyImport_ImportModule("<modulename>");
If the module hasn't been imported yet (i.e. it is not yet present in sys.modules), this initializes the module; otherwise it simply returns the value of sys.modules["<modulename>"]. Note that it doesn't enter the module into any namespace -- it only ensures it has been initialized and is stored in sys.modules.
You can then access the module's attributes (i.e. any name defined in the module) as follows:
attr = PyObject_GetAttrString(module, "<attrname>");
Calling PyObject_SetAttrString(), to assign to variables in the module, also works.
Depending on your requirements, there are many approaches. To do this manually, begin by reading the "Extending and Embedding" document (Doc/ext.tex, see also http://www.python.org/doc). Realize that for the Python run-time system, there isn't a whole lot of difference between C and C++ -- so the strategy to build a new Python type around a C structure (pointer) type will also work for C++ objects.
A useful automated approach (which also works for C) is SWIG: http://www.swig.org.
Other automated tools worth investigating include SIP and the Boost Python Library.
The Setup file must end in a newline; if the final newline is missing, the build process fails. Aside from this possibility, maybe you have other non-Python-specific linkage problems.
When using gdb with dynamically loaded extensions, you can't set a breakpoint in your extension until your extension is loaded.
In your .gdbinit file (or interactively), add the command
br _PyImport_LoadDynamicModule
$ gdb /local/bin/python
(gdb) run myscript.py
(gdb) continue   # repeat until your extension is loaded
(gdb) finish     # so that your extension is loaded
(gdb) br myfunction.c:50
(gdb) continue
Red Hat's RPM for Python doesn't include the /usr/lib/python2.x/config/ directory, which contains various files required for compiling Python extensions. Install the python-devel RPM to get the necessary files.
This means that you have created an extension module named "yourmodule", but your module init function does not initialize with that name.
Every module init function will have a line similar to:
module = Py_InitModule("yourmodule", yourmodule_functions);
If the string passed to this function is not the same name as your extension module, the SystemError will be raised.
Sometimes you want to emulate the Python interactive interpreter's behavior, where it gives you a continuation prompt when the input is incomplete (e.g. you typed the start of an "if" statement or you didn't close your parentheses or triple string quotes), but it gives you a syntax error message immediately when the input is invalid.
In Python you can use the codeop module, which approximates the parser's behavior sufficiently. IDLE uses this, for example.
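A minimal sketch of what codeop provides (the helper function here is made up for illustration):

import codeop

def input_state(source):
    # codeop.compile_command returns a code object when the input is
    # a complete command, None when more input is needed, and raises
    # SyntaxError (or OverflowError/ValueError) when it is invalid.
    try:
        code = codeop.compile_command(source)
    except (SyntaxError, OverflowError, ValueError):
        return "invalid"
    if code is None:
        return "incomplete"
    return "complete"

print input_state("x = 1")    # complete
print input_state("if x:")    # incomplete
print input_state("x = )")    # invalid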
The easiest way to do it in C is to call PyRun_InteractiveLoop() (in a separate thread maybe) and let the Python interpreter handle the input for you. You can also set the PyOS_ReadlineFunctionPointer to point at your custom input function. See Modules/readline.c and Parser/myreadline.c for more hints.
However, sometimes you have to run the embedded Python interpreter in the same thread as the rest of your application, and you can't allow PyRun_InteractiveLoop() to stop while waiting for user input. One solution is then to call PyParser_ParseString() and test for e.error equal to E_EOF, which means the input is incomplete. Here is a sample code fragment, untested, inspired by code from Alex Farber:
#include <Python.h>
#include <node.h>
#include <errcode.h>
#include <grammar.h>
#include <parsetok.h>
#include <compile.h>

int testcomplete(char *code)
  /* code should end in \n */
  /* return -1 for error, 0 for incomplete, 1 for complete */
{
  node *n;
  perrdetail e;

  n = PyParser_ParseString(code, &_PyParser_Grammar,
                           Py_file_input, &e);
  if (n == NULL) {
    if (e.error == E_EOF)
      return 0;
    return -1;
  }

  PyNode_Free(n);
  return 1;
}
Another solution is to try to compile the received string with Py_CompileString(). If it compiles without errors, try to execute the returned code object by calling PyEval_EvalCode(); otherwise save the input for later. If the compilation fails, find out whether it's an error or whether just more input is required, by extracting the message string from the exception tuple and comparing it to "unexpected EOF while parsing". Here is a complete example using the GNU readline library (you may want to ignore SIGINT while calling readline()):
#include <stdio.h>
#include <stdlib.h>     /* for realloc(), free(), exit() */
#include <string.h>     /* for strlen(), strncat(), strcmp() */
#include <readline.h>

#include <Python.h>
#include <object.h>
#include <compile.h>
#include <eval.h>

int main (int argc, char* argv[])
{
  int i, j, done = 0;                  /* lengths of line, code */
  char ps1[] = ">>> ";
  char ps2[] = "... ";
  char *prompt = ps1;
  char *msg, *line, *code = NULL;
  PyObject *src, *glb, *loc;
  PyObject *exc, *val, *trb, *obj, *dum;

  Py_Initialize ();
  loc = PyDict_New ();
  glb = PyDict_New ();
  PyDict_SetItemString (glb, "__builtins__", PyEval_GetBuiltins ());

  while (!done)
  {
    line = readline (prompt);

    if (NULL == line)                  /* CTRL-D pressed */
    {
      done = 1;
    }
    else
    {
      i = strlen (line);

      if (i > 0)
        add_history (line);            /* save non-empty lines */

      if (NULL == code)                /* nothing in code yet */
        j = 0;
      else
        j = strlen (code);

      code = realloc (code, i + j + 2);
      if (NULL == code)                /* out of memory */
        exit (1);

      if (0 == j)                      /* code was empty, so */
        code[0] = '\0';                /* keep strncat happy */

      strncat (code, line, i);         /* append line to code */
      code[i + j] = '\n';              /* append '\n' to code */
      code[i + j + 1] = '\0';

      src = Py_CompileString (code, "<stdin>", Py_single_input);

      if (NULL != src)                 /* compiled just fine - */
      {
        if (ps1  == prompt ||          /* ">>> " or */
            '\n' == code[i + j - 1])   /* "... " and double '\n' */
        {                              /* so execute it */
          dum = PyEval_EvalCode ((PyCodeObject *)src, glb, loc);
          Py_XDECREF (dum);
          Py_XDECREF (src);
          free (code);
          code = NULL;

          if (PyErr_Occurred ())
            PyErr_Print ();

          prompt = ps1;
        }
      }                                /* syntax error or E_EOF? */
      else if (PyErr_ExceptionMatches (PyExc_SyntaxError))
      {
        PyErr_Fetch (&exc, &val, &trb);        /* clears exception! */

        if (PyArg_ParseTuple (val, "sO", &msg, &obj) &&
            !strcmp (msg, "unexpected EOF while parsing")) /* E_EOF */
        {
          Py_XDECREF (exc);
          Py_XDECREF (val);
          Py_XDECREF (trb);

          prompt = ps2;
        }
        else                           /* some other syntax error */
        {
          PyErr_Restore (exc, val, trb);
          PyErr_Print ();
          free (code);
          code = NULL;

          prompt = ps1;
        }
      }
      else                             /* some non-syntax error */
      {
        PyErr_Print ();
        free (code);
        code = NULL;

        prompt = ps1;
      }

      free (line);
    }
  }

  Py_XDECREF(glb);
  Py_XDECREF(loc);
  Py_Finalize();
  exit(0);
}
To dynamically load g++ extension modules, you must recompile python, relink python using g++ (change LINKCC in the python Modules Makefile), and link your extension module using g++ (e.g., "g++ -shared -o mymodule.so mymodule.o").
Usually you would like to be able to inherit from a Python type when you ask this question. The bottom line for Python 2.2 is: types and classes are miscible. You build instances by calling classes, and you can build subclasses to your heart's desire.
You need to be careful when instantiating immutable types like integers or strings. See http://www.amk.ca/python/2.2, section 2, for details.
Prior to version 2.2, Python (like Java) insisted that there are first-class and second-class objects (the former are types, the latter classes), and never the twain shall meet.
The library has, however, done a good job of providing class wrappers for the more commonly desired objects (see UserDict, UserList and UserString for examples), and more are always welcome if you happen to be in the mood to write code. These wrappers still exist in Python 2.2.
In Python 2.2, you can inherit from builtin classes such as int, list, dict, etc.
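For instance, a minimal sketch of subclassing a built-in type under Python 2.2 (the class name is illustrative):

class DefaultDict(dict):
    """A dictionary that returns a default value for missing keys."""
    def __init__(self, default=None):
        dict.__init__(self)
        self.default = default
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            return self.default

d = DefaultDict(default=0)
d['spam'] = 1
print d['spam']    # prints 1
print d['eggs']    # prints 0 instead of raising KeyError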
In previous versions of Python, you can easily create a Python class which serves as a wrapper around a built-in object, e.g. (for dictionaries):
# A user-defined class behaving almost identically
# to a built-in dictionary.
class UserDict:
    def __init__(self): self.data = {}
    def __repr__(self): return repr(self.data)
    def __cmp__(self, dict):
        if type(dict) == type(self.data):
            return cmp(self.data, dict)
        else:
            return cmp(self.data, dict.data)
    def __len__(self): return len(self.data)
    def __getitem__(self, key): return self.data[key]
    def __setitem__(self, key, item): self.data[key] = item
    def __delitem__(self, key): del self.data[key]
    def keys(self): return self.data.keys()
    def items(self): return self.data.items()
    def values(self): return self.data.values()
    def has_key(self, key): return self.data.has_key(key)
A2. See Jim Fulton's ExtensionClass for an example of a mechanism which allows you to have superclasses which you can inherit from in Python -- that way you can have some methods from a C superclass (call it a mixin) and some methods from either a Python superclass or your subclass. ExtensionClass is distributed as a part of Zope (see http://www.zope.org), but will be phased out with Zope 3, since Zope 3 uses Python 2.2 or later which supports direct inheritance from built-in types. Here's a link to the original paper about ExtensionClass: http://debian.acm.ndsu.nodak.edu/doc/python-extclass/ExtensionClass.html
A3. The Boost Python Library (BPL, http://www.boost.org/libs/python/doc/index.html) provides a way of doing this from C++ (i.e. you can inherit from an extension class written in C++ using the BPL).
This error indicates that your Python installation can handle only 7-bit ASCII strings. There are a couple ways to fix or workaround the problem.
If your programs must handle data in arbitrary character set encodings, the environment the application runs in will generally identify the encoding of the data it is handing you. You need to convert the input to Unicode data using that encoding. For instance, a program that handles email or web input will typically find character set encoding information in Content-Type headers. This can then be used to properly convert input data to Unicode. Assuming the string referred to by "value" is encoded as UTF-8:
value = unicode(value, "utf-8")
will return a Unicode object. If the data is not correctly encoded as UTF-8, the above call will raise a UnicodeError.
If you only want strings converted to Unicode which have non-ASCII data, you can try converting them first assuming an ASCII encoding, and then generate Unicode objects if that fails:
try:
    value = unicode(value, "ascii")
except UnicodeError:
    value = unicode(value, "utf-8")
else:
    # value was valid ASCII data
    pass
If you normally use a character set encoding other than US-ASCII and only need to handle data in that encoding, the simplest way to fix the problem may be simply to set the encoding in sitecustomize.py. The following code is just a modified version of the encoding setup code from site.py with the relevant lines uncommented.
# Set the string encoding used by the Unicode implementation.
# The default is 'ascii'
encoding = "ascii" # <= CHANGE THIS if you wish

# Enable to support locale aware default string encodings.
import locale
loc = locale.getdefaultlocale()
if loc[1]:
    encoding = loc[1]

if encoding != "ascii":
    import sys
    sys.setdefaultencoding(encoding)
Also note that on Windows, there is an encoding known as "mbcs", which uses an encoding specific to your current locale. In many cases, and particularly when working with COM, this may be an appropriate default encoding to use.
You are using a version of Python that uses a 4-byte representation for Unicode characters, but the extension module you are importing (possibly indirectly) was compiled using a Python that uses a 2-byte representation for Unicode characters (the default).
If instead the name of the undefined symbol starts with PyUnicodeUCS4, the problem is the same but the relationship is reversed: Python was built using 2-byte Unicode characters, and the extension module was compiled using a Python with 4-byte Unicode characters.
This can easily occur when using pre-built extension packages. RedHat Linux 7.x, in particular, provides a "python2" binary that is compiled with 4-byte Unicode. This only causes the link failure if the extension uses any of the PyUnicode_*() functions. It is also a problem if an extension uses any of the Unicode-related format specifiers for Py_BuildValue (or similar) or parameter specifications for PyArg_ParseTuple().
You can check the size of the Unicode character a Python interpreter is using by checking the value of sys.maxunicode:
>>> import sys
>>> if sys.maxunicode > 65535:
...     print 'UCS4 build'
... else:
...     print 'UCS2 build'
The only way to solve this problem is to use extension modules compiled with a Python binary built using the same size for Unicode characters.
Strings became much more like other standard types starting in release 1.6, when methods were added which give the same functionality that has always been available using the functions of the string module. These new methods have been widely accepted, but the one which appears to make (some) programmers feel uncomfortable is:
", ".join(['1', '2', '4', '8', '16'])
which gives the result
"1, 2, 4, 8, 16"
There are two usual arguments against this usage.
The first runs along the lines of: "It looks really ugly using a method of a string literal (string constant)", to which the answer is that it might, but a string literal is just a fixed value. If the methods are to be allowed on names bound to strings there is no logical reason to make them unavailable on literals. Get over it!
The second objection is typically cast as: "I am really telling a sequence to join its members together with a string constant". Sadly, you aren't. For some reason there seems to be much less difficulty with having split() as a string method, since in that case it is easy to see that
"1, 2, 4, 8, 16".split(", ")
is an instruction to a string literal to return the substrings delimited by the given separator (or, by default, arbitrary runs of white space). In this case a Unicode string returns a list of Unicode strings, an ASCII string returns a list of ASCII strings, and everyone is happy.
join() is a string method because in using it you are telling the separator string to iterate over an arbitrary sequence, forming string representations of each of the elements, and inserting itself between the elements' representations. This method can be used with any argument which obeys the rules for sequence objects, including any new classes you might define yourself.
Because this is a string method it can work for Unicode strings as well as plain ASCII strings. If join() were a method of the sequence types then the sequence types would have to decide which type of string to return depending on the type of the separator.
If none of these arguments persuade you, then for the moment you can continue to use the join() function from the string module, which allows you to write
string.join(['1', '2', '4', '8', '16'], ", ")
You will just have to try and forget that the string module actually uses the syntax you are complaining about to implement the syntax you prefer!
The development version of the Python Tutorial now contains an Appendix with more info:
http://www.python.org/doc/current/tut/node14.html
People are often very surprised by results like this:
>>> 1.2-1.0
0.199999999999999996
And think it is a bug in Python. It's not. It's a problem caused by the internal representation of floating point numbers. A floating point number is stored as a fixed number of binary digits.
In decimal math, there are many numbers that can't be represented with a fixed number of decimal digits, i.e. 1/3 = 0.3333333333.......
In the binary case, 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, etc. There are a lot of numbers that can't be represented. The digits are cut off at some point.
Since Python 1.6, a floating point number's repr() prints as many digits as are necessary to make eval(repr(f)) == f true for any float f. The str() function prints the more sensible number that was probably intended:
>>> 0.2
0.20000000000000001
>>> print 0.2
0.2
Again, this has nothing to do with Python, but with the way the underlying C platform handles floating point numbers, and ultimately with the inaccuracy you'll always have when writing down numbers as a string of a fixed number of digits.
One of the consequences of this is that it is dangerous to compare the result of some computation to a float with == ! Tiny inaccuracies may mean that == fails.
Instead try something like this:
epsilon = 0.0000000000001  # Tiny allowed error
expected_result = 0.4

if expected_result-epsilon <= computation() <= expected_result+epsilon:
    ...
A try/except block is extremely efficient. Actually executing an exception is expensive. In versions of Python prior to 2.0 it was common to use this idiom:
try:
    value = dict[key]
except KeyError:
    dict[key] = getvalue(key)
    value = dict[key]
This only made sense when you expected the dict to have the key almost all the time. If that wasn't the case, you coded it like this:
if dict.has_key(key):
    value = dict[key]
else:
    dict[key] = getvalue(key)
    value = dict[key]
In Python 2.0 and higher, of course, you can code this as
value = dict.setdefault(key, getvalue(key))
However this evaluates getvalue(key) always, regardless of whether it's needed or not. So if it's slow or has a side effect you should use one of the above variants.
You can do this easily enough with a sequence of if... elif... elif... else. There have been some proposals for switch statement syntax, but there is no consensus (yet) on whether and how to do range tests. See PEP 275 for complete details and current status.
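In many cases a dictionary of functions can serve the same purpose as a switch; here is a minimal sketch (the names are illustrative):

def handle_a(): print "got a"
def handle_b(): print "got b"
def handle_default(): print "got something else"

dispatch = {'a': handle_a, 'b': handle_b}

value = 'a'
dispatch.get(value, handle_default)()   # calls handle_a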
Basically I believe that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after awhile. Some arguments for it:
Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. I remember long ago seeing a C fragment like this:
if (x <= y)
        x++;
        y--;
z++;
and staring a long time at it wondering why y was being decremented even for x > y... (And I wasn't a C newbie then either.)
Since there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces (including the choice about whether to place braces around single statements in certain cases, for consistency). If you're used to reading (and writing) code that uses one style, you will feel at least slightly uneasy when reading (or being required to write) another style.
Many coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one basic tty screen (say, 20 lines). 20 lines of Python are worth a LOT more than 20 lines of C. This is not solely due to the lack of begin/end brackets (the lack of declarations also helps, and the powerful operations of course), but it certainly helps!
There are two advantages. One is performance: knowing that a string is immutable makes it easy to lay it out at construction time -- fixed and unchanging storage requirements. (This is also one of the reasons for the distinction between tuples and lists.) The other is that strings in Python are considered as "elemental" as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string "eight" to anything else. (Adapted from Jim Roskind)
The major reason is history. Functions were used for those operations that were generic for a group of types and which were intended to work even for objects that didn't have methods at all (e.g. numbers before type/class unification began, or tuples). It is also convenient to have a function that can readily be applied to an amorphous collection of objects when you use the functional features of Python (map(), apply() et al).
In fact, implementing len(), max(), min() as built-in functions is actually less code than implementing them as methods for each type. One can quibble about individual cases but it's a part of Python, and it's too late to make such fundamental changes now. The functions have to remain to avoid massive code breakage.
Note that for string operations Python has moved from external functions (the string module) to methods. However, len() is still a function.
So, is your current programming language C++ or Java? :-) When classes were added to Python, this was (again) the simplest way of implementing methods without too many changes to the interpreter. The idea was borrowed from Modula-3. It turns out to be very useful, for a variety of reasons.
First, it makes it more obvious that you are using a method or instance attribute instead of a local variable. Reading "self.x" or "self.meth()" makes it absolutely clear that an instance variable or method is used even if you don't know the class definition by heart. In C++, you can sort of tell by the lack of a local variable declaration (assuming globals are rare or easily recognizable) -- but in Python, there are no local variable declarations, so you'd have to look up the class definition to be sure.
Second, it means that no special syntax is necessary if you want to explicitly reference or call the method from a particular class. In C++, if you want to use a method from a base class which is overridden in a derived class, you have to use the :: operator -- in Python you can write baseclass.methodname(self, <argument list>). This is particularly useful for __init__() methods, and in general in cases where a derived class method wants to extend the base class method of the same name and thus has to call the base class method somehow.
Lastly, for instance variables, it solves a syntactic problem with assignment: since local variables in Python are (by definition!) those variables to which a value is assigned in a function body (and that aren't explicitly declared global), there has to be some way to tell the interpreter that an assignment was meant to assign to an instance variable instead of to a local variable, and it should preferably be syntactic (for efficiency reasons). C++ does this through declarations, but Python doesn't have declarations and it would be a pity having to introduce them just for this purpose. Using the explicit "self.var" solves this nicely. Similarly, for using instance variables, having to write "self.var" means that references to unqualified names inside a method don't have to search the instance's dictionaries.
Answer 1: Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. Therefore, a complete threads implementation requires thread support for C.
Answer 2: Fortunately, there is Stackless Python, which has a completely redesigned interpreter loop that avoids the C stack. It's still experimental but looks very promising. Although it is binary compatible with standard Python, it's still unclear whether Stackless will make it into the core -- maybe it's just too revolutionary. A microthread implementation for Stackless is available.
Python lambda forms cannot contain statements because Python's syntactic framework can't handle statements nested inside expressions. However, in Python, this is not a serious problem. Unlike lambda forms in other languages, where they add functionality, Python lambdas are only a shorthand notation if you're too lazy to define a function.
Functions are already first class objects in Python, and can be declared in a local scope. Therefore the only advantage of using a lambda form instead of a locally-defined function is that you don't need to invent a name for the function -- but that's just a local variable to which the function object (which is exactly the same type of object that a lambda form yields) is assigned!
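For example, the following two forms produce the same kind of function object; the second simply binds it to a name via an ordinary definition:

f = lambda x: x + 1    # a lambda form assigned to a name

def g(x):              # an equivalent function definition
    return x + 1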
Not easily. Python's high level data types, dynamic typing of objects and run-time invocation of the interpreter (using eval() or exec) together mean that a "compiled" Python program would probably consist mostly of calls into the Python run-time system, even for seemingly simple operations like x+1.
Several projects described in the Python newsgroup or at past Python conferences have shown that this approach is feasible, although the speedups reached so far are only modest (e.g. 2x). JPython uses the same strategy for compiling to Java bytecode. (Jim Hugunin has demonstrated that in combination with whole-program analysis, speedups of 1000x are feasible for small demo programs. See the proceedings from the 1997 Python conference.)
Internally, Python source code is always translated into a "virtual machine code" or "byte code" representation before it is interpreted (by the "Python virtual machine" or "bytecode interpreter"). In order to avoid the overhead of parsing and translating modules that rarely change over and over again, this byte code is written on a file whose name ends in ".pyc" whenever a module is parsed (from a file whose name ends in ".py"). When the corresponding .py file is changed, it is parsed and translated again and the .pyc file is rewritten.
There is no performance difference once the .pyc file has been loaded (the bytecode read from the .pyc file is exactly the same as the bytecode created by direct translation). The only difference is that loading code from a .pyc file is faster than parsing and translating a .py file, so the presence of precompiled .pyc files will generally improve start-up time of Python scripts. If desired, the Lib/compileall.py module/script can be used to force creation of valid .pyc files for a given set of modules.
Note that the main script executed by Python, even if its filename ends in .py, is not compiled to a .pyc file. It is compiled to bytecode, but the bytecode is not saved to a file.
If you are looking for a way to translate Python programs in order to distribute them in binary form, without the need to distribute the interpreter and library as well, a couple of solutions are available: Gordon McMillan's Installer and Thomas Heller's py2exe.
There are also several programs which more tightly intermingle Python and C code in various ways to increase performance. See, for example, Psyco, Pyrex (http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/), PyInline (http://pyinline.sourceforge.net/), Py2Cmod, and Weave.
The details of Python memory management depend on the implementation. The standard Python implementation (the C implementation) uses reference counting and another mechanism to collect reference cycles.
Jython relies on the Java runtime; so it uses the JVM's garbage collector. This difference can cause some subtle porting problems if your Python code depends on the behavior of the reference counting implementation.
The reference cycle collector was added in CPython 2.0. It periodically executes a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved. A new gc module provides functions to perform a garbage collection, obtain debugging statistics, and tune the collector's parameters.
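A minimal sketch of the gc module in action:

import gc

gc.set_debug(gc.DEBUG_STATS)   # report statistics at each collection
unreachable = gc.collect()     # force a full collection right now
print "unreachable objects found:", unreachable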
The detection of cycles can be disabled when Python is compiled, if you can't afford even a tiny speed penalty or suspect that the cycle collection is buggy, by specifying the "--without-cycle-gc" switch when running the configure script.
Sometimes objects get stuck in "tracebacks" temporarily and hence are not deallocated when you might expect. Clear the tracebacks via
import sys
sys.exc_traceback = sys.last_traceback = None
Tracebacks are used for reporting errors, implementing debuggers and related things. They contain a portion of the program state extracted during the handling of an exception (usually the most recent exception).
In the absence of circularities and modulo tracebacks, Python programs need not explicitly manage memory.
Why doesn't Python use a more traditional garbage collection scheme? For one thing, unless this were added to C as a standard feature, it's a portability pain in the ass. And yes, I know about the Xerox library. It has bits of assembler code for most common platforms. Not for all. And although it is mostly transparent, it isn't completely transparent (when I once linked Python with it, it dumped core).
Traditional GC also becomes a problem when Python gets embedded into other applications. While in a stand-alone Python it may be fine to replace the standard malloc() and free() with versions provided by the GC library, an application embedding Python may want to have its own substitute for malloc() and free(), and may not want Python's. Right now, Python works with anything that implements malloc() and free() properly.
In Jython, the following code (which is fine in C Python) will probably run out of file descriptors long before it runs out of memory:
for file in <very long list of files>:
    f = open(file)
    c = f.read(1)
Using the current reference counting and destructor scheme, each new assignment to f closes the previous file. Using GC, this is not guaranteed. Sure, you can think of ways to fix this. But it's not off-the-shelf technology. If you want to write code that will work with any Python implementation, you should explicitly close the file; this will work regardless of GC:
for file in <very long list of files>:
    f = open(file)
    c = f.read(1)
    f.close()
Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of more like Pascal records or C structs, small collections of related data which may be of different types which are operated on as a group. For example, a cartesian coordinate is appropriately represented as a tuple of two or three numbers.
Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects all of which have the same type and which are operated on one-by-one. For example, os.listdir('.') returns a list of strings representing the files in the current directory. Functions which operate on this output would generally not break if you added another file or two to the directory.
Despite what a Lisper might think, Python's lists are really variable-length arrays. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array (as well as its length) in a list head structure.
This makes indexing a list (a[i]) an operation whose cost is independent of the size of the list or the value of the index.
When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don't require an actual resize.
Python's dictionaries are implemented as resizable hash tables.
Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler.
The hash table implementation of dictionaries uses a hash value calculated from the key value to find the key. If the key were a mutable object, its value could change, and thus its hash could change. But since whoever changes the key object can't tell that it is being used as a dictionary key, the dictionary can't move the entry around. Then, when you try to look up the same object in the dictionary, it won't be found because its hash value is different; and if you try to look up the old value, it won't be found either, because the value of the object found in that hash bin would differ.
If you think you need to have a dictionary indexed with a list, try to use a tuple instead. The function tuple(l) creates a tuple with the same entries as the list l.
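For example:

>>> key = tuple([1, 2])    # freeze the list into a hashable tuple
>>> d = {}
>>> d[key] = '12'
>>> d[(1, 2)]
'12'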
Some unacceptable solutions that have been proposed:
if you construct a new list with the same value it won't be found; e.g.,
d = {[1,2]: '12'}
print d[[1,2]]
will raise a KeyError exception because the id of the [1,2] used in the second line differs from that in the first line. In other words, dictionary keys should be compared using '==', not using 'is'.
the list (being a mutable object) could contain a reference to itself, and then the copying code would run into an infinite loop.
allow a class of hard-to-track bugs in programs that I'd rather not see; it invalidates an important invariant of dictionaries (every value in d.keys() is usable as a key of the dictionary).
The problem is that it's not just the top-level object that could change its value; you could use a tuple containing a list as a key. Entering anything as a key into a dictionary would require marking all objects reachable from there as read-only -- and again, self-referential objects could cause an infinite loop again (and again and again).
There is a trick to get around this if you need to, but use it at your own risk: You can wrap a mutable structure inside a class instance which has both a __cmp__ and a __hash__ method.
class listwrapper:
    def __init__(self, the_list):
        self.the_list = the_list
    def __cmp__(self, other):
        return cmp(self.the_list, other.the_list)
    def __hash__(self):
        l = self.the_list
        result = 98767 - len(l)*555
        for i in range(len(l)):
            try:
                result = result + (hash(l[i]) % 9999999) * 1001 + i
            except:
                result = (result % 7777777) + i * 333
        return result
Note that the hash computation is complicated by the possibility that some members of the list may be unhashable and also by the possibility of arithmetic overflow.
You must make sure that the hash value for all such wrapper objects that reside in a dictionary (or other hash based structure), remain fixed while the object is in the dictionary (or other structure).
Furthermore it must always be the case that if o1 == o2 (i.e. o1.__cmp__(o2) == 0) then hash(o1) == hash(o2) (i.e. o1.__hash__() == o2.__hash__()), regardless of whether the object is in a dictionary or not. If you fail to meet these restrictions, dictionaries and other hash based structures may misbehave!
In the case of listwrapper above whenever the wrapper object is in a dictionary the wrapped list must not change to avoid anomalies. Don't do this unless you are prepared to think hard about the requirements and the consequences of not meeting them correctly. You've been warned!
In situations where performance matters, making a copy of the list just to sort it would be wasteful. Therefore, list.sort() sorts the list in place. In order to remind you of that fact, it does not return the sorted list. This way, you won't be fooled into accidentally overwriting a list when you need a sorted copy but also need to keep the unsorted version around.
As a result, here's the idiom to iterate over the keys of a dictionary in sorted order:
keys = dict.keys()
keys.sort()
for key in keys:
    ...do whatever with dict[key]...
An interface specification for a module as provided by languages such as C++ and Java describes the prototypes for the methods and functions of the module. Many feel that compile-time enforcement of interface specifications helps in the construction of large programs. Python does not support interface specifications directly, but many of their advantages can be obtained by an appropriate test discipline for components, which can often be very easily accomplished in Python. There is also a tool, PyChecker, which can be used to find problems due to subclassing.
A good test suite for a module can at once provide a regression test and serve as a module interface specification (even better since it also gives example usage). Look to many of the standard libraries which often have a "script interpretation" which provides a simple "self test." Even modules which use complex external interfaces can often be tested in isolation using trivial "stub" emulations of the external interface.
An appropriate testing discipline (if enforced) can help build large complex applications in Python as well as having interface specifications would (or better). Of course Python allows you to get sloppy and not do it. Also you might want to design your code with an eye toward making it easily tested.
The Pythonic use of the word "type" is quite different from common usage in much of the rest of the programming language world. A "type" in Python is a description for an object's operations as implemented in C. All classes have the same operations implemented in C which sometimes "call back" to differing program fragments implemented in Python, and hence all classes have the same type. Similarly at the C level all class instances have the same C implementation, and hence all instances have the same type.
Remember that in Python usage "type" refers to a C implementation of an object. To distinguish among instances of different classes use Instance.__class__, and also look to 4.47. Sorry for the terminological confusion, but at this point in Python's development nothing can be done!
Objects referenced from Python module global namespaces are not always deallocated when Python exits.
This may happen if there are circular references (see question 4.17). There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these).
But in general, Python 1.5 and beyond (in contrast with earlier versions) is quite aggressive about cleaning up memory on exit.
If you want to force Python to delete certain things on deallocation use the sys.exitfunc hook to force those deletions. For example if you are debugging an extension module using a memory analysis tool and you wish to make Python deallocate almost everything you might use an exitfunc like this one:
import sys

def my_exitfunc():
    print "cleaning up"
    import sys
    # do order dependent deletions here
    ...
    # now delete everything else in arbitrary order
    for x in sys.modules.values():
        d = x.__dict__
        for name in d.keys():
            del d[name]

sys.exitfunc = my_exitfunc
Other exitfuncs can be less drastic, of course.
(In fact, this one just does what Python now already does itself; but the example of using sys.exitfunc to force cleanups is still useful.)
The notation
instance.attribute(arg1, arg2)
usually translates to the equivalent of
Class.attribute(instance, arg1, arg2)
where Class is a (super)class of instance. Similarly
instance.attribute = value
sets an attribute of an instance (overriding any attribute of a class that instance inherits).
Sometimes programmers want to have different behaviours -- they want a method which does not bind to the instance and a class attribute which changes in place. Python does not preclude these behaviours, but you have to adopt a convention to implement them. One way to accomplish this is to use "list wrappers" and global functions.
def C_hello():
    print "hello"

class C:
    hello = [C_hello]
    counter = [0]

I = C()
Here I.hello[0]() acts very much like a "class method" and I.counter[0] = 2 alters C.counter (and doesn't override it). If you don't understand why you'd ever want to do this, that's because you are pure of mind, and you probably never will want to do it! This is dangerous trickery, not recommended when avoidable. (Inspired by Tim Peters' discussion.)
In Python 2.2, you can do this using the new built-in operations classmethod and staticmethod. See http://www.python.org/2.2/descrintro.html#staticmethods
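A minimal sketch of the 2.2 approach (note that the wrappers are applied by assignment after the def):

class C(object):
    def hello(cls):
        print "hello from", cls.__name__
    hello = classmethod(hello)

    def square(x):
        return x * x
    square = staticmethod(square)

C.hello()          # class methods work on the class itself...
C().hello()        # ...and on instances
print C.square(3)  # prints 9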
Actually, you can use exceptions to provide a "structured goto" that even works across function calls. Many feel that exceptions can conveniently emulate all reasonable uses of the "go" or "goto" constructs of C, Fortran, and other languages. For example:
class label: pass  # declare a label

try:
    ...
    if (condition): raise label()  # goto label
    ...
except label:  # where to goto
    pass
...
This doesn't allow you to jump into the middle of a loop, but that's usually considered an abuse of goto anyway. Use sparingly.
This is an implementation limitation, caused by the extremely simple-minded way Python generates bytecode. The try block pushes something on the "block stack" which the continue would have to pop off again. The current code generator doesn't have the data structures around so that 'continue' can generate the right code.
Note that JPython doesn't have this restriction!
More precisely, they can't end with an odd number of backslashes: the unpaired backslash at the end escapes the closing quote character, leaving an unterminated string.
Raw strings were designed to ease creating input for processors (chiefly regular expression engines) that want to do their own backslash escape processing. Such processors consider an unmatched trailing backslash to be an error anyway, so raw strings disallow that. In return, they allow you to pass on the string quote character by escaping it with a backslash. These rules work well when r-strings are used for their intended purpose.
If you're trying to build Windows pathnames, note that all Windows system calls accept forward slashes too:
f = open("/mydir/file.txt") # works fine!
If you're trying to build a pathname for a DOS command, try e.g. one of
dir = r"\this\is\my\dos\dir" "\\" dir = r"\this\is\my\dos\dir\ "[:-1] dir = "\\this\\is\\my\\dos\\dir\\"
Many people used to C or Perl complain that they want to be able to use e.g. this C idiom:
while (line = readline(f)) {
    ...do something with line...
}
where in Python you're forced to write this:
while 1:
    line = f.readline()
    if not line:
        break
    ...do something with line...
This issue comes up in the Python newsgroup with alarming frequency -- search Deja News for past messages about assignment expression. The reason for not allowing assignment in Python expressions is a common, hard-to-find bug in those other languages, caused by this construct:
if (x = 0) {
    ...error handling...
}
else {
    ...code that only works for nonzero x...
}
Many alternatives have been proposed. Most are hacks that save some typing but use arbitrary or cryptic syntax or keywords, and fail the simple criterion that I use for language change proposals: it should intuitively suggest the proper meaning to a human reader who has not yet been introduced to the construct.
The earliest time something can be done about this will be with Python 2.0 -- if it is decided that it is worth fixing. An interesting phenomenon is that most experienced Python programmers recognize the "while 1" idiom and don't seem to be missing the assignment in expression construct much; it's only the newcomers who express a strong desire to add this to the language.
One fairly elegant solution would be to introduce a new operator for assignment in expressions spelled ":=" -- this avoids the "=" instead of "==" problem. It would have the same precedence as comparison operators but the parser would flag combination with other comparisons (without disambiguating parentheses) as an error.
Finally -- there's an alternative way of spelling this that seems attractive but is generally less robust than the "while 1" solution:
line = f.readline()
while line:
    ...do something with line...
    line = f.readline()
The problem with this is that if you change your mind about exactly how you get the next line (e.g. you want to change it into sys.stdin.readline()) you have to remember to change two places in your program -- the second one hidden at the bottom of the loop.
Basically, because such a construct would be terribly ambiguous. Thanks to Carlos Ribeiro for the following remarks:
Some languages, such as Object Pascal, Delphi, and C++, use static types. So it is possible to know, in an unambiguous way, what member is being assigned in a "with" clause. This is the main point - the compiler always knows the scope of every variable at compile time.
Python uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly. This would make it impossible to know, from a simple reading, what attribute is being referenced - a local one, a global one, or a member attribute.
For instance, take the following snippet (it is incomplete btw, just to give you the idea):
def with_is_broken(a):
    with a:
        print x
The snippet assumes that "a" must have a member attribute called "x". However, there is nothing in Python that guarantees that. What should happen if "a" is, let us say, an integer? And if I have a global variable named "x", will it end up being used inside the with block? As you see, the dynamic nature of Python makes such choices much harder.
The primary benefit of "with" and similar language features (reduction of code volume) can, however, easily be achieved in Python by assignment. Instead of:
function(args).dict[index][index].a = 21
function(args).dict[index][index].b = 42
function(args).dict[index][index].c = 63
you can write:
ref = function(args).dict[index][index]
ref.a = 21
ref.b = 42
ref.c = 63
This also has the happy side-effect of increasing execution speed, since name bindings are resolved at run-time in Python, and the second method only needs to perform the resolution once. If the referenced object does not have a, b and c attributes, of course, the end result is still a run-time exception.
The colon is required primarily to enhance readability (one of the results of the experimental ABC language). Consider this:
if a==b print a
versus
if a==b: print a
Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it's a standard usage in English. Finally, the colon makes life easier for editors with syntax highlighting, which can use it to decide where indentation needs to change.
Yes, it is maintained by Jack Jansen. See Jack's MacPython Page:
http://www.cwi.nl/~jack/macpython.html
Yes. The core Windows binaries are available from http://www.python.org/windows. There is a plethora of Windows extensions available, including a large number of not-always-compatible GUI toolkits. The core binaries include the standard Tkinter GUI extension.
Most Windows extensions can be found (or referenced) at http://www.python.org/windows
Windows 3.1/DOS support seems to have dropped off recently. You may need to settle for an old version of Python on these platforms. One such port is WPY:
WPY: Ports to DOS, Windows 3.1(1), Windows 95, Windows NT and OS/2. Also contains a GUI package that offers portability between Windows (not DOS) and Unix, and native look and feel on both. ftp://ftp.python.org/pub/python/wpy.
The documentation for the Unix version also applies to the Mac and PC versions. Where applicable, differences are indicated in the text.
Use an external editor. On the Mac, BBEdit seems to be a popular no-frills text editor. I work like this: start the interpreter; edit a module file using BBEdit; import and test it in the interpreter; edit again in BBEdit; then use the built-in function reload() to re-read the imported module; etc. In the 1.4 distribution you will find a BBEdit extension that makes life a little easier: it can tell the interpreter to execute the current window. See :Mac:Tools:BBPy:README.
Regarding the same question for the PC, Kurt Wm. Hemr writes: "While anyone with a pulse could certainly figure out how to do the same on MS-Windows, I would recommend the NotGNU Emacs clone for MS-Windows. Not only can you easily resave and "reload()" from Python after making changes, but since WinNot auto-copies to the clipboard any text you select, you can simply select the entire procedure (function) which you changed in WinNot, switch to QWPython, and shift-ins to reenter the changed program unit."
If you're using Windows 95 or Windows NT, you should also know about PythonWin, which provides a GUI framework, with a mouse-driven editor, an object browser, and a GUI-based debugger. See
http://www.python.org/ftp/python/pythonwin
for details.
Jean-François Piéronne has ported 2.1.3 to OpenVMS. It can be found at <http://vmspython.dyndns.org>.
I haven't heard about these, except I remember hearing about an OS/9 port and a port to VxWorks (both operating systems for embedded systems). If you're interested in any of this, go directly to the newsgroup and ask there; you may find exactly what you need. For example, a port to MPE/iX 5.0 on HP3000 computers was just announced; see http://www.allegro.com/software.
On the IBM mainframe side, there's a port of Python 1.4 for z/OS that comes with IBM's open-unix package, formerly OpenEdition MVS (http://www-1.ibm.com/servers/eserver/zseries/zos/unix/python.html). On a side note, a Java VM has also been ported, so in theory Jython could run too.
I don't have access to most of these platforms, so in general I am dependent on material submitted by volunteers. However I strive to integrate all changes needed to get it to compile on a particular platform back into the standard sources, so porting of the next version to the various non-UNIX platforms should be easy. (Note that Linux is classified as a UNIX platform here. :-)
Some specific platforms:
Windows: all versions (95, 98, ME, NT, 2000, XP) are supported, all python.org releases come with a Windows installer.
MacOS: Jack Jansen does an admirable job of keeping the Mac version up to date (both MacOS X and older versions); see http://www.cwi.nl/~jack/macpython.html
For all supported platforms, see http://www.python.org/download (follow the link to "Other platforms" for less common platforms)
The standard sources can (almost) be used. Additional sources can be found in the platform-specific subdirectories of the distribution.
Yes. See the AmigaPython homepage at http://www.bigfoot.com/~irmen/python.html.
Remember that Python is extremely dynamic and that you can use this dynamism to configure a program at run-time to use available functionality on different platforms. For example you can test the sys.platform and import different modules based on its value.
import sys
if sys.platform == "win32":
    import win32pipe
    popen = win32pipe.popen
else:
    import os
    popen = os.popen
(See FAQ 7.13 for an explanation of why you might want to do something like this.) Also you can try to import a module and use a fallback if the import fails:
try:
    import really_fast_implementation
    choice = really_fast_implementation
except ImportError:
    import slower_implementation
    choice = slower_implementation
This is not quite as straightforward a question as it appears to be. If you are already familiar with running programs from the Windows command line then everything will seem easy and obvious; if your computer experience is limited then you might need a little more guidance. There are also differences between Windows 95, 98, NT, ME, 2000 and XP which can add to the confusion. If you have a helpful and friendly administrator who will set things up without you having to understand all this yourself, you might think of this as "why I pay software support charges": show them this page and it should be a done deal.
Unless you use some sort of integrated development environment (such as PythonWin or IDLE, to name only two in a growing family) then you will end up typing Windows commands into what is variously referred to as a "DOS window" or "Command prompt window". Usually you can create such a window from your Start menu (under Windows 2000 I use "Start | Programs | Accessories | Command Prompt"). You should be able to recognize when you have started such a window because you will see a Windows "command prompt", which usually looks like this:
C:\>
The letter may be different, and there might be other things after it, so you might just as easily see something like:
D:\Steve\Projects\Python>
depending on how your computer has been set up and what else you have recently done with it. Once you have started such a window, you are well on the way to running Python programs.
You need to realize that your Python scripts have to be processed by another program, usually called the "Python interpreter". The interpreter reads your script, "compiles" it into "Python bytecodes" (which are instructions for an imaginary computer known as the "Python Virtual Machine") and then executes the bytecodes to run your program. So, how do you arrange for the interpreter to handle your Python?
First, you need to make sure that your command window recognises the word "python" as an instruction to start the interpreter. If you have opened a command window, you should try entering the command:
python
and hitting return. If you then see something like:
Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
then this part of the job has been correctly managed during Python's installation process, and you have started the interpreter in "interactive mode". That means you can enter Python statements or expressions interactively and have them executed or evaluated while you wait. This is one of Python's strongest features, but it takes a little getting used to. Check it by entering a few expressions of your choice and seeing the results...
>>> print "Hello" Hello >>> "Hello" * 3 HelloHelloHello
When you want to end your interactive Python session, enter a terminator (hold the Ctrl key down while you enter a Z, then hit the "Enter" key) to get back to your Windows command prompt. You may also find that you have a Start-menu entry such as "Start | Programs | Python 2.2 | Python (command line)" that results in you seeing the ">>>" prompt in a new window. If so, the window will disappear after you enter the terminator -- Windows runs a single "python" command in the window, which terminates when you terminate the interpreter.
If the "python" command, instead of displaying the interpreter prompt ">>>", gives you a message like
'python' is not recognized as an internal or external command, operable program or batch file.
or
Bad command or filename
then you need to make sure that your computer knows where to find the Python interpreter. To do this you will have to modify a setting called the PATH, which is just a list of directories where Windows will look for programs. Rather than just enter the right command every time you create a command window, you should arrange for Python's installation directory to be added to the PATH of every command window as it starts. If you installed Python fairly recently then the command
dir C:\py*
will probably tell you where it is installed. Alternatively, perhaps you made a note. Otherwise you will be reduced to a search of your whole disk ... break out the Windows explorer and use "Tools | Find" or hit the "Search" button and look for "python.exe". Suppose you discover that Python is installed in the C:\Python22 directory (the default at the time of writing), then you should make sure that entering the command
c:\Python22\python
starts up the interpreter as above (and don't forget you'll need a "CTRL-Z" and an "Enter" to get out of it). Once you have verified the directory, you need to add it to the start-up routines your computer goes through. For older versions of Windows the easiest way to do this is to edit the C:\AUTOEXEC.BAT file. You would want to add a line like the following to AUTOEXEC.BAT:
PATH C:\Python22;%PATH%
For Windows NT, 2000 and (I assume) XP, you will need to add a string such as
;C:\Python22
to the current setting for the PATH environment variable, which you will find in the properties window of "My Computer" under the "Advanced" tab. Note that if you have sufficient privilege you might get a choice of installing the settings either for the Current User or for System. The latter is preferred if you want everybody to be able to run Python on the machine.
If you aren't confident doing any of these manipulations yourself, ask for help! At this stage you may or may not want to reboot your system to make absolutely sure the new setting has "taken" (don't you love the way Windows gives you these frequent coffee breaks). You probably won't need to for Windows NT, XP or 2000. You can also avoid it in earlier versions by editing the file C:\WINDOWS\COMMAND\CMDINIT.BAT instead of AUTOEXEC.BAT.
You should now be able to start a new command window, enter
python
at the "C:>" (or whatever) prompt, and see the ">>>" prompt that indicates the Python interpreter is reading interactive commands.
Let's suppose you have a program called "pytest.py" in directory "C:\Steve\Projects\Python". A session to run that program might look like this:
C:\> cd \Steve\Projects\Python
C:\Steve\Projects\Python> python pytest.py
Because you added a file name to the command to start the interpreter, when it starts up it reads the Python script in the named file, compiles it, executes it, and terminates (so you see another "C:\>" prompt). You might also have entered
C:\> python \Steve\Projects\Python\pytest.py
if you hadn't wanted to change your current directory.
Under NT, 2000 and XP you may well find that the installation process has also arranged that the command
pytest.py
(or, if the file isn't in the current directory)
C:\Steve\Projects\Python\pytest.py
will work: Windows recognizes the ".py" extension and runs the Python interpreter on the named file automatically. Using this feature is fine, but some versions of Windows have bugs which mean that this form isn't exactly equivalent to using the interpreter explicitly, so be careful. It is easier to remember, for now, that
python C:\Steve\Projects\Python\pytest.py
works pretty close to the same, and redirection will work (more) reliably.
The important things to remember are:
1. Start Python from the Start Menu, or make sure the PATH is set correctly so Windows can find the Python interpreter.
python
should give you a ">>>" prompt from the Python interpreter. Don't forget the CTRL-Z and ENTER to terminate the interpreter (and, if you started the window from the Start Menu, make the window disappear).
2. Once this works, you run a program file with a command like:

python {program-file}
3. When you know the commands to use you can build Windows shortcuts to run the Python interpreter on any of your scripts, naming particular working directories, and adding them to your menus, but that's beyond this FAQ. Take a look at
python --help
if your needs are complex.
4. Interactive mode (where you see the ">>>" prompt) is best used not for running programs, which are better executed as in steps 2 and 3, but for checking that individual statements and expressions do what you think they will, and for developing code by experiment.
[Blake Coverett]
Win2K:
The standard installer already associates the .py extension with a file type (Python.File) and gives that file type an open command that runs the interpreter (D:\Program Files\Python\python.exe "%1" %*). This is enough to make scripts executable from the command prompt as 'foo.py'. If you'd rather be able to execute the script by simply typing 'foo' with no extension you need to add .py to the PATHEXT environment variable.
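For example, from a command prompt (a sketch; this changes only the current window's environment, so set it in the system environment dialog to make it permanent):

set PATHEXT=%PATHEXT%;.py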
WinNT:
The steps taken by the installer as described above allow you to run a script with 'foo.py', but a long-standing bug in the NT command processor prevents you from redirecting the input or output of any script executed in this way. This is often important.
An appropriate incantation for making a Python script executable under WinNT is to give the file an extension of .cmd and add the following as the first line:
@setlocal enableextensions & python -x %~f0 %* & goto :EOF
Win9x:
[Due to Bruce Eckel]
@echo off
rem = """
rem run python on this bat file. Needs the full path where
rem you keep your python files. The -x causes python to skip
rem the first line of the file:
python -x c:\aaa\Python\\"%0".bat %1 %2 %3 %4 %5 %6 %7 %8 %9
goto endofpython
rem """
# The python program goes here:
print "hello, Python"
# For the end of the batch file:
rem = """
:endofpython
rem """
In MS-DOS derived environments, a unix variable such as $PYTHONPATH is set as PYTHONPATH, without the dollar sign. PYTHONPATH is useful for specifying the location of library files.
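For example, at the command prompt or in AUTOEXEC.BAT (the directory name is just an illustration):

set PYTHONPATH=C:\My_python_lib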
("Freeze" is a program that allows you to ship a Python program as a single stand-alone executable file. It is not a compiler, your programs don't run any faster, but they are more easily distributable (to platforms with the same OS and CPU). Read the README file of the freeze program for more disclaimers.)
You can use freeze on Windows, but you must download the source tree (see http://www.python.org/download/download_source.html). This is recommended for Python 1.5.2 (and betas thereof) only; older versions don't quite work.
You need the Microsoft VC++ 5.0 compiler (maybe it works with 6.0 too). You probably need to build Python -- the project files are all in the PCbuild directory.
The freeze program is in the Tools\freeze subdirectory of the source tree.
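A typical session (a sketch only; the paths, script name and make tool depend on your setup) might look like:

python C:\src\Python-1.5.2\Tools\freeze\freeze.py hello.py
nmake

freeze.py writes out C files plus a makefile; the make step then compiles them into a stand-alone executable.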
Yes, .pyd files are DLLs, but there are a few differences. If you have a DLL named foo.pyd, then it must have a function initfoo(). You can then write in Python "import foo", and Python will search for foo.pyd (as well as foo.py, foo.pyc) and if it finds it, will attempt to call initfoo() to initialize it. You do not link your .exe with foo.lib, as that would cause Windows to require the DLL to be present.
Note that the search path for foo.pyd is PYTHONPATH, not the same as the path that Windows uses to search for foo.dll. Also, foo.pyd need not be present to run your program, whereas if you linked your program with a dll, the dll is required. Of course, foo.pyd is required if you want to say "import foo". In a dll, linkage is declared in the source code with __declspec(dllexport). In a .pyd, linkage is defined in a list of available functions.
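If you're ever unsure which file an import actually found, a quick check (the module name foo is hypothetical):

import foo             # searches the Python path for foo.pyd (or foo.py, foo.pyc)
print foo.__file__     # the path of the file that was actually loaded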
Edward K. Ream <edream@tds.net> writes:
When '##' appears in a file name below, it is an abbreviated version number. For example, for Python 2.1.1, ## will be replaced by 21.
Embedding the Python interpreter in a Windows app can be summarized as follows:
1. Do _not_ build Python into your .exe file directly. On Windows, Python must be a DLL to handle importing modules that are themselves DLLs. (This is the first key undocumented fact.) Instead, link to python##.dll; it is typically installed in c:\Windows\System.
You can link to Python statically or dynamically. Linking statically means linking against python##.lib. The drawback is that your app won't run if python##.dll does not exist on your system.
General note: python##.lib is the so-called "import lib" corresponding to python.dll. It merely defines symbols for the linker.
Borland note: convert python##.lib to OMF format using Coff2Omf.exe first.
Linking dynamically greatly simplifies link options; everything happens at run time. Your code must load python##.dll using the Windows LoadLibraryEx() routine, and must access routines and data in python##.dll (that is, Python's C APIs) using pointers obtained by the Windows GetProcAddress() routine. Macros can make using these pointers transparent to any C code that calls routines in Python's C API.
2. If you use SWIG, it is easy to create a Python "extension module" that will make the app's data and methods available to Python. SWIG will handle just about all the grungy details for you. The result is C code that you link into your .exe file (!) You do _not_ have to create a DLL file, and this also simplifies linking.
3. SWIG will create an init function (a C function) whose name depends on the name of the extension module. For example, if the name of the module is leo, the init function will be called initleo(). If you use SWIG shadow classes, as you should, the init function will be called initleoc(). This initializes a mostly hidden helper class used by the shadow class.
The reason you can link the C code in step 2 into your .exe file is that calling the initialization function is equivalent to importing the module into Python! (This is the second key undocumented fact.)
4. In short, you can use the following code to initialize the Python interpreter with your extension module.
#include "python.h" ... Py_Initialize(); // Initialize Python. initmyAppc(); // Initialize (import) the helper class. PyRun_SimpleString("import myApp") ; // Import the shadow class.
5. There are two problems with Python's C API which will become apparent if you use a compiler other than MSVC, the compiler used to build python##.dll.
Problem 1: The so-called "Very High Level" functions that take FILE * arguments will not work in a multi-compiler environment; each compiler's notion of a struct FILE will be different. From an implementation standpoint these are very _low_ level functions.
Problem 2: SWIG generates the following code when generating wrappers to void functions:
Py_INCREF(Py_None);
_resultobj = Py_None;
return _resultobj;
Alas, Py_None is a macro that expands to a reference to a complex data structure called _Py_NoneStruct inside python##.dll. Again, this code will fail in a multi-compiler environment. Replace such code by:
return Py_BuildValue("");
It may be possible to use SWIG's %typemap command to make the change automatically, though I have not been able to get this to work (I'm a complete SWIG newbie).
6. Using a Python shell script to put up a Python interpreter window from inside your Windows app is not a good idea; the resulting window will be independent of your app's windowing system. Rather, you (or the wxPythonWindow class) should create a "native" interpreter window. It is easy to connect that window to the Python interpreter. You can redirect Python's i/o to _any_ object that supports read and write, so all you need is a Python object (defined in your extension module) that contains read() and write() methods.
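A minimal sketch of the redirection idea: any object with a write() method can stand in for sys.stdout (the GuiWriter class here is purely illustrative):

import sys

class GuiWriter:
    def write(self, text):
        # a real application would append `text` to its interpreter window
        pass

sys.stdout = GuiWriter()
sys.stderr = sys.stdout
print "this output now goes to the window"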
** Setting up the Microsoft IIS Server/Peer Server
On the Microsoft IIS server or on the Win95 MS Personal Web Server you set up python in the same way that you would set up any other scripting engine.
Run regedt32 and go to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters\ScriptMap
and enter the following line (making any specific changes that your system may need)
.py :REG_SZ: c:\<path to python>\python.exe -u %s %s
This line will allow you to call your script with a simple reference like http://yourserver/scripts/yourscript.py provided "scripts" is an "executable" directory for your server (which it usually is by default). The "-u" flag specifies unbuffered and binary mode for stdin, which is needed when working with binary data.
In addition, it is recommended by people who would know that using ".py" may not be a good idea for the file extensions when used in this context (you might want to reserve *.py for support modules and use *.cgi or *.cgp for "main program" scripts). However, that issue is beyond this Windows FAQ entry.
** Apache configuration
In the Apache configuration file httpd.conf, add the following line at the end of the file:
ScriptInterpreterSource Registry
Then, give your Python CGI-scripts the extension .py and put them in the cgi-bin directory.
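To check that everything is wired up, a minimal test script (saved as, say, test.py in cgi-bin; the name is arbitrary) only needs to print a header, a blank line and some content:

print "Content-Type: text/plain"
print
print "Hello from Python CGI"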
** Netscape Servers: Information on this topic exists at: http://home.netscape.com/comprod/server_central/support/fasttrack_man/programs.htm#1010870
In order to set up Internet Information Services 5 to use Python for CGI processing, please see the following links:
http://www.e-coli.net/pyiis_server.html (for Win2k Server)
http://www.e-coli.net/pyiis.html (for Win2k pro)
The FAQ does not recommend using tabs, and Guido's Python Style Guide recommends 4 spaces for distributed Python code; this is also the Emacs python-mode default; see
http://www.python.org/doc/essays/styleguide.html
Under any editor mixing tabs and spaces is a bad idea. MSVC is no different in this respect, and is easily configured to use spaces: Take Tools -> Options -> Tabs, and for file type "Default" set "Tab size" and "Indent size" to 4, and select the "Insert spaces" radio button.
If you suspect that mixed tabs and spaces are causing problems in leading whitespace, run Python with the -t switch, or run Tools/Scripts/tabnanny.py to check a directory tree in batch mode.
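For example (the paths are illustrative; tabnanny's -v flag makes it report every file as it is checked):

python -t myscript.py
python Tools\Scripts\tabnanny.py -v C:\MyProject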
Use the msvcrt module. This is a standard Windows-specific extension in Python 1.5 and beyond. It defines a function kbhit() which checks whether a keyboard hit is present, and getch() which gets one character without echoing it, plus a few other goodies.
(Search for "keypress" to find an answer for Unix as well.)
Use win32api:
def kill(pid):
    """kill function for Win32"""
    import win32api
    handle = win32api.OpenProcess(1, 0, pid)
    return (0 != win32api.TerminateProcess(handle, 0))
Be sure you have the latest python.exe, that you are using python.exe rather than a GUI version of Python, and that you have configured the server to execute
"...\python.exe -u ..."
for the cgi execution. The -u (unbuffered) option on NT and Win95 prevents the interpreter from altering newlines in the standard input and output. Without it POST/multipart requests will seem to have the wrong length and binary (e.g., GIF) responses may get garbled (resulting in, e.g., a "broken image").
The reason that os.popen() doesn't work from within PythonWin is due to a bug in Microsoft's C Runtime Library (CRT). The CRT assumes you have a Win32 console attached to the process.
You should use the win32pipe module's popen() instead which doesn't depend on having an attached Win32 console.
Example:
import win32pipe
f = win32pipe.popen('dir /c c:\\')
print f.readlines()
f.close()
There is a bug in Win9x that prevents os.popen/win32pipe.popen* from working. The good news is that there is a way to work around this problem. The Microsoft Knowledge Base article that you need to look up is Q150956. You will find links to the Knowledge Base at http://www.microsoft.com/kb.
I've seen a number of reports of PyRun_SimpleFile() failing in a Windows port of an application embedding Python that worked fine on Unix. PyRun_SimpleString() works fine on both platforms.
I think this happens because the application was compiled with a different set of compiler flags than Python15.DLL. It seems that some compiler flags affect the standard I/O library in such a way that using different flags makes calls fail. You need to select the non-debug multi-threaded DLL option (/MD on the command line, or via MSVC under Project Settings -> C++/Code Generation, in the "Use run-time library" dropdown).
Also note that you can not mix-and-match Debug and Release versions. If you wish to use the Debug Multithreaded DLL, then your module _must_ have an "_d" appended to the base name.
Sometimes, the import of _tkinter fails on Windows 95 or 98, complaining with a message like the following:
ImportError: DLL load failed: One of the library files needed to run this application cannot be found.
It could be that you haven't installed Tcl/Tk, but if you did install Tcl/Tk, and the Wish application works correctly, the problem may be that its installer didn't manage to edit the autoexec.bat file correctly. It tries to add a statement that changes the PATH environment variable to include the Tcl/Tk 'bin' subdirectory, but sometimes this edit doesn't quite work. Opening it with notepad usually reveals what the problem is.
(One additional hint, noted by David Szafranski: you can't use long filenames here; e.g. use C:\PROGRA~1\Tcl\bin instead of C:\Program Files\Tcl\bin.)
Sometimes, when you download the documentation package to a Windows machine using a web browser, the file extension of the saved file ends up being .EXE. This is a mistake; the extension should be .TGZ.
Simply rename the downloaded file to have the .TGZ extension, and WinZip will be able to handle it. (If your copy of WinZip doesn't, get a newer one from http://www.winzip.com.)
This is very sensitive to the compiler vendor, version and (perhaps) even options. If the FILE* structure in your embedding program isn't the same as is assumed by the Python interpreter it won't work.
The Python 1.5.* DLLs (python15.dll) are all compiled with MS VC++ 5.0 and with multithreading-DLL options (/MD, I think).
If you can't change compilers or flags, try using PyRun_SimpleString(). A trick to get it to run an arbitrary file is to construct a call to execfile() with the name of your file as argument.
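In other words, the string handed to PyRun_SimpleString() just needs to execute something like the following (the path is hypothetical):

execfile(r"C:\myapp\script.py")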
Sometimes, when using Tkinter on Windows, you get an error that cw3215mt.dll or cw3215.dll is missing.
Cause: you have an old Tcl/Tk DLL built with cygwin in your path (probably C:\Windows). You must use the Tcl/Tk DLLs from the standard Tcl/Tk installation (Python 1.5.2 comes with one).
The Python installer issues a warning like this:
This version uses CTL3D32.DLL which is not the correct version. This version is used for Windows NT applications only.
[Tim Peters] This is a Microsoft DLL, and a notorious source of problems. The message means what it says: you have the wrong version of this DLL for your operating system. The Python installation did not cause this -- something else you installed previous to this overwrote the DLL that came with your OS (probably older shareware of some sort, but there's no way to tell now). If you search for "CTL3D32" using any search engine (AltaVista, for example), you'll find hundreds and hundreds of web pages complaining about the same problem with all sorts of installation programs. They'll point you to ways to get the correct version reinstalled on your system (since Python doesn't cause this, we can't fix it).
David A Burton has written a little program to fix this. Go to http://www.burtonsys.com/download.html and click on "ctl3dfix.zip".