The reason this isn't just an abstruse philosophical argument where it
makes sense for us to obtusely cling to some indefensible point of view
is that, as the man points out, there are differences that we can't hide
forever from potential Python users. The most obvious to me is that
your Python program essentially includes its interpreter - it can't go
anywhere without it, and any change to the interpreter is a change to
the program.
There are various strategies to address this, but pretending that Python
isn't interpreted is not one of them.
Python programs (.py files) don't contain an interpreter. Some of those
files are *really* small; you can't hide a full-blown Python interpreter
in just half a dozen lines.
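For instance, here is a complete, runnable Python program (a made-up
example, of course) in well under half a dozen lines; there is clearly
no interpreter hiding in it:

    # An entire Python "program": three lines of plain text, with
    # nowhere for an interpreter to hide.
    name = "world"
    for i in range(3):
        print("Hello, %s!" % name)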
What you mean is that Python programs are only executable on a platform
which contains the Python interpreter, and if it is not already installed,
you have to install it yourself.
So how exactly is that different from the fact that my compiled C program
is only executable on a platform that already contains the correct machine
code interpreter, and if that interpreter is not already installed, I have
to install a machine code interpreter (either the correct CPU or a
software emulator) for it?
Moving compiled C programs from one system to another with a different
CPU also changes the performance (and sometimes even the semantics!) of
the program. It is easier for the average user to change the version of
Python than it is to change the CPU.
Nobody denies that Python code running with no optimization tricks is
(currently) slower than compiled C code. That's a matter of objective
fact. Nobody denies that Python can be easily run in interactive mode.
Nobody denies that *at some level* Python code has to be interpreted.
But ALL code is interpreted at some level or another. And it is equally
true that at another level Python code is compiled. Why should one take
precedence over the other?
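If you want to watch the compiler half of that at work, the standard
library will show it to you. A minimal sketch using the dis module (the
function here is just an invented example):

    # Ask CPython to display the bytecode its compiler produced for a
    # trivial function.  The exact instructions vary between versions,
    # but you will see opcodes such as LOAD_FAST and RETURN_VALUE --
    # the output of a compiler, and the input to an interpreter.
    import dis

    def add_one(x):
        return x + 1

    dis.dis(add_one)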
The current state of the art is that the Python virtual machine is slower
than the typical machine code virtual machine built into your CPU. That
speed difference is only going to shrink, possibly disappear completely.
But whatever happens in the future, it doesn't change two essential facts:
- Python code must be compiled to execute;
- machine code must be interpreted to execute.
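The first of those facts is easy to demonstrate from within Python
itself. A small sketch using the built-in compile() and exec():

    # Source text must become a code object before anything can run it;
    # compile() simply makes that normally-hidden step explicit.
    source = "print(6 * 7)"
    code_object = compile(source, "<example>", "exec")  # source -> bytecode
    exec(code_object)                                    # bytecode -> interpreter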
Far from being indefensible, philosophically there is no difference
between the two. Python and C are both Turing Complete languages, and both
are compiled, and both are interpreted (just at different places).
Of course, the difference between theory and practice is that in theory
there is no difference between theory and practice, but in practice there
is. I've already allowed that in practice Python is slower than machine
code. But Python is faster than purely interpreted languages like bash.
Consider that Forth code can be as fast as (and sometimes faster than)
the equivalent machine code despite being interpreted. I remember an
Apple/Texas Instruments collaborative PC back in the mid-to-late 1980s
with a Lisp chip. Much to their chagrin, Lisp code interpreted on a
vanilla Macintosh ran faster than compiled Lisp code running on their
expensive Lisp machine. Funnily enough, the Apple/TI Lisp machine sank
like a stone.
So speed in and of itself tells you nothing about whether a language is
interpreted or compiled. A fast interpreter beats a slow interpreter,
that's all, even when the slow interpreter is in hardware.
Describing C (or Lisp) as "compiled" and Python as "interpreted" is to
paint with an extremely broad brush, both ignoring what actually happens
and giving a false impression about Python. It is absolutely true
to say that Python does not compile to machine code. (At least not yet.)
But it is also absolutely true that Python is compiled. Why emphasise the
interpreter, and therefore Python's similarity to bash, rather than the
compiler and Python's similarity to (say) Java or Lisp?
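If you want that Java-like side made explicit, here is one last sketch
(the module name is invented for the example):

    # Compile a module to a .pyc file on disk, much as javac turns .java
    # source into a .class file.  Depending on your Python version, the
    # compiled file lands next to the source or in a __pycache__ directory.
    import py_compile

    py_compile.compile("mymodule.py")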