I'd say that's fairly close to my definition.
Compiled languages translate the source code into machine language and
execute it directly on the host CPU. Source code can either be compiled
ahead-of-time (e.g. with a compiler you invoke that outputs a host
executable) or just-in-time (JIT) as the program is executing.
Interpreted languages use a virtual machine or AST walker to perform the
execution. There can be an intermediate step wherein the source code is
translated to an intermediate bytecode.
Many language runtimes implement a hybrid of these two approaches: they
start by interpreting all code, then find "hot spots" via runtime
profiling and compile those.
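CPython is a handy illustration of the "intermediate bytecode" step: source
text is first compiled to a code object, which the virtual machine then
interprets. A minimal sketch using only the built-in compile()/eval() and
the standard dis module:

```python
import dis

# Step 1: the compiler turns source text into a bytecode code object.
code = compile("a + b", "<example>", "eval")

# Step 2: the virtual machine interprets that bytecode.
result = eval(code, {"a": 2, "b": 3})
print(result)  # 5

# The intermediate representation can be inspected directly:
dis.dis(code)
```

Note that nothing here touches the host CPU's instruction set; the same
bytecode runs wherever the virtual machine runs.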
These are all really implementation choices rather than a
characteristic of the language itself.
Traditionally, a compiler was a separate program which translated
source code into linkable object code. The way you get an executable
module in a 'traditional way' is to run several units of source code
through one or more compilers, which one depending on the language of
each source code unit; then run the resulting object code units
through a linker (or linkage editor if you have an IBM heritage) to
produce loadable modules.
I think the 'scripting language' moniker started out as describing
things like Unix shell scripts, which didn't go through this process
but were directly executable, being interpreted by the shell. They
were viewed as languages for quick tasks, with real programs to be
written in real languages with a traditional tool chain.
It's interesting that even in this traditional view, even a language
like C goes through a "middle man to make calls to the system" in the
form of a standard subroutine library, at least some of which is
likely written in Assembly language. And that an assembler is really a
kind of compiler, albeit for a very low level language close to the
target hardware.
More and more, implementations bring the compiler and linker functions
inside the tent. There are variations on what kind of output the
'compiler' produces. Typically it's some form of machine instructions
for a higher level virtual machine rather than the real hardware.
There are implementations of languages like C which actually compile to
such a virtual machine, and not to hardware instructions. The
performance of such implementations can be surprising, since the
virtual machine instructions can be more compact, resulting in less
virtual memory overhead. There have also been implementations of
languages like Smalltalk which have compiled to machine instructions
rather than byte codes, and again the performance was surprising. For
example, back in the 1980s the folks at Digitalk (which later merged
with ParcPlace and I guess is now Cincom) got tired of hearing about
their Smalltalk V being 'interpreted' and came out with a release
which did just as I described: it generated 80286 machine
instructions. The result was that it actually ran slower than the
implementation which used byte codes, because it's quicker for a
virtual machine to execute byte codes which are already in real memory
rather than waiting for the faster machine code to be swapped in from
disk.
That's one of the reasons why hot-spot compilation is good, it makes a nice
speed/space tradeoff, keeping the working set smaller while allowing
frequently executed code to be re-compiled for speed.
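The hot-spot idea can be sketched in a few lines. This is a toy
illustration with hypothetical names (HOT_THRESHOLD, execute, and so on
are mine, not from any real runtime): a real JIT emits machine code,
while here "compiling" just means caching a fast path once a call count
crosses a threshold.

```python
HOT_THRESHOLD = 3          # hypothetical tuning knob

call_counts = {}
compiled_cache = {}

def slow_interpret(name, body, arg):
    # Stand-in for walking an internal representation step by step.
    return body(arg)

def execute(name, body, arg):
    call_counts[name] = call_counts.get(name, 0) + 1
    if name in compiled_cache:
        return compiled_cache[name](arg)    # fast path for hot code
    if call_counts[name] >= HOT_THRESHOLD:
        compiled_cache[name] = body         # "compile" the hot spot
    return slow_interpret(name, body, arg)  # cold path for everything else

for i in range(5):
    execute("square", lambda x: x * x, i)
print(sorted(compiled_cache))  # ['square'] once the function got hot
```

Only the code that proves itself hot pays the space cost of a compiled
version; everything else stays in the compact interpreted form.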
There are various kinds of interpreter. Traditionally an interpreter
executes some internal representation of code, which might be an
abstract syntax tree, or threaded code, or the like. On the other
hand a virtual machine can be viewed as an interpreter of what are
loosely called byte codes, and in reality a hardware computer is an
interpreter of machine instructions.
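The AST-walking style mentioned above can be shown in miniature. This is a
sketch of my own (the node encoding is an arbitrary choice): each tree node
is either a number or an (operator, left, right) tuple, and the interpreter
recursively walks the tree rather than dispatching on byte codes.

```python
def evaluate(node):
    # A node is either a bare number or a tuple (op, left, right).
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

# (1 + 2) * 4 represented as an abstract syntax tree:
tree = ("*", ("+", 1, 2), 4)
print(evaluate(tree))  # 12
```

A bytecode VM would instead flatten this tree into a linear sequence of
instructions and loop over them, which trades tree-walking overhead for a
compact dispatch loop.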
--
Rick DeNatale
Blog: http://talklikeaduck.denhaven2.com/
Github: http://github.com/rubyredrick
Twitter: @RickDeNatale
WWR: http://www.workingwithrails.com/person/9021-rick-denatale
LinkedIn: http://www.linkedin.com/in/rickdenatale