To be honest I don't either.
That is a question for a compiler construction class.
In short: all of the analytical work (scanner, parser, context checker)
as well as some sort of code generation (this varies very widely because
of the many different styles of interpreters) that a compiler does
upfront must be done by the interpreter at runtime and thus adds
significant overhead to interpreted program execution.
Most interpreters (for example, perl) first compile the source code into
an intermediate form which is then interpreted. So the overhead you
mention (analytical work, code generation) only happens at startup and
is negligible unless your program is invoked very frequently.
There are two differences which are much more important:
* To interpret the "byte code" (which doesn't actually have 1 byte
opcodes, but the name is traditional), the interpreter has to fetch
every instruction from memory, decide what to do with it and call the
appropriate code, which will also manipulate data structures in main
memory. A CPU also contains an interpreter (either hardwired or
written in microcode) that has to fetch each instruction, decide what
to do with it and then do it - but that interpreter mainly needs to
manipulate data structures within the CPU, which is a lot faster, and
it can do things in parallel (like fetching the next instruction while
decoding the current one and executing the previous one).
Therefore interpreting native machine code is (sometimes a lot)
faster than interpreting non-native code (see the dispatch-loop
sketch after this list).
* Dynamically typed languages like perl need to do a lot of checks at
run time. The C compiler knows that two variables are of type signed
int, and for a division can generate code which does a 32-bit signed
division. At run time, this is fixed and doesn't have to be checked
any more. But the perl compiler only knows that it has two scalars.
So the interpreter needs to check at run time whether those scalars
are undef, signed or unsigned integers, floating point numbers,
strings or objects, and select the appropriate code path (see the
type-check sketch after this list).
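To make the first point a bit more concrete, here is a minimal sketch
of such a dispatch loop in C. The opcodes and the little stack machine
are made up for illustration - perl's real op tree and runloop look
quite different - but the fetch/decode/execute pattern is the same:

    #include <stdio.h>

    /* Made-up byte code: each op is fetched from memory, decoded by
       the switch and then executed, one after the other. */
    enum op { OP_PUSH, OP_ADD, OP_PRINT, OP_END };

    static void run(const int *code)
    {
        int stack[64];
        int sp = 0;

        for (;;) {
            switch (*code++) {            /* fetch and decode one op */
            case OP_PUSH:
                stack[sp++] = *code++;    /* operand follows the op  */
                break;
            case OP_ADD:
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case OP_PRINT:
                printf("%d\n", stack[--sp]);
                break;
            case OP_END:
                return;
            }
        }
    }

    int main(void)
    {
        const int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                             OP_PRINT, OP_END };
        run(code);    /* prints 5 */
        return 0;
    }

Even that single addition costs an opcode fetch, an indirect jump
through the switch and several stack accesses in main memory - work
which a CPU does in hardware (and largely in parallel) when it
executes a native add instruction.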
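For the second point, compare what the C compiler can emit for a
division with what a dynamically typed interpreter has to do for the
same operation. The scalar type below is a made-up tagged union, not
perl's actual SV, but the shape of the run-time check is similar:

    #include <stdio.h>

    /* The C compiler knows both operands are signed ints, so it can
       emit a single integer division for this: */
    int div_int(int a, int b)
    {
        return a / b;
    }

    /* A made-up tagged scalar - perl's SV is far more involved. */
    enum stype { T_UNDEF, T_INT, T_NUM /* strings, objects, ... */ };

    struct scalar {
        enum stype type;
        union { long i; double n; } val;
    };

    /* The interpreter has to inspect the tags on every operation. */
    static struct scalar div_dyn(struct scalar a, struct scalar b)
    {
        struct scalar r;

        if (a.type == T_INT && b.type == T_INT &&
            b.val.i != 0 && a.val.i % b.val.i == 0) {
            r.type = T_INT;              /* exact integer fast path */
            r.val.i = a.val.i / b.val.i;
        } else {
            /* fall back to doubles (ignoring division by zero etc.) */
            double x = a.type == T_NUM ? a.val.n :
                       a.type == T_INT ? (double)a.val.i : 0.0;
            double y = b.type == T_NUM ? b.val.n :
                       b.type == T_INT ? (double)b.val.i : 0.0;
            r.type = T_NUM;
            r.val.n = x / y;
        }
        return r;
    }

    int main(void)
    {
        struct scalar a = { T_INT, { .i = 7 } };
        struct scalar b = { T_NUM, { .n = 2.0 } };
        struct scalar r = div_dyn(a, b);
        printf("%d %g\n", div_int(7, 2), r.val.n);   /* prints 3 3.5 */
        return 0;
    }

Those tag checks (and the branches that go with them) have to be paid
on every single operation at run time, whereas the C version pays for
them exactly once, at compile time.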
I wrote above that the compilation overhead is negligible for most
scripts, but there is a catch: To keep that overhead negligible, the
compiler needs to be fast - so it cannot perform extensive analysis and
hence cannot produce very good code. (For example, a data flow analysis
might show that some perl variables only ever contain integer values,
and the compiler could take advantage of that, but for most scripts the
analysis would take more time than it saves - you don't want to do that
every time you run your script.)
Perl is not slow. For what it does it's pretty darn fast.
For any given task, a Perl program is slower than the equivalent C
program. Whether that is "slow" or "fast" in any objective sense is IMHO
quite irrelevant to the discussion.
Also, interpreter technology has advanced quite a bit since the
mid-1990s. A newly designed language which does "what perl does" could
be quite a bit faster. Even a clean reimplementation of Perl (the
programming language as documented - not counting stuff like XS,
implementation quirks, etc.) could probably be noticeably faster.
No, because Perl supports self-modifying programs, see "perldoc -f
eval".
Eval is probably the least of your worries. Including an interpreter (or
compiler) for that purpose is simple. The interconnections between
compiler and interpreter during the compilation phase are probably
harder. It could still be possible, but I wonder whether it's worth
the effort - you still have to do all that run-time type checking
stuff.
hp