Mark Lawrence
I believe we need to talk about the Dunning-Kruger effect
No need for me to discuss that as I used to be big headed but now I'm
perfect.
Go initializes variables to defined zero values, not simply to
all-bits zero as (I think) C does.
Neil Cerutti said: Context-sensitive grammars can be parse, too.
That's not English. Do you mean "parsed"?
But context-sensitive grammars cannot be specified by BNF.
Please be kind enough to disambiguate Mark, as I would not wish
to be tarred with the same brush.
C initializes to defined zero values. For most machines in use today,
those values _happen_ to be all-bits-zero.
This makes the implementation trivial: chuck them all into some
pre-defined section (e.g. ".bss"), and then on startup, you zero-out
all the bits in the section without having to know what's where within
that section. If you design a machine such that integer, pointer, and
FP representations where 0, NULL, and 0.0 are all zero-bits, then life ^
not
get's tougher for the guys writing the compiler and startup code.
Mark Janssen said: Well, if your language is not Turing complete, it is not clear that
you will be able to compile it at all. That's the difference between
a calculator and a computer.
Thank you. You may be seated.
I've tried to be polite, and I've tried to be helpful, but I'm sorry:
either you don't understand a lot of the terms you are throwing around,
or you aren't disciplined enough to focus on a topic long enough to
explain yourself. Either way, I don't know how else to move the
discussion forward.
You forgot to end with a well-warranted "Boom".
Mark Janssen is rapidly becoming Xah Lee 2.0, identical down to the
repugnant misogyny he expresses elsewhere. The only difference is one of
verbosity.
Ranting Rick, is that you?
You think a language that is not Turing-complete cannot be compiled?
What nonsense is that? Please, Mark, spare us your nonsense.
Is there any document describing what it can already compile and, if possible, showing some benchmarks?
After reading through a vast amount of drivel below on irrelevant
topics, looking at the nonexistent documentation, and finally reading
some of the code, I think I see what's going on here. Here's
the run-time code for integers:
http://sourceforge.net/p/gccpy/code/ci/master/tree/libgpython/runtime/gpy-object-integer.c
The implementation approach seems to be that, at runtime,
everything is a struct which represents a general Python object.
The compiler is, I think, just cranking out general subroutine
calls that know nothing about type information. All the
type handling is at run time. That's basically what CPython does,
by interpreting a pseudo-instruction set to decide which
subroutines to call.
It looks like integers and lists have been implemented, but
not much else. Haven't found source code for strings yet.
Memory management seems to rely on the Boehm garbage collector.
Much code seems to have been copied over from the GCC library
for Go. Go, though, is strongly typed at compile time.
There's no inherent reason this "compiled" approach couldn't work,
but I don't know if it actually does. The performance has to be
very low. Each integer add involves a lot of code, including two calls
of "strcmp (x->identifier, "Int")". A performance win over CPython
is unlikely.
Compare Shed Skin, which tries to infer the type of Python
objects so it can generate efficient type-specific C++ code. That's
much harder to do, and has trouble with very dynamic code, but
what comes out is fast.
John Nagle
....
I think your analysis is probably grossly unfair for many reasons.
But you're entitled to your opinion.
Currently I do not use Boehm GC (I don't have one yet),
You included it in your project:
http://sourceforge.net/p/gccpy/code/ci/master/tree/boehm-gc
I re-use
principles from gccgo in the _compiler_, not the runtime. At runtime
everything is a gpy_object_t; everything does this. Yeah, you could do
a little dataflow analysis for some really, really specific code
and very specific cases and get some performance gains. But the
problem is that libpython.so was designed for an interpreter.
So first off, you're comparing a project done on my own to something
like CPython, which has loads of developers and 20 years on my project
or something, or PyPy, which has funding and loads of developers.
Where I speed up is that there are absolutely no runtime lookups on
data access. Look at CPython: it's loads of little dictionaries. All
references are on the stack, at a much lower level than C. All
constructs are compiled in, and I can reuse C++ native exceptions in
the whole thing. I can hear you shouting at the email already, but
that's the middle crap that a VM and interpreter have to do, and fast
lookup is _NOT_ one of them. If you truly understand how an
interpreter works, you know you can't do this.
Plus, you're referencing really old code on SourceForge, which is
another thing.
That's where you said to look:
http://gcc.gnu.org/wiki/PythonFrontEnd
"To follow gccpy development see: Gccpy SourceForge
https://sourceforge.net/projects/gccpy"
And I don't want to put out benchmarks (I would get so much
shit from people it's really not worth it), but I can say it is
faster than everything in the stuff I compile so far. So yeah... not
only that, but you're referencing a strncmp to say it's slow; yeah, it
isn't 100% ideal, but in my current git tree I have changed that.
So the real source code isn't where you wrote that it is?
Where is it, then?
So I
think it's completely unfair to reference tiny things and pretend you
know everything about my project.
If you wrote more documentation about what you're doing,
people might understand what you are doing.
One thing people might find interesting: for a class, I do data-flow
analysis to generate a complete type for that class, and each member
function is a compiled function, like C++ but at a much lower level
than C++.
It's not clear what this means. Are you trying to determine, say,
which items are integers, lists, or specific object types?
Shed Skin tries to do that. It's hard to do, but very effective
if you can do it. In CPython, every time "x = a + b" is
executed, the interpreter has to invoke the general case for
"+", which can handle integers, floats, strings, NumPy, etc.
If you can infer types, and know it's a float, the run
time code can be float-specific and about three machine
instructions.
The whole project has been about stripping out the crap
needed to run user code, and I have been successful so far, but you're
comparing an in-my-spare-time project to people who work on their
stuff full time, with loads of people, etc.
Shed Skin is one guy.
Anyways, I am just going to stay out of this from now on, but your
email made me want to reply and rage.
You've made big claims without giving much detail. So people
are trying to find out if you've done something worth paying
attention to.
John Nagle
If you don't implement exec() and eval() then people won't be able to use
namedtuples, which are a common datatype factory.
As for the rest: well, good luck writing an AOT compiler producing
interesting results on average *pure* Python code. It's already been tried
a number of times, and has generally failed. Cython mitigates the issue by
exposing a superset of Python (including type hints, etc.).
Is this some kind of joke? What has this list become?