Python OS

  • Thread starter Richard Blackwood

Richard Blackwood

Is it possible to prototype an operating system in Python? If so, what
would such a task entail? (i.e. How would one write a boot-loader in
Python?)

- Richard B.
 

Diez B. Roggisch

Richard said:
Is it possible to prototype an operating system in Python? If so, what
would such a task entail? (i.e. How would one write a boot-loader in
Python?)


There have been lengthy discussions on this subject in this group - Google
is your friend.

Generally speaking, it's not possible - as there is a fair amount of
low-level stuff to be programmed for interrupt routines and the like that
cannot be done in Python.
 

Richard Blackwood

Diez said:
There have been lengthy discussions on this subject in this group - Google
is your friend.
I know, I read them. The conclusion was that indeed, it can and in fact
has been done.
Generally speaking, it's not possible - as there is a fair amount of
low-level stuff to be programmed for interrupt routines and the like that
cannot be done in Python.
More what I meant was whether I can _prototype_ an OS in Python (i.e.
make a virtual OS).

P.S. If one can program interrupt routines in C, they can do the same
in Python.
 

Diez B. Roggisch

I know, I read them. The conclusion was that indeed, it can and in fact
has been done.

Hm. I'll reread them myself and see what has been achieved in this field.
But I seriously doubt that someone took a Python interpreter and started
writing an OS.

I can imagine having an OS _based_ on Python, where the OS API is exposed
using Python - but still this requires a fair amount of low-level stuff that
can't be done in Python itself, but must be written in C or even assembler.
More what I meant was whether I can _prototype_ an OS in Python (i.e.
make a virtual OS).

P.S. If one can program interrupt routines in C, they can do the same
in Python.

Show me how you set the interrupt jump tables to your routines in Python,
and how you write time-critical code that has to execute in a few hundred
CPU cycles. And how well that works together with the GIL.
 

Benji York

Diez said:
Hm. I'll reread them myself and see what has been achieved in this field.
But I seriously doubt that someone took a Python interpreter and started
writing an OS.

I am aware of at least one attempt. See
http://cleese.sourceforge.net/cgi-bin/moin.cgi and the mailing list
archive. They managed to boot a specially modified VM and run simple
Python programs. Unfortunately the project was abandoned about a year ago.
 

Diez B. Roggisch

Benji said:
I am aware of at least one attempt. See
http://cleese.sourceforge.net/cgi-bin/moin.cgi and the mailing list
archive. They managed to boot a specially modified VM and run simple
Python programs. Unfortunately the project was abandoned about a year
ago.

And it backs my assertions:

--------------
Code that is written in either Assembly Language, Boa (see BoaPreprocessor)
or C is often referred to here (and in mailing list discussions) as ABC
Code. (Coincidentally, "ABC" is one of the languages that Python was
originally based on)


Different components of Cleese live in different layers:


Layer 1
start-up code, low-level hardware services and any library code required by
Python virtual machine - ABC code

Layer 2
the Python virtual machine - C code

Layer 3
the kernel and modules - Python code

Layer 4
user-level applications - Python code
---------------

http://cleese.sourceforge.net/cgi-bin/moin.cgi/CleeseArchitecture

So the lowest layer _is_ written in assembler, C, or something else that
allows for Pythonesque source but generates assembler. That was precisely
my point.
 

Benji York

Diez said:
And it backs my assertions:

Definitely. I missed that part.

I wonder, if someone were to start a similar project today, whether they
would be able to use Pyrex (which generates C) to do large parts of the OS.

Perhaps when I retire <wink>.
 

Richard Blackwood

Show me how you set the interrupt jump tables to your routines in Python,
and how you write time-critical code that has to execute in a few hundred
CPU cycles. And how well that works together with the GIL.
Here is my logic: If one can do X in C and Python is C-aware (in other
words Python can be exposed to C) then Python can do X via such exposure.
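At the level of library calls, at least, that premise holds: CPython can call into C directly. A minimal sketch using the standard ctypes module (this assumes a POSIX-style system where the running process exposes libc's symbols; on Windows the loading step would differ):

```python
import ctypes

# On POSIX systems, CDLL(None) loads the symbols of the running process,
# which include the C standard library.
libc = ctypes.CDLL(None)

# Declare strlen's signature so ctypes converts arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"interrupt")
print(length)  # 9
```

Note that this shows Python reaching C, not Python replacing it - the low-level routine still exists in C, which is the distinction the thread keeps circling.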
 

Richard Blackwood

Benji said:
Definitely. I missed that part.

I wonder, if someone were to start a similar project today, whether they
would be able to use Pyrex (which generates C) to do large parts of
the OS.

Perhaps when I retire <wink>.

An intriguing possibility; I do not see what obstacles exist here, but I
am sure they exist. A question: in Pyrex, may I perform a malloc?
 

Peter Hansen

Richard said:
Here is my logic: If one can do X in C and Python is C-aware (in other
words Python can be exposed to C) then Python can do X via such exposure.

Unfortunately the logic is flawed, even in the case that you
quoted above!

You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.

Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.

Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...

-Peter
 

Jeremy Jones

Peter said:
Unfortunately the logic is flawed, even in the case that you
quoted above!

Yes and no. See below.
You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.

I'm really not trying to contradict nor stir things up. But the OP
wanted to know if it were possible to prototype an OS and, in a
follow-up, referred to a virtual OS. Maybe I misread the OP, but it
seems that he is not concerned with creating a _real_ OS (one that talks
directly to the machine); it seems that he is concerned with building
all the components that make up an OS for the purpose of....well.....he
didn't really state that.....or maybe I missed it.

So, asking in total ignorance, and deferring to someone with obviously
more experience than I have (like you, Peter): would it be possible to
create an OS-like application that runs in a Python interpreter, that
does OS-like things (i.e. scheduler, interrupt handling, etc.) and talks
to a hardware-like interface? If we're talking about a virtual OS
(again, I'm asking in ignorance), would anything really be totally time
critical? Especially if the point were learning how an OS works?
Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.

I totally agree with you...sort of. I totally agree with your technical
assessment. However, I'm reading the OP a different way. If he did
mean a virtual OS and if time isn't critical, and he was thinking,
"well, I'm getting shot down for proposing to do this in Python, so
maybe it isn't possible in Python, but it is possible in C and since I
can call C from Python, then I should be able to do it", then maybe he
has a point. Or, maybe I'm just totally misreading the OP. So, if he's
saying that he can just call the C code from Python and it'd be just as
fast doing interrupt handling that way, then I agree with you. But if
he's talking about just the functionality and not the time, is that
possible?
Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...
OK - so here's my answer. It should be possible, but it will be slower,
which seems to be acceptable for what he meant when mentioning
prototyping and a virtual OS. But here's another question. Would it be
possible, if I've interpreted him correctly, to write the whole thing in
Python without directly calling C code or assembly? Even if it were
unbearably slow and unfit for use for anything other than, say, a
learning experience? Kind of like a combustion engine that has part of
it replaced with transparent plastic - you dare not try to run it, but
you can manually move the pistons, etc. It's only good for education.

Jeremy
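Jeremy's transparent-engine idea can at least be sketched in pure Python: processes as generators, the kernel loop as a round-robin scheduler, and "interrupts" as queued events serviced between timeslices. All names here are invented for illustration; nothing touches real hardware or real timing.

```python
from collections import deque

# A toy "virtual OS": processes are generators, the kernel loop is the
# scheduler, and "interrupts" are queued events serviced between
# timeslices.

def proc(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

class VirtualKernel:
    def __init__(self):
        self.ready = deque()       # runnable "processes"
        self.pending = deque()     # raised but unserviced "interrupts"
        self.handlers = {}         # irq name -> handler callable

    def spawn(self, p):
        self.ready.append(p)

    def register(self, irq, handler):
        self.handlers[irq] = handler

    def raise_irq(self, irq):
        self.pending.append(irq)

    def run(self):
        log = []
        while self.ready:
            # Service pending "interrupts" before the next timeslice.
            while self.pending:
                irq = self.pending.popleft()
                log.append(self.handlers[irq](irq))
            p = self.ready.popleft()
            try:
                log.append(next(p))
                self.ready.append(p)   # round-robin: back of the queue
            except StopIteration:
                pass                   # process finished

        return log

k = VirtualKernel()
k.register("timer", lambda irq: f"handled {irq}")
k.spawn(proc("A", 2))
k.spawn(proc("B", 1))
k.raise_irq("timer")
log = k.run()
print(log)
```

You can move the pistons by hand, as it were: raise "interrupts", step the scheduler, and watch the log, without any claim that this resembles real interrupt latency.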
 

Richard Blackwood

I'm really not trying to contradict nor stir things up. But the OP
wanted to know if it were possible to prototype an OS and, in a
follow-up, referred to a virtual OS. Maybe I misread the OP, but it
seems that he is not concerned with creating a _real_ OS (one that
talks directly to the machine); it seems that he is concerned with
building all the components that make up an OS for the purpose
of....well.....he didn't really state that.....or maybe I missed it.

You understand me entirely, Jeremy. The goal is to create a _virtual_ OS
that will represent and behave (minus speed) like a real OS. It will
comprise all the components necessary for a _real_ OS, and if the
only way to do it is to simulate hardware as well, so be it. If the
only way to do it is to handle _real_ interrupts via excruciatingly slow
Python-to-C calls, so be it. So you understand me entirely, as I do not
wish to create an OS that is usable in the traditional sense.
So, asking in total ignorance, and deferring to someone with obviously
more experience than I have (like you, Peter): would it be possible to
create an OS-like application that runs in a Python interpreter, that
does OS-like things (i.e. scheduler, interrupt handling, etc.) and
talks to a hardware-like interface? If we're talking about a virtual
OS (again, I'm asking in ignorance), would anything really be totally
time critical? Especially if the point were learning how an OS works?

Time is not an issue.
I totally agree with you...sort of. I totally agree with your
technical assessment. However, I'm reading the OP a different way.
If he did mean a virtual OS and if time isn't critical, and he was
thinking, "well, I'm getting shot down for proposing to do this in
Python, so maybe it isn't possible in Python, but it is possible in C
and since I can call C from Python, then I should be able to do it",
then maybe he has a point. Or, maybe I'm just totally misreading the
OP. So, if he's saying that he can just call the C code from Python
and it'd be just as fast doing interrupt handling that way, then I
agree with you. But if he's talking about just the functionality and
not the time, is that possible?

I agree as well, but somehow it was interpreted that I believe it
possible to achieve the same speed in Python as in C, which I never
said. So again, you understand me entirely. Functionality and not
time, exactly.
OK - so here's my answer. It should be possible, but it will be
slower, which seems to be acceptable for what he meant when mentioning
prototyping and a virtual OS. But here's another question. Would it
be possible, if I've interpreted him correctly, to write the whole
thing in Python without directly calling C code or assembly? Even if
it were unbearably slow and unfit for use for anything other than,
say, a learning experience? Kind of like a combustion engine that has
part of it replaced with transparent plastic - you dare not try to run
it, but you can manually move the pistons, etc. It's only good for
education.
That is my question, but the consensus seems to be no.
 

Richard Blackwood

Peter said:
Unfortunately the logic is flawed, even in the case that you
quoted above!

Reread what I wrote above, Peter; I said _nothing_ about speed. It says,
"If... then Python can do X": _can do_, NOT that Python can do X just as
well speed-wise as C. It is not writ.
You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.

Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I couldn't care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).
Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.

I never said this, Peter.
Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...

Why thank you Peter, that is exactly my aim.
 

Diez B. Roggisch

Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I couldn't care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).


I did not only mention the timing aspect, but also the GIL (Global
Interpreter Lock) aspect of a Python-coded interrupt routine. That renders
Python useless in such a case, as it causes a deadlock.

I've had my share of embedded programming on various cores with varying
degrees of OS already available to me, as well as low-level assembly
hacking on old 68k machines like the Amiga. My feeble attempts at task
schedulers and the like don't qualify as an OS, but they taught me about
the difficulties that arise when trying to cope with data structure
integrity in a totally asynchronous event like an interrupt.

All that stuff has to be so low-level and so carefully adjusted to timing
requirements that Python is ruled out there - sometimes even C isn't up
to it.

That's what I had in mind when answering your question.
 

Richard Blackwood

Diez said:
Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I couldn't care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).


I did not only mention the timing aspect, but also the GIL (Global
Interpreter Lock) aspect of a Python-coded interrupt routine. That renders
Python useless in such a case, as it causes a deadlock.
I am ignorant in these respects, but deadlocks do not sound like a good
thing (contrary to what Martha would say).
I've had my share of embedded programming on various cores with varying
degrees of OS already available to me, as well as low-level assembly
hacking on old 68k machines like the Amiga. My feeble attempts at task
schedulers and the like don't qualify as an OS, but they taught me about
the difficulties that arise when trying to cope with data structure
integrity in a totally asynchronous event like an interrupt.

All that stuff has to be so low-level and so carefully adjusted to timing
requirements that Python is ruled out there - sometimes even C isn't up
to it.
Do you mean that there are no OSes written entirely in C?
That's what I had in mind when answering your question.
Understood; however, note the terms prototype and virtual. Perhaps I
could create virtual hardware where needed, and play around with the
timing issues in this manner (emulate them and create solutions
prototyped in Python).
 

Diez B. Roggisch

Do you mean that there are no OSes written entirely in C?

I doubt it. For example, GCC lacks the ability to create proper interrupt
routines for 68k-based architectures, as these need an rte (return from
exception) instead of an rts (return from subroutine) at their end, which
GCC isn't aware of. So to create an interrupt registry, I had to resort to
assembler - at least by "poking" the right values into RAM. This consisted
of a structure


typedef struct {
    short prefix[9];
    t_intHandler handler;
    short suffix[8];
} t_intWrapper;

where handler was set to my C function that was supposed to become an
interrupt handler, and

t_intWrapper w = {
    0x4e56, 0x0000, 0x08f9,
    0x0001, 0x00ff, 0xfa19,
    0x48e7, 0xfffe, 0x4eb9,
    handler,
    0x4cdf, 0x7fff, 0x08b9, 0x0001, 0x00ff, 0xfa19, 0x4e5e, 0x4e73
};

being the initialisation of that t_intWrapper struct. The code simply
pushes registers onto the stack and then jumps into the C function. The
address of such a struct was then set to the appropriate interrupt vector.

Now one can argue whether that is still C, as it's written in a cpp file
that's run through a compiler - but if that really counts as C, then of
course you can write anything in Python (or VB or whatever language you
choose) by simply writing out hex digits to a file....
Understood, however, note the terms prototype and virtual. Perhaps I
could create virtual hardware where needed, and play around with the
timing issues in this manner (emulate them and create solutions
prototyped in Python).

In your first post, you didn't mention virtual - only prototype. And
prototyping an OS for whatever hardware can't be done in pure Python.
That's all that was said.

Writing a virtual machine for emulation can be done in Python, of course.
But then you can't write an OS for it in Python as well (at least not in
CPython) - as your virtual machine must have some sort of byte-code,
memory model and so on. The routines you write for the OS must work
against that machine model, _not_ operate in CPython's execution model
(which is based on an existing OS running on real hardware).
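The kind of machine model meant here can be made concrete with a toy: a tiny stack machine with its own byte-code and its own flat "memory", hosted in plain Python. The opcode names and program layout below are invented purely for illustration.

```python
# A toy stack machine: its own byte-code, its own memory model,
# interpreted by plain Python.

def run_vm(program, memory_size=16):
    memory = [0] * memory_size     # the machine's own memory model
    stack = []
    pc = 0                         # program counter into our byte-code
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "LOAD":         # push memory[arg]
            stack.append(memory[arg])
        elif op == "STORE":        # pop top of stack into memory[arg]
            memory[arg] = stack.pop()
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "HALT":
            break
        pc += 1
    return memory

prog = [
    ("PUSH", 2), ("PUSH", 3), ("ADD", None),
    ("STORE", 0),                  # memory[0] = 5
    ("HALT", None),
]
print(run_vm(prog)[0])  # 5
```

Any "OS" code written for this machine would have to be expressed in its byte-code and address its memory list - exactly the separation from CPython's own execution model described above.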

Now, generating code for that machine in Python means that you have to
port Python to your machine model - as Jython is to Java. That porting
work is akin to what projects like Cleese or Unununium do, writing memory
allocation routines and so on - for your machine model.

So I'd still say: no, you can't prototype an OS in Python, neither for
virtual nor for real hardware. You _can_ add all sorts of modules, C code
and whatnot to boot a machine into a Python interpreter that has all sorts
of OS services at hand via certain modules, as the aforementioned projects
attempt.

But creating a VM only lifts that work to the next level, retaining the
initial problems. If you don't do it that way, you don't prototype anything
like an OS nor implement a machine model, but instead muddle some things
together by creating a machine model that is so powerful (e.g. has a
built-in notion of lists, dicts and the like) that it can't be taken
seriously as educational for writing an OS - at least to me, as the virtue
of writing an OS is to actually _deal_ with low-level matters; otherwise
there is no challenge in it.

I don't say that this is not a worthy project to undertake for educational
purposes - but I wouldn't call it an OS, as it does not teach what creating
an OS from scratch is supposed to teach.

For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and
is dealt with at the low level by the underlying OS. The VM only consumes
the results.
 

Richard Blackwood

Diez said:
<LOTS OF SWELL CODE SNIPPED>

Now one can argue whether that is still C, as it's written in a cpp file
that's run through a compiler - but if that really counts as C, then of
course you can write anything in Python (or VB or whatever language you
choose) by simply writing out hex digits to a file....
Looks like a Hybrid.
In your first post, you didn't mention virtual - only prototype. And
prototyping an OS for whatever hardware can't be done in pure Python.
That's all that was said.

Writing a virtual machine for emulation can be done in Python, of course.
But then you can't write an OS for it in Python as well (at least not in
CPython) - as your virtual machine must have some sort of byte-code,
memory model and so on. The routines you write for the OS must work
against that machine model, _not_ operate in CPython's execution model
(which is based on an existing OS running on real hardware).
All of that can be virtually emulated (virtual memory and so forth).
But creating a VM only lifts that work to the next level, retaining the
initial problems. If you don't do it that way, you don't prototype anything
like an OS nor implement a machine model, but instead muddle some things
together by creating a machine model that is so powerful (e.g. has a
built-in notion of lists, dicts and the like) that it can't be taken
seriously as educational for writing an OS - at least to me, as the virtue
of writing an OS is to actually _deal_ with low-level matters; otherwise
there is no challenge in it.
*laugh* Now that is hilarious.
I don't say that this is not a worthy project to undertake for educational
purposes - but I wouldn't call it an OS, as it does not teach what creating
an OS from scratch is supposed to teach.

Understood.

For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and
is dealt with at the low level by the underlying OS. The VM only consumes
the results.
This has been done as well; there are Java operating systems where IO is
handled by Java and not magic, or so I understand.
 

Mike Meyer

Diez B. Roggisch said:
For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and
is dealt with at the low level by the underlying OS. The VM only consumes
the results.

OSes have been written for VMs (LISP and Forth) that didn't have the
notion of an interrupt before they were built. For LISPMs, interrupt
handlers are LISP objects (*). Java may not be as powerful as LISP,
but I'm pretty sure you could turn interrupts into method invocations
without having to extend the VM.

<mike

(*) <URL: http://home.comcast.net/~prunesquallor/memo444.htm >,
under the section on Stack Groups.
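Mike's suggestion - interrupts as ordinary method invocations dispatched through a table - can at least be mimicked in Python terms. All names below are invented for illustration; real interrupt delivery would still need the lower layers the thread discusses.

```python
# An "interrupt" becomes a plain method call dispatched through a
# vector table; "delivering" one is just a lookup and a call.

class InterruptController:
    def __init__(self):
        self.vector = {}               # irq number -> bound method

    def install(self, irq, method):
        self.vector[irq] = method

    def fire(self, irq, *args):
        return self.vector[irq](*args)

class Keyboard:
    def on_key(self, code):
        return f"key {code}"

ic = InterruptController()
ic.install(1, Keyboard().on_key)
print(ic.fire(1, 65))  # key 65
```

This captures the dispatch idea only; whether a JVM or CPython could meet the latency constraints is the separate question Diez raises below.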
 

Diez B. Roggisch

OSes have been written for VMs (LISP and Forth) that didn't have the
notion of an interrupt before they were built. For LISPMs, interrupt
handlers are LISP objects (*). Java may not be as powerful as LISP,
but I'm pretty sure you could turn interrupts into method invocations
without having to extend the VM.

How so? An interrupt is an address the processor directly jumps to by
adjusting its PC. The JVM doesn't even have the idea of function pointers.
Invoking an (even static) method involves several lookups in dicts until
the actual code pointer is known - and that's byte code then, not machine
code.

As your examples show, one can implement a VM on top of a considerably
thin layer of low-level code and expose hooks that allow system
functionality to be developed in a high-level language running bytecode.
Fine. Never doubted that. I've written C wrappings myself that allowed
Python callables to be passed as callbacks to the C lib - no black magic
there. But that took some dozen lines of C code, and time-critical
interrupts won't work properly if you allow their code to be implemented
in a notably slower language.
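The callback arrangement Diez describes can be sketched without custom C wrappings via the standard ctypes module, here letting libc's qsort call back into a Python comparator (assumes a POSIX system where CDLL(None) exposes libc):

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX: symbols of the running process, incl. libc

# The comparator type qsort expects: int (*)(const int *, const int *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    # Called from C; a and b point at the elements being compared.
    return a[0] - b[0]

arr = (ctypes.c_int * 4)(3, 1, 4, 2)
libc.qsort.restype = None
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(arr))  # [1, 2, 3, 4]
```

The callback works, but every comparison pays the full cost of re-entering the interpreter - which is exactly the timing objection being made.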
 

Diez B. Roggisch

Looks like a Hybrid.

Nicely observed. Now, you asked for pure C. Do you call pure C a hybrid?
All of that can be virtually emulated (virtual memory and so forth).

Yeah. So you got your virtual memory - and how do you plan to bring it to
use? The CPython implementation conveniently uses malloc, which fetches
its memory from the "real" memory, and allocating Python objects will use
that. So how exactly do you plan to put your nice new simulated memory to
use in Python, the language you want to create an OS in?
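For what it's worth, "simulated memory" in this sense is easy to sketch - a toy first-fit allocator over its own address space, entirely invented for illustration. Diez's point stands: it has nothing to do with the real malloc that CPython's own objects live in.

```python
# A toy first-fit allocator over a simulated address space. Addresses
# returned here are indices into our own model, not real memory.

class SimMemory:
    def __init__(self, size):
        self.size = size
        self.free = [(0, size)]        # list of (start, length) holes

    def alloc(self, n):
        for i, (start, length) in enumerate(self.free):
            if length >= n:
                if length == n:
                    self.free.pop(i)   # hole consumed exactly
                else:
                    self.free[i] = (start + n, length - n)
                return start           # "address" in simulated memory
        raise MemoryError("out of simulated memory")

mem = SimMemory(64)
a = mem.alloc(16)
b = mem.alloc(8)
print(a, b)  # 0 16
```

Every Python object implementing this allocator is itself allocated by CPython's real malloc underneath - the simulation rides on top of the thing it models.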
This has been done as well; there are Java operating systems where IO is
handled by Java and not magic, or so I understand.

I found this one:

http://www.savaje.com/faqs1.html#item1

Somehow these guys made the design decision to use C/C++ for the low-level
stuff. So they seem to share my hilarious viewpoints to a certain degree.

But I'm sure you can show me a Java OS that's purely written in Java. I'm
looking forward to it. I'm especially interested in their core IO driver
code being written in Java.
 
