Systems software versus applications software definitions


Juhan Leemet

Here in comp.lang.c,
Alex McDonald said:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.

I count in octal -- what does that make me? :)

antediluvian? (like me?)

FWIW, I seem to recall being able to multiply hex digits (rote memory, I
know) when I was crawling through IBM mainframe dumps as a uni student. I
had a professor who joked about how I was playing the front panel switches
of a PDP-8 "like a piano" when I toggled in the bootstrap loader. Ah...

p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?
 

Anne & Lynn Wheeler

Alex McDonald said:
Systems programmers dislike end users. Users need software. Ergo
if the user specifies it, the systems programmer won't write it;
the application programmer does that.

Systems programmers only write programs for themselves or other
systems programmers. Rarely will they write programs to support
applications, but only under protest at the inefficiency of the
applications running on their finely crafted code accessible only
from the command line. Systems programmers count in hex.

some of the old collection

http://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
http://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?

above has:

* real programmers don't eat quiche
* real software engineers don't read dumps
* real programmers don't write specs

long ago and far away ... i have vague memories of being able to read
the holes in cards that were executable output of assembler ...
(12-2-9/x'02' "TXT" cards) and modifying the program by repunching the
binary data in the cards (actually using an 026 and later an 029 to
copy the card up until the column(s) i needed to change ... and then
using multi-punch to punch the correct holes for the hex that i
needed).

minor archeological references:
http://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
http://www.garlic.com/~lynn/2001.html#0 First video terminal?
http://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
http://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#28 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
http://www.garlic.com/~lynn/2001n.html#49 PC/370
http://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games
 

Richard Steiner

Here in comp.lang.c,
Juhan Leemet said:
antediluvian? (like me?)
Harumph!

p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?

It works very well in the 36-bit word-oriented environment I still play
in at work, though. 9-bit ASCII bytes. :)
 

ptth

How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly? For example, OSes, compilers, kernels,
drivers, network protocols, etc.? A couple of years ago, yes, I
understood this to be definitely true. However, as software
applications become more and more complicated, some people have begun
to argue otherwise. Some people argue that the definition of systems
programs depends on the level of abstraction. I have heard people say
that a web server is systems software, which confuses me. I think a
web server is application software. Yes, other applications run on
top of a web server.

Please advise and discuss. Thanks!!

Well, our prof counted a couple of kinds of software as system level:

0. Operating systems.
1. Software which makes wide use of system calls. (A socket is a kind
of system call, and that's why we talk about web servers as system
software.)
2. Compilers and interpreters.

Hope it helps.
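
The "makes wide use of system calls" criterion is easy to see
concretely: the core of any web server is essentially a chain of
kernel calls. A minimal illustrative sketch in C (POSIX sockets
assumed; error handling omitted for brevity):

    /* Minimal sketch of the syscall-heavy core of a web server.
       POSIX sockets assumed; error handling omitted for brevity. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);     /* system call */
        struct sockaddr_in a;

        memset(&a, 0, sizeof a);
        a.sin_family = AF_INET;
        a.sin_port = htons(8080);
        a.sin_addr.s_addr = htonl(INADDR_ANY);

        bind(s, (struct sockaddr *)&a, sizeof a);    /* system call */
        listen(s, 16);                               /* system call */

        for (;;) {
            int c = accept(s, 0, 0);                 /* system call */
            static const char r[] = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
            write(c, r, sizeof r - 1);               /* system call */
            close(c);                                /* system call */
        }
    }

Almost every line that does anything is a trip into the kernel, which
is exactly why this definition pulls web servers into "system" territory.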
 

Nick Maclaren

I would consider a state-of-the-art optimizing compiler to be of equal or higher
complexity than a kernel. But it would be 1 in your scheme.

There is internal complexity and interface complexity, and it is
the latter that generally causes more trouble, needs more design,
and usually gets less of both.

Even the most optimising compiler has very little more interface
complexity than a basic compiler, and the interface can be anything
from simple to fiendishly complex, depending on the design. For
example, under IA-64, it is necessarily at least of medium complexity.

Similarly, a kernel for some of the more extreme microkernel system
designs can be very simple, both internally and at its interface,
but one for POSIX and derivatives necessarily has a fiendish
interface complexity.


Regards,
Nick Maclaren.
 

Chris Croughton

Juhan Leemet said:
I remember seeing a nice little table in Datamation many years ago:
relating the approximate difficulty of implementing software:

                |  single-user   multi-user
----------------+----------------------------
application     |       1            3
system          |       3            9      <== HARDEST

[Dunno if that survived formatting?]

One may quibble with the numbers, but they are roughly representative.

I would consider a state-of-the-art optimizing compiler to be of equal or higher
complexity than a kernel. But it would be 1 in your scheme.

Yes, it would, because that is the definition. A multi-user
state-of-the-art optimizing compiler would be three times as difficult
as a standalone one, a compiler which had to handle hardware
('compiling' directly into an ASIC for instance) would be about the
same, etc.
There are probably lots of other counterexamples. How about a single-user
application that solves some incredibly complex problem?

Same thing. The multi-user version of that would be three times as
hard (probably worse, having worked on some distributed applications).
If that was true then libraries would be a lot less popular than they are.

Well, he's right in that sense: when we program we do indeed want to
make the machine do what we want, and we get annoyed when it doesn't
do it, which is pretty much the definition of "control freak". I get
especially annoyed when libraries don't do what they say they are
supposed to do...

(Note followups and override if desired)

Chris C
 

Andi Kleen

There is internal complexity and interface complexity, and it is
the latter that generally causes more trouble, needs more design,
and usually gets less of both.

The interface of a compiler is much more complex than
the input language and the command line options.

An optimizing compiler consists of many passes that talk to each other
using complex data structures and even a special intermediate
language. Or rather multiple ones when the program is step by step
lowered to machine language. The "interface complexity" between these
passes is quite high. Since it is a big enough project that it likely
needs to be written by multiple people or even multiple groups, the
interfaces must be well defined and documented too. Another similar
case would be writing a new backend for an existing compiler to port
it to a new architecture, or a new frontend to reuse an existing
optimizer and backend. All of this involves dealing with very
complex interfaces.
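
To make the inter-pass interface point concrete, here is a hedged
sketch of the kind of shared data structure involved. The names are
invented for illustration and resemble no particular compiler's
actual IR:

    /* Illustrative sketch only: an invented, much-simplified IR node
       of the sort that optimization passes hand to one another.  Real
       IRs are far richer, and that richness is the interface
       complexity being discussed. */
    enum ir_opcode { IR_LOAD, IR_STORE, IR_ADD, IR_MUL, IR_BRANCH };

    struct ir_insn {
        enum ir_opcode   op;
        int              dest;      /* virtual register number       */
        int              src[2];    /* operand virtual registers     */
        struct ir_insn  *next;      /* intrusive list within a block */
    };

    struct ir_block {
        struct ir_insn  *head;      /* instructions in this block    */
        struct ir_block *succ[2];   /* control-flow successors       */
    };

    /* Every pass conforms to one interface: transform the IR in
       place and report whether anything changed, so a driver can
       iterate passes to a fixed point. */
    typedef int (*ir_pass_fn)(struct ir_block *entry);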

-Andi
 

Chris Croughton

Here in comp.lang.c,
Alex McDonald said:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.

I count in octal -- what does that make me? :)

Old, like me <g>.

(If I'm counting on my fingers I use Gray code...)
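
Gray code works for finger counting because consecutive codes differ
in exactly one bit, so each count moves exactly one finger. A minimal
C sketch of the standard binary-to-Gray conversion (illustrative only):

    /* Minimal sketch: reflected binary Gray code, g = n ^ (n >> 1).
       Each increment of n flips exactly one bit of g -- one finger. */
    #include <stdio.h>

    int main(void)
    {
        for (unsigned n = 0; n < 16; n++) {
            unsigned g = n ^ (n >> 1);
            printf("%2u -> %u%u%u%u\n", n,
                   (g >> 3) & 1, (g >> 2) & 1, (g >> 1) & 1, g & 1);
        }
        return 0;
    }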

Chris C
 

wolfgang kern

Matt asked:

| How do we define systems programs?
| when we say systems programming, does it necessary mean that the
| programs we write need to interact with hardware directly?
| For example, OS, compiler, kernel, drivers,
| network protocols, etc...?


I see two main parts of 'system' code: the first covers all the
hardware needs (the drivers).
The other part holds OS/FS-specific code, and it depends on the
'security level' of an OS which functions are to be protected
from user- and/or admin-access and included in the system.

Network protocols are well known in detail, so there are many
different web-browser applications around, even though they just
call system functions for the password cache and the connection.

Compilers often just use API functions and libraries for a
certain target-OS/CPU pair and are limited to user level,
but a few tools also allow one to write 'unprotected' system code.

| Couple years ago, yes, I understand this is definitely true.

| However, as the software applications become more and
| more complicated, some people try to argue that.
| Some people argue the definition of systems programs depend on
| the level of abstractions.

I'd call it the level of paranoia about security :)

| I heard people saying that web server is a systems software,
| which I feel confused.
| I think web server is an application software.
| Yes, other applications run on top of web server.

If you don't mean 'net-linkers' within a GP-OS like windoze:
I have no experience with web servers, but I think they would be
well advised to use their very own system rather than a GP-OS.

__
wolfgang
http://web.utanet.at/schw1285/KESYS/index.htm
 

Nick Maclaren

|>
|> >>I would consider a state-of-the-art optimizing compiler to be of equal or higher
|> >>complexity than a kernel. But it would be 1 in your scheme.
|> >
|> > There is internal complexity and interface complexity, and it is
|> > the latter that generally causes more trouble, needs more design,
|> > and usually gets less of both.
|>
|> The interface of a compiler is much more complex than
|> the input language and the command line options.

Of course. I was primarily referring to the external interfaces,
but they include the code generated, the calling conventions, the
object file formats and so on.

|> An optimizing compiler consists of many passes that talk to each other
|> using complex data structures and even a special intermediate
|> language. ...

Yes, but at least that is within a single product. It gets much
hairier (managerially and technically) when the interfaces are
between separate products, perhaps even developed by separate
organisations.


Regards,
Nick Maclaren.
 

Wayne Woodruff

Great question; I always wondered that too. I met an embedded programmer in
San Diego years ago. He said the more down to the hardware level he got, the
more exciting it was. Also, maybe somebody could explain this: I've seen a
lot of mainframe positions advertised as "system programmer".

Well, as chips become more and more complex, the embedded people who
deal directly with the hardware need to learn the "personality" of the
chip, e.g. how various register settings affect each other and which
register combinations do not work, etc. Some of these chips have
hundreds of registers, each bit controlling a different aspect. It
can be quite a challenge to learn the chip.

It can be fascinating to work at this level.


Wayne Woodruff
http://www.jtan.com/~wayne
 

jmfbahciv

when i did the resource manager ... there were something like 2000
(automated) tests that took 3 months elapsed time to run as part of
calibrating and verifying the resource manager.
http://www.garlic.com/~lynn/subtopic.html#bench

the standard system maint. process was a monthly update (patch?)
distribution called PLC (program level change). It would ship the
cumulative source updates as well as the executable binaries.

I was asked to put out monthly PLC for the resource manager on the
same schedule as the standard system PLC. I looked at the process, and
made a counter-offer of quarterly PLC for the resource manager
.... since I would have to take all the accumulated patches (for the
whole system) and rerun some significant number of the original
validation suite .... and there just weren't the resources to do that
on a monthly basis.
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

Yup. TW always said that the only way to ship bugless software was
to ship every day or not ship at all. Boy! Finding the compromise
to that one was difficult, a PITA, and different with each and every
project we ever did. And that was _without_ PHB and NIH interference.
Add the last two, and I still am amazed that we ever shipped anything
at all.


reference to the original resource manager product announcement
http://www.garlic.com/~lynn/2001e.html#45

note that much of the bits & pieces that were in the resource manager
had been available in earlier kernels ... but were dropped over a
period of years. it was eventually decided to collect them up and
package them as a separate distribution.

this was in the period of some transition from free to charged-for
software. at the time, there had been some distinction that
application software could be charged for ... but kernel/system
software (as part of supporting the machine) was free.

the resource manager got to be the guinea pig for the first
charged-for kernel software component ... with a new distinction that
kernel software directly related to hardware support would still be
free ... but other types of kernel software could be charged for.

an interesting paradox then showed up for the next release. the
resource manager shipped with the release prior to the release that
shipped smp support.
http://www.garlic.com/~lynn/subtopic.html#smp

however much of the SMP design and implementation was predicated on
various features that were part of the resource manager. the problem
was that SMP support was obviously directly hardware-related and
therefore free ... but now had integral dependencies on features in
the resource manager ... which was priced.

That's how you would make money for an SMP. Our way was to
put the "service driver" for SMP on one magtape. When the customer
paid for the support, he would automatically get the tape
containing CPNSER.MAC as a part of the distribution. Each
and every other monitor module had SMP code in it under a
feature test switch and we shipped that on our monitor
distribution tape which went to all customers.

The way JMF designed the marketing change to support three
instead of two CPUs was to very carefully never mention the
word "two", using "multi" instead. Thus, we never had to
remaster a tape, had all the testing done with the previous
release, and just changed the PD-something. No documentation
mentioned two (it all said multi).

To get DEC to officially "support" more than two CPUs on a system
took nine fucking months; this was a measurement of the internal
processes of product prevention that had completely infected DEC
by 1982.

/BAH


Subtract a hundred and four for e-mail.
 

Toon Moene

Richard said:
Here in comp.lang.c,
Juhan Leemet <[email protected]> spake unto us, saying:


It works very well in the 36-bit word-oriented environment I still play
in at work, though. 9-bit ASCII bytes. :)

Or, for the real stuff: 60-bit words ... the original CDC Cyber series !
Real Programmers can find bugs buried in 6 Megabyte core dumps :)
 

glen herrmannsfeldt

The VAX was DEC's first hex machine. There was a story that
around the time of the announcement they published a
calendar with the dates in hex. The Fortran compiler supported
hex constants and format codes. The instruction fields were
in groups of four bits, unlike the PDP-11, where the instruction
fields, especially register numbers, were grouped in threes.

(Though I still prefer hex for the PDP-11.)

-- glen
 

Sander Vesik

Earlier in this thread, it was said:
Well, our prof counted a couple of kinds of software as system level:

0. Operating systems.
1. Software which makes wide use of system calls. (A socket is a kind
of system call, and that's why we talk about web servers as system
software.)

This is a bad definition. For example, it potentially leaves backup
software and most of the system maintenance stuff out but lets web
servers and other solid application-level stuff in.
 

Lin Mi

Sander said:
This is a bad definition. For example, it potentially leaves backup
software and most of the system maintenance stuff out but lets web
servers and other solid application-level stuff in.

Please explain your first point.
As for web servers, in my opinion they make wide use of the TCP/IP
stack, which consists of OS built-in functions. Therefore, at least
they are not *such* high-level apps.


--
Lin Mi (Rin Fuku)
(e-mail address removed)
Faculty of Environmental Information
Hagino-Hattori Laboratory
Keio University, Shounan-Fujisawa Campus
 

Joe Wright

glen said:
The VAX was DEC's first hex machine. There was a story that
around the time of the announcement they published a
calendar with the dates in hex. The Fortran compiler supported
hex constants and format codes. The instruction fields were
in groups of four bits, unlike the PDP-11, where the instruction
fields, especially register numbers, were grouped in threes.

(Though I still prefer hex for the PDP-11.)

-- glen

Octal (0..7) came about because of a need to 'say' binary. In the
day, the 'character' was six bits wide, all upper case and 'A' was
something like '100001' if I recall correctly. Split into groups of
3 we get '100' and '001'. Everybody could keep that much binary in
their heads and could 'see' four and one.

If asked the value of 'A' it was a blessing to say 'four one' rather
than 'one zero zero zero zero one'.
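
The arithmetic is easy to check. A small illustrative C sketch that
prints the same trick for both character widths (the EBCDIC value is
from the 8-bit era described below):

    /* Illustrative sketch: why each digit grouping fits its era.
       A 6-bit character splits evenly into two octal digits; an
       8-bit byte splits evenly into two hex digits. */
    #include <stdio.h>

    int main(void)
    {
        unsigned six_bit   = 041;  /* 100 001 binary: say "four one" */
        unsigned eight_bit = 0xC1; /* 1100 0001 binary: EBCDIC 'A'   */

        printf("six bits:   %02o octal -> digits %o and %o\n",
               six_bit, (six_bit >> 3) & 07, six_bit & 07);
        printf("eight bits: %02X hex   -> digits %X and %X\n",
               eight_bit, (eight_bit >> 4) & 0xF, eight_bit & 0xF);
        return 0;
    }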

Then in the 1960's everything changed. The integrated circuit, the
IC, arrived (Jack Kilby at TI and Robert Noyce at Fairchild
Semiconductor got there almost simultaneously). Designers could now
put hundreds of transistors on one piece of silicon. The whole idea
of designing circuits from individual transistors died an almost
instant death.

The new ICs were virtually circuit boards on a chip. In 1963 IBM's
7094 CPU was nearly the size of a city bus, required tons of air
conditioning, and had a 36-bit memory word, 6-bit characters,
7-channel magnetic tape, and a fixed record size of 80 characters
per record (the punched card). Perhaps the youngest (last) dinosaur.

IBM at the same time had embraced the IC and in 1964 introduced
their System/360, the first machine of this new generation. The
old six-bit character (BCD) was severely limiting and 7-bit ASCII
was nipping at their heels. Digital Equipment, Data General, Pr1me,
etc. were giving them hell with ASCII. Hence, in an attempted leap
forward, IBM used ICs. The IC designers think 2, 4, 8 and don't
know what to do with 3 or 6.

So the new character is 8 bits and becomes Extended BCD Interchange
Code (EBCDIC). New terms, byte (the 8-bit thingy) and nybble (half a
byte), were current in 1964. I'm not sure IBM coined them.

The IC counters were 4 bits wide, not 3. The modulus was 16, not 8,
and octal died. Hexadecimal was born. Simply add six alpha characters
to the ten decimal ones and we have '0123456789ABCDEF' for our set.
 

Jim Cownie

Nick said:
|> An optimizing compiler consists of many passes that talk to each other
|> using complex data structures and even a special intermediate
|> language. ...

Yes, but at least that is within a single product. It gets much
hairier (managerially and technically) when the interfaces are
between separate products, perhaps even developed by separate
organisations.

Tell me about it! As someone who writes a third-party debugger for a
living, I certainly feel the pain of dealing with many ill-specified
external interfaces. A debugger is certainly dependent on more such
interfaces than a compiler is.
 

Michel Hack

How do we define systems programs?

One view (certainly not the only one) distinguishes programs that need
or expect some privilege (e.g. supervisor state) from those that don't
(though the latter may use privilege-requiring services through a
system-call interface).

In this view, system programs *must* be written carefully, since a malfunction
can have global effects; application programs should never be able to do worse
than shoot themselves in the foot (local and bounded side-effects only). This
last assumption depends of course on a properly fenced execution environment,
which most people don't enjoy.
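
Concretely, an application on a POSIX-ish system can only *request* a
privileged action through the system-call interface; the kernel
decides, and a refusal stays local to the process. A minimal
illustrative C sketch:

    /* Illustrative sketch: an unprivileged process asks the kernel to
       do something privileged and merely gets told no -- the failure
       is local and bounded.  (Run as root it really would set the
       clock to the epoch, so don't.) */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval tv = { 0, 0 };

        if (settimeofday(&tv, NULL) != 0)        /* needs privilege */
            printf("kernel said no: %s\n", strerror(errno));
        return 0;
    }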

Michel.
 
