But the easiest machine language /ever/.
Martin said: What? Even easier than ICL 1900 PLAN or MC68000 assembler? That would be
difficult to achieve.
OK - I haven't touched that since typing ALTER commands into the console.
I said "machine language" and I meant it.
Even a steam powered 1901 (3.6 uS for a half-word add IIRC) running a ...
Even shops that used assembler nevertheless frequently did bug fixes as
machine-language patches, rather than take the time to run the assembler
again. (SPS, the non-macro basic assembler, ran at about 70 lines a
minute, tops.)
Arne Vajhøj said: JWK> Into the 60s, indeed, there were still machines being made
JWK> that had no instruction comparable to the mainframe BASx/BALx
JWK> family, or to Intel's CALL. You had to do a subprogram call by
JWK> first overwriting the last instruction of what you were
JWK> calling with a branch instruction that would return back to
JWK> you.
AV> CDC Cyber did something very similar.
AV> Not very recursion friendly.
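Below is a minimal Python sketch (a made-up toy machine, not any real
instruction encoding) of the convention JWK describes: before branching
to a subroutine, the caller plants a "branch back to me" instruction
over the subroutine's last word, and that is what passes for a return.

    # Toy memory: address -> (opcode, operand).  Purely illustrative.
    memory = {}

    def assemble_subroutine(base):
        # Body: bump the accumulator; the final word is a placeholder that
        # every caller overwrites with its own return branch.
        memory[base] = ("INC", None)
        memory[base + 1] = ("HALT", None)   # overwritten before each call
        return base

    def call(sub_base, return_addr):
        # Plant the return branch over the subroutine's last instruction,
        # then hand back the entry address for the caller to jump to.
        memory[sub_base + 1] = ("BR", return_addr)
        return sub_base

    def run(start, acc=0):
        pc = start
        while True:
            op, arg = memory[pc]
            if op == "INC":
                acc, pc = acc + 1, pc + 1
            elif op == "BR":
                pc = arg
            elif op == "HALT":
                return acc

    sub = assemble_subroutine(100)
    entry = call(sub, return_addr=1)   # patch the callee, then...
    memory[0] = ("BR", entry)          # ...branch to it from the caller
    memory[1] = ("HALT", None)         # execution resumes here afterwards
    print(run(0))                      # -> 1

A nested call to the same subroutine would overwrite the one planted
return branch, which is exactly the "not very recursion friendly" point.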
Quite. I never knew how to boot the Elliott 503 (never got closer to the ...
You're assuming that all machines *have* some sort of "boot ROM". Before
the microprocessor days, that was certainly not always the case. The
"boot ROM", or other methods of booting a machine without manually
entering at least a small amount of "shoelace" code [enough to *load*
the real bootstrap], was a fairly late invention.
Piet said: Actually, the CYBER way wasn't too bad. IIRC the CYBER had a subroutine
instruction that stored the return address in the location that the
instruction referenced and then jumped to the address following that
location. To implement a recursive procedure you started the code of the
procedure with saving the return address to a stack.
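A rough Python sketch of the fix Piet describes (illustrative only; the
names and the return_word/return_stack structures are invented, not real
CDC encodings): the subroutine-jump instruction stores the return address
in the subroutine's head word, so a recursion-safe routine copies that
word to a software stack on entry and restores it before branching back
through it.

    return_word = {}    # the head word, written by the hardware on each call
    return_stack = []   # software stack the recursive routine maintains

    def return_jump(sub, return_addr):
        # What the subroutine-jump instruction does: plant the return
        # address at the head word, then resume just past it.
        return_word[sub] = return_addr

    def recursive_entry(sub):
        # First action of a recursion-safe procedure: save the planted address.
        return_stack.append(return_word[sub])

    def recursive_exit(sub):
        # Last action before returning: put back the address that the final
        # branch-through-the-head-word will use.
        return_word[sub] = return_stack.pop()
        return return_word[sub]

    # Two nested "calls" to the same subroutine:
    return_jump("fact", 0x120); recursive_entry("fact")
    return_jump("fact", 0x200); recursive_entry("fact")    # inner call
    print(hex(recursive_exit("fact")))   # -> 0x200, inner call returns first
    print(hex(recursive_exit("fact")))   # -> 0x120, outer address preserved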
+---------------
| I was fascinated, though by the designs of early assemblers: I first
| learnt Elliott assembler, which required the op codes to be typed in
| octal but used symbolic labels and variable names. Meanwhile a colleague
| had started on a KDF6 which was the opposite - op codes were mnemonics
| but all addresses were absolute and entered in octal. I always wondered
| about the rationale of the KDF6 assembler writers in tackling only the
| easy part of the job.
+---------------
In the LGP-30, they used hex addresses, sort of[1], but the opcodes
(all 16 of them) had single-letter mnemonics chosen so that the
low 4 bits of the character codes *were* the correct nibble for
the opcode! ;-}
[Or you could type in the actual hex digits, since the low 4 bits
of *their* character codes were also their corresponding binary
nibble values... "but that would have been wrong".]
-Rob
[1] The LGP-30 character code was defined before the industry had
yet standardized on a common "hex" character set, so instead of
"0123456789abcdef" they used "0123456789fgjkqw". [The "fgjkqw"
were some random characters on the Flexowriter keyboard whose low
4 bits just happened to be what we now call 0xa-0xf]. Even worse,
the sector addresses of instructions were *not* right-justified
in the machine word (off by one bit), plus because of the shift-
register nature of the accumulator you lost the low bit of each
machine word when you typed in instructions (or read them from
tape), so the address values you used in coding went up by *4*!
That is, machine locations were counted [*and* coded, in both
absolute machine code & assembler] as "0", "4", "8", "j", "10",
"14", "18", "1j" (pronounced "J-teen"!!), etc.
What's interesting about all this hullabaloo is that nobody has
coded machine code here, and knows squat about it.
I'm not talking assembly language. Don't you know that there are routines
that program machine code? Yes, burned in, bitwise encodings that enable
machine instructions? Nothing below that.
There is nobody here who ever visited/replied with any thought relevance that can
be brought forward to any degree, meaning anything, nobody....
[snip]
(e-mail address removed) wrote:
+---------------
| (e-mail address removed) (Rob Warnock) wrote:
| >In the LGP-30, they used hex addresses, sort of[1], but the opcodes
| >(all 16 of them) had single-letter mnemonics chosen so that the
| >low 4 bits of the character codes *were* the correct nibble for
| >the opcode! ;-}
...
| >[1] The LGP-30 character code was defined before the industry had
| > yet standardized on a common "hex" character set, so instead of
| > "0123456789abcdef" they used "0123456789fgjkqw". [The "fgjkqw"
| > were some random characters on the Flexowriter keyboard whose low
| > 4 bits just happened to be what we now call 0xa-0xf]. Even worse,
| > the sector addresses of instructions were *not* right-justified
| > in the machine word (off by one bit), plus because of the shift-
| > register nature of the accumulator you lost the low bit of each
| > machine word when you typed in instructions (or read them from
| > tape), so the address values you used in coding went up by *4*!
| > That is, machine locations were counted [*and* coded, in both
| > absolute machine code & assembler] as "0", "4", "8", "j", "10",
| > "14", "18", "1j" (pronounced "J-teen"!!), etc.
|
| What's interesting about all this hullabaloo is that nobody has
| coded machine code here, and knows squat about it.
+---------------
Think again! *BOTH* of the two examples I gave -- for the LGP-30 & the
IBM 1410 -- *WERE* raw machine code, *NOT* assembler!!! Please read again
what I wrote about the character codes for the instruction mnemonics
*BEING* the machine instruction codes. For the IBM 1410, the bootstrap
code that one types in:
L%B000012$N
*IS* raw machine code, *NOT* assembler!!
[snip all the bullshit]
When you key in that bootstrap, you're typing *RAW* machine code, *NOT* assembler!! You see, the
lower 4 bits of character "b" -- the "mnemonic" for "Bring" -- were
binary 0001, the *same* as the lower 4 bits of the digit "1" (and two
other characters as well). So when you typed a "b" in that position
in "4-bit input mode" you were typing the same thing as the character "1"
which was the same thing as *binary* 0001 which was the absolute machine
opcode for the ""Bring" instruction!! "No assembler required" (pardon
the pun).
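To make the punning concrete, here is a tiny Python sketch. The character
codes in the table are made up for illustration (the real 1410 and LGP-30
used BCD/Flexowriter codes, not these values); the point is only that the
input path keeps the low 4 bits of whatever you type, so a mnemonic letter
and a digit with the same low bits key in the identical opcode nibble.

    CHAR_CODES = {            # hypothetical 6-bit character codes
        "1": 0b000001,        # the digit '1' ...
        "b": 0b100001,        # ... and the letter 'b' ("Bring") share
    }                         # the same low 4 bits: 0001

    def keyed_nibble(ch):
        """What the machine latches when a character is typed in 4-bit mode."""
        return CHAR_CODES[ch] & 0x0F

    print(keyed_nibble("b") == keyed_nibble("1") == 0b0001)   # -> True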
+---------------
| I'm not talking assembly language.
+---------------
Neither was I. The fact that for the two machines I mentioned
absolute machine code was somewhat readable to humans seems to
have confused you, but that's the way some people liked to design
their hardware back in those days -- with clever punning of character
codes with absolute machine opcodes (for the convenience of the user).
+---------------
| Don't you know that there are routines that program machine code?
+---------------
What do you mean "routines"?!? For the above two machines, you can
enter machine code with *no* programs ("routines?") running; that is,
no boot ROM is required (or even available, as it happened).
A friend of mine had an early 8080 micro that was programmed through
the front panel using knife switches ... toggle in the binary address,
latch it, toggle in the binary data, latch it, repeat ad nauseam. It
had no storage device initially ... to use it you had to input your
program by hand every time you turned it on.
I did a little bit of programming on it, but I tired of it quickly.
As did my friend - once he got the tape storage working (a new PROM)
and got hold of a shareware assembler, we hardly ever touched the
switch panel again. Then came CP/M and all the initial pain was
forgotten (replaced by CP/M pain).
I'm not talking assembly language. Don't you know that there are routines
that program machine code? Yes, burned in, bitwise encodings that enable
machine instructions? Nothing below that.
There is nobody here who ever visited/replied with any thought relevance that can
be brought forward to any degree, meaning anything, nobody....
What are you looking for? An emulator you can play with?
Machine coding is not relevant anymore - it's completely infeasible to
input anything but the smallest program by hand. My friend had a BASIC
interpreter for his 8080 - about 2KB which took hours to input by hand,
and heaven help you if you screwed up or the computer crashed.
George
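George's "took hours" figure is easy to sanity-check. The seconds-per-byte
rate below is an assumption for illustration, not something from his post:
each byte means toggling and latching an address, then toggling and
latching the data.

    bytes_to_enter = 2 * 1024      # "about 2KB" of BASIC interpreter
    seconds_per_byte = 8           # assumed: two toggle-and-latch operations

    hours = bytes_to_enter * seconds_per_byte / 3600
    print(f"~{hours:.1f} hours of error-free switch flipping")   # ~4.6 hours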
(e-mail address removed) wrote:
*IS* raw machine code, *NOT* assembler!!

I don't see the distinction. Just disassemble it and find out.
Each op is a routine in microcode. That is machine code. Those op
routines use machine cycles.

Martin said: There's a 1:1 relationship between machine code and
assembler. Unless it's a macro-assembler, of course!

Not necessarily. An awful lot of CPU cycles were used before microcode
was introduced. Mainframes and minis designed before about 1970 didn't
use or need it, and I'm pretty sure that there was no microcode in the
original 8/16 bit microprocessors either (6800, 6809, 6502, 8080, 8086,
Z80 and friends).
The number of clock cycles per instruction isn't a guide either. The only
processors I know that got close to 1 cycle/instruction were all RISC,
all used large lumps of microcode and were heavily pipelined.
By contrast the ICL 1900 series (3rd generation mainframe, no microcode,
no pipeline, 24 bit word) averaged 3 clock cycles per instruction.
Motorola 6800 and 6809 (no microcode or pipelines either, 1 byte fetch)
averaged 4 - 5 cycles/instruction.
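As a rough illustration of why cycles-per-instruction alone says little,
here is the arithmetic with assumed clock rates (the clock rates are not
from the post; only the CPI figures above are):

    machines = {
        # name             (assumed clock in Hz, cycles per instruction above)
        "ICL 1900-style":  (1_000_000, 3),
        "Motorola 6800":   (1_000_000, 4.5),
    }

    for name, (clock_hz, cpi) in machines.items():
        ips = clock_hz / cpi
        print(f"{name}: ~{ips/1000:.0f}k instructions/second "
              f"at {clock_hz/1e6:g} MHz")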
One problem with this discussion is that the term "microcode" isn't
really well-defined. There's the vertical kind, the horizontal kind,
with and without internal control-flow constructs, and then there are
various levels of visibility to the user -- see e.g. the pdp-8 manual,
where "microcoding" is used to mean piling the bits for a bunch of
instructions together in the same memory location, which works fine as
long as the instructions in question don't use conflicting sets of bits.
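The PDP-8 usage mentioned at the end is easy to show with a few lines of
Python. The octal values are the commonly documented Group 1 OPR encodings
(quoted from memory, so treat them as illustrative); each micro-operation
is a separate bit of one instruction word, so several can be OR-ed into a
single instruction as long as they don't clash.

    OPR_GROUP_1 = 0o7000
    CLA, CLL, CMA, CML = 0o200, 0o100, 0o40, 0o20
    RAR, RAL, IAC = 0o10, 0o4, 0o1

    def combine(*micro_ops):
        """OR micro-ops into one OPR word; flag the one obvious clash."""
        word = OPR_GROUP_1
        for op in micro_ops:
            word |= op
        if (word & RAR) and (word & RAL):
            raise ValueError("RAR and RAL conflict in a single word")
        return word

    print(oct(combine(CLA, CMA)))   # -> 0o7240, the classic "set AC to all ones"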
Martin said: Not necessarily. An awful lot of CPU cycles were used before microcode
was introduced. Mainframes and minis designed before about 1970 didn't
use or need it
Rob said: What was the corresponding 1401 boot sequence?
No, most S/360s used microcode.
Martin said: I never used an S/360.
I thought microcode came into the IBM world with S/370 and Future Series
(which later reappeared as the AS/400, which I did use). Didn't the S/370
load its microcode off an 8 inch floppy?