(snip on the evolution of I/O systems)
That makes sense. As CPU speeds have increased, it seems like system
design has evolved into a single-minded exercise of hiding/eliminating
data latency so that CPUs don't spend all their time stalled.
The usual disks for S/360 and S/370 ran at 3600 RPM. Not so many years
ago, that was still common. Now 7200 RPM is reasonably common.
Bytes per track have increased, and with them transfer rate, but
latency much less: average rotational latency is half a revolution,
so doubling from 3600 to 7200 RPM only takes it from about 8.3 ms
to about 4.2 ms.
Even in mobile chips, where power consumption is often more important
than raw speed, we're now throwing cache at relatively slow RAM and
flash so that the CPU can do its work and power off as quickly as
possible rather than stall in a high-power state.
The read() model really doesn't accommodate that, though, unless you
have one thread per buffer. You have to move to aio_read() for that to
work with a single thread, and AIO is sufficiently painful that most
programmers seem to prefer multiple threads--and the synchronization
problems that introduces.
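
For what it's worth, double buffering from a single thread with POSIX
AIO comes out something like this. A minimal sketch, assuming a POSIX
system (link with -lrt on Linux); error handling is trimmed, the file
name is made up, and process() stands in for real work:

#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BUFSZ 65536

static void process(const char *buf, ssize_t n)
{
    (void)buf; (void)n;              /* stand-in for real work */
}

int main(void)
{
    static char bufs[2][BUFSZ];
    struct aiocb cb[2];
    int fd = open("data", O_RDONLY);
    off_t off = 0;
    int cur = 0;

    if (fd < 0)
        return 1;
    memset(cb, 0, sizeof cb);
    for (int i = 0; i < 2; i++) {
        cb[i].aio_fildes = fd;
        cb[i].aio_buf = bufs[i];
        cb[i].aio_nbytes = BUFSZ;
    }

    cb[cur].aio_offset = off;
    aio_read(&cb[cur]);              /* prime the first buffer */

    for (;;) {
        const struct aiocb *list[1] = { &cb[cur] };
        ssize_t n;

        aio_suspend(list, 1, NULL);  /* wait for the current buffer */
        n = aio_return(&cb[cur]);
        if (n <= 0)
            break;                   /* EOF or error */
        off += n;

        cb[1 - cur].aio_offset = off;
        aio_read(&cb[1 - cur]);      /* start filling the other buffer */

        process(bufs[cur], n);       /* overlaps with the read above */
        cur = 1 - cur;
    }
    close(fd);
    return 0;
}

Even in this stripped-down form, the bookkeeping is visible; it's
easy to see why people reach for threads instead.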
For the read() model, buffering should be done by the system, with
the assumption of sequential access. When access isn't sequential,
it won't help much.
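
On POSIX systems you can at least tell the kernel which assumption
to make. A sketch using posix_fadvise(), with a made-up file name:

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data", O_RDONLY);

    if (fd < 0)
        return 1;

    /* Hint that access will be sequential: the kernel reads ahead
       aggressively, so plain read() loops rarely stall. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    /* For the non-sequential case, POSIX_FADV_RANDOM turns
       readahead off instead of letting it waste cache. */

    close(fd);
    return 0;
}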
But I do remember first learning about C's character-oriented
(getchar()/putchar()) I/O, and wondering how efficient
it could be.
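
It turns out to be quite efficient, because getchar() normally just
pulls the next byte out of a stdio buffer. A sketch along the lines
of the classic K&R implementation, with invented names rather than
any real libc's internals:

#include <stdio.h>      /* for EOF, printf */
#include <unistd.h>     /* for read(), ssize_t */

#define MYBUFSIZ 4096

struct myfile {
    int   cnt;          /* characters left in the buffer */
    char *ptr;          /* next character to hand out */
    int   fd;           /* underlying file descriptor */
    char  buf[MYBUFSIZ];
};

/* Refill the buffer: one read() per MYBUFSIZ characters. */
static int my_fillbuf(struct myfile *f)
{
    ssize_t n = read(f->fd, f->buf, MYBUFSIZ);

    if (n <= 0)
        return EOF;
    f->cnt = (int)n - 1;
    f->ptr = f->buf;
    return (unsigned char)*f->ptr++;
}

/* The common case is a decrement, a load, and a pointer bump;
   the system call is amortized over thousands of characters. */
#define my_getchar(f) \
    (--(f)->cnt >= 0 ? (unsigned char)*(f)->ptr++ : my_fillbuf(f))

int main(void)
{
    struct myfile f = { .cnt = 0, .ptr = NULL, .fd = 0 };  /* stdin */
    long total = 0;

    while (my_getchar(&f) != EOF)
        total++;
    printf("%ld characters\n", total);
    return 0;
}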
Was explicit double-buffering on OS/360 related to batch processing and
the need to keep the CPU busy with a single job, whereas more modern
systems have roots in time-sharing systems?
I believe double buffering went back to single-task systems before
OS/360, such as those on the 7090. So, yes, and to when processor
and I/O timing made it about right. I believe records were unblocked
at that time: a program might read 80-character cards, or
80-character records off tape.
OS/360 was designed for multitask batch, but not with so many tasks
running at once as you might expect for timesharing. Smaller OS/360
systems would run unblocked, but larger ones would block records.
A common choice was a 3520-byte block of 44 records, 80 bytes each.
Programs would process them 80 bytes at a time, but the I/O system
moved 3520 bytes at a time. That was half a track on a 2314 disk,
and reasonably space efficient on 9-track tape. (9-track tape has an
inter-block gap of 0.6 inch at 800 or 1600 bytes/inch.)
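
To put numbers on the tape case: at 1600 bytes/inch, a 3520-byte
block occupies 2.2 inches against a 0.6-inch gap, so about 79% of
the tape holds data; an unblocked 80-byte record would be 0.05 inch
against the same gap, about 8%. On the program side, deblocking
looks roughly like this, a sketch using a plain byte stream and a
made-up file name rather than real OS/360 access methods:

#include <stdio.h>

#define LRECL 80            /* logical record length */
#define BLKSZ 3520          /* block size: 44 records of 80 bytes */

static void process_record(const char *rec)
{
    fwrite(rec, 1, LRECL, stdout);   /* one 80-byte logical record */
    putchar('\n');
}

int main(void)
{
    FILE *f = fopen("blocked.dat", "rb");
    char block[BLKSZ];
    size_t n;

    if (f == NULL)
        return 1;
    while ((n = fread(block, 1, BLKSZ, f)) >= LRECL) {
        /* one physical transfer, many logical records */
        for (size_t off = 0; off + LRECL <= n; off += LRECL)
            process_record(block + off);
    }
    fclose(f);
    return 0;
}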
OS/360 was close to the beginning of device-independent I/O.
Programs could be written independent of the actual I/O device,
which would be selected in JCL when the program was run.
Unlike the unix character-oriented model, it was record-oriented,
but those records could live on cards, paper tape, magnetic tape,
or magnetic disk.
That goes along with early Fortran having separate I/O statements
for cards, drums, and tape. With Fortran IV, and as standardized
in Fortran 66, programs use unit numbers, the actual device being
hidden from the program.
Similar to unix stdin and stdout, OS/360 (and still z/OS) programs
use a DDNAME to select a device. A DDNAME is an 8-character name;
common ones are SYSIN and SYSPRINT, roughly corresponding
to unix stdin and stdout. (SYSIN often has 80-character records,
and SYSPRINT often 133, including carriage control.)
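
As an illustration of those fixed-length print records, here's a
rough analogy in C (put_line is an invented helper, not any z/OS
API). Byte 1 of each 133-byte record is ANSI carriage control; the
other 132 are data, blank-padded:

#include <stdio.h>
#include <string.h>

#define LRECL_PRINT 133     /* 1 byte carriage control + 132 data */

/* cc is ANSI carriage control: '1' = skip to new page,
   ' ' = single space, '0' = double space. */
static void put_line(FILE *sysprint, char cc, const char *text)
{
    char rec[LRECL_PRINT];
    size_t len = strlen(text);

    if (len > LRECL_PRINT - 1)
        len = LRECL_PRINT - 1;
    rec[0] = cc;
    memset(rec + 1, ' ', LRECL_PRINT - 1);  /* blank-pad the record */
    memcpy(rec + 1, text, len);
    fwrite(rec, 1, LRECL_PRINT, sysprint);  /* fixed length, no '\n' */
}

int main(void)
{
    put_line(stdout, '1', "PAGE HEADING");
    put_line(stdout, ' ', "DETAIL LINE");
    return 0;
}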
-- glen