Ben Bacarisse said:
By writing the simplest and most obvious bit of code! There is no
actual programming problem here. An endless loop is not a problem.
Putting the data from an endless loop into some buffer is a problem.
I haven't specified how the data will be put into a buffer. You would expect
any code that did that to be aware that the buffer is of finite size; for
example, code using fgets().
Look at what you said:
In fact I don't know if there's any other language group that would
seriously discuss the possibility of exhausting memory (on a machine
that has more memory than existed in the whole world a few decades
ago) while reading a simple response from a keyboard!
If the program does not consume memory, no one cares.
The choice seems to be between using next to no memory (your preference),
using a small line buffer of perhaps 2000 bytes on a machine that might have
multiples of 1,000,000,000 bytes (my preference), or potentially using up all
the available memory (which, for some reason, some see as a consequence of a
line-buffered solution, perhaps because they don't like the idea of not
coping with lines of unreasonable length).
You seemed to
object to the absurdity of considering the possibility, as if it either
could not happen or was never a problem when it did. You are clearly
mocking the very idea of discussing the issue in these days of
multi-GB machines.
No. I just would never let it get to the point where the memory capacity
would be under threat from something so trivial. It is desirable, from a
coding point of view, and especially when programming at a higher level (in
languages such as Python), to be able to iterate easily through the lines of
a file. That requires that each iteration presents you with a string
containing the contents of the line.
At this level, you don't want to mess about with the character-at-a-time
treatment that has been discussed. You read the line, and it should just
work. The underlying system (most likely a C implementation) should make
sure it behaves as expected.
It is entirely reasonable to expect a multi-GB machine to have enough
capacity for a string containing /one line/ of a text file. It is not
reasonable to compromise these expectations because of the rare possibility
that someone will feed it garbage (i.e. a file that is clearly not a
line-oriented text file). Deal with that possibility, yes, but don't throw
out the baby too.
Raising an error is one way. The probability is that something /is/ wrong,
and the requirement to find space to store such inputs means that such
checks will be in place. They might not be with a solution that just scans
characters from a stream, which then effectively hangs when it is given a
series of billion-character lines to deal with.
(Although my main objection to the character solution is that it is at a
lower level than line-based ones. If you are using line-based input in the
application anyway, you don't want to mix it up with low-level access.)