I don't think Eric was saying that there was no problem, but that
standardising the solution in C was unnecessary. In those days one
accepted that there would be problems transferring source code between
systems, so I can imagine solving the problem with a source-code
character-mapping utility that had nothing to do with C.
Standardising this mapping helps, but why do it in C? The problem
exists for many languages so a better solution might have been a
portable representation for any text that needs characters not easily
typed (or represented) on some system. C would then be trigraph-free,
and there would be a single solution suitable for many texts.
Which characters are missing?
IIRC the most common omissions were {, }, [ and ]. These were not
directly available on many keyboard layouts (particularly Scandinavian
and French keyboards). PC keyboards typically had these as AltGr keys
on the numbers, but the problem pre-dates such sophisticated things!
I remember even my first computer keyboard (a teletype) had pretty
much all of the characters that might be needed. What it didn't have
was lower case, which you'd think would be more of an issue for C
code.
Yes, it would be. I never had to write C with such a thing. BCPL, yes,
but then BCPL's keywords were not lowercase.
Algol 68 addressed this issue by not specifying how the abstract tokens
of the language were to be represented in the source. One could choose,
to some extent, how keywords were written: "UPPER stropping" (WHILE,
IF, etc.) and "POINT stropping" (.while., .if.) were both common, as
was using one or two quotes. Most symbols, such as the multiplication
operator, had an alternate spelling (often as another symbol but also
as a keyword) which could be typed on almost any system.