And I certainly hope I never reach a point where a ten-minute project
takes two hours to write and maybe 8-10 hours thereafter to produce an
elaborate, complicated, inefficient version which we still aren't totally
sure works.
Well, dear boy...
We're pretty sure that:
(1) Your off-by-one strlen didn't work. I guess I should say that it
is a minor saving grace of yours that the code was so simple that the
bug was easy to find, and I mean this in all sincerity.
It may be that my long and Meaningful names in certain cases
conceal bugs, from a raw statistical perspective. The fewer men, the
greater share of honor, said Henry V at Agincourt: the fewer
characters, the fewer places for bugs to hide. Based on this I may
reduce, somewhat, the length of the names I used.
(2) Your %s replacer didn't work. Here, another *mitzvah*, another
grace note, was that you told us about the bug. But in cases (1) and
(2) you proved that, under the standards to which you subject others,
including Schildt, you are not very competent. In (1), you made a basic mistake,
not a mistake of style or standards. In (2) you refused to fix the bug
you reported.
(3) You've taken a whole week, as of yesterday by my clock, without
even trying to solve the problem, which was to do replace() without string.h;
therefore the minimum amount of time you apparently need is w+x where
w=one week to not do anything and x is completely unknown. It's too
late to redeem yourself on this problem, since we can't be sure you
won't plagiarize me.
Now, as to assurance that my program works, as best we can tell.
We can see from a good English or other natural-language description
of the algorithm used that the algorithm works. Here is a good
description of the algorithm in English:
replace(master, target, replacement)
assert that target is not of zero length [there is no need to check
for NULL masters, targets or replacements in this step]
clear a linked list (set its header to null). Each node shall contain
the starting index and length of a "prefix" in the master, where a
"prefix" is the material to the left of a target occurrence or the end
of the string, along of course with the link. Note that a "prefix" can
be of zero length when two targets are adjacent.
from left to right, find all non-overlapping occurrences of the
target. When each occurrence is found, add the starting index and the
length of its "prefix" to the linked list UNLESS (1) this prefix is of
zero length, AND (2) the most recent member of the linked list already
records the occurrence of one or more prefixes of zero length between
adjacent targets, using a method to be chosen by the mere coder, as
long as that method records the number of such prefixes. If (1) and
(2) obtain, merely increment that member's count of zero-length
prefixes. While scanning left to right, sum-total in X the lengths of
all prefixes plus, for each occurrence of the target, the length of
the replacement, since it is the replacement and not the target which
shall occupy the output.
in the above, add a linked list member for the "prefix" which appears
between the rightmost target and the end of the string only when this
prefix is of nonzero length.
in the above keep the index used to match the target character by
character, for you shall need its value below.
allocate X+1 bytes of storage, the extra byte being for the
terminating NUL [since we assume we shall use C]
concatenate all members of the linked list to the mallocated storage
by walking it: for each member append the prefix in it and then append
the replacement string if, and only if, you are either not at the last
member or the last match of the target succeeded, indicating that the
master ends in a target. Note that all you need to do is have the last
value of the index used in the target to match: if this points at NUL
then the master ends with a target.
Now, from this algorithm statement a junior programmer (such as you,
Peter) can verify that the C code works in half an hour, IF he can
parse reasonably complex English sentences.
Which brings up in fact a very, very interesting consideration.
It is that the common programmer reaction to The Code of the Other,
one of Shock and Awe, may in fact be the same psychological event as
the common reaction of many numerate but aliterate people to English
prose of a certain level of complexity.
The above algorithm description may be as "hard to read" as my code,
and it is certainly almost impossible for 90 percent of actual
programmers to write...despite the fact that interacting with texts of
this nature is an excellent way of writing software. This is something
Dijkstra knew, although he would strongly disagree with my belief that
the "informal" use of the above species of English could replace
mathematical formalism in practice.
But the "unreadability" of the above algorithm is not due to
verbosity. It is a social phenomenon in the US and other "developed"
societies because (as many English professors will tell you) the
ability to write a sentence above a small upper bound of nesting and
complexity has been in a frighteningly rapid decline, tracking the
ability to read a coherent sentence above a small upper bound.
We see evidence for this here in the charges of "verbosity" when they
mean "I have a Master's degree,"
"I have a Master's degree" - Ask Mister Science, Duck's Breath Mystery
Theater
"...but I don't understand this".
Recall the passage from Dijkstra that I quoted earlier:
"Oh yes. In Europe a much larger fraction of the population CAN WRITE.
People in America have really suffered from the combination of TV,
which makes reading superfluous, and the telephone. A few years ago I
was at CalTech and that is a high quality place and everybody
recommends it because the students are so bright. A graduate confessed
to me—no he did not confess, he just stated it, that he did not care
about his writing, since he was preparing himself for an industrial
career. Poor industry!" - EWD 1985, cf
http://www.cs.utexas.edu/users/EWD/misc/vanVlissingenInterview.html
That is, the fact that many people self-select or are tracked towards
programming careers because of difficulties with reading & writing
such as dyslexia may explain their difficulties with maintenance
programming (the need to connect with the Code, that is the Text, of
the Other) and in coding above small lower bounds of complexity. They
may in fact not be able to formulate what it is they are doing in
English.
But: this model doesn't explain why Peter Seebach in fact seems to
write rather well, and has published a programming book. It may be
that there is a writing facility that "flies under the radar" in the
sense that the individual in question has learned how to write
effectively in a SIMPLE style, at the cost of ignoring complexity.
But this "flat" writing style tends to run out of oxygen at higher
levels, as witness the real rage of formally well-educated and
financially successful people with post-baccalaureate degrees at the
texts they were expected to master in graduate school. I've met more
than one English MA working in industry in various capacities who've
mentioned that they specialized in Milton but who, in response to my
questions about Paradise Lost, have responded with some ill-concealed
hostility that they really, really HATE fucking Milton, having had to
analyze Milton's famously complex grammar. They were in my opinion
unprepared by more basic English classes where the Little Darlings of
post-Sixties schools were spared the sentence diagrams we had to do,
and which I teach today.
Note that to adequately describe the replace algorithm above, I have
to fly above the upper bound of complexity found in most texts
(newspaper articles, emails, and popular novels) today OF NECESSITY.
What this means is that "complex code" is NOT the problem. My coding
tics, such as my pre-Simonyi Hungarian, can be rapidly modified using
a good editor, and the underlying code will STILL present to many here
as "too complex", not because I'm a monstrum horrendum or a terrible
programmer, but because industrial society parks people in little
boxes labeled "numbers person" or "words person".
One more remark about the above natural language algorithm. In terms
of the pornographic and sado-masochistic division of labor
("specifications" versus "coding") it is neither fish nor fowl, and
this is as it should be. This is because a good programmer is a good
specifier and vice-versa, and a minimally acceptable programmer can
WRITE NATURAL LANGUAGE. Extreme Programming, hasta la victoria
siempre ("until victory, always")!