Arved said:
That's actually not true. When you're designing your solution - which is the
point that the OP is at - you should most definitely be considering
efficiency. More often than not you cannot make an inefficient solution
efficient without changing it fairly radically, so why not design it properly
first?
It's much more correct to say, get an efficient maintainable solution working
correctly first, then do your last little tweaks if necessary.
The OP correctly seeks to consider algorithms prior to delving into
implementation. Arved wisely shows how correctness and algorithmic efficiency
can share precedence. Joshua reminds us that problem scale affects
performance analysis and design.
The optimizations discussed here so far have steered clear of
micro-optimization, and we have heard a helpful caution against that error.
The OP's goal lets us clarify the difference.
Synthesizing the advice of our gurus, the OP now knows that a couple of
algorithms - character-at-a-time and block-oriented - will work, and at what
problem scales they're likely to help. (Predictions prior to measurement are
always at best hypothetical, but you have to assess likelihood while
planning.) Signposts guard the fringes - watch out for code points that span
more than one char, and for large character sets. Those are the optimizations
and the correctness.
Performance analysis is algorithmic and doesn't seem premature.
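Since the OP's actual task hasn't been shown in this subthread, here's a
rough sketch of the two candidate algorithms using a hypothetical stand-in
task (counting newlines from a Reader). The method names and the 8 KiB buffer
size are my own choices, not anything from the OP:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Illustrative only - counting '\n' stands in for the OP's real work.
public class ScanDemo {

    // Character-at-a-time: simplest to get correct.
    static long perChar(Reader in) throws IOException {
        long count = 0;
        for (int c = in.read(); c != -1; c = in.read()) {
            if (c == '\n') {
                count++;
            }
        }
        return count;
    }

    // Block-oriented: read into a buffer, then scan the filled portion.
    static long perBlock(Reader in) throws IOException {
        char[] buf = new char[8192];  // arbitrary size for the sketch
        long count = 0;
        for (int n = in.read(buf); n != -1; n = in.read(buf)) {
            for (int i = 0; i < n; i++) {
                if (buf[i] == '\n') {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        String text = "one\ntwo\nthree\n";
        System.out.println(perChar(new StringReader(text)));   // 3
        System.out.println(perBlock(new StringReader(text)));  // 3
    }
}
```

Both give the same answer; which one pays off is exactly the
scale-dependent question Joshua raised, and only measurement settles it.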
Micro-optimization would be to start in right away with custom char arrays and
manual buffering through idiosyncratic I/O streams. Whatever the standard API
may lack, it at least provides decent infrastructure for that sort of thing.
So the initial implementation will leave that stuff bog standard, and I'll
bet dollars to doughnuts that in the OP's scenario that'll do. Changing that
layer from the get-go would be tweaking, and constitutes premature
optimization, a.k.a. micro-optimization.
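For the record, "bog standard" means something like the following - the
stock decode-and-buffer stack from java.io, with no hand-rolled char arrays
or custom streams. The UTF-8 choice and the method name are assumptions for
the sketch, not anything the OP specified:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StockStack {

    // Wrap any byte source in the standard decoding + buffering layers.
    static BufferedReader open(InputStream bytes) {
        return new BufferedReader(
                new InputStreamReader(bytes, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        InputStream bytes = new ByteArrayInputStream(
                "hello\nworld\n".getBytes(StandardCharsets.UTF_8));
        try (BufferedReader in = open(bytes)) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Only if measurement later shows this layer to be the bottleneck would
replacing it be anything other than micro-optimization.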
--
Lew
Ceci n'est pas une fenêtre.
..___________.
|###] | [###|
|##/ | *\##|
|#/ * | \#|
|#----|----#|
|| | * ||
|o * | o|
|_____|_____|
|===========|