Öö Tiib
But there is a big difference between waiting for a loop over 1000000 items doing
the slow stream version versus atoi doing it very fast. It's a matter of waiting
something like 10 seconds more each time you run the app. Even as a programmer I do
not want to wait 10 seconds more if I do not need to. But if it's a matter of
waiting 0.1 seconds, then it's OK.
Focus on speed is the correct C++ attitude, but searching for a silver bullet is
not. Let me try to explain ...
When you are reading-writing tens or hundreds of items (the typical case) then
you use the streams with some JSON or XML parser or the like. The speed of
implementing it matters more, and spending more than two hours on it is clearly wasteful.
When you are reading-writing billions of items then the performance of the product
matters more. You use or create a special protocol (likely with compression)
and write and optimise special high-speed parsers for the protocol. Weeks
of implementing can be a good investment.
When it is unclear which of the two cases above it is, then you ask. If there are
billions of numbers as text then you also ask who The Idiot was who designed that
protocol. Do not worry, if the interviewer gets offended by that question
then you do not want to work in that company anyway.
....
It was a big issue here, as there was very heavy bitset usage: an infinite
loop using the bitset all the time. So even a small slowness will matter.
The requirements change, and data initially assumed to be a few hundred
bytes may grow to megabytes. Then you change the program. You do
not carve programs into rock. You commit code into a revision control
system. You change code a lot. You draw it onto sand. Clarity and
maintainability of that "drawing" matters a lot more than the
micro-performance of the program.