Is this C or C++?

Öö Tiib

But there is a big difference between waiting for a loop over 1,000,000 items using
the slow stream version versus atoi doing it very fast. It's a matter of waiting
something like 10 seconds more each time you run the app. Even as a programmer I do
not want to wait 10 seconds more if I do not need to. But if it's a matter of
waiting 0.1 seconds, then it's OK.

A focus on speed is the correct C++ attitude, but searching for a silver bullet
is not. Let me try to explain ...

When you are reading and writing tens or hundreds of items (the typical case),
you use the streams with some JSON or XML parser or the like. Speed of
implementation matters, and spending more than two hours is clearly wasteful.

When you are reading and writing billions of items, the performance of the
product matters more. You use or create a special protocol (likely with
compression) and write and optimise special high-speed parsers for it. Weeks
of implementation can be a good investment.

When it is unclear which of the two cases above applies, you ask. If there are
billions of numbers as text, then you also ask who The Idiot was that designed
that protocol. Do not worry: if the interviewer gets offended by that question,
then you do not want to work at that company anyway.

....
It was a big issue here, as there was very heavy bitset usage: an infinite
loop using the bitset all the time. So even a small slowdown will matter.

Requirements change, and data initially assumed to be a few hundred
bytes may grow to megabytes. Then you change the program. You do
not carve programs into rock. You commit code into a revision control
system. You change code a lot. You draw it in sand. Clarity and
maintainability of that "drawing" matter a lot more than the
micro-performance of the program.
 
Victor Bazarov

Victor Bazarov said:
[..]
<shrug> Call it whatever you like.

Yeah, but... the problem comes when you ask somebody else to write
"C++ code" and you expect them to understand it the same way as you
understand it. Then there could be confusion. [..]

In order for "them to understand it the same way as you", you and "they"
need to agree, i.e. make what's known as "a convention" (from the Latin
'conventio', a coming together "in the same place"). Without it you cannot
communicate effectively.

Making a convention (coming to the same place) is a process. When
"they" don't complete the task (don't reach the goal you set for them),
it most likely means that you didn't specify the goal correctly. So,
you evaluate the result *together* and come up with a better
understanding of what the goals are, what is expected as the result. By
doing that you get closer to each other, come to the "same place" (or,
in modern business lingo, "get on the same page").

Good luck!

V
 
woodbrian77

When you are reading and writing tens or hundreds of items (the typical case),
you use the streams with some JSON or XML parser or the like. Speed of
implementation matters, and spending more than two hours is clearly wasteful.

I think there are some games that fit your description,
but they use binary. They have some frequency with which
the tens/hundreds of items are sent, and if the game is popular,
the server has to support thousands of simultaneous users.



[ snip ]
When it is unclear which of the two cases above applies, you ask. If there are
billions of numbers as text, then you also ask who The Idiot was that designed
that protocol.

It might be that the user of the protocol made a poor choice.


Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
 
Jorgen Grahn

A focus on speed is the correct C++ attitude,

Yes, up to a point.
but searching for a silver bullet is not.
Let me try to explain ...

Yes! The main problem with crea's postings in this thread is, IMHO,
that he's searching for a common pattern or a single Golden Rule.

I see that as a beginner's mistake. I've certainly done a lot of it
myself over the years. The dogmas eventually get replaced with
experience: you still don't know what the /best/ way to do FOO is, but
you know one or two decent ways to do FOO.
When you are reading and writing tens or hundreds of items (the typical case),
you use the streams with some JSON or XML parser or the like. Speed of
implementation matters, and spending more than two hours is clearly wasteful.

Minor complaint: depends on what area you work in. I tend to write my
text data formats from scratch, especially when I want them to be
conveniently editable or when I want them to double as a commandline
(i.e. people will sit and type interactively).

But what you say applies: it's not hard to write a parser, and it will
probably be fast enough no matter how you do it. I/O is usually the
bottleneck.
When you are reading and writing billions of items, the performance of the
product matters more. You use or create a special protocol (likely with
compression) and write and optimise special high-speed parsers for it. Weeks
of implementation can be a good investment.
....

Requirements change, and data initially assumed to be a few hundred
bytes may grow to megabytes. Then you change the program. You do
not carve programs into rock. You commit code into a revision control
system. You change code a lot. You draw it in sand. Clarity and
maintainability of that "drawing" matter a lot more than the
micro-performance of the program.

/Jorgen
 
crea

Öö Tiib said:
When it is unclear which of the two cases above applies, you ask. If there are
billions of numbers as text, then you also ask who The Idiot was that designed
that protocol. Do not worry: if the interviewer gets offended by that question,
then you do not want to work at that company anyway.

Yeah, I kind of agree. Although it might still be good to get that job, even
if you need to please them?? :)
Maybe in that situation you could say what they want to hear. Maybe in real
work people are different...
 
Öö Tiib

I think there are some games that fit your description,
but they use binary. They have some frequency with which
the tens/hundreds of items are sent, and if the game is popular,
the server has to support thousands of simultaneous users.

Sure, because it reduces the throughput needed by your server park
by about 4 times. However, you need a scalable architecture for that
service a lot more, because what if there are 100,000 users?
Then you need to add more servers.
It might be that the user of the protocol made a poor choice.

Yes, everybody makes mistakes. It is the inability to recognize
and correct mistakes that indicates idiocy.
 
woodbrian77

Sure, because it reduces the throughput needed by your server park
by about 4 times. However, you need a scalable architecture for that
service a lot more, because what if there are 100,000 users?
Then you need to add more servers.

I don't understand your point in your second sentence.
Are you saying binary isn't as scalable?
 
Öö Tiib

I don't understand your point in your second sentence.
Are you saying binary isn't as scalable?

No. We have a limited amount of effort we can invest in our work.
We can choose where that effort gives the best effect.

When a game becomes popular, the number of simultaneous connections
grows. Sooner or later the players start to complain about annoying
"lag". The game providers react by adding more servers.

There is always some point where a verbose protocol and a scalable
architecture give better results than a laconic protocol and a
non-scalable architecture. Also, it is usually simpler to replace the
protocol than to fix a wasteful architecture.

Therefore, when designing such games, it is better to put more effort
into the scalability of the service itself than into the smallness of
the protocol's messages.
 
Öö Tiib

Minor complaint: depends on what area you work in. I tend to write my
text data formats from scratch, especially when I want them to be
conveniently editable or when I want them to double as a commandline
(i.e. people will sit and type interactively).

JSON is pretty conveniently editable. Just like with JSON or XML, there
are stock parsers for other text formats: CSV, INI, or command line
switches. I have worked in (a bit too) many areas over the years.
I thought that command line UIs are now mostly used for (unit)
testing and scripting; typically people want gesture-based graphical
UIs. What area is it that relies heavily on command line UIs?
 
Ian Collins

Jorgen said:
Minor complaint: depends on what area you work in. I tend to write my
text data formats from scratch, especially when I want them to be
conveniently editable or when I want them to double as a commandline
(i.e. people will sit and type interactively).

JSON is a good fit then. My stock command line option parser accepts
both regular option switches and options expressed as JSON (a file name
or a blob).
 
Jorgen Grahn

JSON is pretty conveniently editable.

(Checks Wikipedia). Yes, it seems reasonably ok (much better than
XML, or ASN.1 BER, or ...) but I normally don't need the burden of a
language with structs. A series of "name=value" lines is almost always
enough. Or sometimes "command argument ...".
Just like with JSON or XML, there
are stock parsers for other text formats: CSV, INI, or command line
switches. I have worked in (a bit too) many areas over the years.
I thought that command line UIs are now mostly used for (unit)
testing and scripting; typically people want gesture-based graphical
UIs. What area is it that relies heavily on command line UIs?

Traditional Unix, from sysadmin tasks and data analysis to networking
protocols. I almost always want to be able to automate things, or
parse data with simple Perl one-liners, and/or feed it to gnuplot ...
I guess that falls under "scripting" above.

My views are similar to Eric Raymond's here:
http://www.catb.org/esr/writings/taoup/html/textualitychapter.html

/Jorgen
 
Alf P. Steinbach

(Checks Wikipedia). Yes, it seems reasonably ok (much better than
XML, or ASN.1 BER, or ...) but I normally don't need the burden of a
language with structs. A series of "name=value" lines is almost always
enough. Or sometimes "command argument ...".

Well, this is getting pretty off-topic, but as opposed to name=value
lines, JSON can represent just about any hierarchical data. I.e. it's as
powerful as you get. And like name=value lines, it's easy enough to write
and to check manually, so IMHO it's a good candidate for

.... one logic to parse them all,

e.g. look at this example:
(http://stackoverflow.com/questions/12394472/serializing-and-deserializing-json-with-boost).

The promise of XML was once that we would be able to apply all the
existing SGML tools and machinery, which advantage would more than
compensate for the complexity. What? Haven't seen any of that? Well
neither have I. But then I haven't really delved into JSON either, so
maybe there's something smelly-for-poking-nose also in there somewhere?


Cheers,

- Alf (wondering, could it be "peeking nose", or is that just eyes?)
 
J. Clarke

(Checks Wikipedia). Yes, it seems reasonably ok (much better than
XML, or ASN.1 BER, or ...) but I normally don't need the burden of a
language with structs. A series of "name=value" lines is almost always
enough. Or sometimes "command argument ...".


Traditional Unix, from sysadmin tasks and data analysis to networking
protocols. I almost always want to be able to automate things, or
parse data with simple Perl one-liners, and/or feed it to gnuplot ...
I guess that falls under "scripting" above.

Just an aside, but Microsoft is also moving back in the direction of a
command line UI for server and network administration. The recommended
install for Server 2012 is "server core", which does not include the GUI
and is to be administered through PowerShell, and much of the current
round of MCSE training is about how to run things using PowerShell.
 
Ian Collins

Alf said:
Well, this is getting pretty off-topic, but as opposed to name=value
lines, JSON can represent just about any hierarchical data. I.e. it's as
powerful as you get. And like name=value lines, it's easy enough to write
and to check manually, so IMHO it's a good candidate for

... one logic to parse them all,

:)

The OS I spend most of my time developing for (SmartOS) uses JSON (and
node.js) extensively. It makes integration with web tools very easy.
The promise of XML was once that we would be able to apply all the
existing SGML tools and machinery, which advantage would more than
compensate for the complexity. What? Haven't seen any of that? Well
neither have I. But then I haven't really delved into JSON either, so
maybe there's something smelly-for-poking-nose also in there somewhere?

I get the impression (from the number of tools written in it) Java
embraced XML more enthusiastically.

XML still has its place (where it began, in document markup). It
certainly makes working with (Open)Office documents a lot easier than
the older binary formats. My favorite XML editor is LibreOffice.

My rule of thumb: if it's going to be published or used as project
documentation, use XML. If not, use JSON.
 
