Is C++ worth it?


Noah Roberts

For example, I start to tend to favour Java over C++ because whenever I
make some grave mistake and get a Memory Access Violation, this is more
or less the end of the world for a C++ application. Java, in contrast,
nicely captures this and gives me an exception. This not only makes me
able to shut down gracefully, but also provides me with enough error
information about where I went wrong (there is no standard way under C++
to retrieve the stacktrace of an exception). I don't understand why such
a feature is not important enough to make it into the C++ standard,
while such gimmicks like lambdas and whatnot get huge attention.

To a great degree this kind of feature IS in the standard. The vector 'at' function for example throws an exception if bounds are violated. This is all Java does.

The main difference here is that C++ doesn't impose this upon you. The 'at' function is provided for times when you really need to do bounds checking and other operations are provided for when you really do not.

I think the problem comes when people pay too much attention to the C part of C++. Raw arrays and such are mostly a thing of the past in C++. There's no really good reason to use them outside of a backward compatibility scenario. I believe the library provided abstractions are even quite competitive in the high-performance, micro-optimization niche.
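
A minimal sketch of the difference being described, assuming a plain std::vector: the checked at() throws std::out_of_range, while operator[] performs no check at all.

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        try {
            int x = v.at(10);              // checked access: throws std::out_of_range
            std::cout << x << '\n';
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }

        // v[10] would be unchecked: undefined behaviour, no exception guaranteed.
    }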
 

Nick Keighley

To a great degree this kind of feature IS in the standard.  The vector 'at' function for example throws an exception if bounds are violated.  This is all Java does.

The main difference here is that C++ doesn't impose this upon you.  The 'at' function is provided for times when you really need to do bounds checking and other operations are provided for when you really do not.

I think the problem comes when people pay too much attention to the C part of C++.  Raw arrays and such are mostly a thing of the past in C++.  There's no really good reason to use them outside of a backward compatibility scenario.  I believe the library provided abstractions are even quite competitive in the high-performance, micro-optimization niche.

low level programming? Interfacing to hardware?
 

boltar2003

I think the problem comes when people pay too much attention to the C part
of C++. Raw arrays and such are mostly a thing of the past in C++. There's
no really good reason to use them outside of a backward compatibility
scenario.

Not if you do any low level or network programming.

I believe the library provided abstractions are even quite competitive
in the high-performance, micro-optimization niche.

For quite competitive read not as fast as.

B2003
 

Jorgen Grahn

Not if you do any low level or network programming.

OK, but there's no excuse not to isolate and encapsulate arrays so
they are largely invisible. If you want potential buffer overflows
everywhere, you might as well use C.
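
A minimal sketch of what such encapsulation might look like; the class name and buffer size here are just placeholders, not anything from the thread.

    #include <array>
    #include <cstddef>
    #include <stdexcept>

    // The raw storage is private; the rest of the program only ever sees
    // a checked interface, so the array itself stays "invisible".
    class PacketBuffer {
    public:
        unsigned char& at(std::size_t i) {
            if (i >= data_.size()) throw std::out_of_range("PacketBuffer::at");
            return data_[i];
        }
        std::size_t size() const { return data_.size(); }
    private:
        std::array<unsigned char, 1500> data_{};   // e.g. one Ethernet MTU
    };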
For quite competitive read not as fast as.

Given the beginner's mistake you did with std::set in another current
thread[0], your opinion doesn't have much weight yet. Work with the
library for a year or so, and you may change your mind.

/Jorgen

[0] <[email protected]>
 

Stefan Ram

Noah Roberts said:
To a great degree this kind of feature IS in the standard. The vector 'at' function for example throws an exception if bounds are violated. This is all Java does.
The main difference here is that C++ doesn't impose this upon you. The 'at' function is provided for times when you really need to do bounds checking and other operations are provided for when you really do not.

It turns out that programmers are notoriously bad at
estimating when bounds checking is needed. Otherwise, there
would be far fewer buffer overrun exploits. Java can detect
certain situations where it can statically prove that
bounds checking is not needed and then omit dynamic bounds
checking in those cases. Some people are posting to a
technical newsgroup and then can't even control their line
lengths to be less than about 72.
 

woodbrian77

Given the beginner's mistake you did with std::set in another current
thread[0], your opinion doesn't have much weight yet. Work with the
library for a year or so, and you may change your mind.

Perhaps he normally uses an alternative to ::std::set.
I prefer Boost Intrusive's rbtree

http://www.boost.org/doc/libs/1_50_0/doc/html/intrusive/set_multiset.html

to ::std::map or ::std::set.
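
A minimal usage sketch, going by the linked documentation; the Node type is just an example. The point of the intrusive container is that the hook lives inside the element and the container never allocates or owns the nodes.

    #include <boost/intrusive/set.hpp>

    namespace bi = boost::intrusive;

    struct Node : public bi::set_base_hook<> {   // hook embedded in the element
        int key;
        explicit Node(int k) : key(k) {}
        friend bool operator<(const Node& a, const Node& b) { return a.key < b.key; }
    };

    int main() {
        Node a(1), b(2);            // caller owns the storage
        bi::set<Node> s;            // the container never allocates nodes itself
        s.insert(a);
        s.insert(b);
        s.clear();                  // unlink before the nodes go out of scope
    }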


Brian
Ebenezer Enterprises
http://webEbenezer.net


I'm not endorsing Romney, but I hope people are
aware of what a loser Obama is by now.
 

Tobias Müller

Stefan Ram said:
It turns out that programmers are notoriously bad at
estimating when bounds checking is needed. Otherwise, there
would be far fewer buffer overrun exploits. Java can detect
certain situations where it can statically prove that
bounds checking is not needed and then omit dynamic bounds
checking in those cases. Some people are posting to a
technical newsgroup and then can't even control their line
lengths to be less than about 72.

To rely on the assumption that lines don't exceed 80 chars is like using a
static buffer of size 80 without bounds checking -- an anachronism.

The limit of 72 chars for top level comments feels like: "Let's restrict
top level comments to 72 chars, then we are on safe side. Nobody will ever
need more than 8 citation levels."

I use an intelligent newsreader that just displays everything correctly,
no matter how long a line is.
Actually, the only problems come from readers/writers that think they must
break every line after 80 chars, no matter whether it's part of a quote. Even
worse, most of them don't even insert quotation marks for the newly created
line. This leads to unreadable text.

Tobi
 

Rui Maciel

Stefan said:
It turns out that programmers are notoriously bad at
estimating when bounds checking is needed. Otherwise, there
would be far fewer buffer overrun exploits. Java can detect
certain situations where it can statically prove that
bounds checking is not needed and then omit dynamic bounds
checking in those cases. Some people are posting to a
technical newsgroup and then can't even control their line
lengths to be less than about 72.

That's only a problem if your usenet client is buggy. Meanwhile, there are
plenty of usenet clients that were adequately developed, and therefore don't
suffer from that limitation.


Rui Maciel
 

Jorgen Grahn

That's only a problem if your usenet client is buggy. Meanwhile, there are
plenty of usenet clients that were adequately developed, and therefore don't
suffer from that limitation.

We have been through this a number of times before. What you call a
"buggy" client is what others call "supports Usenet conventions and
netiquette". It's hard to do anything about that.

(That said, I wish SR hadn't brought Usenet length limits into a
completely unrelated discussion.)

/Jorgen
 

boltar2003

OK, but there's no excuse not to isolate and encapsulate arrays so
they are largely invisible. If you want potential buffer overflows
everywhere, you might as well use C.

If you know the exact size of the data you're dealing with, e.g. a packet
header, then you won't get any buffer overflows because you'll only ever
read in that amount.
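
For illustration, a minimal sketch under that assumption; the 4-byte header layout is hypothetical, and byte order and error handling are kept to a minimum.

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <istream>
    #include <optional>

    // Hypothetical fixed-size header: 2-byte type, 2-byte length.
    struct Header {
        std::uint16_t type;
        std::uint16_t length;
    };

    std::optional<Header> read_header(std::istream& in) {
        std::array<char, 4> raw{};                     // exactly the wire size
        if (!in.read(raw.data(), raw.size())) return std::nullopt;
        Header h{};
        std::memcpy(&h.type,   raw.data(),     2);     // avoid aliasing issues
        std::memcpy(&h.length, raw.data() + 2, 2);
        return h;                                      // byte order ignored here
    }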
Given the beginner's mistake you did with std::set in another current

What beginner's mistake? I was quite well aware of the built-in find() in
set; I'd just never used the standard version on it and was surprised it
wasn't optimised.
thread[0], your opinion doesn't have much weight yet. Work with the
library for a year or so, and you may change your mind.

Thanks, I've been using it for 10 years or so. Go take your patronising
bullshit somewhere else.

B2003
 

Jorgen Grahn

If you know the exact size of the data you're dealing with, e.g. a packet
header, then you won't get any buffer overflows because you'll only ever
read in that amount.

The key here is "/if/ you know". Usually you don't.
What beginner's mistake? I was quite well aware of the built-in find() in
set; I'd just never used the standard version on it and was surprised it
wasn't optimised.

The "beginner's mistake" was to assume algorithms like std::find can
and will do anything clever. Perhaps you're right: perhaps it's not
obvious that that won't happen. It was obvious to /me/, but I can't
recall how I learned it.
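
To make the difference concrete, a minimal sketch: the member find() uses the set's ordering, while the generic std::find only sees a pair of iterators and walks the range.

    #include <algorithm>
    #include <set>

    int main() {
        std::set<int> s{1, 2, 3, 4, 5};

        // Member find: uses the tree's ordering, O(log n).
        auto a = s.find(4);

        // Generic std::find: walks the iterator range, O(n); it cannot
        // exploit the set's structure because it only sees iterators.
        auto b = std::find(s.begin(), s.end(), 4);

        (void)a; (void)b;
    }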
thread[0], your opinion doesn't have much weight yet. Work with the
library for a year or so, and you may change your mind.

Thanks, I've been using it for 10 years or so. Go take your patronising
bullshit somewhere else.

I was not trying to be patronizing. I got the firm impression that
you hadn't used the standard library a lot.

/Jorgen
 

Mark

I once saw an episode of Nova about a plane that crashed.
Basically, something went wrong (a wind-speed sensor froze and started
returning zero wind speed, causing the control software to crash); the
plane sent a crash dump back to the manufacturer over radio, and then
promptly crashed into the ocean.

I am horrified that this software was ever certified. I used to work
on safety critical software for the aerospace industry. All software
should be tested to see how it copes with sensor failure. Generally
such systems will have multiple sensors and redundant
software/hardware that can still work even if significant parts stop
working.
 

unnamed

Use the right tool for the job. For certain use cases, although considerably less as time goes on, you need the low level features of C, C++, or Ada. JITs are getting so good, though, that for typical 32-bit programming, using C++ amounts to a micro-optimization. High level constructs like virtuals or exceptions are often better handled by a JIT.

That said, if you're a developer working on the JIT, you're probably going to be using C++. The language will always remain dominant for infrastructural software. Another good area for C++ is embedded platforms. It's not a good idea to run a JIT on a smartphone.

I'm sorry to say that "modern C++" is still not as safe as managed languages. Ref counting, for instance, is not as safe as GC. Programming is about tradeoffs. Would you take a 10% to 30% performance hit for an easier, safer development environment?
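
One concrete gap between reference counting and tracing GC is cycles; a minimal illustration with std::shared_ptr:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // owning pointer to another node
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;     // reference cycle: the counts never drop to zero
    }                    // both Nodes leak here; a tracing GC would reclaim them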
 

Tobias Müller

unnamed said:
Use the right tool for the job. For certain use cases, although
considerably less as time goes on, you need the low level features of C,
C++, or Ada. JITs are getting so good, though, that for typical 32-bit
programming, using C++ amounts to a micro-optimization. High level
constructs like virtuals or exceptions are often better handled by a JIT.

Our products written in C++ are almost always faster than those of our
competitors (often written in Java), often by a factor of 100 or so.
There are probably other causes as well, but the language is surely one
of them.
That said, if you're a developer working on the JIT, you're probably
going to be using C++. The language will always remain dominant for
infrastructural software. Another good area for C++ is embedded
platforms. It's not a good idea to run a JIT on a smartphone.

I'm sorry to say that "modern C++" is still not as safe as managed
languages. Ref counting, for instance, is not as safe as GC. Programming
is about tradeoffs. Would you take a 10% to 30% performance hit for an
easier, safer development environment?

We almost never have problems with memory leaks; they are rather easy
to detect.
Actually, from what I hear from Java programmers, they have more problems
with GC (GC pauses, nondeterminism) than we have with refcounting/manual
management. And those problems are not as easy to solve as memory leaks.
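
A minimal sketch of the determinism point: with RAII the release happens at a known line, not whenever a collector decides to run.

    #include <cstdio>
    #include <memory>

    struct Connection {
        ~Connection() { std::puts("closed"); }   // runs at a known point
    };

    int main() {
        {
            auto c = std::make_unique<Connection>();
            // ... use c ...
        }                         // destructor runs right here, deterministically;
                                  // a GC finalizer would run at some unspecified later time
        std::puts("after scope");
    }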

IMO, the biggest advantage of Java or .Net is not GC or managed code, but
the huge standard libraries.

Tobi
 

James Kanze

I'm sorry to say that "modern C++" is still not as safe as managed languages.
Ref counting, for instance, is not as safe as GC.

Not using dynamic allocation at all can be safer than GC. And while GC
can allow a higher degree of safety than other systems which manipulate
pointers, the code I've seen in Java and C# doesn't implement this
higher degree of safety (and you can use GC in C++, if you need it).

In the end, it depends on what you mean by safety. If you're handling
web requests on an open server, and you're worried about viruses, then GC
is a must, even in C++; you simply cannot run the risk of a block of
memory being reused as long as there are pointers around which refer to
it in its previous use. (Of course, this safety is really only useful
if you actively mark the memory as invalid, and crash the code if it is
accessed.) For a lot of applications, however, C++ is far simpler and
safer than Java or C#. (Overloaded operators and value semantics are
almost necessary for anything mathematical, for example. You just
cannot do mathematical software in Java.)
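
A minimal sketch of that last point: a small value type where copies are independent and the arithmetic reads like the maths. The Vec2 type here is only an example.

    #include <iostream>

    struct Vec2 {
        double x, y;
    };

    Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
    Vec2 operator*(double k, Vec2 v) { return {k * v.x, k * v.y}; }

    int main() {
        Vec2 a{1.0, 2.0}, b{3.0, 4.0};
        Vec2 c = 0.5 * (a + b);              // reads like the formula, no method calls
        std::cout << c.x << ", " << c.y << '\n';
    }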
 
 

ootiib

Use the right tool for the job. For certain use cases, although considerably less as time goes on, you need the low level features of C, C++, or Ada. JITs are getting so good, though, that for typical 32-bit programming, using C++ amounts to a micro-optimization. High level constructs like virtuals or exceptions are often better handled by a JIT.

So the JIT handles your bugs well? So what? Even embedded debuggers are quite good these days. When there are no bugs, it still loses 2-5x in efficiency to well-written C++. It does not stop there: when the C++ guy uses the GPU for processing too, then it is often orders of magnitude ahead.

That said, if you're a developer working on the JIT, you're probably going to be using C++. The language will always remain dominant for infrastructural software. Another good area for C++ is embedded platforms. It's not a good idea to run a JIT on a smartphone.

For certain limited use cases scripts run fast enough, even in an embedded environment. In those cases there is no need to carry the overhead of a JIT compiler or the like; a lightweight script interpreter is fine. What is most powerful is code compiled to native instructions plus a script.

I'm sorry to say that "modern C++" is still not as safe as managed languages. Ref counting, for instance, is not as safe as GC. Programming is about tradeoffs. Would you take a 10% to 30% performance hit for an easier, safer development environment?

As a developer or as an end user? The end user does not care about your development environment at all. Why should he care how soft your chair is or how large a coffee mug you've got?

The end user notices good performance. 10%-30% is the modern wishful-thinking trend; in practice it is 200%-500%. Sure, I can write trash assembler that loses to Python, but in reality the best competes with the best and the rest is dead.

So what matters is how fast it starts, how fast it runs, how responsive it is. It is the end user's (life)time that good performance saves. Natively compiled languages like C++, C or Objective-C are the winners; there is nothing to be done about it. It is quite a narrow market where performance does not matter, like the free Flash puzzle games for kids (but Flash is again a script, not a JIT, AFAIK).

For some panicked marketroids (they can impact the minds of your bosses) it is time-to-market that matters above everything, even though (unless it is some sort of short-term trend software) you can take the market over later with a better-performing product anyway.

To please such marketroids it is still better to make part of the software using scripts (like Python) and rewrite it in version 2.0. Yes, the scripts might reveal things to your competitors, but JIT-compiled code is also so easy to reverse engineer that there is not much difference there.

If something is worth writing at all, it is worth writing in C++. If you really need GC, stack tracing or whatnot, then just link a library that does it. There simply are no tricks or technologies that you cannot use from C++ if you want them that much.
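
For example, a minimal sketch assuming the Boehm-Demers-Weiser collector (libgc) is installed; link with -lgc, and note the header may be <gc/gc.h> depending on the installation.

    #include <gc.h>        // Boehm-Demers-Weiser collector, link with -lgc

    int main() {
        GC_INIT();
        for (int i = 0; i < 1000; ++i) {
            int* p = static_cast<int*>(GC_MALLOC(1000 * sizeof(int)));
            p[0] = i;      // never freed explicitly; the collector reclaims it
        }
    }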
 

ootiib

Not using dynamic allocation at all can be safer than GC. And while GC
can allow a higher degree of safety than other systems which manipulate
pointers, the code I've seen in Java and C# doesn't implement this
higher degree of safety (and you can use GC in C++, if you need it).

Sometimes a jump in efficiency and safety can be gained by turning off
dynamic de-allocation and by multiprocessing. No special idioms are
needed, unlike with GC versus ref-counting; just turn operator delete()
or free() into a NOP.
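
A minimal sketch of that trick, only sensible in a short-lived worker process where the OS reclaims everything at exit; the array forms and free() are left out.

    #include <cstdlib>
    #include <new>

    // Nothing is ever freed during the run; the OS takes it all back at exit.
    void* operator new(std::size_t n) {
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void*) noexcept {}                 // NOP
    void operator delete(void*, std::size_t) noexcept {}    // sized variant, also a NOP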

Have a separate process for a highly demanding, complex calculation: clone it
for processing, pipe out the results, and terminate it to free all the
resources at once. The peak amount of memory such a process needs usually does
not differ much whether you free memory along the way or not, but it will be a
lot quicker if you don't.
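
A minimal POSIX sketch of that pattern; the computation is a stand-in and error handling is kept to a minimum.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd[2];
        if (pipe(fd) != 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                       // child: compute, write, exit
            close(fd[0]);
            long result = 42;                 // stand-in for the real work
            if (write(fd[1], &result, sizeof result) < 0) _exit(1);
            _exit(0);                         // OS frees all of its memory at once
        }

        close(fd[1]);                         // parent: read the result
        long result = 0;
        if (read(fd[0], &result, sizeof result) < 0) return 1;
        close(fd[0]);
        waitpid(pid, nullptr, 0);
        std::printf("result = %ld\n", result);
    }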

A further advantage is that it is easier to measure the memory needed by a
separate process. That makes it simpler to learn to predict resource usage,
and so to turn off dynamic allocation as well; it also lets you refuse tasks
that are too demanding before even starting them.

Such a trick also scales very well, since those processes can later be spread
over a whole farm when the problems grow and a single PC stops being enough.
I am not sure what Java or C# can offer as competition there; I'm not too
familiar with them.
 

Melzzzzz

On Sun, 19 Aug 2012 13:47:52 -0700 (PDT)
Sometimes a jump in efficiency and safety can be gained by turning off
dynamic de-allocation and by multiprocessing. No special idioms are
needed, unlike with GC versus ref-counting; just turn operator delete()
or free() into a NOP.
That would be possible only if memory cannot be exhausted ;)
 

Melzzzzz

Not using dynamic allocation at all can be safer than GC. And while
GC can allow a higher degree of safety than other systems which
manipulate pointers, the code I've seen in Java and C# doesn't
implement this higher degree of safety (and you can use GC in C++, if
you need it).

In the end, it depends on what you mean by safety. If you're handling
web requests on an open server, and you're worried about viruses, then
GC is a must, even in C++;

Why?

you simply cannot run the risk of a block
of memory being reused as long as there are pointers around which
refer to it in its previous use.

Aha. You mean deallocating a block of memory while it is still in use.
I think that's a bug in the software. It does not usually happen...
 
