P.J. Plauger said:
Wait, is Java a modern language superior to C, or is it still
going through growing pains?
I think it is both. It is a much better OO language than C++, since its
designers learnt from C++'s mistakes.
A big advantage over C is that you have automatic memory allocation. No more
messing about with testing malloc() returns and then painstakingly freeing your
half-built structure on failure, just to cope with a refusal to give you a
hundred bytes - a refusal that in practice can never happen.
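To see what that means in practice, here is roughly what safely building a
trivial two-field structure costs in C (a made-up example):

    #include <stdlib.h>
    #include <string.h>

    struct record {
        char *name;
        double *samples;
    };

    /* Build a record; returns NULL on any allocation failure,
       freeing whatever was already built. */
    struct record *record_create(const char *name, size_t nsamples)
    {
        struct record *r = malloc(sizeof *r);
        if (r == NULL)
            return NULL;
        r->name = malloc(strlen(name) + 1);
        if (r->name == NULL) {
            free(r);                /* back out the half-built structure */
            return NULL;
        }
        strcpy(r->name, name);
        r->samples = malloc(nsamples * sizeof *r->samples);
        if (r->samples == NULL) {
            free(r->name);          /* ...and again, one level deeper */
            free(r);
            return NULL;
        }
        return r;
    }

In Java the whole dance disappears: allocation either succeeds or throws.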
A big disadvantage relative to C is that same automatic memory allocation -
now we can't fine-tune the program where we need performance.
A popular dialect of
C (never standardized) used far and near pointers, to good
effect, to deal with a practical architectural issue. (They
even proved useful on a couple of non-X86 platforms as well.)
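For anyone who never met the dialect, it looked roughly like this (Borland
and Microsoft DOS compilers; near and far were never standard C):

    /* DOS real-mode dialect - not standard C */
    char near *buf;                             /* bare 16-bit offset into the data segment */
    char far  *video = (char far *)0xB8000000L; /* full segment:offset pair - B800:0000,
                                                   the text-mode video memory */

One keyword bought you precise control over which pointers paid the cost of a
full segment:offset pair.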
Whatever problems they caused pale compared to the races,
deadlocks, and sheer lack of portability inherent in Java
threading.
All practical C programs use pointers. Most Java programs don't need
threads.
What I have never done is write an all-singing, all-dancing interactive Java
application. I just use it for providing a simple graphical UI.
I would prefer C, but there is no standard windowing library that all
platforms (with appropriate hardware) are guaranteed to support.
This would be a trivial job - just define half a dozen functions to open a
window, close it, draw a pixel, query the mouse position, and synchronise
the screen. However, no one has done so, and it is non-trivial to implement
such an interface on top of another high-level interface.
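The entire interface might amount to no more than this (a sketch; the names
are invented):

    /* miniwin.h - hypothetical minimal portable windowing interface */
    int  win_open(int width, int height, const char *title); /* 0 on success */
    void win_close(void);
    void win_plot(int x, int y, unsigned long rgb);           /* draw one pixel */
    int  win_mouse(int *x, int *y);                           /* nonzero while a button is down */
    void win_sync(void);                                      /* make the drawing visible */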
Java indeed "solved" a number of problems that C never has,
like flexible types and arithmetic representations. C hasn't
solved them because some of us understand that they're not
problems.
It depends what you are doing. Java aims for rigorous portability - the same
program produces the same output, regardless of platform. Obviously we
would choose this, other things being equal.
C aims for the next level of portability - the same program produces correct
output, regardless of platform.
So let's say my task is to load in a model of a molecule, rotate it given
user angles, and then output the result as a JPEG image. If the patron
specifies that the JPEG file must be, byte for byte, exactly the same on
every platform, then Java is the language to go for. There might be good
reasons for demanding this - for instance, you can test the program
automatically. Normally, however, the patron is happy with an image that he
can view and that shows the molecule in the orientation he entered.
Right. And that's exactly what you do with *every* program
that purports to be portable in the real world. Try telling
a project manager, "Don't bother to test the Java (or Perl,
or whatever) components before shipping -- they're guaranteed
to work on all supported platforms." Uh huh.
But this is an unacceptable situation, which we must work to change. An
engineer working for a bolt factory expects to have to test his bolts for
stress, manufacturing tolerances, or whatever. The engineer building a
bridge expects to order the bolts, and for them to do what it says on the
box. That's the way it should be with software.
I agree that writing portable code takes effort, which is
why I often advise people not to aim for portability unless
they're really sure they'll actually do one or more ports.
Otherwise, you'll never recoup the investment. I simply
observe that when I do choose to invest in portable C code,
the extra cost is not that great for the reward in extra
usability/marketability.
I think a common mistake is to suppose that the main benefit of writing
portable code is to be able to compile and run it on a different platform.
It sounds commonsensical. In fact, by making code portable, you improve the
logical structure of your program, making it more readable, and less likely
to contain errors. That is often a much greater benefit.
Probably written in C, and callable from C. So you think a
programming language has to ship with a huge library to call
itself portable? I don't.
I don't know about that one. I use Java for trivial GUI apps, precisely
because C has no standard GUI library. I would prefer to keep all my code in
the same language.
All of those are nice, and all of them are available in most of
the C compilers I use. Requiring them to be in the standard
library shipped with a language translator is, however, a
two-edged sword.
With most of these things, you have to choose between them and pointers. It
is easy enough to serialise a data structure, as long as it has no pointers.
It is a bit harder to provide threading, but a lot harder still if you
cannot protect memory. The same goes for safe arrays and for preventing
memory leaks.
C is the language of pointers.
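For a pointer-free structure the job really is a few lines (a sketch that
ignores byte order and padding, which truly portable code would have to
address):

    #include <stdio.h>

    struct point3 { double x, y, z; };   /* no pointers - flat in memory */

    int point3_save(const char *path, const struct point3 *p)
    {
        FILE *fp = fopen(path, "wb");
        size_t n;
        if (fp == NULL)
            return -1;
        n = fwrite(p, sizeof *p, 1, fp); /* the whole structure in one call */
        fclose(fp);
        return n == 1 ? 0 : -1;
    }

Add one pointer to the structure and you must chase and rebuild it on both
ends.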
Excuse me, but you made two statements about things that go on
inside my head. And you happen to be wrong. You're defending them
by stating opinions that come from inside *your* head. And I
happen to believe most of those opinions are not supported in
the real world. I am indisputably the world's foremost authority
on what goes on inside my head. You seem to have problems
distinguishing my reality from your personal reality from the
real world we both inhabit. I try not to.
How do you know you are not suffering from a mental illness?
Right. But that's not the *only* thing I've said, by a
long shot. Your syllogisms are once again bass-ackwards.
I'm using Fortran 77. When I started my new position, everyone was using it,
because it was the standard for academic programming. I decided to put I/O
and memory allocation into C and keep the numerical part of the program in
Fortran. I think it was the right decision, but two months in, all we've
really achieved is to convert existing assets from Fortran 77 to C, so I
won't now do the scientific work I had hoped to do by Christmas.
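The glue itself is simple enough - something like this on the C side,
assuming the common f77 convention of a lower-case name with a trailing
underscore and every argument passed by reference (conventions vary between
compilers, and the file name is invented):

    #include <stdio.h>

    /* Callable from Fortran 77 as: CALL READDAT(N, X, IERR) */
    void readdat_(int *n, double *x, int *ierr)
    {
        FILE *fp = fopen("input.dat", "rb");
        if (fp == NULL) {
            *ierr = 1;
            return;
        }
        *ierr = (fread(x, sizeof *x, (size_t)*n, fp) == (size_t)*n) ? 0 : 2;
        fclose(fp);
    }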
I'm working now, in December 2005. So for me Fortran 77 is very much still a
modern language.
Sigh. What I said was that you make a Boolean out of a quantity
by using it in a comparison predicate. I used that as an
example of how people can conclude that a program is "not
portable" by performing some sort of implicit or explicit
quantitative estimate. The point is that portability
becomes a Boolean *only* when you apply a predicate to it.
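In code terms, the distinction is just this (a toy example, with an
arbitrary 40-hour threshold):

    #include <stdio.h>

    int main(void)
    {
        int estimated_port_hours = 25;               /* a quantity */
        int is_portable = estimated_port_hours < 40; /* a Boolean, created by the predicate */
        printf("portable? %s\n", is_portable ? "yes" : "no");
        return 0;
    }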
Sure. I'd say that a very important rule is "will this program be usable if
compiled with zero changes to the source?". Even if you need to make a
trivial change, e.g. converting MAX_PATH to _MAX_PATH, you require a
competent programmer, and the value of the source is diminished.
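For what it's worth, that particular irritation can be absorbed once in a
shared header rather than edited at every use - a sketch:

    /* Paper over the MAX_PATH / _MAX_PATH spelling difference.
       Assumes <stdlib.h> has already been seen on Windows. */
    #include <limits.h>          /* POSIX systems may define PATH_MAX here */

    #ifndef MAX_PATH
    #  ifdef _MAX_PATH
    #    define MAX_PATH _MAX_PATH
    #  elif defined(PATH_MAX)
    #    define MAX_PATH PATH_MAX
    #  else
    #    define MAX_PATH 1024    /* conservative fallback */
    #  endif
    #endif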
However, even this isn't completely Boolean. What about source which
compiles, but on Unix requires an "-lm" to be added to the makefile /
compiler options? This is a similar irritation, but not as bad as a source
change. In a lot of environments the rule will be "OK, the source has
changed, now please rerun all the test suites", but not if only a compiler
flag changes.
Other people have challenged your presumption better than I can.
When I use Perl, I usually use it for tasks that are not portable, because it
ships as source and hacking about in a Perl script is less intimidating than
recompiling C source. So if you want to load a thousand models from
/freddy/models, all starting with "att" and with numbers between 0 and 52 as
suffixes, Perl is ideal. The task is, of course, very much dependent on the
needs of the moment.