Writing "absolutely" portable code

  • Thread starter ArifulHossain tuhin

Rui Maciel

jacob said:
Linux people are very conservative, and their version of Unix is frozen
around 1980. Apple did a better Unix. And I am not speaking about the
wonderful and intuitive user interface, the great LOOKS, the package
management system that has hundreds of public domain applications
ported to Apple Unix, etc etc.

I don't know what you mean by "Linux people". As you know, linux itself is
only an OS kernel. From there, there is a considerable number of people and
organizations who, independently, put together operating systems which are
based on the linux kernel. Some of those projects don't even rely
exclusively on the linux kernel, as they also provide essentially the same
operating system with kernels other than the linux one.

Anyway, there isn't a concerted effort dictated by a centralized authority
intended to steer the technical direction of those projects. Well, at least
beyond the scope of each individual project.

Therefore, until a meaningful definition for "linux people" is provided,
your assertion is meaningless. And even if one is provided, I seriously
doubt your initial assertion will hold.

I am really satisfied with my Mac, sorry. It is EXACTLY what linux
could have been if they would have worked in making a BETTER Unix
instead of producing several dozens of different window management
systems (equally bad and equally awful) and several dozens of equally
bad IDEs.

This assertion is absurd. There is no centralized authority wasting
resources on redundant projects. There are multiple, independent people
investing their time developing a set of projects independently.
Criticizing someone for, in his spare time, having put together a window
manager isn't a reasonable thing to do, particularly if the only complaint
which is being made is that there are already competing projects out there.
For example, the same criticism that you are directing at the "several dozen
different window management systems" can also be used to complain about
jacob navia's compiler. Do you also believe that you should have worked in
making a better anything instead of producing one of the several dozen
different compilers already available, each equally bad and equally awful?

They never think about the USER EXPERIENCE when designing their programs
(just look at gdb), something Apple has always had in mind.

What's wrong with gdb?


Rui Maciel
 

Rui Maciel

ec429 said:
The WMs are not "awful"; neither GNOME nor KDE is perfect, but that's
largely because they erred in the direction of imitating MacOS Classic.
Xfce, LXDE, fvwm2 and several others are clean, unobtrusive, and
lightweight.

I don't see how KDE tried to imitate any of Apple's DEs. KDE tends to be
accused of trying to imitate Windows' DEs, which is also a silly claim. For
example, I remember some people claiming that KDE4 tried to rip off
Windows 7, even though KDE4 was released over a year and a half before
Windows 7.


Rui Maciel
 

Rui Maciel

Richard said:
Wrong. Invariably it's because people don't like change. Debug using
eclipse or VS and compare it to using the god awful gdb with ddd or
something equally as half arsed, ugly and unreliable.

Do you realize that Eclipse uses gdb to debug C and C++ code?


Rui Maciel
 

Rui Maciel

ArifulHossain said:
Our team was (and still is) a little inexperienced with the GNU Build
System. A lot of them used to develop in Java/C# or Visual Studio. Our
organization is trying to train us; hopefully we won't be new in coming
months. But I still believe a moderate-sized project can be done with
handwritten makefiles because of the similarity between POSIX systems. Yes,
they require tweaks and won't be "automated". But autotools are no different.
Most of the time, they need tweaks too. The problem is the tweaks are
not very understandable because of the complexity associated with
Autotools, whereas makefiles are rather straightforward.

In my opinion, the main flaw which affects the GNU Build System is its
sub-par documentation, which needlessly contributes to its steep learning curve.
Maybe it's even responsible for it. Nonetheless, once someone gets the hang
of it, things tend to work well. It's just a matter of reading the right
documentation and checking how others have set up their own projects.
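For what it's worth, the amount of input a basic setup needs is small. As a
sketch (the project and file names here are invented, not taken from any real
tree), roughly this is all a minimal program requires:

    # configure.ac
    AC_INIT([hello], [0.1])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_HEADERS([config.h])
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am
    bin_PROGRAMS = hello
    hello_SOURCES = hello.c

    # one-time bootstrap, then the usual dance
    autoreconf --install && ./configure && make

Everything beyond that gets added one macro at a time, which is exactly where
the documentation problem starts to bite.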


Rui Maciel
 

ec429

They don't care.
Not caring that they're wrong doesn't make them right. The lessons of
the old Unix Wars about the danger of differentiation still ring true.
Ever heard of overwriting defaults? Methinks you squawk/protest too
much.
My point is that the feature isn't useful because it trivially can't be
robust. If you learn to rely on it, you will get headaches. Is " -lm"
really so much to type?
The squawking (which I interpret to refer to the ALL CAPS SHOUTING!!!!)
was in the post I was replying to.
Where Linux and GNU fail to follow POSIX, they usually have a
POSIXLY_CORRECT switch (or a POSIX_ME_HARDER :grin:), and it's usually
because of some brain-damaged thing a Unix vendor invented in order to
differentiate their product, which then got used in too much software to
excise it. Or were you disputing that standards are a good thing? If
so, you're beyond help.
Wrong. Invariably it's because people don't like change. Debug using
eclipse or VS and compare it to using the god awful gdb with ddd or
something equally as half arsed, ugly and unreliable.
You have a problem with gdb? The problem is you. gdb is very powerful,
it has a well-designed CLI, and in my experience it's reliable (from
what I've heard, VS is very much /not/ reliable).
Anecdote: When Phil Kendall was designing the debugger for FUSE (the
Free Unix Spectrum Emulator, not Filesystem in USErspace), did he base
it on Eclipse? Of course not. VS? Surely you jest. No, he chose a
stripped-down gdb interface (somewhat modified to deal with z80 assembly
rather than C binaries with debugging symbols).
What, exactly, don't you like about gdb? Does it not hold your hand
enough, poor ickle child? Are there not enough pictures to engage your
attention?
Which particular package management? There are about 60 at the last
count. Too many chiefs, not enough indians.
I said which package management: Debian's system. If you meant which
package /manager/, there's really only one (apt). There are plenty of
interfaces, but that's a matter of choice; since they're all apt,
they're all compatible, so it wouldn't matter if every single user had
their own, they'd still all be using apt and sharing .deb packages. The
joy of standards, you see :)
And you based that on what? How did they err?
They (well, GNOME at least; I haven't used KDE) tried to copy the
'persistent desktop' metaphor; gnome-session-save is a shonky piece of
crap, and session management generally is a bit broken in gdm. I intend
to ditch it and install something simpler and lighter-weight, but I
haven't come to a decision yet.
Ugly and horrible too.
Actually I think fvwm is quite nice-looking. What really sells it,
though, is that it decouples the desktop shell from the processes under
it. I was ssh-ing into linux.pwf.cam.ac.uk one day, running a fvwm
session by the magic of Xnest, when my ssh tunnel fell over. When I
connected back up, there were all my terminals, still running fine,
unaware that fvwm had ever exited. If only GNOME could detach like that
*sigh*. (Note, by the way, that this is /old/ technology; WMs generally
have been somewhat slow in taking up ideas from each other - but that's
because they've been responding to market pressure to "Look shiny! Like
Apple does!")
IDEs, well done, are a real boon. I can only assume you never used one
or worked with any considerably sized code base that allows context
help, auto class completions, refactoring, JIT debugging etc etc.
I've used IDEs, and hated them. "Auto class completions"? Sorry, I
don't do OO. Refactoring? I do that by hand and it isn't that hard;
having it done automatically sounds like a great way to introduce subtle
bugs if there's a global or a static around. I don't need context help:
C is small, and I choose libraries whose interfaces are similarly small.
If you find yourself needing context help in order to use a tool, the
tool is badly designed (or your brain is; I'm not sure which).
On the charge of codebase size, I guess I plead guilty: my largest
project is about 10,000 lines. However, that's largely because I design
small combinable tools in the Unix tradition, rather than the monster
monoliths that are popular in other parts of the computing landscape.
If you need an IDE to handle your project, your project is wrong - that
doesn't make the IDE right.
(I have worked with larger codebases; this summer I was working on a web
frontend to the NetFPGA toolchain, thousands of LOC in several
languages. I handled C, Perl, Python, PHP, Javascript, and Verilog. No
IDE, no problem.)
The shell is my DE; when I want help, there's 'man'; when I want build
control, there's 'make'; when I want debugging, there's 'gdb'; when I
want source control, there's 'git'. The fact that these are separate
tools designed by different people doesn't stop them interoperating
nicely; Unix tradition produces much better integration than
"Integration" ever can, and it leaves me free to pick and choose my
tools at will (don't like make? there's always CMake. Don't like git?
Try svn or bzr or hg.)
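To make that concrete, a complete build-and-debug round trip with those
separate tools is only a couple of commands; a sketch with invented file
names, and with the " -lm" from earlier typed by hand:

    cc -Wall -g -O0 -o prog prog.c -lm   # build with debug info; link the maths library explicitly
    gdb ./prog                           # set breakpoints, step, inspect; no IDE required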
In summary, Integrated Development Environments Considered Harmful.

fisk me, I fisk back -e
 

ArifulHossain tuhin

Hopefully that training is coming from someone with significant
experience. It can be a pretty steep learning curve.

Training is not coming from an expert. In our country we do not have a lot of guys working on Linux/Unix programming these days. Most of the Linux/Unix guys we have are admins. They know little about build systems in depth.
That's the major feature of autotools: to not need to tweak makefiles
for each target platform.

For in-house apps that only have a few targets, learning autotools is
probably not worth the effort; for open-source apps that have to run on
dozens of targets, each with varying libraries and other apps installed,
and which may be cross-compiled or installed in different locations,
it's well worth it.

It's going to be deployed on customers' infrastructures.

If so, you're (probably) doing it wrong.

Maybe, because I do not have a lot of experience regarding autotools. I'm also an admin turned developer. My bad :)
 

ArifulHossain tuhin

Unfortunately, this step very often does not work for me. I am regularly
trying to cross-compile for an embedded, ppc-based linux system, which
in theory should be possible with the configure-parameter

--host=ppc-XXX-linux

Very often I have to work a long time until configure works. And if it works,
there is no guarantee that the next step, i.e. "make" works. And if that
works, there might be subtle errors which show up later when the program
runs on the target platform. Sometimes I don't get configure to work
anyway, or decide that it's not worth the time. For example, I once
tried to build apache for my target platform without success. Finally, I
decided to use another webserver (lighttpd), which might be more
appropriate anyway, but I wanted to give apache a try.
Just now I am in the process of cross-compiling PHP for my target. The
first issue was that the target has MSB byte-order, which the configure
script did not care about. The actual sources had #ifdefs to take
byte-order into account, so after patching the byte-order information
into configure, I could solve this. But I have yet to solve further
problems. Often the configure-script tries to compile and run short
test-programs in order to find something out. This of course always
fails if you try to cross-compile. Then I first have to see what
information the configure script tried to acquire and how I can provide
this information manually. No, I don't like this.

So from these experiences, I would greatly prefer a manually written,
clean makefile together with some pre-written config.h where the
system-dependent definitions are visible. Btw I don't think a manually
written configuration script would be better; I think it could be
even worse.

Regards
Dirk

Excellent explanation. That's exactly what happens to me, and I'm guessing many others. Sometimes it takes hours to fix the ./configure stage, in which time a custom makefile could be written if someone has some understanding.
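For a small tree, the hand-written alternative really is short. A sketch,
sticking to POSIX make so nothing here is GNU-specific (the program, object
and library names are invented; recipe lines need a leading tab):

    # Makefile
    CC     = cc
    CFLAGS = -O2 -g
    LDLIBS = -lm
    OBJS   = main.o util.o

    prog: $(OBJS)
            $(CC) $(CFLAGS) -o prog $(OBJS) $(LDLIBS)

    $(OBJS): config.h

    clean:
            rm -f prog $(OBJS)

A cross build is then just "make CC=your-cross-gcc" (whatever the real
compiler is called), with config.h maintained by hand per target.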
 

James Kuyper

On 01/11/2012 06:10 AM, ec429 wrote:
...
What, exactly, don't you like about gdb? Does it not hold your hand
enough, poor ickle child? Are there not enough pictures to engage your
attention?


The main things I don't like about gdb might actually be features of
gcc. The only thing I can say for certain is that the two tools don't
communicate with each other properly. Even when I compile with
optimization turned (supposedly) completely off, I often face
problems like the following:

* A variable, whose value I want to check, has been optimized out of
existence.

* I will advance through the program one step at a time. gdb will display
line 34, line 35, line 34, line 35, line 34, etc, which would be fine if
they were parts of loop, but is very disconcerting when one of those
lines is a comment and the other is a declaration. Eventually it stops
cycling between those line numbers, and jumps to line 39, and by
examination of the values of various variables, I conclude that what
it's actually been doing was executing the loop on lines 36 and 37.

* I will single-step through a line of code that supposedly changes the
value of a variable, examining the value of that variable before and
after execution of the statement, and see the value unchanged. It
eventually gets changed, but during what is supposed to be the execution
of some other line of code much farther down in the code.

I can't be sure whether these problems are the fault of gdb or gcc.
However, I only ran into similar problems using the IRIX MIPS-Pro
compiler and dbx if I chose optimization levels higher than the lowest
possible one. That is why I usually debug with all optimizations turned
off - unless turning optimization off makes the bug disappear.
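For concreteness, the combination being described is presumably something
like this (the file name is invented):

    gcc -O0 -g -o prog prog.c    # optimisation "completely off", full debug info
    gdb ./prog                   # single-stepping should then track the source lines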

I have other problems with gdb as well, but they're just due to lack of
experience with the program, I presume. I've been using dbx for more
than two decades, possibly three.
 

ec429

* A variable, whose value I want to check, has been optimized out of
existence.
If that's happening with optimisation turned off, it's probably a bug,
which you should probably report.
* I will advance through the program one step at a time. gdb will display
line 34, line 35, line 34, line 35, line 34, etc, which would be fine if
they were parts of a loop, but is very disconcerting when one of those
lines is a comment and the other is a declaration. Eventually it stops
cycling between those line numbers, and jumps to line 39, and by
examination of the values of various variables, I conclude that what
it's actually been doing was executing the loop on lines 36 and 37.
That sounds strongly like your source file is more recent than
the executable. gdb doesn't like it when the sources you have don't match
the binary you're debugging.
* I will single-step through a line of code that supposedly changes the
value of a variable, examining the value of that variable before and
after execution of the statement, and see the value unchanged. It
eventually gets changed, but during what is supposed to be the execution
of some other line of code much farther down in the code.
Same as above, source/binary mismatch.

-e
 

James Kuyper

If that's happening with optimisation turned off, it's probably a bug,
which you should probably report.

I hadn't thought about that - I'll try to remember to do so the next
time it happens.
That sounds strongly like your source file is more recent than
the executable. gdb doesn't like it when the sources you have don't match
the binary you're debugging.
Same as above, source/binary mismatch.

I agree, that is the most reasonable conclusion. That is why my first
step was to erase all object, library, and executable files created by
my build process, and rebuild from scratch. I continued to observe the
same behavior. This has happened in many different contexts.

I'm pretty good at writing platform-agnostic code; even my bugs are
usually platform-agnostic, so testing on our Irix machines generally was
just as good as testing on our Linux ones. Since we shut down our Irix
machines I've frequently had to fall back on debugging printf()s in
order to get sensible (i.e. corrected to make it appear as if no
optimizations were occurring) dumps of the program state.
 

Nick Bowler

On 09.01.2012 23:50, Keith Thompson wrote:
If it works, fine.
Unfortunately, this step very often does not work for me. I am regularly
trying to cross-compile for an embedded, ppc-based linux system, which
in theory should be possible with the configure-parameter

--host=ppc-XXX-linux

Very often I have to work a long time until configure works.
[snip rant about various failures that occur during cross compilation].

All of this is the result of a single problem: the package developer(s)
did not test cross compilation, a feature which autoconf was explicitly
designed to support. They did not try it even once. Or if they did,
they did not bother to fix the problems that they doubtless encountered.
So from these experiences, I would greatly prefer a manually written,
clean makefile together with some pre-written config.h where the
system-dependent definitions are visible.

I cannot fathom why anyone would expect a from-scratch build system
written by such developers to feature better support for cross
compilation.
 

ec429

I cannot fathom why anyone would expect a from-scratch build system
written by such developers to feature better support for cross
compilation.
Perhaps because a build system that doesn't try to autoguess the values
of system-dependent things doesn't break when the build machine and
target machine are different?
If you're cross-compiling, and the package uses autoconf, how's it going
to find out the values it normally gets by compiling and running test
programs?
Autotools tries to be a generic tool, and doesn't make the best job of
it. A clean makefile and a config.h has the important advantage of
being /minimal/; it doesn't know, nor seek to know, anything that it
doesn't need - and it doesn't try to be clever, either.
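As a sketch of what that minimal config.h can look like (the macro names
follow the usual autoconf-style HAVE_*/WORDS_BIGENDIAN convention, but the
particular set shown here is invented for illustration):

    /* config.h -- maintained by hand, one copy per target */
    #define WORDS_BIGENDIAN 1      /* this target is MSB-first */
    #define HAVE_STRDUP 1          /* the target libc provides strdup() */
    /* #undef HAVE_EPOLL */        /* not available on this target */

The whole "configuration system" is then a handful of visible lines per
target, which is rather the point.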
-ə
 

Ben Pfaff

ec429 said:
Perhaps because a build system that doesn't try to autoguess the
values of system-dependent things doesn't break when the build
machine and target machine are different?
If you're cross-compiling, and the package uses autoconf, how's it
going to find out the values it normally gets by compiling and running
test programs?

Autoconf doesn't normally try to run test programs. Most of the
time, it only tries to compile and link them. In the cases where
running is desirable, normally one makes the Autoconf tests take
a conservative guess at the result.
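For reference, that guess lives in the (easily forgotten) fourth argument of
AC_RUN_IFELSE; a sketch, with an invented result variable:

    dnl Does plain char default to signed on the target?  (illustrative check)
    AC_RUN_IFELSE(
      [AC_LANG_PROGRAM([], [[char c = -1; return c < 0 ? 0 : 1;]])],
      [foo_char_signed=yes],
      [foo_char_signed=no],
      [foo_char_signed=no])   dnl 4th argument: the guess used when cross-compiling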
 

Kaz Kylheku

Autoconf doesn't normally try to run test programs. Most of the
time, it only tries to compile and link them. In the cases where
running is desirable, normally one makes the Autoconf tests take
a conservative guess at the result.

The conservative guesses are often stupidly inappropriate.

A much better response would be to fail the configure with a big message
at the end like this:

*** The program can't be configured because it is being cross-compiled,
*** and so some of the tests which run test programs cannot be executed.
*** Here are all the variables whose values you need to export to do
*** the right thing (so you don't have to read the configure script):
*** ac_cv_blah=yes # this indicates the target has blah support
*** ac_cv_blorch=yes # the target has blorch support

What happens instead is the program configures successfully anyway. You have
to read the output of configure, spot the problems, gather them into a list,
and then dig through the configure script to find out what variables to
manipulate to get a proper build.

Another better response, rather than failing, would be to try to make better
use of available information.

For instance, suppose that the --target is specified as a tuple which encodes
not only the fact that we are compiling for MIPS or ARM, but also that the OS
is linux. The config system has no excuse for not taking advantage of this
little fact that was given to it as input. If you know that the OS is Linux,
then you know, for instance, that the target OS has POSIX job control, so you
don't have to bother testing for it, and you certainly don't need to assume
that it's not there.

Yet the job control test in GNU Bash's configure script defaults to "no"
if it is not able to run the configure test. Linux or not, it will build
a bash that has no Ctrl-Z, and no "jobs", "fg" or "bg".

A couple of years ago, Wind River was shipping a compiled bash with no job
control because of this issue. When cross compiling bash, they just trusted
that configure did the job because it ran successfully and an executable popped
out of make.

It's very hard to test for every single thing like this across an entire OS.
You never know where something is crippled because a cross-compiling test
was *silently* not run and a poor-quality default substituted in its place.
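Until configure scripts do fail loudly like that, the practical workaround is
to hand them the answers up front. A sketch (the host triplet and the
cache-variable names are only examples; the real names have to be dug out of
the package, which is exactly the complaint):

    # one-shot environment assignments, visible to configure and its cache
    ac_cv_func_fork_works=yes \
    bash_cv_job_control_missing=present \
    ./configure --host=powerpc-unknown-linux-gnu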
 

Seebs

I cannot fathom why anyone would expect a from-scratch build system
written by such developers to feature better support for cross
compilation.

Well, in practice:
Autoconf can in theory support cross compilation, but you basically have to know
you want to support that or else you will end up with a ton of stuff which doesn't
work as expected. While the package as a whole has hooks, many specific tests
don't, and users often end up writing their own tests which don't know about cross
compilation.

A well-written app with a config.h will usually have the right pieces exposed to
let us write the correct values, generate a patch to create that config.h, and
will then Just Work. So in practice my experience has been that, for a whole lot
of packages, a developer completely unaware of the concept of cross-compilation
will leave me with less hassle than an autoconf based app, even in some cases one
which makes an effort to support cross-compilers.

This is all subject to rapid change. Back when I first tried to use autoconf for
cross-compilation, the built-in type size testing didn't work for cross compiles,
because it depended on compiling and running test applications... A package which
is being used by embedded developers, and whose maintainers are friendly to
submitted patches, will usually be fine by now. :)

To put it in perspective, I had less trouble porting pseudo from Linux to OS X
than I have had getting some apps to cross-compile for a different CPU type. :)

-s
 

Stephen Sprunk

Ben Pfaff said:
A competent C programmer knows how to write C programs correctly,
a C expert knows enough to argue with Dan Pop, and a C expert
expert knows not to bother.

and where are the ones, the people claim not even
the expert expert can write C programs correctly 100%?
[only assembly programmers could write them after extensive testing]

do you call them not expert? right?

That is the logical conclusion. If a competent programmer knows how to
write programs correctly, then anyone who does not know how to write
programs correctly cannot be a competent programmer.

However, programmers are human and will make mistakes; the difference is
that competent ones recognize and correct their mistakes, whereas
incompetent ones often will not. In my experience, even when those
mistakes are pointed out by a competent programmer, the incompetent
programmer tends to defend his mistakes rather than correct them.

S
 

James Dow Allen

Question is: Will you port the program to just
3 or 4 architectures, or to dozens of different
architectures? Hand-steering the process 2 or 3
times is more reliable than a full-featured config
setup (and probably easier to do bug-free).

When I needed self-configured architecture-dependent
code, I just used ordinary 'make' with dependent
executables run to output tailored '.h' files.
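Roughly that pattern, as a sketch with invented names (note the probe has to
run on the build machine, so this only works for native builds, not the
cross-compilation case discussed above):

    archconf.h: probe
            ./probe > archconf.h          # probe prints #defines describing this machine

    probe: probe.c
            $(CC) $(CFLAGS) -o probe probe.c

    main.o: archconf.h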

* * * *

On another topic:
Is there a thread to choose best c.l.c expert/poster ?
My nominees are Eric Sosman and Ben Pfaff.
This makes the following comment seem rather ... odd.

after saying that, I killfile you and Ian Collins
... because I'm not interested in wrong answers

Jamesdowallen (c/o Gmail)
 

Seebs

A couple of years ago, Wind River was shipping a compiled bash with no job
control because of this issue. When cross compiling bash, they just trusted
that configure did the job because it ran successfully and an executable popped
out of make.

It's more subtle than that, apparently, and was most recently fixed in January
of 2007. Looking at it, it appears that all the standard stuff in bash's
configure uses a value bash_cv_job_control_missing, which would normally be
set to either "present" or "missing". We had a build file which was setting it
to "no", but was not setting it in the right way; it looks to me as though
the intent had been the logical equivalent of:
bash_cv_job_control_missing=no configure <args>
but it ended up working as though it were:
bash_cv_job_control_missing=no
configure <args>

So it wasn't a matter of trusting configure; it was a matter of not being careful enough to verify
that the fix was still working; it had been originally fixed back in 2006, and then a
change in July of 2006 moved the fix to a new location as part of a version uprev,
and it stopped working -- this slipped through the cracks for about 5 months.

.... this has basically nothing to do with C, but I was too curious not to look.
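(For anyone puzzled by why the second form fails: it is ordinary shell
semantics, illustrated here with a generic variable name rather than the
real one:)

    # one-shot assignment: the variable is in configure's environment
    some_cv_setting=value ./configure <args>

    # assignment on its own line: an unexported shell variable that configure
    # never sees; this is what the first form turns into when it is split
    # across two lines and the continuation backslash goes missing
    some_cv_setting=value
    ./configure <args>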

-s
 

Kaz Kylheku

It's more subtle than that, apparently, and was most recently fixed in January
of 2007. Looking at it, it appears that all the standard stuff in bash's
configure uses a value bash_cv_job_control_missing, which would normally be
set to either "present" or "missing". We had a build file which was setting it
to "no", but was not setting it in the right way; it looks to me as though
the intent had been the logical equivalent of:
bash_cv_job_control_missing=no configure <args>
but it ended up working as though it were:
bash_cv_job_control_missing=no
configure <args>

Ah, let me guess, missing backslash? :)

Thanks for providing me the necessary emotional closure on this one, haha.
 
