C & hardware


Ian Collins

And with ridiculous comments like that from you, whom I thought better
of, is it any wonder?

I stand by the comment and the archive of this group will back it up.
 

Niklas Holsti

Mok-Kong Shen said:
That seems not to be true in the case of, e.g., Ada.

One reason for this is that many of the Ada "standardization people"
are also Ada compiler writers and vice versa, which explains why Ada
compiler writers tend to (eventually) implement whatever the
standardization people decide :)

Other reasons for this are that Ada evolution is intentionally slow,
with emphasis on upwards compatibility, and that the market for Ada
compilers is small, so the standardizers are careful not to burden the
compiler writers with large and costly changes.

Ada is in a very different market than C. Ada is being used in cases
where it is quite probable that the conditions for acceptance of a
program include verification that the compiler is certified to follow
the standard.

Rather, the compiler may be validated to follow *some* Ada standard, but
not necessarily the *current* one. There are now three successive Ada
standards (Ada 83, Ada 95, Ada 2005) and a fourth in preparation (Ada
2012). Some compilers have not yet implemented all of Ada 2005, but can
still be validated for Ada 95, which may be all that a given project
requires.
 

Thad Smith

When I implemented rotate ops, they worked on 8, 16 or 32 bits depending on the
type of the left operand, as you might expect.

This causes a problem in C because intermediate results are promoted to int.

One of the general characteristics of C is that if you avoid the size
boundaries, expressions usually yield the same result (there are some
exceptions). A right rotate, however, depends on the size of the operand even
for small values.
But this is responsible for all sorts of other problems too, e.g.:

char c = 0xC0;
int a = c << 1;

Should a be 0x80 or 0x180? There is a case for both results.

In C I can't think of a situation in which 0x80 is correct. However, if char is
signed and CHAR_BIT == 8, c holds a negative value, and left-shifting a
negative value is undefined.
I think the main requirements will be to rotate entire bytes, shorts and ints,
rather than either the bottom N bits of a value, or an arbitrary bitfield. The
latter are always possible via the normal bit operations.

So write the logical expression in terms of shift and OR and let the compiler
produce a rotate where appropriate.
I think you've got that back to front: usually the source code uses the
simplest, most obvious expression, while the compiler generates the nightmare
code needed to achieve it.

You are correct: that is the usual case, but there are certainly cases in which
the compiler produces simpler code. It may recognize that an expression has
already been computed and simply reuse the value rather than recompute it.
Another example is a constant expression that reduces to a single value.
If it's that critical then rotates can be implemented as built-in functions with
3 operands: value to rotate, shift count, and field width.
I wouldn't object to that, although it would be nice to be able to write:

a rol=1;

instead of:

a = rotateleft(a,1,32);

I think the disadvantages outweigh the advantages for C. You may define your
own helper functions, libraries, macros, preprocessor, extensions, or language
to implement your own.
 

Nick

Ian Collins said:
I don't think it is possible to get Jacob to engage in a technical
debate; he appears to take any disagreement with his position as a
personal insult.

Which is really a great shame. We have someone who clearly is
passionate about the language and has been experimenting with new
features. He's in a rare position of being able to tell us what worked
well and what didn't, and what features his users made most use of (as
someone who runs a big project myself I can tell you it's quite easy to
do that - the bits they make the most use of are the bits you get the
most bug reports for).

If he took a deep breath and counted to 10 before replying, he could
contribute so much more to the group from his almost unique position,
and would be arguing from a position of authority.
 

fj

Seebs said:
Could be. I have no idea. Since I haven't actually seen or heard of
anyone using Fortran for new code since the 70s, the question has been of
very little interest to me.

As I am writing new code in Fortran (with C parts), you will not be able
to claim that anymore ;-)
 

Mok-Kong Shen

Rui said:
Can you point out what libraries were written in the C programming language whose developers not
only had the need to implement any form of bit rotation but also complained that the C programming
language should be changed in order to offer that feature in the core language?

Have you noticed that the sentence you quoted above concerns the
carry-over bit for coding multiple-precision arithmetic, and does not
concern the other issue?

M. K. Shen
 

Mok-Kong Shen

Seebs said:
What about them, I guess? Lots of compilers generate rotate instructions
when they're appropriate, on hardware which has them. Not all hardware
has them.

The fact that lots of compilers generate rotate instructions shows that
an essential need has been identified by a number of compiler writers,
but it also means that some compiler writers may have missed it. On
hardware that does not have rotation instructions, the compiler writer
can implement them with shifts (in which case it saves the user the
trouble of writing the shifts himself). Anyway, I consider it a rather
unusual and lucky result that the compiler writers have reacted to this
issue. (They normally have no knowledge of users' programs and
presumably must have obtained some suggestions from the users.)

M. K. Shen
 

robertwessel2

Mok-Kong Shen said:
The fact that lots of compilers generate rotate instructions shows that
an essential need has been identified by a number of compiler writers,
but it also means that some compiler writers may have missed it. On
hardware that does not have rotation instructions, the compiler writer
can implement them with shifts (in which case it saves the user the
trouble of writing the shifts himself). Anyway, I consider it a rather
unusual and lucky result that the compiler writers have reacted to this
issue. (They normally have no knowledge of users' programs and
presumably must have obtained some suggestions from the users.)


Do you seriously think that compiler writers don't benchmark real
programs?
 

Seebs

fj said:
As I am writing new code in Fortran (with C parts), you will not be able
to claim that anymore ;-)

Uhm. I hereby declare a new rule, which is that I also don't care about
anything written or said by someone who posts under the name "fj". :p

Seriously, though, thanks for the information. I guess Fortran really is
as durable as people claim.

-s
 

FredK

Nobody said:
Not even that.

Some implementations may provide it, but C itself doesn't, due to the
reliance upon the "abstract machine" concept.

To have the C language support memory-mapped hardware, the standard would
need to either more precisely define the meaning of "volatile" or add
memory barriers.

No need. This is done in implementation-specific ways, and has been for a
very long time. Memory barriers (for one example), on platforms that need
them, are generally provided by built-ins, much like access to atomic
operations. Without this, C would become unusable as a kernel
implementation language.

Since this all works quite nicely, there is no need to standardize it.
 

Rui Maciel

Kenny said:
Fortran, OTOH, still at least tries to be a desktop
language.

I don't know what gave you that idea. Fortran was and still is an engineer's programming language,
designed and intended to be used to develop number-crunching applications. The recent changes made
to Fortran were basically adding support for OO programming and enhancing its support for parallel
processing. That means that a programming language which was designed for and lives in the realm of
HPC got tweaked to better serve its domain. Where did you get the idea that Fortran was ever a
"desktop language", particularly when compared to C?


Rui Maciel
 

Rui Maciel

Seebs said:
Could be. I have no idea. Since I haven't actually seen or heard of
anyone using Fortran for new code since the 70s, the question has been of
very little interest to me.

It is still extensively used up to this day, both directly (writing new code) and indirectly
(calling libraries compiled from Fortran code). For example, the standard reference libraries to
handle systems of linear equations, such as LAPACK and ATLAS, are written in Fortran.


Rui Maciel
 

Rui Maciel

Mok-Kong Shen said:
It's my (though maybe wrong) impression that the standard of Fortran
is changing at a higher speed than C.

Technical standards change when there is an honest, rational and reasonable need for them to change.
If no one can think of a reasonable issue and come up with a reasonable proposal to tackle that
issue then there is absolutely no reason to change stuff. Change for the sake of changing doesn't
do anyone any good. This isn't supposed to be a competition.


Rui Maciel
 

Kenny McCormack

Rui Maciel said:
I don't know what gave you that idea. Fortran was and still is an
engineer's programming language, designed and intended to be used to
develop number-crunching applications. The recent changes made to
Fortran were basically adding support for OO programming and enhancing
its support for parallel processing. That means that a programming
language which was designed for and lives in the realm of HPC got
tweaked to better serve its domain. Where did you get the idea that
Fortran was ever a "desktop language", particularly when compared to C?

I'm not going to bother arguing with you, as you have demonstrated (in
other groups) that you're not particularly sympathetic to my way of
looking at things.

But suffice to say that if you divide the world into "embedded"
(including kernels and operating systems - which are, in a sense,
embedded) and "desktop", then Fortran sits on the latter side of that
divide. And I am speaking as someone whose first computer language was
Fortran and who wrote interactive games in it (way back when).

Now, as to which side of the divide C falls in - well, nobody would
seriously argue that it hasn't been extensively used for both - but one
of the dogmas of this NG is that C is (today) "mostly" used for
embedded. Whether or not this dogma is true, I'm not here to argue, but
it *is* a frequently advanced position in the NG.

--
Windows 95 n. (Win-doze): A 32 bit extension to a 16 bit user interface for
an 8 bit operating system based on a 4 bit architecture from a 2 bit company
that can't stand 1 bit of competition.

Modern day upgrade --> Windows XP Professional x64: Windows is now a 64 bit
tweak of a 32 bit extension to a 16 bit user interface for an 8 bit
operating system based on a 4 bit architecture from a 2 bit company that
can't stand 1 bit of competition.
 

Mok-Kong Shen

robertwessel2 said:
Do you seriously think that compiler writers don't benchmark real
programs?

You are right in that there is public software, in particular
libraries, which he can examine. On the other hand, the major part
of software is IMHO proprietary, or else theoretically available
but not accessible without knowledge of its existence and of the
proper contact persons. So I doubt that the compiler writer could
obtain an accurate global picture of how the compiler is being used,
even if he cares to obtain that knowledge.

M. K. Shen
 

FredK

William Ahern said:
Except memory barriers are being standardized for C1x in stdatomic.h. See
e.g. atomic_thread_fence() and atomic_signal_fence() at N1425 7.16.3.
Although they're defined in terms of threads and signals, so their behavior
might be undefined w/r/t memory maps in general.

The actual atomic ops also take a memory order parameter.

Since many of these are very HW-specific (for example a write barrier as
opposed to a fence, and there is a slew of different HW-specific atomic
operations), I can only wonder why they would want to "standardize" these
built-in functions. Perhaps the clue is that they are specific to threads
and signals.
 

Ian Collins

Mok-Kong Shen said:
The fact that lots of compilers generate rotate instructions shows that
an essential need has been identified by a number of compiler writers,
but it also means that some compiler writers may have missed it. On
hardware that does not have rotation instructions, the compiler writer
can implement them with shifts (in which case it saves the user the
trouble of writing the shifts himself). Anyway, I consider it a rather
unusual and lucky result that the compiler writers have reacted to this
issue. (They normally have no knowledge of users' programs and
presumably must have obtained some suggestions from the users.)

What you describe here are quality of implementation optimisations
rather than something that should be included in a language standard.

All of the compilers I use have extensive target specific optimisations
controlled by command line options. Some of these identify the
instruction set and extensions (SSE etc.) used by the target system.

These compilers are also used to compile the operating systems they are
used on. OS developers are always striving for better benchmark numbers.
 

Mok-Kong Shen

Rui said:
Technical standards change when there is an honest, rational and reasonable need for them to change.
If no one can think of a reasonable issue and come up with a reasonable proposal to tackle that
issue then there is absolutely no reason to change stuff. Change for the sake of changing doesn't
do anyone any good. This isn't supposed to be a competition.

Right, changes in sciences and techniques always come from a "real" need
arising out of their continued practical application. There is IMHO
a constant evolution process going on.

M. K. Shen
 

Ian Collins

FredK said:
Since many of these are very HW-specific (for example a write barrier as
opposed to a fence, and there is a slew of different HW-specific atomic
operations), I can only wonder why they would want to "standardize" these
built-in functions. Perhaps the clue is that they are specific to threads
and signals.

These will follow the atomics in C++0x which are supported in recent gcc
versions, see http://gcc.gnu.org/wiki/Atomic.
 

FredK

Ian Collins said:
These will follow the atomics in C++0x which are supported in recent gcc
versions, see http://gcc.gnu.org/wiki/Atomic.

Interesting stuff. An attempt to come up with something broad enough to be
generally implementable on most hardware. But everything is described in
terms of C++ - will the C language follow?
 
