Someone Told Me: Is C an Assembly Language?


Les Cargill

Keith said:
Les Cargill said:
Keith said:
[...]
On Wednesday, 3 April 2013 20:43:52 UTC+3, Keith Thompson wrote:
Finally, there are still some things that are difficult to specify in C,
such as handling CPU condition flags. Try writing *portable* C code
that tells you whether multiplying two given int values will overflow.
[snip]

Perhaps

inline int will_imul_overflow( int a, int b )
{
    const int prod = a * b;
    const int nb = prod / a;
    return ( b != nb );
}

If the multiplication overflows, the behavior is undefined.

But not hopelessly so...

What does that mean?


It means that sometimes people do implementation-specific things to
make systems work.
 

Edward A. Falk

[...] any proof why or why not.

No, it's not remotely true.

However, when C first hit the streets, it was described to me as
a substitute for assembly language, to the extent that you could
actually write an operating system in it.

I was entirely skeptical until I started working with it, at which
point the light came on. Pointers and bitfields as implemented in
C were a completely radical concept at the time, and using C was
like having the veil removed from your eyes and the mittens removed
from your fingers. It transformed programming like nothing I had
ever seen before.

The fact that I could write a device driver in a high-level language
was mind-blowing.

However, the last time I ever programmed in assembly language was two
decades ago.

The last time I programmed in assembly language was one decade ago. I was
writing the inner core of the interrupt-handling and context-switching
code for a new microprocessor.

A small amount of code had to be written in assembly, but all the rest
of the operating system was written in C.

The person who said this
claimed that modern assembly languages no longer implement a one-to-one
relationship between assembly language instructions and machine language
instructions, but are free to choose from a wide variety of possible
translations.

Maybe, depending on architecture. The machine I was writing for
had an entirely old-school what-you-write-is-what-you-get assembler.

Also, he claimed that modern assembly languages are no
longer restricted to a particular processor, that the same assembly code
can be used to generate different machine code on different platforms,
even if they have significantly different architectures.

Color me skeptical. But then, I've been wrong before.

On the basis of
that claim, he asserted that, since the same was true of C, C could
therefore be classified as an assembly language. To me, it would seem
more reasonable to say that the languages he described (if he was
describing them correctly) were now high level languages, and no longer
assembly languages.

Agreed.
 

Tim Rentsch

Johannes Bauer said:
I would say so, but mostly in the same sense that, in calculus,
when you take a limit you get closer and closer but never actually
get there. There are very few things that need to be done and
can't be done in C, but there are still some.

C is a Turing complete language. It is as powerful as the
assembly code language that lies beneath it. Thus all tasks
can be achieved in both languages. [snip unrelated]

This conclusion is correct only when viewed through the lens of
Turing equivalence. There is no guarantee, for example, that a
C program can execute an instruction to flush a TLB, but an
assembly language program can: any language reasonably called
an assembly language can execute any instruction of the ISA of
the machine it targets.
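As a concrete illustration, here is a minimal sketch of what reaching such an instruction from C typically requires, assuming x86 and the GCC/Clang inline-assembly extension; this is not standard C and is only meaningful in privileged kernel code.

/* Invalidate the TLB entry covering one page.  INVLPG is a privileged x86
 * instruction with no counterpart in the C language; the only way to issue
 * it from C is through a compiler extension such as inline assembly. */
static inline void flush_tlb_entry( void *addr )
{
    __asm__ volatile ( "invlpg (%0)" : : "r" (addr) : "memory" );
}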
 

Tim Rentsch

Keith Thompson said:
[snip]

Try writing *portable* C code that tells you whether
multiplying two given int values will overflow.

An insidiously tricky problem. It isn't conceptually difficult,
but it's also not easy to get the edge cases all right without
the code being a horrible mess, and still not fall off the cliff
of undefined behavior. The particular case of int may be done
with:

#include <limits.h>   /* for INT_MIN and INT_MAX */

int
int_product_okay( int a, int b ){
    return a == 0 || b == 0
        || a > 0 && b > 0 && a <= INT_MAX / b
        || a > 0 && b < 0 && -(b+1) <= (-(INT_MIN+1) - (a-1)) / a
        || a < 0 && b > 0 && -(a+1) <= (-(INT_MIN+1) - (b-1)) / b
        || a < 0 && b < 0
           && -INT_MAX <= a && -INT_MAX <= b
           && -a <= INT_MAX / -b
        ;
}

This function body relies on the values of INT_MIN and INT_MAX
being of type int (as the Standard says they must). Of course
what we would really like is a macro definition that is more
generic, eg, starting

#define SIGNED_PRODUCT_IN_RANGE( a, b, lower, upper )

where the arguments for 'lower' and 'upper' need not be the
same type as the values being (putatively) multiplied, and
may be an unsigned type rather than a signed type. That's a
harder problem. :)
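As an aside, a minimal sketch of one direction that more generic check can take today, assuming GCC or Clang: __builtin_mul_overflow accepts operands and a result object of possibly different integer types and returns nonzero when the mathematical product does not fit the result's type.

#include <stdio.h>

int main( void )
{
    long long wide;
    unsigned char narrow;

    /* Mixed widths and signedness are handled by the builtin itself. */
    if ( __builtin_mul_overflow( -50000, 50000, &wide ) )
        puts( "does not fit in long long" );     /* not taken: -2.5e9 fits   */

    if ( __builtin_mul_overflow( 16, 16, &narrow ) )
        puts( "does not fit in unsigned char" ); /* taken: 256 > UCHAR_MAX   */

    return 0;
}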
 

Phil Carmody

Les Cargill said:
Keith said:
Les Cargill said:
Keith Thompson wrote:
[...]
On Wednesday, 3 April 2013 20:43:52 UTC+3, Keith Thompson wrote:
Finally, there are still some things that are difficult to specify in C,
such as handling CPU condition flags. Try writing *portable* C code
that tells you whether multiplying two given int values will overflow.
[snip]

Perhaps

inline int will_imul_overflow( int a, int b )
{
    const int prod = a * b;
    const int nb = prod / a;
    return ( b != nb );
}

If the multiplication overflows, the behavior is undefined.

But not hopelessly so...

What does that mean?

It means that sometimes people do implementation-specific things to
make systems work.

As someone who's been away from c.l.c for a month or so, and who's been
doing a lot of evaluating of coding test results for interviewees, all
I can add is that about 50% of the contributions to this subthread make
me wince.

It's like summoning a demon, and then saying "clearly, as I summoned it,
I'm in charge". Uh-huh.

Phil
 

Phil Carmody

Tim Rentsch said:
Keith Thompson said:
[snip]

Try writing *portable* C code that tells you whether
multiplying two given int values will overflow.

An insidiously tricky problem. It isn't conceptually difficult,
but it's also not easy to get the edge cases all right without
the code being a horrible mess, and still not fall off the cliff
of undefined behavior. The particular case of int may be done
with:

int
int_product_okay( int a, int b ){
    return a == 0 || b == 0
        || a > 0 && b > 0 && a <= INT_MAX / b
        || a > 0 && b < 0 && -(b+1) <= (-(INT_MIN+1) - (a-1)) / a

I can appreciate why that is correct compared to:
<= (-INT_MIN -1 - a + 1) / a
but in my current daze I don't see why:
<= (-(INT_MIN+a)) / a

shouldn't work. The contents of the inner brackets can never overflow,
and the outer brackets have the same numeric final value as before,
and thus as no overflow can occur in either case, should result in the
same test being performed.

        || a < 0 && b > 0 && -(a+1) <= (-(INT_MIN+1) - (b-1)) / b
        || a < 0 && b < 0
           && -INT_MAX <= a && -INT_MAX <= b
           && -a <= INT_MAX / -b
        ;
}

This function body relies on the values of INT_MIN and INT_MAX
being of type int (as the Standard says they must). Of course
what we would really like is a macro definition that is more
generic, eg, starting

#define SIGNED_PRODUCT_IN_RANGE( a, b, lower, upper )

where the arguments for 'lower' and 'upper' need not be the
same type as the values being (putatively) multiplied, and
may be an unsigned type rather than a signed type. That's a
harder problem. :)

Even a simple predicate to ensure that the arithmetic result
fits into the range of the target type would be useful as a
language feature. I.e. the above function.

Phil
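For what it's worth, a minimal sketch of roughly such a feature, assuming a C23 implementation that supplies <stdckdint.h>; the example values assume a 32-bit int.

#include <stdckdint.h>
#include <stdio.h>

int main( void )
{
    int r;
    /* ckd_mul stores the wrapped result and returns true when the
     * mathematical product does not fit the result's type. */
    if ( ckd_mul( &r, 70000, 70000 ) )
        puts( "overflow" );            /* 4.9e9 does not fit a 32-bit int */
    else
        printf( "product = %d\n", r );
    return 0;
}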
 

Tim Rentsch

Phil Carmody said:
Tim Rentsch said:
Keith Thompson said:
[snip]

Try writing *portable* C code that tells you whether
multiplying two given int values will overflow.

An insidiously tricky problem. It isn't conceptually difficult,
but it's also not easy to get the edge cases all right without
the code being a horrible mess, and still not fall off the cliff
of undefined behavior. The particular case of int may be done
with:

int
int_product_okay( int a, int b ){
    return a == 0 || b == 0
        || a > 0 && b > 0 && a <= INT_MAX / b
        || a > 0 && b < 0 && -(b+1) <= (-(INT_MIN+1) - (a-1)) / a

I can appreciate why that is correct compared to:
<= (-INT_MIN -1 - a + 1) / a
but in my current daze I don't see why:
<= (-(INT_MIN+a)) / a

shouldn't work. [snip elaboration]

If 'a' is of type int, as indeed it is here, then yes, the
second expression you write gives the same result as mine.

However, if one is worried about handling cases when 'a'
might be an unsigned type rather than a signed type (which
in fact I was when first trying to construct a solution),
then this kind of rearrangement doesn't work, because
INT_MIN would be converted to an unsigned type before doing
the addition, and all sorts of havoc would ensue. Writing
the expression as I did, it's easy to see that both operands
of the subtraction operation are non-negative, and so
converting an operand from signed to unsigned will not
change its value. There are some other details to fill in
before getting a version that works even in the presence
of signedness mixing, but in a nutshell this is why I wrote
it the way I did.
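A standalone illustration of that conversion hazard, with arbitrary values: a negative signed operand that meets an unsigned operand is converted to unsigned first, silently changing its value, while non-negative operands survive the conversion unchanged.

#include <stdio.h>

int main( void )
{
    unsigned int u = 1;

    /* -1 is converted to unsigned before the comparison, becoming UINT_MAX
     * on a typical 32-bit int, so the "obvious" reading of -1 < 1 fails. */
    if ( -1 < u )
        puts( "expected" );
    else
        puts( "surprise" );            /* this branch is the one taken */

    /* A non-negative signed value converts to unsigned without changing. */
    if ( 2 > u )
        puts( "2 > 1u holds, as expected" );

    return 0;
}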
 
