C or C++?

Roland Pibinger

Couldn't you just use forward declaration of
struct, while hiding definition in implementation file,
if data members are not meant for public access?

Yes, you can and C developers often do so. The 'handle' is also a
typical C idiom that hides implementation data. Static functions in
the implementation file serve as 'private' functions.
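As a sketch of that idiom (all names here are invented for illustration, not from any particular library): the public header exposes only a forward-declared struct and functions taking a pointer to it, while the definition and the static 'private' helpers stay in the implementation file.

```cpp
// stack.h -- the public interface: an opaque type and functions only.
struct Stack;                       // forward declaration; layout hidden
Stack* stack_create();
void   stack_push(Stack* s, int v);
int    stack_pop(Stack* s);
void   stack_destroy(Stack* s);

// stack.cpp -- the definition lives here; clients never see it.
struct Stack {
    int data[64];
    int top;
};

// A 'private' function: internal linkage, invisible to clients.
static bool stack_full(const Stack* s) { return s->top == 64; }

Stack* stack_create()               { Stack* s = new Stack; s->top = 0; return s; }
void   stack_push(Stack* s, int v)  { if (!stack_full(s)) s->data[s->top++] = v; }
int    stack_pop(Stack* s)          { return s->data[--s->top]; }
void   stack_destroy(Stack* s)      { delete s; }
```

Clients can only manipulate a Stack through the functions above; the member layout can change without touching any client code.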
 
persenaama

C has always been considered "portable assembler".
Yes, there have always been idiots who believe anything they're told,
even when a few minutes research would tell them that they're wrong.

w00f w00f! The bark is loud but a barking dog doesn't bite. =^)

Abstract: Of late it has become very common for research compilers
to emit C as their target code, relying on a C compiler to generate
machine code. In effect, C is being used as a portable compiler target
language. It offers a simple and effective way of avoiding the need to
re-implement effective register allocation, instruction selection, and
instruction scheduling, and so on, all for a variety of target
architectures. The trouble is that C was designed as a programming
language not as a...


BibTeX entry:


@article{jones98portable,
  author  = "Simon Peyton Jones and Thomas Nordin and Dino Oliva",
  title   = "{C}--: {A} Portable Assembly Language",
  journal = "Lecture Notes in Computer Science",
  volume  = "1467",
  pages   = "1--??",
  year    = "1998",
  url     = "citeseer.ist.psu.edu/341046.html"
}



<snip snip>

There's more to be said about opinions in this tangent. I'm not here
to say that C is a portable assembler. I am here to say that a lot of
smart people consider it as such in specific contexts WHICH ARE OUT OF
SCOPE OF THIS SPECIFIC NEWSGROUP and the interests it represents. But
it doesn't mean that it is, quote, "utter nonsense".

Keeping an open mind and being able to connect things from different
contexts is a marvelous skill to possess; please refrain from further
abuse of others' opinions. They are entitled to theirs and you are
entitled to yours. Maybe sometimes you are not in possession of all
the relevant facts to draw a conclusion about someone's _opinion_;
you clearly had no insight into the factors that might affect the
forming of such an opinion!

Thanks!
 
Gianni Mariani

What's wrong with it?

I don't think it supports the previous poster's argument that OOP
requires polymorphism. I'm no OOP critic, so I wouldn't know if there
were anything wrong with it even if it smacked me in the head.
 
Phlip

Gianni said:
I don't think it supports the previous poster's argument that OOP
requires polymorphism. I'm no OOP critic, so I wouldn't know if there
were anything wrong with it even if it smacked me in the head.

This question is a book unto itself. OO _should_ be defined so that its
root principle is polymorphism, but the term OO is too abused for
Wikipedia to arbitrate any consensus.

One Robert C Martin says it best (IIRC): "Structured programming is
discipline imposed upon direct transfer of control flow. OO
programming is discipline imposed upon indirect transfer of control
flow."

By "discipline" he means that, given the statement if(foo){ int bar = 42; },
the compiler will refuse to compile a use of 'bar' outside its block {}.
That is Structured Programming - matching the scopes of variables to
branches of control flow.

In a structural paradigm, you can call methods indirectly, achieving
polymorphism, in one of two ways. You can write a 'switch' statement
that reads a type code, and branches to each target type, or you can
use a function pointer. Both systems suck, because to add another type
you must add a case to every matching switch statement, and to use a
function pointer you must generally assign the pointer, possibly
typecast it to call it, possibly check it for NULL, etc. All this
cruft raises the cost of polymorphism.
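A minimal sketch of that 'manual polymorphism' (the shape example and all names are invented): every operation needs its own switch over the type code, and adding a type means revisiting each one.

```cpp
// Structural-style dispatch through a type code.
enum ShapeType { SHAPE_CIRCLE, SHAPE_SQUARE };

struct Shape {
    ShapeType type;
    double    size;   // radius or side length, depending on type
};

// Every polymorphic operation needs a switch like this one; adding a
// SHAPE_TRIANGLE means finding and extending every such switch.
double area(const Shape* s)
{
    switch (s->type) {
    case SHAPE_CIRCLE: return 3.14159265 * s->size * s->size;
    case SHAPE_SQUARE: return s->size * s->size;
    }
    return 0.0;
}

// The function-pointer variant carries its own cruft: the caller must
// assign the pointer and check it for NULL before every indirect call.
typedef double (*AreaFn)(const Shape*);
```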

An OO language fixes this by merging the switch and function pointer
into a secret jump table. The language will typesafely manage the jump
table for you, mostly at compile time. That allows programmers to
impose discipline on how they partition a program into polymorphic
code blocks. And, once again, the compiler helps match variable scope
to code block scope. Objects are variables bound to polymorphic
behavior behind typesafe interfaces.
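The same invented shape example with the switch folded into the compiler's hidden jump table: adding a new shape now touches only the new class, and the call site stays unchanged.

```cpp
// OO-style dispatch: the compiler builds and manages the jump table.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() {}
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

// No type code, no switch, no NULL-checked function pointer: the call
// dispatches through the hidden table, typesafely.
double total_area(const Shape& a, const Shape& b)
{
    return a.area() + b.area();
}
```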
 
Alf P. Steinbach

* Phlip:
... OO _should_ be defined so its
root principle is polymorphism, but the term OO is too abused for
WikiPedia to arbitrate any consensus.

One Robert C Martin says it best (IIRC): "Structured programming is
discipline imposed upon direct transfer of control flow. OO
programming is discipline imposed upon indirect transfer of control
flow."

My simple definition of object-oriented versus routine-oriented
(procedural, functional decomposition) is mainly based on how knowledge
about and control of the system is distributed in the system:

Routine-oriented:

* Routines have knowledge of relevant objects (or classes) and of
  other routines.

* Centralized/hierarchical execution flow control;
  commander/subject-oriented.

Object-oriented:

* Objects (or classes) have knowledge of relevant routines and of
  other objects/classes.

* Distributed/local execution flow control; cooperative.

Of course there are umpteen shades of system architecture in between and
besides, for example routines communicating via a common "blackboard".
But in practice they aren't much used. And I think that's because to do
something one needs to have the necessary knowledge: by deciding on
knowledge distribution, responsibility distribution is implicitly also
decided, and vice versa; it hangs together.

By this operational, knowledge-distribution based definition, the STL
parts of the standard library are mainly routine-oriented.

Further distinctions are IMHO not very relevant, because more selective
terms cannot be used in general to convey meaning (they can if they're
defined, of course).
 
Gianni Mariani

This question is a book unto itself. OO _should_ be defined so that its
root principle is polymorphism, but the term OO is too abused for
Wikipedia to arbitrate any consensus.

One Robert C Martin says it best (IIRC): "Structured programming is
discipline imposed upon direct transfer of control flow. OO
programming is discipline imposed upon indirect transfer of control
flow."

By "discipline" he means that, given the statement if(foo){ int bar = 42; },
the compiler will refuse to compile a use of 'bar' outside its block {}.
That is Structured Programming - matching the scopes of variables to
branches of control flow.

In a structural paradigm, you can call methods indirectly, achieving
polymorphism, in one of two ways. You can write a 'switch' statement
that reads a type code, and branches to each target type, or you can
use a function pointer. Both systems suck, because to add another type
you must add a case to every matching switch statement, and to use a
function pointer you must generally assign the pointer, possibly
typecast it to call it, possibly check it for NULL, etc. All this
cruft raises the cost of polymorphism.

An OO language fixes this by merging the switch and function pointer
into a secret jump table. The language will typesafely manage the jump
table for you, mostly at compile time. That allows programmers to
impose discipline on how they partition a program into polymorphic
code blocks. And, once again, the compiler helps match variable scope
to code block scope. Objects are variables bound to polymorphic
behavior behind typesafe interfaces.


Then I, and many, many others I know of, use the term OO very
incorrectly.

The first time I'd heard the term "object oriented" was at a talk in
'82 about Simula. IIRC there was never any discussion about
polymorphism at that talk.

I subsequently defined a language named "clone" and wrote a compiler
for it that pushed the idea of state machines in control systems (the
state of the art at the time was ladder logic). It was "object
oriented" because it broke the monolithic aspect of code development
at the time. If you looked at the state of the art at the time, there
was very little in the idea of objects. Wirth's book, Algorithms +
Data Structures = Programs, was breaking new ground for crusty old
programmers.

The "class" (where the compiler enforced the "this" parameter) was
the pivot point where languages enforced Wirth's premise and
started calling structures + methods "objects", and hence "object
oriented".

It seems like a rewrite of history to say that the pivotal moment for
OO was polymorphism. It seems to me to not be supported by the history
I know. I was one of the few programmers that created function tables
and to some extent had polymorphic C code. For example a generic AVL
balanced binary tree algorithm which was not dissimilar to std::map
(in function, not form). It had iterators! No one told me that it
was any different from other parts of the system, and the system was
referred to as "object oriented" at the time.

Besides, are templates not viewed as a "static" polymorphism? In the
example, std::map, it may not use run-time polymorphism, but it sure
does everything my (at the time) polymorphic AVL tree class did.
 
Phlip

Gianni said:
The first time I'd heard the term "object oriented" was at a talk in
'82 about Simula. IIRC there was never any discussion about
polymorphism at that talk.

Yet Simula had polymorphism - that was the point. It's unfortunate that,
this far back, people generally didn't think to call it that. They probably
talked about methods and messages...
Besides, are templates not viewed as a "static" polymorphism?

Absolutely. Given foo.bar(), foo could be a reference to a base class with
polymorphic derivatives of bar(). Or foo could be a templated item, and
bar() is anything that syntactically fits. Both situations are OO, because
they are polymorphic.
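A small sketch of both foo.bar() situations (the Animal/Robot names are invented): in the dynamic case foo is a reference to a base class; in the static case foo is a template parameter and anything that syntactically fits will compile.

```cpp
#include <string>

// Dynamic polymorphism: resolved at run time through the vtable.
struct Animal {
    virtual std::string speak() const { return "..."; }
    virtual ~Animal() {}
};
struct Dog : Animal {
    std::string speak() const override { return "woof"; }
};

std::string hear_dynamic(const Animal& foo) { return foo.speak(); }

// Static polymorphism: resolved at compile time; T needs no base class,
// only a speak() that syntactically fits.
template <typename T>
std::string hear_static(const T& foo) { return foo.speak(); }

struct Robot {                        // no relation to Animal at all
    std::string speak() const { return "beep"; }
};
```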

The benefit of a super-narrow definition for something is that it's
distinct. We may not agree on what is OO, but we will both [almost!]
always agree on what is polymorphic. So defining OO as polymorphism is
very useful.
In the example, std::map, it may not use run-time polymorphism, but it
sure does everything my (at the time) polymorphic AVL tree class did.

And that's why it was useful - you could minimize the abstract code and
extend the concrete code.
 
Gianni Mariani

... So defining OO as polymorphism is very useful.

You had me agreeing with you all the way until this. That's a jump to
light speed, and I can't see how you achieved it.

(Not that it bothers me at all, it just jumps out as A⇛B ∴ B⇛X.)

... testing out my unicode characters!
 
Phlip

You had me agreeing with you all the way until this. That's a jump to
light speed, and I can't see how you achieved it.

If I were collaborating with someone over Visual Basic Classic code, and
they call their design OO, I don't mind. I use the verbiage, in kind, to
facilitate collaboration. And our code will naturally have lots of classes,
objects, methods, etc.

However, if we are later consuming intoxicants in a non-work setting, and
they ask me to dish on VB, I will cheerfully tell them that it's "Object
Based", not "Object Oriented". Given a derived class method, if you have a
reference to a base class object, you can call the derived method using
foo.bar(). But if you have a reference with the derived type, you must call
foo.MyClass_bar(). This isn't very polymorphic!

So I didn't mean the first kind of usefulness. One uses one's team's
verbiage first. I meant the second kind: it's very easy to distinguish
the OO languages from the poseurs using only a simple criterion, not an
endless list of Wikipedia-linked ratings.
 
James Kanze

Couldn't you just use forward declaration of
struct, while hiding definition in implementation file,
if data members are not meant for public access?

It depends. That was the usual solution when all of the objects
were to be dynamically allocated anyway. It doesn't work too
well when you want objects to be allocated on the stack.
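A sketch of why (Widget is an invented example): with only a forward declaration in scope, the compiler does not know sizeof the type, so it cannot reserve stack space for one; a pointer, whose size is always known, works fine.

```cpp
// What a client sees: the public header, with the type incomplete.
struct Widget;                      // incomplete type
Widget* widget_create();            // factory defined in the .cpp file
int     widget_value(const Widget* w);
void    widget_destroy(Widget* w);

void client()
{
    // Widget w;                    // error: incomplete type, size unknown
    Widget* w = widget_create();    // fine: a pointer's size is known
    widget_destroy(w);
}

// The implementation file, where the type is complete.
struct Widget { int value; };

Widget* widget_create()               { Widget* w = new Widget; w->value = 42; return w; }
int     widget_value(const Widget* w) { return w->value; }
void    widget_destroy(Widget* w)     { delete w; }
```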
 
persenaama

Assembler programming is not generally necessary or desirable on the
RISC based UNIX systems presently in use. RISC system performance is
so heavily dependent on pipelining and various caches that it is
extremely difficult, if not impossible, for an individual to write
more efficient assembler code than is generated by the higher level
language compilers. The C language provides all the hardware access
and system service capabilities traditionally provided by Assembler.

As noted above, C has filled the programming niche traditionally
occupied by Assembly language. In addition to its normal use as a high-
level programming language, C can act as a universally portable
Assembler.

....

Sugar with the Tea?
 
osmium

persenaama said:
Assembler programming is not generally necessary or desirable on the
RISC based UNIX systems presently in use. RISC system performance is
so heavily dependent on pipelining and various caches that it is
extremely difficult, if not impossible, for an individual to write
more efficient assembler code than is generated by the higher level
language compilers. The C language provides all the hardware access
and system service capabilities traditionally provided by Assembler.

What about the circular shift? What is that called in C? I think "most"
would be a better word choice than "all".
 
Phlip

persenaama said:
Assembler programming is not generally necessary or desirable on the
RISC based UNIX systems presently in use. RISC system performance is
so heavily dependent on pipelining and various caches that it is
extremely difficult, if not impossible, for an individual to write
more efficient assembler code than is generated by the higher level
language compilers. The C language provides all the hardware access
and system service capabilities traditionally provided by Assembler.

May I translate that as "the chip manufacturers have snuggled up so tightly
with known compiler technology that their opcodes have matured from
comprehensible instruction sets, into complex things that can almost only be
compiled"?

The point is the chip manufacturers no longer consider the raw assembly
programmer as one of their customers. Hence they are able to focus on only
one customer, the compiler author...
 
Branimir Maksimovic

Assembler programming is not generally necessary or desirable on the
RISC based UNIX systems presently in use. RISC system performance is
so heavily dependent on pipelining and various caches that it is
extremely difficult, if not impossible, for an individual to write
more efficient assembler code than is generated by the higher level
language compilers.

While it's generally true that a compiler can schedule instructions
much better than a human, performance is not the only reason to code
in assembly. There are real limitations in both C and C++ regarding
run-time code generation, as these languages are designed as if code
will always execute from ROM. On the other hand, while a human will
certainly not be more efficient than the compiler in general,
human-written assembly tends to be optimized in a different way,
leading to much shorter programs. So I guess it's a draw.

Greetings, Branimir.
 
persenaama

What about the circular shift? What is that called in C? I think "most"
would be a better word choice than "all".

Possible:
v = (v << n) | (v >> m);

Now it is a quality-of-implementation issue to detect when the
combination of n and m can be implemented with a barrel shift.
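A sketch of that idiom wrapped up, with m computed from n (the name rotl32 is invented). The masking of the shift counts avoids the undefined behaviour of shifting a 32-bit value by 32 when n == 0; good compilers recognise this whole pattern and emit a single rotate instruction where the hardware has one.

```cpp
#include <cstdint>

// Rotate v left by n bits; the (v << n) | (v >> m) idiom from above.
uint32_t rotl32(uint32_t v, unsigned n)
{
    n &= 31;                              // keep the count in range
    return (v << n) | (v >> ((32 - n) & 31));
}
```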

I don't personally use C or C++ "as portable assembler"; I think
pursuing such a thing is folly, but that is just my opinion and as such
not interesting. I'm more inclined to use the factory design pattern
and create an implementation of the interface which MAY be optimized
for the platform / architecture.

I've been working on OpenGL (ES) drivers for graphics hardware for the
last couple of years. The driver's job is pretty much to configure the
hardware into a state that can execute the compiled fragment and vertex
programs which are generated by the driver. The code generation is the
most demanding function of the driver, hands down, no contest.

All the .NET, Java, Shading Language etc. development of late is
steering high-performance computing on desktop (and mobile)
environments toward runtime code generation being essential. This is an
area where a lot of research is being done as we chat. Knowing how the
underlying hardware works is essential to the few who do this kind of
work, and assembly is part of that. That's where I think we're at
regarding assembly.

C and C++ are tools which are used far and wide as the interface
between the low-level and possible higher-level code. In a large number
of applications I observe (just personal, one man's point of view!)
C/C++ _is_ the highest level of the language hierarchy, if some random
scripting isn't considered. Universally this isn't true, of course.
 
Gianni Mariani

All the .NET, Java, Shading Language etc. development of late is
steering the high-performance computing on desktop (and mobile)
environments to runtime code generation to be essential. This is area
where a lot of research is being done as we chat. Knowing how the
underlying hardware works is essential to the few that do this kind of
work, assembly is part of that. That's what I think where we're at
regarding assembly.
....
I've heard it said so many times that dynamic code optimizers are so
much better than static ones. Well, I have heard of a number of
isolated cases where that may make sense. However, there is something
happening now which kind of makes this all moot, and it's a little hard
to beat. Since the MIPS R10000, there has been a steady improvement in
the ability of the CPU to do much of the dynamic optimization itself.
If you look at the Transmeta CPUs, you'll see the same thing happening
in software. However, with the latest generation of speculatively
executing, register-renaming, branch-predicting CPUs, it's hard to
optimize better than the CPU itself.

So yes, dynamic optimization may be better - better done by the CPU
itself.
 
persenaama

I've heard it said so many times that dynamic code optimizers are so
much better than static ones. Well, I have heard of a number of
isolated cases where that may make sense. However, there is something

I never said they were. If you read again carefully, I was talking
about native binary code, and that generating it is more productive
with an online compiler than by hand using assembler.

It's really about the feasibility of using a static off-line compiler
for a specific task. If you have a very complex state machine which
gives you thousands or even millions of permutations for the generated
code, it means at runtime you will be branching heavily to select the
correct binary to execute.

You can flatten the code with an online code generator. Even if the
code isn't "optimal", it is still better than heavily nested
conditional basic blocks.
happening now which kind of makes this all moot, and it's a little hard
to beat. Since the MIPS R10000, there has been a steady improvement in
the ability of the CPU to do much of the dynamic optimization itself.
If you look at the Transmeta CPUs, you'll see the same thing happening
in software. However, with the latest generation of speculatively
executing, register-renaming, branch-predicting CPUs, it's hard to
optimize better than the CPU itself.

That's why we want to avoid branching and flatten the conditional
basic blocks: mis-predicted branches are expensive on a setup like you
describe above.

So yes, dynamic optimization may be better - better done by the CPU
itself.

By itself it solves the problem only partially. It means that low-level
bit-twiddling is less relevant than it was a couple of architecture
generations ago, but still, it is no substitute for writing good code.

For example, you could write graphics code using a putpixel(x,y,color)
function. Or you could write a scanline renderer which processes
fragments in order, so the address computation logic is simpler:

*scan++ = color;

The putpixel() version of the same would look like this:

buffer[y * stride + x] = color;

What's wrong with the above?

1. There is an unnecessary multiplication.
2. If this is a non-inlined function, there is call overhead.
3. The memory write request is dependent on executing the address
   computation logic.
4. NOTE: the "extra" addition there might be free, depending on the
   addressing modes a specific architecture supports.

The "CPU runtime optimization" can re-order and issue the translated
microcode instructions in a more optimal sequence. Still, we added so
much cruft and overhead there for no real tangible gain. We just
burned trace/code cache, even if we ran at the same performance,
without consideration of the bigger picture. Yay.
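The two styles above, sketched as complete inner loops (the buffer layout here is an invented example: 'stride' pixels per row). The putpixel version recomputes the address from scratch for every fragment; the scanline version hoists that work out of the loop and just increments a pointer.

```cpp
#include <cstdint>
#include <vector>

// Per-pixel addressing: one multiply and one index per fragment.
void fill_row_putpixel(std::vector<uint32_t>& buffer, int stride,
                       int y, int width, uint32_t color)
{
    for (int x = 0; x < width; ++x)
        buffer[y * stride + x] = color;   // address recomputed each time
}

// Scanline addressing: the address is set up once, then the loop body
// is a pure sequential write.
void fill_row_scanline(std::vector<uint32_t>& buffer, int stride,
                       int y, int width, uint32_t color)
{
    uint32_t* scan = &buffer[y * stride]; // computed once, outside the loop
    for (int x = 0; x < width; ++x)
        *scan++ = color;
}
```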

The biggest optimization is still between the keyboard and the chair:
the human with a living, thinking mind. :)


-- extra section --

Example of what an OpenGL state machine's depth compare function would
look like in C++:

bool zpass;
uint32 depth = zbuffer[x];
switch ( depthFunc )
{
    case DEPTHFUNC_LESS:     zpass = z <  depth; break;
    case DEPTHFUNC_MORE:     zpass = z >  depth; break;
    case DEPTHFUNC_LEQUAL:   zpass = z <= depth; break;
    case DEPTHFUNC_NOTEQUAL: zpass = z != depth; break;
    // .... all depth function cases ....
}

if ( zpass )
{
    process_fragment(...);
}
z += dxdz;

Since there are 8 different depth compare functions, we have to
generate code for all of them, and the above is an obviously poor way
to do it. Okay, so the smart way to go about it would be to have a
different inner loop for each depth compare function, right? Right..

OK, we did that. Now what? We have stenciling, alpha testing, alpha
blending, yada yada yada. We cannot realistically generate ALL
combinations of these. So if we make each of them its own function and
call them all, we have tens of function calls PER FRAGMENT! That is
even MORE idiotic than what we have above!

So what the heck are we going to do about this? At this moment in time,
it looks fairly attractive to GENERATE (!) the code dynamically. It
doesn't matter if we have 10,000,000 different inner loops to write.

The SIMPLEST mechanism to do this is to stitch the code. You write
nice, neat pieces and stitch them together like lego blocks. You need
to specify carefully how you broadcast data from one stage to the
next. Not too difficult.
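Real stitching copies machine-code fragments into executable memory; a portable stand-in for the same idea (all names below are invented) is to pre-select a chain of stage functions once, when the GL state changes, so the per-fragment loop contains no state-dependent branching at all.

```cpp
#include <vector>

struct Fragment { int z; int color; bool alive; };

typedef void (*Stage)(Fragment&);

// Two example 'lego blocks'. The broadcast between stages is the
// Fragment record they all share.
void depth_test_less(Fragment& f) { if (!(f.z < 100)) f.alive = false; }
void alpha_blend_add(Fragment& f) { if (f.alive) f.color += 16; }

// The 'stitching' step: built once per state change, not per fragment.
std::vector<Stage> build_pipeline(bool depth_on, bool blend_on)
{
    std::vector<Stage> p;
    if (depth_on) p.push_back(depth_test_less);
    if (blend_on) p.push_back(alpha_blend_add);
    return p;
}

void run(const std::vector<Stage>& pipeline, Fragment& f)
{
    for (Stage s : pipeline) s(f);   // flat: no per-fragment state checks
}
```

The win is that the combinatorial explosion lives in build_pipeline, which runs rarely, instead of in the inner loop, which runs per fragment.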

The next step in evolution is to generate an intermediate
representation of the operations we want to do. Then we take this
representation and transform it in meaningful ways. We could do
constant folding, strength reduction, constant propagation. We could do
instruction selection and register allocation; we could do these in
different ways or combine some of the steps, and so on and on.
Effectively we would be writing an optimizing code generator. :)

Combine static and dynamic code generation, shake (don't stir!) and
enjoy the cocktail! :)
 
Tim H

It probably depends on where you learned C. I was a
"self-taught" C programmer, but the usual way I've always seen C
written (even 20 years ago) was to define a struct and a set of
functions to manipulate it, and then to cross your fingers that
no one accessed any of the struct except for your functions. I
adopted C++ originally solely for private---I hadn't the
slightest idea what OO was, and there weren't templates at the
time. Even today, the rest is just icing on the cake.

Encapsulation is essential, regardless of the paradigm.
Otherwise, the code rapidly becomes unmaintainable. That was
true already back before C++ came along.

I am a long-time C programmer and relative newcomer to C++. When I
really need tight encapsulation in C, I use opaque pointers. When I
really need encapsulation in C++, I use PIMPL or opaque pointers.

Private fields are nice, but the fact that private fields have to be
in a public header is irksome to me.
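A minimal PIMPL sketch (the Widget class is an invented example): the public header exposes no data members at all, only a pointer to a forward-declared Impl whose definition lives entirely in the .cpp file.

```cpp
#include <memory>
#include <string>

// --- widget.h: the header clients include ---
class Widget {
public:
    Widget();
    ~Widget();                        // declared here, defined in the .cpp
    std::string name() const;
private:
    struct Impl;                      // incomplete in the header
    std::unique_ptr<Impl> pimpl_;     // the only member the header shows
};

// --- widget.cpp: the private data, invisible to clients ---
struct Widget::Impl {
    std::string name = "widget";      // truly private: not in any header
};

Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default;          // Impl is complete here, so the
                                      // unique_ptr deleter compiles
std::string Widget::name() const { return pimpl_->name; }
```

The destructor must be defined where Impl is complete; that is the one piece of ceremony std::unique_ptr demands for this idiom.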

Tim
 
