Jon's many-language ray tracer


Joachim Durchholz

Philippa said:
Perhaps, but it's not going to go to Nx1 and 1xM - sooner or later that
amounts to assuming the problem's bilinear.

Hmm... for some definition of "bilinear", yes ;-)
Other structures no doubt exist - I'm having some fun exploring
extensible parsing for a wiki, though that has the advantage of an
easy fallback case ("it's not markup, so it must be plain text").

Which has its own problems - I'm involved in just such a wiki, too :)

Re "other structures": I think one can often (maybe always) restructure
the problem, just as one can restructure the nonlinear Tree data
structure into linear "nodes" and "links". The transformation usually is
nonobvious (otherwise the software would have been structured
differently), and that's where inventiveness comes in.

One example is particle physics simulation. For N particle types,
one could write N*(N-1)/2 functions defining the pairwise interactions
between them. Or one could write N functions that define the interaction
of a particle with the various fields :)

Regards,
Jo
 

Torben Ægidius Mogensen

alex goldman said:
Regarding Jon's ray tracer, I'll play the devil's advocate:

When you only have spheres, algebraic types are nice and dandy, but if you
had a plethora of different objects to render, some of them special cases
of others, perhaps using OOP and organizing them into a class hierarchy
would be better?

If you have N kinds of things ("objects") and M different things
("methods") you want to do with them, OO allows you to have the M
different "methods" close to each other for each "object", but the N
different "objects" will be spaced out. Using an ADT (ML/Haskell
datatype) allows you to keep the N "objects" together, but separates
the M "methods". You can view it as an NxM matrix that you lay out
either in row-major or column-major format.

Now, adding one "object" in OO allows the new code to be separated
from the old, but adding a new "method" requires global redefinition.
Using an ADT allows new "methods" to be added with separated code,
whereas adding a new kind of "object" requires global rewrites. In
both cases, the type system can help you figure out where you need to
add code for the difficult case. So it really boils down to whether
you need to add new "objects" more often than new "methods".
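
(As a concrete illustration, here is a tiny Haskell sketch of the ADT
column of the matrix; the shapes and functions are invented, not taken
from Jon's ray tracer. All constructors sit together, and each function
handles every constructor, so a new function is purely local code while
a new constructor touches every existing function.)

    -- The N "objects" live together in one datatype.
    data Shape
      = Sphere Double            -- radius
      | Box Double Double Double -- side lengths

    -- Each of the M "methods" is a separate function over all constructors.
    area :: Shape -> Double
    area (Sphere r)  = 4 * pi * r * r
    area (Box x y z) = 2 * (x*y + y*z + x*z)

    volume :: Shape -> Double
    volume (Sphere r)  = 4 / 3 * pi * r ^ (3 :: Int)
    volume (Box x y z) = x * y * z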

As for OO hierarchies, this really means that some "objects" may have
more "methods" than others. OO draws the hierarchy from the methods
perspective: Objects are classified by the methods they define. You
can also pick a object-centric view, where methods are classified by
which subset of objects they work on. Here, ADT's work fine: You can
have nested sum-types where a "method" is classified by which subtree
of the sum it works on. So, again, it depends on which axis you are
most likely to modify/extend.
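
(A toy sketch of the nested-sum idea, with invented shape types: a
"method" defined only for one subtree of the sum takes that subtree's
type, so the type itself records which subset of objects it covers.)

    -- Two-level sum: quadric surfaces form their own subtree of the sum.
    data Quadric = Sphere Double | Ellipsoid Double Double Double
    data Mesh    = Triangles [(Double, Double, Double)]
    data Shape   = Q Quadric | M Mesh

    -- A "method" that only applies to the Quadric subtree says so in its type.
    quadricCoefficients :: Quadric -> [Double]
    quadricCoefficients (Sphere r)        = [1, 1, 1, -(r * r)]
    quadricCoefficients (Ellipsoid a b c) = [1/(a*a), 1/(b*b), 1/(c*c), -1]

    -- A "method" over all shapes dispatches through the outer sum.
    describe :: Shape -> String
    describe (Q _) = "quadric"
    describe (M _) = "mesh"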

As for which fits best in a ray-tracer, this depends on whether the
set of scene objects is more likely to be extended/subdivided than the
set of operations. I see no clear winner here; you can define it
either way: You can have a large number of different objects
(polygons, spheres, cylinders, etc.) and a single operation: Find
intersection with ray, or you can have a single type of objects (a
Bezier-patch) and a large number of operations: Find intersection,
find normal, find refraction index, find reflection index, find
bounding box, etc. OO may win in the first case, but it will
certainly lose in the second.

Even in compiler writing, where you are more likely to extend the
language on which you work instead of the number of things you do with
syntax, I personally find ADT's more to my taste than object
hierarchies: I like my type-checker to be separate from my parser and
code-generator, and things like optimising transformations (which have
to look at more than one node in the tree at a time) are a lot easier
using pattern-matching than classes.
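
(A toy illustration of that point, not code from the thread: a rewrite
that must see two tree levels at once is a single nested pattern.)

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- Distribute a constant factor over a sum: k * (a + b)  ==>  k*a + k*b.
    -- The rewrite needs to see two levels of the tree in one pattern.
    distribute :: Expr -> Expr
    distribute (Mul k@(Lit _) (Add a b)) =
      Add (Mul k (distribute a)) (Mul k (distribute b))
    distribute (Add a b) = Add (distribute a) (distribute b)
    distribute (Mul a b) = Mul (distribute a) (distribute b)
    distribute e         = e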

Torben
 

Joachim Durchholz

Jesper said:
Forcing a default handler is the only solution I can think of in a
language that supports dynamic class loading. You then have the option
of throwing an exception in the default handler if you can't/won't
handle the sub type.

The problem is: who's that "you"? Programmer A, programmer B, or the
system integrator C who wants to combine the extensions from A and B?

Neither A nor B know there's an interaction. C doesn't know about the
internals and cannot fix any problems that might arise (in fact he'll
probably never become aware of the problem - that's the worst of all
scenarios: subtle bugs that go unnoticed until somebody recalculates the
results by hand...)
In some cases having a default handler and special handling of a few
sub types is what the developer wants.

It's OK if the semantics is always the same but the implementation can
be optimised for common combinations. The worst that will happen is that
the (N+1)(M+1) case will take more time than needed - but since we're in
an optimisation scenario here, integrator C is most likely the person
doing the performance tests and looking for bottlenecks.
In other cases you want special
handling of all sub types, but as you note this is impossible to check
at compile time on platforms that support dynamic class loading
(unless, like in the visitor pattern, the programmer is forced to
provide a list of supported sub types for multiple dispatch). Compiler
support for multiple dispatch could actually give you the option of
checking that all sub types available at compile time are handled in
separate methods, which would give some protection against the bugs
you mention.

Well, yes... but imagine a newbie programmer loading umpteen modules and
extensions (still in the phase of trying everything out) and getting
lots of errors related to modules he has never even heard of!

The same can happen with experienced programmers: they load umpteen
modules and extensions because they are all needed, but the first thing
that happens is that they have to write all those missing (N+1)(M+1)
functions. Yet another chore before you can actually begin productive
work *sigh*....


My personal favorite is to disallow multiple dispatch except within a
module boundary, where "module" is defined as "has one person or team
who's responsible for it" in addition to the usual modularisation
criteria. That way, programmers A and B are the same person and know
that they have a new (N+1)(M+1) case. (For example, this is how built-in
arithmetic is typically done: the author(s) of compiler and run-time
system know about all the admissible combinations, and if a new type is
introduced, they will be able to see all combinations that now need to
be handled. It's not a library context, but the issues are exactly those
we've been discussing - and if the arithmetic system is ever going to be
a programmer-providable module, this multiple dispatch problem needs to
be solved.)
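
(A rough Haskell sketch of that arithmetic situation; the types are
invented. Because the sum is closed and lives in one module, every
combination is enumerated in one place, and a new representation makes
the missing combinations visible right there, e.g. as incomplete-pattern
warnings.)

    -- A closed set of numeric representations, all in one module.
    data Num' = I Integer | F Double

    -- Every admissible combination is enumerated here.
    add :: Num' -> Num' -> Num'
    add (I a) (I b) = I (a + b)
    add (I a) (F b) = F (fromInteger a + b)
    add (F a) (I b) = F (a + fromInteger b)
    add (F a) (F b) = F (a + b)
    -- Adding a constructor (say, a Rational case) makes every missing
    -- combination show up right here as an incomplete-pattern warning.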

Regards,
Jo
 

Andreas Rossberg

Torben said:
Even in compiler writing, where you are more likely to extend the
language on which you work instead of the number of things you do with
syntax, I personally find ADT's more to my taste than object

[ADT = algebraic datatype? Usually it stands for abstract datatype.]
hierarchies: I like my type-checker to be separate from my parser and
code-generator, and things like optimising transformations (which have
to look at more than one node in the tree at a time) are a lot easier
using pattern-matching than classes.

Indeed. In fact, I think your initial conjecture isn't even true: for
non-toy compilers it happens much more often that you introduce new
optimisations, new backends, whatever, i.e. new functions that operate
on (parts of) an AST, than that you change the language. Moreover, the
sheer number of auxiliary functions on ASTs that you need in a realistic
compiler already makes the OO approach unsuitable. Having details of
optimisation phases or multiple backends be scattered through a global
class hierarchy is totally the wrong kind of modularisation.

And I second pattern matching - you are likely to need nested patterns
for transformations, and those cannot be simulated by method dispatch
without introducing a maintenance nightmare.

Cheers,

- Andreas
 

Marcin 'Qrczak' Kowalczyk

Joachim Durchholz said:
My personal favorite is to disallow multiple dispatch except within a
module boundary, where "module" is defined as "has one person or team
who's responsible for it" in addition to the usual modularisation
criteria.

It's not sufficient for equality: you really want to define your own
equality for your own types, even if they come in different modules.

My personal favorite is no constraints, living with the possibility of
writing incomplete systems where certain combinations of types will
fail at runtime. This is because I know of no system of constraints which
wouldn't throw out too many sensible programs.

My second favorite is Haskell classes with common extensions
(multiparameter classes, functional dependencies etc.). Availability
of operations is thus checked statically. The system is limited
by static typing, e.g. you can't use Haskell classes for different
kinds of nodes in an abstract syntax tree - you must use some other
mechanism for that. Algebraic types fit quite well, except that they
are not extensible.
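
(For reference, a minimal example of the extensions mentioned above,
purely illustrative: a multi-parameter class with a functional
dependency, so availability of the operations is checked statically.)

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

    -- The element type is determined by the container type.
    class Container c e | c -> e where
      empty  :: c
      insert :: e -> c -> c
      toList :: c -> [e]

    instance Container [a] a where
      empty  = []
      insert = (:)
      toList = id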
 

Joachim Durchholz

Marcin said:
It's not sufficient for equality: you really want to define your own
equality for your own types, even if they come in different modules.

Equality is too complicated anyway. And it isn't even decidable if
functions can be constructed at run-time (as is common in functional
languages).

Besides, as soon as the data structures get large, the concept of
equality becomes less and less clear.

Actually it isn't even clear for strings. If you have ASCII, multi-byte,
and UTF strings, should they be "equal" if they denote the same
character sequence, or should they be "equal" if they are byte-for-byte
equal? The answer is: it depends. Which means: the programmer should
select the appropriate equivalence relationship.
BTW this isn't limited to representational variation: sometimes the
appropriate equivalence relationship is case-insensitive comparison, and
nobody finds it strange that it's syntactically different from standard
string equality!
Next is equivalence under mutability. Say, a data structure has some
expensive-to-compute field that's present if it ever was queried, absent
otherwise (FPLers will recognise this as a "memoised" value). Should two
data objects with a different status in this field be considered equal
or not equal? Answer (again): it depends. When (say) testing a
marshalling algorithm or doing other internal work, I want it included;
when looking at the data structure from the outside, I want it excluded.
(This is just a variant of representational equivalence.)

So my answer is: you don't need equality, you need equivalence. Which is
just another binary operator with a boolean result, and doesn't
introduce any problems that weren't present before.
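
(In Haskell terms, a small sketch with invented names: each equivalence
is just an ordinary function, and the caller chooses which one to apply.)

    import Data.Char (toLower)
    import Data.List (find)

    -- Two different equivalence relations on strings, each just an operator.
    sameBytes :: String -> String -> Bool
    sameBytes = (==)                                   -- exact comparison

    sameTextCI :: String -> String -> Bool
    sameTextCI a b = map toLower a == map toLower b    -- case-insensitive

    -- The caller picks whichever relation fits the situation.
    lookupBy :: (k -> k -> Bool) -> k -> [(k, v)] -> Maybe v
    lookupBy eq k = fmap snd . find (eq k . fst)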

Regards,
Jo
 

Marcin 'Qrczak' Kowalczyk

Joachim Durchholz said:
Equality is too complicated anyway.

I encountered only two conceptual difficulties with equality:

- If many important types in a language are mutable (e.g. strings,
lists), the relation which should play the role of equality for these
values according to general principles (object identity) is usually
wanted less often than comparing elements.

- When comparing numbers, it's often useful to make models of the same
mathematical value expressed with different types compare as equal,
and to have special features of floating point equality (-0.0 is
equal to 0.0 even though they can be distinguished, NaN is not equal
to itself). This is incompatible with using equality for the most
distinguishing equivalence relation.

Of course various languages yield additional problems with associating
implementation of equality with types, but these are weaknesses of
particular languages, not conceptual problems.
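
(The floating-point points above can be checked directly; this is
standard IEEE behaviour, shown here in Haskell.)

    checks :: [Bool]
    checks =
      [ (-0.0 :: Double) == 0.0                   -- True
      , isNegativeZero (-0.0 :: Double)           -- True, so they can be told apart
      , let nan = 0 / 0 :: Double in nan == nan   -- False: NaN is not equal to itself
      ]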
And it isn't even decidable if functions can be constructed at
run-time (as is common in functional languages).

They should not be comparable for equality.
Besides, as soon as the data structures get large, the concept of
equality becomes less and less clear.

But for those types for which it is clear, the language should provide
a mechanism for this.

When I make a dictionary indexed by pairs of strings, I don't want to
be forced to manually specify how they should be compared.
Actually it isn't even clear for strings. If you have ASCII,
multi-byte, and UTF strings, should they be "equal" if they denote
the same character sequence, or should they be "equal" if they are
byte-for-byte equal?

They should be equal if they cannot be distinguished by other means,
not equal if they can.

Applying a lossy conversion between strings for determining equality
is asking for trouble. It's important which strings use which format.

Anyway, I see place for at most two string types in a general purpose
language: sequences of Unicode code points, and sequences of bytes.
BTW this isn't limited to representational variation: sometimes the
appropriate equivalence relationship is case-insensitive comparison,
and nobody finds it strange that it's syntactically different from
standard string equality!

Of course, but this is not equality, just some equivalence relation.
Structures like hash tables should not be restricted to using equality
on keys, as sometimes I want the keys to be compared e.g. case
insensitively. But equality should be the default because it's the
most common case and shouldn't be surprising.

My favorite way of providing the relation for key comparison is not
specifying a binary relation plus a hashing function, but specifying
a function which transforms the keys into things which should then be
compared using the default equality. Such a function might, for example,
fold string case or extract a tuple of fields from a record (if the
equality for this record isn't already defined to compare these fields).

In theory this is not as general as an arbitrary binary relation,
but it covers all the cases I have encountered, and it's easier to use:
there is no need to provide a hashing function compatible with the
equivalence.
It's also faster because the hash table stores transformed keys
instead of transforming them on the fly during searching.
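
(A small sketch of this scheme, with invented names and Data.Map
standing in for the hash table: the caller supplies only the key
transformation, and the structure stores and compares the transformed
keys.)

    import qualified Data.Map as Map
    import Data.Char (toLower)

    -- Keys are normalised once on the way in, then compared with the
    -- default ordering/equality of the transformed key.
    data KeyedMap k k' v = KeyedMap (k -> k') (Map.Map k' v)

    emptyKM :: (k -> k') -> KeyedMap k k' v
    emptyKM f = KeyedMap f Map.empty

    insertKM :: Ord k' => k -> v -> KeyedMap k k' v -> KeyedMap k k' v
    insertKM k v (KeyedMap f m) = KeyedMap f (Map.insert (f k) v m)

    lookupKM :: Ord k' => k -> KeyedMap k k' v -> Maybe v
    lookupKM k (KeyedMap f m) = Map.lookup (f k) m

    -- A case-insensitive dictionary: the supplied transformation folds case.
    ciDict :: KeyedMap String String Int
    ciDict = insertKM "Hello" 1 (emptyKM (map toLower))
    -- lookupKM "HELLO" ciDict  ==>  Just 1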
Next is equivalence under mutability. Say, a data structure has some
expensive-to-compute field that's present if it ever was queried,
absent otherwise (FPLers will recognise this as a "memoised" value).
Should two data objects with a different status in this field be
considered equal or not equal?

Does code outside the implementation of that structure need to
distinguish between this field being already computed and not, or is
this an internal detail where the value is always computed when asked
for? In the first case the presence of the field should be taken into
account; in the second case it should not.
 

Chris F Clark

Andreas Rossberg said:
Indeed. In fact, I think your initial conjecture isn't even true: for
non-toy compilers it happens much more often that you introduce new
optimisations, new backends, whatever, i.e. new functions that operate
on (parts of) an AST, than that you change the language. Moreover, the
sheer number of auxiliary functions on ASTs that you need in a
realistic compiler already makes the OO approach unsuitable. Having
details of optimisation phases or multiple backends be scattered
through a global class hierarchy is totally the wrong kind of
modularisation.

Have you ever tried writing a non-trivial compiler with the OO
approach? I have and maintain one on a daily basis. Now, I won't
dispute many of your claims, except that the OO approach is
unsuitable. You don't get too many auxiliary functions scattered over
too many classes unless your design is poor. Both the number of classes
and the number of functions in the compiler I maintain are too large
for me to track in my head (or even to estimate, other than to say
that both are well into the hundred(s)). However, I doubt that many
of the functions have specializations over more than a handful of
classes. The simple fact is that most functions if designed right are
not specialized except in terms of some very generic attributes, and
those attributes group into very regular hierarchies. Of course, the
hierarchies are not necessarily easily disentangled trees down to the
leaf level, so it is often useful to organize them as "properties"
(where the properties use inheritance hierarchies).

In addition, I would like to suggest that even in an evolving compiler
for a stable language, one is likely to add new AST types. That is a
very convenient way of adding a new specialization. If some
transformation yields a new result, it is useful to add one (or more)
AST types to represent the result of that transformation, even if
initially it may appear to be semantically equivalent to some other AST
type. Adding the new type allows one to be more precise in the
semantics, as the transformed node is often a special case of the
semantically equivalent AST. For example, if you run a phase that sorts
commutative operators so that constants only appear as the right
argument, it can be useful to add sorted_add AST nodes to indicate the
transformed outputs. Again, "sorted" may also be carried as a property
rather than a subtype, especially if it may be attached to many
distinct AST types. In either case, you can then specialize
transformations that are only safe (or more efficient, etc.) when
applied to sorted commutative operators.
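
(In ADT terms the same idea looks roughly like this; a toy sketch, not
Chris's compiler: the sorted form is a new constructor carrying the
invariant, and later phases specialise on it.)

    data Expr
      = Const Int
      | Var String
      | Add Expr Expr          -- operands in source order
      | SortedAdd Expr Expr    -- invariant: a constant, if present, is on the right

    -- The normalising phase introduces the more precise node.
    sortAdds :: Expr -> Expr
    sortAdds (Add a@(Const _) b) = SortedAdd (sortAdds b) a
    sortAdds (Add a b)           = SortedAdd (sortAdds a) (sortAdds b)
    sortAdds e                   = e

    -- Later phases can specialise on the invariant.
    foldConst :: Expr -> Expr
    foldConst (SortedAdd (Const a) (Const b)) = Const (a + b)
    foldConst e                               = e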

I think OO inheritance gets criticized based on too many naive uses.
Too many people assume that it is a way of dividing the world into
distinct hierarchies, which doesn't match the shape of the world at
all. It is much better for defining things where "this works like
that except for here and there". When used that way, one doesn't get
functions scattered over great masses of code. You get nice locality.

The point is that when adding either N+1 or M+1, you usually have only
a few special cases to deal with. You are generally adding both N+1
and M+1 at the same time, where the extra case(s) in the other
dimension capture some distinction your previous design didn't
distinguish. So you are really only adding a few cases to cover both
N+1 and M+1: one case for the primary body that applies almost
universally, and some cases for the new distinguished elements,
depending on how many categories of them there are.

Of course, this method only works if you are distinguishing things in
terms of characteristics (e.g. round, regular, parallel sides) rather
than final types (circle, ellipse, triangle, square, star).

-Chris
 

Marcin 'Qrczak' Kowalczyk

Chris F Clark said:
Have you ever tried writing a non-trivial compiler with the OO
approach? I have and maintain one on a daily basis. Now, I won't
dispute many of your claims, except that the OO approach is
unsuitable. You don't get too many auxiliary functions scattered over
too many classes unless your design is poor.

I've written a compiler using generic functions
(http://sourceforge.net/cvs/?group_id=110425). This means that
dispatched functions are defined separately from types they dispatch
on, and their specializations are defined separately from generic
functions and from the types (I tend to write them near the
introductions of generic functions though).

There are several phases in the compiler. Each phase examines the
result of the previous phase and prepares data for the next phase.
Between most pairs of phases the module is represented using a family
of types for different kinds of tree nodes. Each time it's a different
family of types. Families of types usually form a two-level hierarchy,
for example "lifted" code has abstract supertypes for expressions,
patterns, definitions and meanings, and each of them is further split
into concrete types.

Below are the counts of generic functions dispatched on nodes of
each kind of intermediate code, for each module (I didn't count
other generic functions, e.g. dispatched on the types of literals).

A traditional OO approach would force one to put all functions dispatched
on the same type together. For example, pretty printing is quite
independent from other processing and is used only for debugging (except
in the final phase), so I defined it in separate modules, where it
doesn't get in the way when I'm looking at more important code.

(An OO approach would perhaps use fewer function names, because for
clarity I use different generic functions for domain subsets which are
never mixed and dispatched dynamically. For example I have separate
functions LiftExpr, Lift1Pat, Lift2Pat, Lift1Def, Lift2Def. An OO
approach would probably spell them just Lift, Lift1 and Lift2,
especially as they would be put in separate namespaces. So the real
function counts in an OO approach might be smaller, but the amount
of forced demodularization would remain the same.)

Source code:
- SourcePrinter: 1 (pretty printing)
- Expander: 17 (transforming source code to abstract code)

Abstract code:
- Abstract: 2 (type definitions, auxiliary functions)
- AbstractPrinter: 6 (pretty printing)
- Interfaces: 1 (generating interface files)
- Expander: 5 (some minor functions)
- Lifter: 17 (transforming abstract code to lifted code)

Lifted code:
- Lifted: 5 (type definitions, auxiliary functions)
- LiftedPrinter: 4 (pretty printing)
- Lifter: 3 (some minor functions)
- Planner: 16 (transforming lifted code to sequential code)

Sequential code:
- Sequential: 4 (type definitions, auxiliary functions)
- SequentialPrinter: 4 (pretty printing)
- Planner: 2 (some minor functions)
- CCoder: 12 (transforming sequential code to C code)

C code:
- CCode: 1 (type definitions, auxiliary functions)
- CCodePrinter: 8 (pretty printing)
- CCoder: 2 (some minor functions)

With a traditional OO approach the same design would be doable, at the
cost of being forced to put together the type definitions of a phase, auxiliary
functions working on these types, pretty printing, and the main work
of transforming these values to the next representation. I prefer
to have freedom to arrange modules according to my taste rather than
being forced to put together functions just because they are dispatched
on the same family of types.

Currently the representations are independent from one another, and
a module which implements a transformation phase depends only on the
representation it examines and the representation it produces. The OO
approach would significantly increase the depth of the dependency
graph: each family of type definitions would depend on the following
phases (because it includes code which produces the next phase).
Technically this might not mean anything; it only hurts my taste.

In this language modules are used as units of name visibility. For
example each module which implements a transformation phase exports
only one name, a function transforming the whole module being
compiled. If visibility boundaries were tied to classes, I would have
to either make more names public, or introduce lots of "friendships"
(or whatever the language provides for extending visibility). I prefer
to be able to design visibility domains independently from grouping
functions for the purpose of dispatching.
 

Marcin 'Qrczak' Kowalczyk

Marcin 'Qrczak' Kowalczyk said:
With a traditional OO approach the same design would be doable, at the
cost of being forced to put together the type definitions of a phase, auxiliary
functions working on these types, pretty printing, and the main work
of transforming these values to the next representation. I prefer
to have freedom to arrange modules according to my taste rather than
being forced to put together functions just because they are dispatched
on the same family of types.

Two more points.

Besides generic functions which would correspond to dispatched methods
in OOP, there is other code. For example, the CCoder module (which
transforms sequential code to C code) includes many constants and
auxiliary functions for building parts of C code trees (something like
smart constructors, compositions of constructors etc.).

Note that I've even given this module the name which corresponds to
the kind of code it produces, not the kind of code it examines. This
transformation is more tied to the representation of its target than
to its source.

But with an OO approach these constants and functions operating
exclusively on C code would naturally go to the module which defines
types of C code, together with the pretty printer for the C code, but
far from the implementation of transformation of sequential code into
C code which actually uses them.

Well, I could write them in the module which uses them, i.e. the
module which includes the mentioned transformation, the one that works
on sequential code (all of them would be static constants or static
functions). This is consistent with my current design, but I'm afraid
contrary to typical OO practice which puts operations manipulating
values of some type together with the type definition, no matter
whether it builds them or examines them.


Another point. At some stage I plan to add an internal interpreter (for
macros) of the language which is currently being compiled. This means
adding code which transforms some representation (probably lifted code)
into a representation which is suitable for execution.

In the current design this is possible by adding new modules, without
modifying any existing code. Of course they might trigger some tweaks
to existing code for convenience or efficiency, but in principle none
are needed.

In a design where functions dispatched on a type are defined together
with the type, I would have to add methods to existing classes.
The module which defines the lifted code would deal not only with
transforming it into sequential code for the compiler, but also with
making executable code for the interpreter from it.

I definitely prefer to have them written separately. The new module
will have little in common with existing code; it just dispatches on
the same family of types.

I admit that many of these issues are a matter of taste rather than
technical difficulties. It's hard to objectively examine how well code
is modularized and how focused the changes are when it evolves.
 

Chris F Clark

Marcin 'Qrczak' Kowalczyk said:
I've written a compiler using generic functions
(http://sourceforge.net/cvs/?group_id=110425).

I think we have a different definition of what OO means, since much of
your description would match what I consider an OO approach, aside from
your persistent insistence that certain things be defined separately.
And I don't consider that to be relevant as to whether something is
OO. It's the dispatch to distinct functions which makes something OO,
because that's what allows programming by differences. (That is,
in my book, if one is doing dynamic dispatch (especially on the type of
a function operand), one is doing OO. I realize that is not everyone's
definition, and for other people it is the strong association of
functions with types, which seems to be what you are critiquing.)
Currently the representations are independent from one another, and
a module which implements a transformation phase depends only on the
representation it examines and the representation it produces. The OO
approach would significantly increase the depth of the dependency
graph: each family of type definitions would depend on the following
phases (because it includes code which produces the next phase).

There is more likely to be some additional coupling in an OO approach.
However, not nearly as severe as you propose. It's more severe if you
use a "pure" OO approach where every function must be part of a type.
I prefer to be able to design visibility domains independently from
grouping functions for the purpose of dispatching.

That's not an unreasonable desire.
This is consistent with my current design, but I'm afraid contrary
to typical OO practice which puts operations manipulating values of
some type together with the type definition, no matter whether it
builds them or examines them.

Good OO practice puts the functions together which belong together.
The one caveat is that functions which are dispatched upon a type must
be associated with that type. However, good OO design would limit
that function to the part which is actually specialized.
In a design where functions dispatched on a type are defined together
with the type, I would have to add methods to existing classes.

Yes, it is true that the code which you wish to specialize on a type
must be defined "with" the type. Although in some OO languages, you
can "define" the type "piecemeal" and thus make this point moot. Even
in languages which require the type to be defined as one unit,
generally the code body for the function need not be in the type
declaration, so that one can still achieve the desired separation.
The type merely serves as a point where one can go to look and see a
listing of all the functions which specialize on it. The balance
between whether that is a burden or a blessing depends in part on
perspective.

-Chris
 

Joachim Durchholz

Marcin said:
[...] typical OO practice which puts operations manipulating
values of some type together with the type definition, no matter
whether it builds them or examines them.

Um... that's true for languages where there's no life outside of classes
(e.g. Smalltalk or Eiffel). I always found that approach a bit too far
on the dogmatic side, and would want modules that are *not* classes. (In
fact I have seen and programmed a few classes that were just containers
for functions, not data abstractions. That's contrary to the True OO
Way, but it made the software definitely more manageable.)

Regards,
Jo
 

Torben Ægidius Mogensen

Marcin said:
My second favorite is Haskell classes with common extensions
(multiparameter classes, functional dependencies etc.). Availability
of operations is thus checked statically. The system is limited
by static typing, e.g. you can't use Haskell classes for different
kinds of nodes in an abstract syntax tree - you must use some other
mechanism for that.

While type classes might not be suitable for individual nodes in a
syntax tree, they work well for syntactic categories -- expressions,
statements, declarations, etc. For example, you can make a "pretty
print" type class and instantiate it for each category, so you don't
need to invent names for each instance, and you can use a generic
(overloaded) function for things such as error messages that return a
message string and a pretty-print of the offending sub-tree.
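
(A minimal sketch of this, with placeholder types:)

    class Pretty a where
      pretty :: a -> String

    data Expr = Lit Int | Neg Expr
    data Stmt = ExprStmt Expr | Skip

    instance Pretty Expr where
      pretty (Lit n) = show n
      pretty (Neg e) = "-(" ++ pretty e ++ ")"

    instance Pretty Stmt where
      pretty (ExprStmt e) = pretty e ++ ";"
      pretty Skip         = "skip;"

    -- One overloaded reporting function serves every syntactic category.
    typeError :: Pretty a => String -> a -> String
    typeError msg tree = msg ++ ": " ++ pretty tree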

Torben
 

Torben Ægidius Mogensen

If you have N kinds of things ("objects") and M different things
("methods") you want to do with them, OO allows you to have the M
different "methods" close to each other for each "object", but the N
different "objects" will be spaced out. Using an ADT (ML/Haskell
datatype) allows you to keep the N "objects" together, but separates
the M "methods". You can view it as an NxM matrix that you lay out
either in row-major or column-major format.

Has anyone considered a language/environment that allows both kinds
of views of the matrix, i.e., where you can switch between the views
as you like?

Consider a program written as guarded equations. In a functional
language, you would normally have all equations for a function right
after each other, one for each constructor in the datatype. But you
might also allow the equations to be spread out, so you collect all
equations for the same constructor, but different functions. A tool
could easily transform the program between the forms, or provide
different views of an internal representation that might be different
from both. Such a tool could also show which equations you would need
to add if you add a new constructor or function symbol (by making
"scaffolds" for each).

Such an approach would require equations to be independent of order,
i.e., either disjoint or governed by a rule like HOPE's, where overlaps
are allowed if one pattern is more specific than the other (in which
case it takes precedence). So it wouldn't immediately work for, say,
Haskell or SML.
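
(To make the two views concrete, a toy example: the same equations
grouped by function, and, in the comment, the constructor-major view a
tool might present.)

    data Shape = Circle Double | Square Double

    -- Function-major view (the usual source layout):
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s

    perimeter :: Shape -> Double
    perimeter (Circle r) = 2 * pi * r
    perimeter (Square s) = 4 * s

    -- Constructor-major view, as a tool might display the same equations:
    --   Circle r:  area = pi * r * r   perimeter = 2 * pi * r
    --   Square s:  area = s * s        perimeter = 4 * s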

Torben
 

Jens Axel Søgaard

Chris said:
I think we have a different definition of what OO means, since much of
your description would match what I consider an OO approach, aside from
your persistent insistence that certain things be defined separately.
And I don't consider that to be relevant as to whether something is
OO. It's the dispatch to distinct functions which makes something OO,
because that's what allows programming by differences. (That is,
in my book, if one is doing dynamic dispatch (especially on the type of
a function operand), one is doing OO. I realize that is not everyone's
definition, and for other people it is the strong association of
functions with types, which seems to be what you are critiquing.)

The obligatory reference to Rees's list of OO features or
properties:

<http://store.yahoo.com/paulgraham/reesoo.html>
 

Marcin 'Qrczak' Kowalczyk

Jens Axel Søgaard said:
The obligatory reference to Rees's list of OO features or
properties:

<http://store.yahoo.com/paulgraham/reesoo.html>

For my language the only strong "no" is 9. Others are either "yes"
(1, 3, 4, 5, 7), or "can be made true but the common design makes it
false" (2), or "technically true but probably not in the way it was
intended" (6), or "hard to tell whether a feature qualifies as this"
(8). The concepts are too vague for me to be able to clearly state
whether they are supported.

I understood Andreas's critique as criticizing 9.
 

Joachim Durchholz

Jens said:
The obligatory reference to Rees's list of OO features or
properties:

<http://store.yahoo.com/paulgraham/reesoo.html>

Argh... that list is a horrid mixture of valid points, non sequiturs,
and simple ideology. If an item is in the list, this says more about the
mental state of those who used it than about whether a language is OO or
not...

Judging by the commentary that follows the list, Jonathan Rees was well
aware of that. Just please don't use this list to decide whether your
favorite language is OO! I think points #1 (encapsulation), #3 (ad-hoc
polymorphism), #5 (everything is an object), #7 (subtyping/specification
inheritance), #8 (implementation inheritance), and #9 (dispatch on first
parameter) can indeed be used to characterise a language as "more" or
"less" or "this kind of" OO. Say, a language can be considered OO if it
has at least 50% of these points, or something like that...

#2 is just a consequence of #1 done well. Together with #4, these are
non sequiturs: there are distinctly non-OO languages that have it
(particularly the first public version of Ada), and there are OO
languages that don't have it (Smalltalk, early Eiffel). So these
criteria are useless to decide whether a language is OO or not.

#6 is just Smalltalk ideology. If you look at the way that "message
sends" are done or defined, you find it's dynamic dispatch (already
covered as #9) as the only access method (#1) plus a standard subroutine
call (another non sequitur).

Regards,
Jo
 

Ketil Malde

Jens Axel Søgaard said:
Chris F Clark wrote:

I think it is an overly broad definition, that is, if I understand the
terms correctly. Aren't algebraic data types a form of dynamic
dispatch? Certainly Haskell's classes seem to count.

Is Haskell then an OO language?
The obligatory reference to Rees's list of OO features or
properties:

<http://store.yahoo.com/paulgraham/reesoo.html>

I don't think this list is particularly good, either. E.g. in point
8, the description of "implementation inheritance" is so vague as to
cover almost any kind of abstraction. Perhaps that's a slight
exaggeration on my part, but I don't see why it wouldn't cover
e.g. HOFs.

-k
 

Andreas Rossberg

Nice list (although I don't really see the difference between
encapsulation and protection).
Marcin said:
I understood Andreas's critique as criticizing 9.

Right. IMO the defining characteristic of "object-oriented" programming
is structuring programs around autarkic "objects" that contact each
other in some way, but which all decide "by themselves" how they react
to that, in a black-box sort of manner. That's what the term suggests,
and it is the only characteristic from the above list which you do
/not/ find in other paradigms one way or the other. I sometimes call it
"internalisation of operations", wrt a value. Technically, this is
usually modeled by the "sum-of-product-of-function" idea described in
the list.
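
(To make "internalisation of operations" concrete: a sketch using the
standard record-of-functions encoding, with invented names, not
Andreas's code.)

    type Point = (Double, Double, Double)

    -- An "object" is a record of its operations; each value carries its
    -- own behaviour and callers treat it as a black box.
    data Shape = Shape
      { contains    :: Point -> Bool
      , boundingBox :: (Point, Point)
      }

    sphere :: Point -> Double -> Shape
    sphere (cx, cy, cz) r = Shape
      { contains    = \(x, y, z) -> (x-cx)^2 + (y-cy)^2 + (z-cz)^2 <= r*r
      , boundingBox = ((cx-r, cy-r, cz-r), (cx+r, cy+r, cz+r))
      }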

I'm aware that some people are using the term OO in a wider sense, but I
don't like it. First - as Rees notes correctly - it practically renders
the term meaningless. Second, I believe it usually stems from either the
desire to comply with the marketing hype ("see, <my favorite language>
is OO too"), or the attempt to argue for the superiority of OO ("see,
all modern languages are OO"), or from the lack of background regarding
many of the ideas now captured as "inherently" OO ("OO is great because
it gives us modularity"). I find neither an acceptable reason.

In particular, I disagree with considering "dynamic dispatch" or "late
binding" OO. Or encapsulation. Or polymorphism. All are ingredients for
realising the above idea, but not defining characteristics. All of them
are found in any decent language. Often in much more flexible ways than
in typical OO languages.
 

Jens Axel Søgaard

Ketil Malde said:
I don't think this list is particularly good, either.

If nothing else, the list shows that one man's OO is
different from another's. As such it provides
a good vehicle for getting an OO discussion on track.

That is, I think Chris's and Marcin's discussion
is more interesting than the list itself.
 
