What is Expressiveness in a Computer Language


Joachim Durchholz

Matthias said:
Erlang relies on a combination of purity, concurrency, and message
passing, where messages can carry higher-order values.

Data structures are immutable, and each computational agent is a
thread. Most threads consist of a loop that explicitly passes state
around. It dispatches on some input event, applies a state
transformer (which is a pure function), produces some output event (if
necessary), and goes back to the beginning of the loop (by
tail-calling itself) with the new state.
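The loop Matthias describes can be sketched in Python (not Erlang syntax; the mailbox, events, and the `counter_step` transformer are illustrative names, not any real API):

```python
import queue

def counter_step(state, event):
    """Pure state transformer: maps (state, event) to (new_state, output)."""
    if event == "inc":
        return state + 1, None
    if event == "get":
        return state, state
    return state, None

def run_process(mailbox, step, state):
    """Dispatch loop: receive an event, apply the pure transformer,
    collect any output, and loop with the new state."""
    outputs = []
    while True:
        event = mailbox.get()
        if event == "stop":
            return state, outputs
        state, out = step(state, event)
        if out is not None:
            outputs.append(out)

mailbox = queue.Queue()
for ev in ["inc", "inc", "get", "stop"]:
    mailbox.put(ev)
final_state, outputs = run_process(mailbox, counter_step, 0)
```

All mutation is confined to the loop variable; the transformer itself stays pure.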

Actually any Erlang process, when seen from the outside, is impure: it
has observable state.
However, from what I hear, such state is kept to a minimum. I.e. the
state involved is just the state that's mandated by the purpose of the
process, not by computational bookkeeping - you won't send file
descriptors in a message, but maybe information about the state of some
hardware, or about a permanent log.

So to me, the approach of Erlang seems to amount to "make pure
programming so easy and efficient that you aren't tempted to introduce state
that isn't already there".

Regards,
Jo
 

Stephen J. Bevan

Darren New said:
No. AFAIU, an ADT defines the type based on the operations. The stack
holding the integers 1 and 2 is the value (push(2, push(1,
empty()))). There's no "internal" representation. The values and
operations are defined by preconditions and postconditions.

As a user of the ADT you get a specification consisting of a signature
(names the operations and defines their arity and type of each
argument) and axioms which define the behaviour of the operations.

The implementer has the task of producing an implementation (or
"model", to use algebraic specification terminology) that satisfies the
specification. One model that comes for free is the term model where
the operations are treated as terms and the representation of a stack
containing 2 and 1 would just be "(push(2, push(1, empty())))".
However, while that model comes for free it isn't necessarily the most
efficient model. Thus the non-free aspect of choosing a different
implementation is that technically it requires an accompanying proof
that the implementation satisfies the specification.
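The term model versus an efficient model can be sketched in Python (a toy illustration, not algebraic-specification tooling; the axioms appear as assertions):

```python
# Term model: a stack is just the tree of operations that built it.
def empty():
    return ("empty",)

def push(x, s):
    return ("push", x, s)

def top(s):
    # Axiom: top(push(x, s)) == x
    assert s[0] == "push", "top of empty stack is unspecified"
    return s[1]

def pop(s):
    # Axiom: pop(push(x, s)) == s
    assert s[0] == "push", "pop of empty stack is unspecified"
    return s[2]

# A more efficient model: a Python list, same signature.
def empty_l():    return []
def push_l(x, s): return s + [x]
def top_l(s):     return s[-1]
def pop_l(s):     return s[:-1]

# Both models satisfy the same axioms:
s = push(2, push(1, empty()))
assert top(s) == 2 and top(pop(s)) == 1

sl = push_l(2, push_l(1, empty_l()))
assert top_l(sl) == 2 and top_l(pop_l(sl)) == 1
```

The user sees only the signature and axioms; which model sits behind them is the implementer's business (and proof obligation).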
 

Joachim Durchholz

Andreas said:
AFAICT, ADT describes a type whose values can only be accessed by a
certain fixed set of operations. Classes qualify for that, as long as
they provide proper encapsulation.

The first sentence is true if you associate a semantics (i.e. axioms)
with the operations. Most OO languages don't have a place for expressing
axioms (except via comments and handwaving), so they still don't fully
qualify.

Regards,
jo
 

George Neuner

It appears you've written the code above to assume that the type system
can't certify that age >= 18... otherwise, the if statement would not
make sense. It also looks like Java, in which the type system is indeed
not powerful enough to do that check statically. However, it sounds as
if you're claiming that it wouldn't be possible for the type system to
do this? If so, that's not correct. If such a thing were checked at
compile-time by a static type check, then failing to actually provide
that guarantee would be a type error, and the compiler would tell you
so.

Now this is getting ridiculous. Regardless of implementation
language, Pascal's example is of a data dependent, runtime constraint.
Is the compiler to forbid a person object from aging? If not, then
how does this person object suddenly become a different type when it
becomes 18? Is this conversion to be automatic or manual?

The programmer could forget to make a manual conversion, in which case
the program's behavior is wrong. If your marvelous static type
checker does the conversion automatically, then obviously types are
not static and can be changed at runtime.

Either way you've failed to prevent a runtime problem using a purely
static analysis.


George
 

Chris Smith

George Neuner said:
Now this is getting ridiculous.

It's not at all ridiculous. The fact that it seems ridiculous is a good
reason to educate people about what static typing does (and doesn't)
mean, and specifically that it doesn't imply any constraints on the kind
of behavioral property that's being checked; but only on the way that
the check occurs.

(As a subtle point here, we're assuming that this really is an error;
i.e., it shouldn't happen in a correct program. If this is validation
against a user mistake, then that's a different matter; obviously, that
would typically be done with a conditional statement like if, and
wouldn't count as a type error in any context.)
Regardless of implementation
language, Pascal's example is of a data dependent, runtime constraint.

99% of program errors (excluding only syntax errors and the like) are
data-dependent runtime errors. Types are a way of classifying that data
so that such errors become obvious at compile time.
Is the compiler to forbid a person object from aging? If not, then
how does this person object suddenly become a different type when it
becomes 18? Is this conversion to be automatic or manual?

The object doesn't have a type (this is static typing, remember?), but
expressions do. The expressions have types, and those types are
probably inferred in such a circumstance (at least, this one would be
even more of a pain without type inference).
If your marvelous static type
checker does the conversion automatically, then obviously types are
not static and can be changed at runtime.

There really is no conversion, but it does infer the correct types
automatically. It does so by having more complex rules regarding how to
determine the type of an expression. Java's current type system, of
course, is considerably less powerful. In particular, Java mostly
assures that a variable name has a certain type when used as an
expression, regardless of the context of that expression. Only with
some of the Java 5 features (capture conversion, generic method type
inference, etc) did this cease to be the case. It would be necessary to
drop this entirely to make the feature above work. For example,
consider the following in a hypothetical Java language augmented with
integer type ranges:

int{17..26} a;
int{14..22} b;

...

if (a < b)
{
// Inside this block, a has type int{17..21} and b has type
// int{18..22}

signContract(a); // error, because a might be 17
signContract(b); // no error, because even though the declared
// type of b is int{14..22}, it has a local
// type of int{18..22}.
}

(By the way, I hate this example... hence I've changed buyPorn to
signContract, which at least in the U.S. also requires a minimum age of
18. Hopefully, you'll agree this doesn't change the substantive
content. :)
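The refinement the hypothetical checker performs on `a < b` can be sketched dynamically in Python (a runtime approximation of the static analysis; `Interval`, `refine_less_than`, and the minimum-age constant are illustrative names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int

def refine_less_than(a, b):
    """Given that a < b held, tighten both intervals the way a
    flow-sensitive checker would: a is at most b.hi - 1, and b
    is at least a.lo + 1."""
    return (Interval(a.lo, min(a.hi, b.hi - 1)),
            Interval(max(b.lo, a.lo + 1), b.hi))

SIGNING_AGE = 18  # hypothetical requirement of signContract

def can_sign(t):
    """Static acceptance: every value of type t meets the requirement."""
    return t.lo >= SIGNING_AGE

a = Interval(17, 26)
b = Interval(14, 22)
a2, b2 = refine_less_than(a, b)
assert (a2, b2) == (Interval(17, 21), Interval(18, 22))
assert not can_sign(a2)  # error, because a might be 17
assert can_sign(b2)      # ok: inside the block, b is at least 18
```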

Note that when a programmer is validating input from a user, they will
probably write precisely the kind of conditional expression that allows
the compiler to deduce a sufficiently restrictive type and thus allows
the type checker to succeed. Or if they don't do so -- say, for
example, the programmer produces a fencepost error by accidentally
entering "if (age >= 17)" instead of "if (age > 17)" -- then the
compiler will produce a type error as a result. If the programmer
stores the value into a variable of general type int (assuming such a
thing still exists in our augmented Java-esque language), then the
additional type information is forgotten, just like type information is
forgotten when a reference of type String is assigned to a reference of
type Object. In both cases, a runtime-checked cast will be required to
restore the lost type information.

Incidentally, I'm not saying that such a feature would be a good idea.
It generally isn't provided in languages specifically because it gets to
be a big pain to maintain all of the type specifications for this kind
of stuff. However, it is possible, and it is a static type system,
because expressions are assigned types syntactically, rather than values
being checked for correctness at runtime.
 

Marshall

Chris said:
[...] static typing does ... doesn't imply any constraints on the kind
of behavioral property that's being checked; but only on the way that
the check occurs.

Nice post.


Marshall
 

Chris Smith

I said:
Incidentally, I'm not saying that such a feature would be a good idea.
It generally isn't provided in languages specifically because it gets to
be a big pain to maintain all of the type specifications for this kind
of stuff.

There are other good reasons, too, as it turns out. I don't want to
overstate the "possible" until it starts to sound like "easy, even if
it's a pain". This kind of stuff is rarely done in mainstream
programming languages because it has serious negative consequences.

For example, I wrote that example using variables of type int. If we
were to suppose that we were actually working with variables of type
Person, then things get a little more complicated. We would need a few
(infinite classes of) derived subtypes of Person that further constrain
the possible values for state. For example, we'd need types like:

Person{age:{18..29}}

But this starts to look bad, because we used to have this nice property
called encapsulation. To work around that, we'd need to make one of a
few choices: (a) give up encapsulation, which isn't too happy; (b) rely
on type inference for this kind of stuff, and consider it okay if the
type inference system breaks encapsulation; or (c) invent some kind of
generic constraint language so that constraints like this could be
expressed without exposing field names. Choice (b) is incomplete, as
there will often be times when I need to ascribe a type to a parameter
or some such thing, and the lack of ability to express the complete type
system will be rather limiting. Choice (c), though, looks a little
daunting.

So I'll stop there. The point is that while it is emphatically true
that this kind of stuff is possible, it is also very hard in Java.
Partly, that's because Java is an imperative language, but it's also
because there are fundamental design trade-offs involved between
verbosity, complexity, expressive power, locality of knowledge, etc.
that are bound to be there in all programming languages, and which make
it harder to take one design principle to its extreme and produce a
usable language as a result. I don't know that it's impossible for this
sort of thing to be done in a usable Java-like language, but in any
case, the way to accomplish it is not obvious.
 

Marshall

Chris said:
But this starts to look bad, because we used to have this nice property
called encapsulation. To work around that, we'd need to make one of a
few choices: (a) give up encapsulation, which isn't too happy; (b) rely
on type inference for this kind of stuff, and consider it okay if the
type inference system breaks encapsulation; or (c) invent some kind of
generic constraint language so that constraints like this could be
expressed without exposing field names. Choice (b) is incomplete, as
there will often be times when I need to ascribe a type to a parameter
or some such thing, and the lack of ability to express the complete type
system will be rather limiting. Choice (c), though, looks a little
daunting.

Damn the torpedoes, give me choice c!

I've been saying for a few years now that encapsulation is only
a hack to get around the lack of a decent declarative constraint
language.


Marshall
 

Joachim Durchholz

Chris said:
For example, I wrote that example using variables of type int. If we
were to suppose that we were actually working with variables of type
Person, then things get a little more complicated. We would need a few
(infinite classes of) derived subtypes of Person that further constrain
the possible values for state. For example, we'd need types like:

Person{age:{18..29}}

But this starts to look bad, because we used to have this nice
property called encapsulation. To work around that, we'd need to
make one of a few choices: [...] (c) invent some kind of generic
constraint language so that constraints like this could be expressed
without exposing field names. [...] Choice (c), though, looks a
little daunting.

That's not too difficult.
Start with boolean expressions.
If you need to check everything statically, add enough constraints that
they become decidable.
For the type language, you also need to add primitives for type
checking, and if the language is stateful, you'll also want primitives
for accessing earlier states (most notably at function entry).
So I'll stop there. The point is that while it is emphatically true
that this kind of stuff is possible, it is also very hard in Java.

No surprise: It's always very hard to retrofit an inference system to a
language that wasn't designed for it.

This doesn't mean it can't be done. Adding genericity to Java was a
pretty amazing feat.
(But I won't hold my breath for a constraint-style type system in Java
anyway... *gg*)

Regards,
Jo
 

Chris Smith

Marshall said:
Damn the torpedoes, give me choice c!

You and I both need to figure out when to go to sleep. :) Work's gonna
suck tomorrow.
I've been saying for a few years now that encapsulation is only
a hack to get around the lack of a decent declarative constraint
language.

Choice (c) was meant to preserve encapsulation, actually. I think
there's something fundamentally important about information hiding that
can't be given up. Hypothetically, say I'm writing an accounting
package and I've decided to encapsulate details of the tax code into one
module of the application. Now, it may be that the compiler can perform
sufficient type inference on my program to know that it's impossible for
my taxes to be greater than 75% of my annual income. So if my income is
stored in a variable of type decimal{0..100000}, then the return type of
getTotalTax may be of type decimal{0..75000}. Type inference could do
that.

But the point here is that I don't WANT the compiler to be able to infer
that, because it's a transient consequence of this year's tax code. I
want the compiler to make sure my code works no matter what the tax code
is. The last thing I need is to go fixing a bunch of bugs during the
time between the release of next year's tax code and the release
deadline for my tax software. At the same time, though, maybe I do want
the compiler to infer that tax cannot be negative (or maybe it can; I'm
not an accountant; I know my tax has never been negative), and that it
can't be a complex number (I'm pretty sure about that one). I call that
encapsulation, and I don't think that it's necessary for lack of
anything; but rather because that's how the problem breaks down.

----

Note that even without encapsulation, the kind of typing information
we're looking at can be very non-trivial in an imperative language. For
example, I may need to express a method signature that is kind of like
this:

1. The first parameter is an int, which is either between 4 and 8, or
between 11 and 17.

2. The second parameter is a pointer to an object, whose 'foo' field is
an int between 0 and 5, and whose 'bar' field is a pointer to another
object with three fields 'a', 'b', and 'c', each of which has the full
range of an unconstrained IEEE double precision floating point number.

3. After the method returns, it will be known that if this object
previously had its 'baz' field in the range m .. n, it is now in the
range (m - 5) .. (n + 1).

4. After the method returns, it will be known that the object reached by
following the 'bar' field of the second parameter will be modified so
that the first two of its floating point numbers are guaranteed to be of
the opposite sign as they were before, and that if they were infinity,
they are now finite.

5. After the method returns, the object referred to by the global
variable 'zab' has 0 as the value of its 'c' field.

Just expressing all of that in a method signature looks interesting
enough. If we start adding abstraction to the type constraints on
objects to support encapsulation (as I think you'd have to do), then
things get even more interesting.
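Specifications like points 1 and 3 above can at least be approximated as runtime pre/postcondition checks; a Python sketch (the method body and all names are made up for illustration):

```python
def check_first_param(x):
    # 1. The first parameter is an int in [4, 8] or [11, 17].
    assert isinstance(x, int) and (4 <= x <= 8 or 11 <= x <= 17)

class Obj:
    def __init__(self, baz):
        self.baz = baz

    def method(self, x):
        check_first_param(x)
        old_baz = self.baz                 # snapshot of the pre-state
        self.baz += -3 if x % 2 else 1     # arbitrary body for the sketch
        # 3. If baz was previously in [m, n], it is now in [m-5, n+1].
        assert old_baz - 5 <= self.baz <= old_baz + 1

o = Obj(baz=10)
o.method(5)   # odd argument: baz 10 -> 7, within [5, 11]
o.method(12)  # even argument: baz 7 -> 8, within [2, 8]
```

Expressing the same facts *statically*, in a method signature that a checker verifies against every caller, is the hard part the post is pointing at.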
 

Joachim Durchholz

Chris said:
I think
there's something fundamentally important about information hiding that
can't be given up.

Indeed.
Without information hiding, with N entities, you have O(N^2) possible
interactions between them. This quickly outgrows the human capacity for
managing the interactions.
With information hiding, you can set up a layered approach, and the
interactions are usually down to something between O(log N) and O(N log
N). Now that's far more manageable.

Regards,
Jo
 

Pascal Bourguignon

Chris Smith said:
But the point here is that I don't WANT the compiler to be able to infer
that, because it's a transient consequence of this year's tax code. I
want the compiler to make sure my code works no matter what the tax code
is. The last thing I need is to go fixing a bunch of bugs during the
time between the release of next year's tax code and the release
deadline for my tax software. At the same time, though, maybe I do want
the compiler to infer that tax cannot be negative (or maybe it can; I'm
not an accountant; I know my tax has never been negative),

Yes, it can. For example in Spain. Theoretically, in France IVA can
also come out negative, and you have the right to ask for
reimbursement, but I've never seen a check from French Tax
Administration...
and that it
can't be a complex number (I'm pretty sure about that one).

I wouldn't bet on it.

For example, French taxes consider "advantages in nature", so your
income has at least two dimensions, Euros and "advantages in
nature". Thankfully, these advantages are converted into Euros, but
you could consider it a product by (complex 0 (- some-factor))...
 

Marshall

Joachim said:
Chris said:
For example, I wrote that example using variables of type int. If we
were to suppose that we were actually working with variables of type
Person, then things get a little more complicated. We would need a few
(infinite classes of) derived subtypes of Person that further constrain
the possible values for state. For example, we'd need types like:

Person{age:{18..29}}

But this starts to look bad, because we used to have this nice
property called encapsulation. To work around that, we'd need to
make one of a few choices: [...] (c) invent some kind of generic
constraint language so that constraints like this could be expressed
without exposing field names. [...] Choice (c), though, looks a
little daunting.

That's not too difficult.
Start with boolean expressions.
If you need to check everything statically, add enough constraints that
they become decidable.

I'm not sure I understand. Could you elaborate?

For the type language, you also need to add primitives for type
checking, and if the language is stateful, you'll also want primitives
for accessing earlier states (most notably at function entry).

Again I'm not entirely clear what this means. Are you talking
about pre/post conditions, or are you talking about having the
constraint language itself be something other than functional?


Marshall
 

Marshall

Chris said:
You and I both need to figure out when to go to sleep. :) Work's gonna
suck tomorrow.

It's never been a strong point. Made worse now that my daughter
is one of those up-at-the-crack-of-dawn types, and not old enough
to understand why it's not nice to jump on mommy and daddy's
bed while they're still asleep. But aren't you actually a time zone
or two east of me?

Choice (c) was meant to preserve encapsulation, actually. I think
there's something fundamentally important about information hiding that
can't be given up. Hypothetically, say I'm writing an accounting
package and I've decided to encapsulate details of the tax code into one
module of the application. Now, it may be that the compiler can perform
sufficient type inference on my program to know that it's impossible for
my taxes to be greater than 75% of my annual income. So if my income is
stored in a variable of type decimal{0..100000}, then the return type of
getTotalTax may be of type decimal{0..75000}. Type inference could do
that.

The fields of an object/struct/what have you are often hidden behind
a method-based interface (sometimes called "encapsulated") only
because we can't control their values otherwise. (The "exposing
the interface" issue is a non-issue, because we're exposing some
interface or another no matter what.) The issue is controlling the
values, and that is better handled with a declarative constraint
language. The specific values in the fields aren't known until
runtime.

However for a function, the "fields" are the in and out parameters.
The specific values in the relation that the function is aren't known
until runtime either, (and then only the subset for which we actually
perform computation.)

Did that make sense?

But the point here is that I don't WANT the compiler to be able to infer
that, because it's a transient consequence of this year's tax code. I
want the compiler to make sure my code works no matter what the tax code
is. The last thing I need is to go fixing a bunch of bugs during the
time between the release of next year's tax code and the release
deadline for my tax software. At the same time, though, maybe I do want
the compiler to infer that tax cannot be negative (or maybe it can; I'm
not an accountant; I know my tax has never been negative), and that it
can't be a complex number (I'm pretty sure about that one). I call that
encapsulation, and I don't think that it's necessary for lack of
anything; but rather because that's how the problem breaks down.

There's some significant questions in my mind about how much of
a constraint language would be static and how much would be
runtime checks. Over time, I'm starting to feel like it should be
mostly runtime, and only occasionally moving into compile time
at specific programmer request. The decidability issue comes up.

Anyone else?

Just expressing all of that in a method signature looks interesting
enough. If we start adding abstraction to the type constraints on
objects to support encapsulation (as I think you'd have to do), then
things get even more interesting.

There are certainly syntactic issues, but I believe these are amenable
to the usual approaches. The runtime/compile time question, and
decidability seem bigger issues to me.


Marshall
 

Joachim Durchholz

Marshall said:
Joachim said:
Chris said:
For example, I wrote that example using variables of type int. If we
were to suppose that we were actually working with variables of type
Person, then things get a little more complicated. We would need a few
(infinite classes of) derived subtypes of Person that further constrain
the possible values for state. For example, we'd need types like:

Person{age:{18..29}}

But this starts to look bad, because we used to have this nice
property called encapsulation. To work around that, we'd need to
make one of a few choices: [...] (c) invent some kind of generic
constraint language so that constraints like this could be expressed
without exposing field names. [...] Choice (c), though, looks a
little daunting.
That's not too difficult.
Start with boolean expressions.
If you need to check everything statically, add enough constraints that
they become decidable.

I'm not sure I understand. Could you elaborate?

Preconditions/postconditions can express anything you want, and they are
an absolutely natural extension of what's commonly called a type
(actually the more powerful type systems have quite a broad overlap with
assertions).
I'd essentially want to have an assertion language, with primitives for
type expressions.
Again I'm not entirely clear what this means. Are you talking
about pre/post conditions,

Yes.

Regards,
Jo
 

Darren New

Chris said:
// Inside this block, a has type int{17..21} and b has type
// int{18..22}

Now what happens if right here you code
b := 16;

Does that again change the type of "b"? Or is that an illegal
instruction, because "b" has the "local type" of (18..22)?
signContract(a); // error, because a might be 17
signContract(b); // no error, because even though the declared
// type of b is int{14..22}, it has a local
// type of int{18..22}.


If the former (i.e., if reassigning to "b" changes the "static type" of
b), then the term you're looking for is not type, but "typestate".

In other words, this is the same sort of test that disallows using an
unassigned variable in a value-returning expression. When
{ int a; int b; b := a; }
returns a compile-time error because "a" is uninitialized at the
assignment, that's not the "type" of a, but the typestate. Just FYI.
Incidentally, I'm not saying that such a feature would be a good idea.

It actually works quite well if the language takes advantage of it
consistently and allows you to designate your expected typestates and such.
 

Chris Smith

Marshall said:
It's never been a strong point. Made worse now that my daughter
is one of those up-at-the-crack-of-dawn types, and not old enough
to understand why it's not nice to jump on mommy and daddy's
bed while they're still asleep. But aren't you actually a time zone
or two east of me?

Yes, I confess I'm one time zone to your east, and I was posting later
than you. So perhaps it wasn't really past your bedtime.
The fields of an object/struct/what have you are often hidden behind
a method-based interface (sometimes called "encapsulated") only
because we can't control their values otherwise.

I believe there are actually two kinds of encapsulation. The kind of
encapsulation that best fits your statement there is the getter/setter
sort, which says: "logically, I want an object with some set of fields,
but I can't make them fields because I lose control over their values".
That part can definitely be replaced, in a suitably powerful language,
with static constraints.

The other half of encapsulation, though, is of the sort that I mentioned
in my post. I am intentionally choosing to encapsulate something
because I don't actually know how it should end up being implemented
yet, or because it changes often, or something like that. I may
encapsulate the implementation of current tax code specifically because
I know that tax code changes on a year-to-year basis, and I want to
ensure that the rest of my program works no matter how the tax code is
modified. There may be huge structural changes in the tax code, and I
only want to commit to leaving a minimal interface.

In practice, the two purposes are not cleanly separated from each other.
Most people, if asked why they write getters and setters, would respond
not only that they want to validate against assignments to the field,
but also that it helps isolate changes should they change the internal
representation of the class. A publicly visible static constraint
language that allows the programmer to change the internal
representation of a class obviously can't make reference to any field,
since it may cease to exist with a change in representation.
However for a function, the "fields" are the in and out parameters.
The specific values in the relation that the function is aren't known
until runtime either, (and then only the subset for which we actually
perform computation.)

Did that make sense?

I didn't understand that last bit.
There are certainly syntactic issues, but I believe these are amenable
to the usual approaches. The runtime/compile time question, and
decidability seem bigger issues to me.

Well, the point of static typing is to do what's possible without
reaching the point of undecidability. Runtime support for checking the
correctness of type ascriptions certainly comes in handy when you run
into those limits.
 

Chris Smith

Darren New said:
No what happens if right here you code
b := 16;

Does that again change the type of "b"? Or is that an illegal
instruction, because "b" has the "local type" of (18..22)?

It arranges that the expression "b" after that line (barring further
changes) has type int{16..16}, which would make the later call to
signContract illegal.
If the former (i.e., if reassigning to "b" changes the "static type" of
b), then the term you're looking for is not type, but "typestate".

We're back into discussion terminology, then. How fun. Yes, the word
"typestate" is used to describe this in a good bit of literature.
Nevertheless, a good number of authors -- including all of them that I'm
aware of in programming language type theory -- would agree that "type"
is a perfectly fine word.

When I said b has a type of int{18..22}, I meant that the type that will
be inferred for the expression "b" when it occurs inside this block as
an rvalue will be int{18..22}. The type of the variable didn't change,
because variables don't *have* types. Expressions (or, depending on
your terminology preference, terms) have types. An expression "b" that
occurs after your assignment is a different expression from the one that
occurs before your assignment, so it's entirely expected that in the
general case, it may have a different type.

It's also the case (and I didn't really acknowledge this before) that
the expression "b" when used as an lvalue has a different type, which is
determined according to different rules. As such, the assignment to b
was not at all influenced by the new type that was arranged for the
expression "b" as an rvalue.

(I'm using lvalue and rvalue intuitively; in practice, these would be
assigned on a case-by-case basis along the lines of actual operators or
language syntax.)
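The point about reassignment can be sketched in Python as a toy flow-sensitive tracker that maps variable names to intervals (all names are illustrative, not any real checker's API):

```python
# The interval describes what a reader of the variable may currently
# assume; assignment simply replaces it for subsequent reads.
env = {"b": (18, 22)}   # refined interval inside the if-block

def assign(env, name, value):
    """Assignment is legal against the declared type; it just replaces
    the interval inferred for later rvalue uses of the variable."""
    return {**env, name: (value, value)}

def can_sign(env, name, minimum=18):
    lo, hi = env[name]
    return lo >= minimum

assert can_sign(env, "b")       # b reads as int{18..22}: ok
env = assign(env, "b", 16)
assert env["b"] == (16, 16)     # after b := 16, b reads as int{16..16}
assert not can_sign(env, "b")   # signContract(b) is now a type error
```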
In other words, this is the same sort of test that disallows using an
unassigned variable in a value-returning expression.

Yes, it is.
When
{ int a; int b; b := a; }
returns a compile-time error because "a" is uninitialized at the
assignment, that's not the "type" of a, but the typestate. Just FYI.

If you wish to say "typestate" to mean this, be my guest. It is also
correct to say "type".
It actually works quite well if the language takes advantage of it
consistently and allows you to designate your expected typestates and such.

I'm not aware of a widely used language that implements stuff like this.
Are you?
 

Darren New

Chris said:
If you wish to say "typestate" to mean this, be my guest. It is also
correct to say "type".

Sure. I just wasn't sure everyone here was aware of the term, is all. It
makes it easier to google if you have a more specific term.
I'm not aware of a widely used language that implements stuff like this.

Hermes is the only language I've used with extensive support for this
sort of thing.
Hermes is process-oriented, rather than object-oriented, so it's a
little easier to deal with the "encapsulation" part of the equation
there. Sadly, Hermes went the way of the dodo.
 

David Hopwood

I don't think it would be a bad idea. Silently giving incorrect results
on arithmetic overflow, as C-family languages do, is certainly a bad idea.
A type system that supported range type arithmetic as you've described would
have considerable advantages, especially in areas such as safety-critical
software. It would be a possible improvement to Ada, which IIUC currently
has a more restrictive range-typing system that cannot infer different
ranges for a variable at different points in the program.

I find that regardless of programming language, relatively few of my
integer variables are dimensionless -- most are associated with some
specific unit. Currently, I find variable naming conventions helpful in
documenting this, but the result is probably more verbose than it would
be to drop this information from the names, and instead use more
precise types that indicate the unit and the range.

When prototyping, you could alias all of these to bignum types (with
range [0..+infinity) or (-infinity..+infinity)) to avoid needing to deal
with any type errors, and then constrain them where necessary later.
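A runtime approximation of unit-and-range-carrying integers can be sketched in Python (illustrating the idea only; a static system would reject these errors at compile time, and all names here are hypothetical):

```python
class Quantity:
    """An integer tagged with a unit and an allowed range."""
    def __init__(self, value, unit, lo=float("-inf"), hi=float("inf")):
        assert lo <= value <= hi, f"{value} {unit} outside [{lo}, {hi}]"
        self.value, self.unit, self.lo, self.hi = value, unit, lo, hi

    def __add__(self, other):
        # Adding quantities of different units is a (runtime) type error.
        assert self.unit == other.unit, "unit mismatch"
        return Quantity(self.value + other.value, self.unit,
                        self.lo + other.lo, self.hi + other.hi)

# While prototyping, leave the range unconstrained (the "bignum" alias)...
elapsed = Quantity(150, "ms")
# ...and constrain it where it matters later:
timeout = Quantity(500, "ms", lo=0, hi=10_000)
total = elapsed + timeout
assert total.value == 650 and total.unit == "ms"
```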

It would take a little more work to write a program, but it would be no
more difficult to read (easier if you're also trying to verify its correctness).
Ease of reading programs is more important than ease of writing them.
There are other good reasons, too, as it turns out. I don't want to
overstate the "possible" until it starts to sound like "easy, even if
it's a pain". This kind of stuff is rarely done in mainstream
programming languages because it has serious negative consequences.

For example, I wrote that example using variables of type int. If we
were to suppose that we were actually working with variables of type
Person, then things get a little more complicated. We would need a few
(infinite classes of) derived subtypes of Person that further constrain
the possible values for state. For example, we'd need types like:

Person{age:{18..29}}

But this starts to look bad, because we used to have this nice property
called encapsulation.

I think you're assuming that 'age' would have to refer to a concrete field.
If it refers to a type parameter, something like:

class Person{age:Age} is
Age getAge()
end

then I don't see how this breaks encapsulation.
 
