Why do () is () and [] is [] work in different ways?

Adam Skutt

The fact that you think that that's "differing behaviour" is what makes
it a misfeature. The fact that you think that '==' can take objects as
operands confirms that Java *does* confuse programmers.

The equality operator can absolutely be used between two objects. Try
it if you don't believe me. It always does identity comparison when
given two objects. It can also be given two primitives, and in this
case, it does value comparison. Despite performing different
operations with the same symbol, there's little risk of confusion
because I can trivially figure out whether a variable is an object or a
primitive.
They're not "disjoint", in fact one almost always implies the other (*).

"Almost always" isn't a rebuttal. There's no requirement whatsoever
for the results of identity comparison to be related to the results of
value comparison, ergo they are disjoint. Changing one doesn't have
to influence the other. Please note that I never advocated doing what
Java does, I merely noted what it does.
Python's idea is that, by default, any object is equal to itself and
only itself.

Which is just wrong-headed. Many types have no meaningful definition
for value equality, ergo any code that attempts to perform the
operation is incorrect.
(*) nan == nan is false, but, at least conceptually, a 'NotComparable'
exception should be raised instead. That wouldn't be very useful, though.


There shouldn't be, to be fair.


Which is the whole problem. It's nice to keep erroneous conditions
out of your domain, but it's just not always possible. I don't know
how you implement NaN (which you need) without allowing for this. I
don't know how you implement SQL NULL without allowing for this.
While lots of problems can avoid this issue, I'm not sure all problems
can. Moreover, I don't know how to implement a value comparison for
many objects, so the operation should just be undefined.

I should point out that I was a little hasty in painting Python with
the same brush as C# and excluding Java. Python and Java are equally
bad: value equality defaults to identity equality but there are
distinct operations for telling them apart. People who want identity
equality in Python write 'is', not '=='. People who explicitly want
value equality in Java write 'equals()'. I apologize, and blame
skipping breakfast this morning.
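
For reference, the Python side of that split, in a minimal sketch:

a = [1, 2, 3]
b = [1, 2, 3]
c = a
print(a == b)   # True:  value equality -- same contents
print(a is b)   # False: identity -- two distinct list objects
print(a is c)   # True:  both names are bound to the same object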

C# is arguably worse, since '==' on objects is defined as identity
equality unless it has been overridden. This means that the intent of
the operation varies, with no easy way to figure it out in context; you
simply have to know. C# also provides a way to test only
for identity, Object.ReferenceEquals(), but it's underused.
Ultimately this is really a problem of documentation: the language
shouldn't encourage conflation of intent in the manner it does.
I can agree on that, but that's something you can solve with a minor
modification to the language. What I was talking about is the core
design of Java and Python.

The only difference I see is which comparison is performed by the
== symbol. But I don't see how or why Python's decisions are
superior to Java's. Plus, I never suggested that Python should do what
Java does, merely noted what it did since it seemed relevant to the
discussion.

Adam
 
Adam Skutt

And my apologies... I forgot to state my main point:
Programmer accessible object identity is the principal impediment to
referential transparency.
In a functional language one can bind a name to a value -- period.
There is nothing more essence-ial -- its platonic id -- to the name
than that and so the whole can of worms connected with object identity
remains sealed within the language implementation.

Yes, I agree that object identity is a major hold up, but I think side
effects are a bigger problem. It's possible in C++ to create types
that behave like the primitive types without too much difficulty,
hence making object identity unimportant. However, it's considerably
more difficult in C++ to write side-effect free code[1]. This is a
bit of an apples-and-oranges thing, though. ;)

I often wonder what the world would be like if Python, C#, and Java
embraced value types more, and had better support for pure functions.
Unfortunately, building a language where all types behave like that is
rather difficult, as the Haskell guys have shown us ;).

Adam

[1] Or even just code that only uses side-effects the compiler
understands.
 
Adam Skutt

    I would suggest that "is" raise ValueError for the ambiguous cases.
If both operands are immutable, "is" should raise ValueError.

I don't know how you would easily detect user-defined immutable types,
nor do I see why such an operation should be an error. I think it
would end up violating the principle of least surprise in a lot of
cases, especially when talking about things like immutable sets, maps,
or other complicated data structures.

What I think you want is what I said above: ValueError raised when
either operand is a /temporary/ object. Really, it should probably be
a parse-time error, since you could (and should) make the
determination at parse time.
That's the case where the internal representation of immutables
shows through.

You still have this problem with mutable temporary objects, as my
little snippet showed. You're still going to get a result that's
inconsistent and/or "surprising" sooner or later. The problem is the
temporary nature of the object, not mutability.
    If this breaks a program, it was broken anyway.  It will
catch bad comparisons like

     if x is 1000 :
        ...

which is implementation dependent.

Yes, I agree that a correct fix shouldn't break anything except
already broken programs.
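
To make the implementation dependence concrete, a small CPython-flavoured
sketch (exact results for the large ints depend on how and where the
objects get created):

a = 256
b = 256
print(a is b)     # True in CPython, which caches small ints (roughly -5..256)

x = 1000
y = int("1000")   # force a separately created object
print(x is y)     # False: equal values, distinct objects
print(x == y)     # True: almost certainly what 'if x is 1000' was meant to ask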

Adam
 
Nobody

I would suggest that "is" raise ValueError for the ambiguous cases.
If both operands are immutable, "is" should raise ValueError. That's the
case where the internal representation of immutables shows through.

This breaks one of the most common uses of "is", i.e. "x is None".

And it doesn't prevent a programmer from confusing "is" and "==" with
mutable types.
If this breaks a program, it was broken anyway. It will
catch bad comparisons like

if x is 1000 :
...

which is implementation dependent.

The only way to completely eliminate bugs caused by the programmer relying
upon implementation-dependent behaviour is to eliminate implementation-
dependent behaviour altogether, which is throwing the baby out with the
bath water, IMHO.

All practical languages have some implementation-defined behaviour, often
far more problematic than Python's.
 
Paul Rubin

Nobody said:
All practical languages have some implementation-defined behaviour, often
far more problematic than Python's.

The usual reason for accepting implementation-defined behavior is to
enable low-level efficiency hacks written for specific machines. C and
C++ are used for that sort of purpose, so they leave many things
implementation-defined. Python doesn't have the same goals and should
leave less up to the implementation. Java, Ada, Standard ML, etc. all
try to eliminate implementation-defined behavior in the language much
more than Python does. I don't have any idea why you consider that to
be "throwing the baby out with the bath water".
 
Steven D'Aprano

I often wonder what the world would be like if Python, C#, and Java
embraced value types more, and had better support for pure functions.

They would be slower, require more memory, harder to use, and far, far
less popular. Some other languages just like Python, C# and Java would be
invented to fill those niches, and the functional-obsessed crowd would
then complain that they wished those languages would be more like Python,
C# and Java.
 
Adam Skutt

No, it can't be used between objects but only between primitives and
references (which should be regarded as primitives, by the way).

The only way to access an object is through a reference. Once you
understand that your 'a' is not really an object but a reference to it,
everything becomes clear and you see that '==' always does the same thing.

Yes, object identity is implemented (almost?) everywhere by comparing
the value of two pointers (references)[1]. I've already said I'm not
really sure how else one would go about implementing it.
You might tell me that that's just an implementation detail, but when an
implementation detail is easier to understand and makes more sense than
the whole abstraction which is built upon it, something is seriously wrong.

I'm not sure what abstraction is being built here. I think you have
me confused for someone else, possibly Steven.

You're missing the big picture. The two comparisons are asking
different questions:
Value equality asks if the operands 'have the same state'
regardless of how they exist in memory.
Identity equality asks if the two operands are the same block of
memory.

The two are distinct because not all types support both operations.

If I write a function that does a value comparison, then it should do
value comparison on _every type that can be passed to it_, regardless
of whether the type is a primitive or an object, whether it has value
or reference semantics, and regardless of how value comparison is
actually implemented. If I write some function:
f(x : T, y : U) => x == y
where T and U are some unknown types, then I want the function to do a
value comparison for every type pairing that allows the function to
compile. Likewise, if I write a function that does identity
comparison, then it logically wants to do identity comparison on
_every type that can be passed to it_.

To accomplish this, I must have a distinct way of asking each
question. In Python we have '==' and 'is'[2]; in Java we have
'Object.equals()' and '=='; in C and C++ we distinguish by the types
of the variables being compared (T and T*).
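
In Python terms, the two questions stay distinct no matter what gets
passed in; a minimal sketch (function names are mine):

def same_value(x, y):
    # "Do x and y have the same state?" -- works for any types whose
    # == is defined as value equality.
    return x == y

def same_object(x, y):
    # "Are x and y the very same object?"
    return x is y

a = [1, 2, 3]
b = [1, 2, 3]
print(same_value(a, b))    # True
print(same_object(a, b))   # False
print(same_object(a, a))   # True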

Java gives '==' a different meaning for primitive types, but that
turns out to be OK because I can't write a function that takes both a
primitive type and a reference type at the same position. Yes, the
reason it does this is due to what I said above, but that doesn't have
any bearing on why we pick one operation over the other as
programmers.
The distinction between primitives and objects is unfortunate. It is as
if Java tried to get rid of pointers but never completely succeeded in
doing that.
It's the distinction between primitives and objects that should've been
an implementation detail, IMO.

Python's lack of this misfeature is what I'm really fond of.

If anything, you have that backwards. Look at Python: all variables
in Python have pointer semantics, not value semantics. In imperative
languages, pointers have greater utility than value types because not
all types can obey the rules for value types. For example, I don't
know how to give value semantics to something like an I/O object (e.g.,
file, C++ fstream, C FILE), since I don't know how to create
independent copies.
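
A short demonstration of those pointer semantics (assignment binds a
name, it never copies):

a = [1, 2]
b = a              # no copy: b is another name for the same list
b.append(3)
print(a, a is b)   # [1, 2, 3] True

import copy
c = copy.copy(a)        # an explicit, independent copy
print(c == a, c is a)   # True False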

One can obviously create an imperative language without pointers, but
I/O gets rather tricky.

Adam

[1] Though it need not be (and often isn't) as simple as comparing two
integers.

[2] Well, I suspect 'is' gets used mostly for comparisons against
None, True, and False in Python.
 
Adam Skutt

They would be slower, require more memory,

Funny, Haskell frequently beats C in both categories. MATLAB is
faster and more memory efficient than naive C matrix code, since it
has a very efficient copy-on-write implementation. As the various C++
matrix libraries will show you, efficient COW is much harder when you
have to deal with C++ aliasing rules.
harder to use, and far, far less popular.

Alas, these two are probably true.

Adam
 
Paul Rubin

Adam Skutt said:
Alas, these two are probably true.

Haskell is kind of abstruse and has a notoriously steep learning curve,
as it's mostly meant as a research testbed and as a playground for
language geeks. ML/OCaml is by all accounts much easier, and I know of
a couple of former Python projects that successfully migrated to OCaml
once Python's warts and low performance got too annoying. Erlang (which
is functional but untyped) has also been displacing Python in some
settings.
 
Steven D'Aprano

Funny, Haskell frequently beats C in both categories.

We've both been guilty of this, but don't confuse a language
implementation with a language. Haskell and C are languages, which in a
sense are like Platonic ideals: languages specify behaviour and
semantics, but have no costs.

When talking about resource usage, you need to talk about concrete
implementations of concrete tests, not hand-wavy "language X is faster".
And I'm afraid that your claim of Haskell frequently beating C doesn't
stand up to scrutiny.

http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=gcc

I'm seeing code generated by the Haskell GHC compiler being 2-4 times
slower than code from the C gcc compiler, and on average using 2-3 times
as much memory (and as much as 7 times).

Feel free to find your own set of benchmarks that show the opposite. I'd
be interested to see under what conditions Haskell might be faster than C.
 
OKB (not okblacke)

Adam said:
If I write a function that does a value comparison, then it should
do value comparison on _every type that can be passed to it_,
regardless of whether the type is a primitive or an object, whether
it has value or reference semantics, and regardless of how value
comparison is actually implemented. If I write some function:
f(x : T, y : U) => x == y
where T and U are some unknown types, then I want the function to
do a value comparison for every type pairing that allows the
function to compile. Likewise, if I write a function that does
identity comparison, then it logically wants to do identity
comparison on _every type that can be passed to it_.

What you say here makes perfect sense, but also shows that you
really shouldn't be using Python if you want stuff to work this way. In
Python any value of any type can be passed to any function. The claims
you are making about object identity and object equality are reasonable,
but as you show here, to really handle them requires dragging in a huge
amount of type-system baggage. Python's behavior is perfectly well-
defined. You might think it's not the best way to do it based on
abstract conceptual frameworks for how programming languages "should"
work, but it works just fine.


--
--OKB (not okblacke)
Brendan Barnwell
"Do not follow where the path may lead. Go, instead, where there is
no path, and leave a trail."
--author unknown
 
Paul Rubin

Steven D'Aprano said:
I'm seeing code generated by the Haskell GHC compiler being 2-4 times
slower than code from the C gcc compiler, and on average using 2-3 times
as much memory (and as much as 7 times).

Alioth isn't such a great comparison, because on the one hand you get
very carefully tuned, unidiomatic code for each language; but on the
other, you're somewhat constrained by the benchmark specs. Obviously C
is not much above assembler, so you can write almost-optimal programs if
you code close enough to the metal and suffer enough. If you're talking
about coding reasonably straightforwardly, C usually does beat Haskell
(once you've debugged the core dumps...) but there are exceptions to
that.
Feel free to find your own set of benchmarks that show the opposite. I'd
be interested to see under what conditions Haskell might be faster than C.

Haskell wasn't included in this multi-way comparison, but Ocaml beat C
by a significant factor at a straightforward vector arithmetic loop,
because it didn't have to pessimize around possible pointer aliasing:

http://scienceblogs.com/goodmath/2006/11/the_c_is_efficient_language_fa.php

GHC should be able to do similar things.

Also, here's a sort of cheating Haskell example: the straightforward
Haskell Fibonacci code is slower than C, but just sprinkle in a few
parallelism keywords and run it on your quad core cpu:

http://donsbot.wordpress.com/2007/11/29/use-those-extra-cores-and-beat-c-today

Note the Haskell code in that example is using arbitrary-precision
integers while C is using int64's. Yes, you could beat the GHC speed by
writing a lot more C code to manage Posix threads, locks, etc., but in
Haskell, to do two things in parallel you can just say "par".

There is also work going on to support parallel listcomps (just like
regular ones but they run on multiple cores), and vector combinators
that offload the computation to a GPU. Those things are quite hard to
do in plain C, though there are some specialty libraries for it.

Finally, a less-cheating example (this is from 2007 and I think things
are even better now):

http://neilmitchell.blogspot.com/2007/07/making-haskell-faster-than-c.html

Gives a Haskell word count program

main = print . length . words =<< getContents

which could also be written (if the syntax looks better to you):

main = do
  text <- getContents
  print (length (words text))

The comparison C code is:

#include <stdio.h>
#include <ctype.h>

int main() {
    int i = 0;
    int c, last_space = 1, this_space;
    while ((c = getchar()) != EOF) {
        this_space = isspace(c);
        if (last_space && !this_space)
            i++;
        last_space = this_space;
    }
    printf("%i\n", i);
    return 0;
}

and GHC/Supero beats the C code by about 10% even though both use
getchar. The blog post explains, you could speed up the C code by
writing a rather contorted version, unrolling it into two separate
loops, one for sequences of spaces and one for non-spaces, and jumping
back and forth between the loops instead of using the last_space
variable. That is basically the code that Supero figures out how to
generate: two separate loops with transitions in the right places,
starting from very straightforward high-level input.
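
For what it's worth, the straightforward (and completely unbenchmarked)
Python version is about as short as the Haskell one-liner:

import sys

# Reads all of stdin into memory; this is a comparison of clarity, not speed.
print(len(sys.stdin.read().split()))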

I'm not really good at Haskell even after fooling with it on and off for
several years now, and it certainly can't beat Python for ease-of-use
without a lot of experience. But in the hands of experts it is
incredibly powerful. It makes Python seem almost like a toy.
 
Adam Skutt

        What you say here makes perfect sense, but also shows that you
really shouldn't be using Python if you want stuff to work this way.  In
Python any value of any type can be passed to any function.  The claims
you are making about object identity and object equality are reasonable,
but as you show here, to really handle them requires dragging in a huge
amount of type-system baggage.

So the check gets deferred to runtime, and the programmer may need to
explicitly throw 'NotImplemented' or something like that. Which is
what happens in Python. Not type-checking arguments simply moves the
burden from the language to the programmer, which is a standard
consequence of moving from static to dynamic typing.
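
In Python the usual spelling of that deferral is to return NotImplemented
from the comparison hook so the other operand gets a chance; a sketch
(the class is mine):

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __eq__(self, other):
        # Value comparison is only defined against other Celsius objects.
        # For anything else, decline and let Python try the reflected
        # operation, then fall back to the default identity comparison.
        if isinstance(other, Celsius):
            return self.degrees == other.degrees
        return NotImplemented

print(Celsius(20) == Celsius(20))   # True
print(Celsius(20) == 20)            # False: both sides declined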

Adam
 
Steven D'Aprano

They're not "disjoint", in fact one almost always implies the other (*).
Python's idea is that, by default, any object is equal to itself and
only itself. The fact that this is equivalent to "identity comparison"
is just a coincidence, from a conceptual point of view.

Define your terms: what do you mean by "equal"?

The complication is that "equal" has many meanings. Does 1/2 equal 2/4?
Well, yes, numerically, but numerical equality is not the only useful
sense of equality -- not even for mathematicians! Although the convention
to write "1/2 = 2/4" is too strong to discard, there are areas of
mathematics where 1/2 and 2/4 are not treated as equal regardless of
numerical equality.

http://en.wikipedia.org/wiki/Mediant_(mathematics)

In Python, "equal" can have any meaning we like, because we can override
__eq__. For most meaningful equality comparisons, we expect that X should
always equal itself, even if it doesn't equal anything else, and so __eq__
defaulting to an identity comparison if you don't override it makes good
sense.

Some people (e.g. the creator of Eiffel, Bertrand Meyer) argue that
identity should *always* imply equality (reflexivity). I disagree, but
regardless, reflexivity is *almost always* the right thing to do.

When it comes to equality, Python defaults to sensible behaviour. By
default, any object supports equality. By default, "a == a" is true for
any object a. If you want to define a non-reflexive type, you have to do
so yourself. If you want to define a type that doesn't support equality
at all, you have to do so yourself. But both use-cases are vanishingly
rare, and rather troublesome to use. It would be stupid for Python to
make them the default behaviour.
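
The default and the override, side by side (class names are mine):

class Token:                 # no __eq__: equal to itself and only itself
    def __init__(self, value):
        self.value = value

class Symbol:                # __eq__ overridden to mean value equality
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, Symbol) and self.value == other.value
    def __hash__(self):
        return hash(self.value)

t = Token(1)
print(t == t)                  # True:  reflexive by default
print(Token(1) == Token(1))    # False: two distinct objects
print(Symbol(1) == Symbol(1))  # True:  same state, different objects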

After all, Python is a tool, not a philosophy. There's no need to force
the user to start from a blank slate and define everything from first
principles when you can start with the common tools you normally need,
and add or subtract from it as needed.


(*) nan == nan is false, but, at least conceptually, a 'NotComparable'
exception should be raised instead. That wouldn't be very useful,
though.

NANs are comparable. By definition, NAN != x for every x. They're just
not reflexive.

There shouldn't be, to be fair.

I disagree. Violating reflexivity has its uses. NANs are the classic
example.

Another example is if you redefine "==" to mean something other than
"equals". If your class treats == as something other than equality, there
is no need for a==a to necessarily return True.
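
What that looks like in a CPython session, including one place the
non-reflexivity leaks into containers (which short-circuit on identity
before calling ==):

nan = float("nan")
print(nan == nan)             # False: not reflexive
print(nan != nan)             # True:  NaN is unequal to everything, itself included
print(nan < 1.0, nan > 1.0)   # False False: ordered comparisons are all False

print(nan in [nan])           # True: membership checks identity first
print([nan] == [nan])         # True: so does element-wise list comparison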
 
Steven D'Aprano

I would suggest that "is" raise ValueError for the ambiguous cases.
If both operands are immutable, "is" should raise ValueError. That's the
case where the internal representation of immutables shows through.

You've already made this suggestion before. Unfortunately you failed to
think it through: it would break *nearly all Python code*, and not just
"broken" code. It would break code that relies on documented language
features. It would break code that applies a standard Python idiom. I
count at least 638 places where your suggestion would break the standard
library.

[steve@ando ~]$ cd /usr/local/lib/python3.2/
[steve@ando python3.2]$ grep "if .* is None:" *.py | wc -l
638

That's an average of four breakages per module.

If this breaks a program, it was broken anyway.

Incorrect. Your suggestion breaks working code for no good reason.

Astonishingly, your suggestion doesn't break code that actually is broken:

def spam(arg=None):
    if arg == None:
        ...

actually is broken, since it doesn't correctly test for the sentinel. You
can break it by passing an object which compares equal to None but isn't
actually None.
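
A concrete way to break it, with a deliberately pathological toy class:

class AgreesWithEverything:
    def __eq__(self, other):
        return True          # compares equal to anything, including None

def spam(arg=None):
    if arg == None:          # broken: tests equality, not the sentinel
        return "used default"
    return "got a value"

def spam_fixed(arg=None):
    if arg is None:          # correct: tests identity
        return "used default"
    return "got a value"

evil = AgreesWithEverything()
print(spam(evil))        # 'used default' -- wrong
print(spam_fixed(evil))  # 'got a value'  -- right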
 
Steven D'Aprano

Nope. What I meant is that we can talk of equality whenever...

Sorry, that won't do it. You haven't defined equality, or given any way
of deciding whether two entities are equal. What you have listed are
three *properties* of equality, namely:

- reflexivity (a = a)
- symmetry (if a = b then b = a)
- transitivity (if a = b and b = c then a = c)

But those three properties apply to any equivalence relation, not just
equality. Examples:

"both are odd" (of integers)
"have the same birthday" (of people)
"is congruent to" (of triangles)
"is in the same tax bracket" (of tax payers)
"has the same length" (of pieces of string)
"both contain chocolate" (of cakes)

For example, if we define the operator "~" to mean "has the same
genes" (to be precise: the same genotype), then if Fred and Barney are
identical twins we have:

Fred ~ Fred
Fred ~ Barney and Barney ~ Fred

Identical triplets are rare (at least among human beings), but if we
clone Barney to get George, then we also have:

Fred ~ Barney and Barney ~ George => Fred ~ George.

So "have the same genes" meets all your conditions for equality, but
isn't equality: the three brothers are very different. Fred lost his arm
in a car crash, Barney is a hopeless alcoholic, and George is forty years
younger than his two brothers.
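
The same point as throwaway Python (the data is invented to match the story):

def same_genes(a, b):
    return a["genotype"] == b["genotype"]

fred   = {"genotype": "XY-1234", "arms": 1, "age": 70}  # lost an arm
barney = {"genotype": "XY-1234", "arms": 2, "age": 70}
george = {"genotype": "XY-1234", "arms": 2, "age": 30}  # the young clone

# Reflexive, symmetric, transitive...
print(same_genes(fred, fred))                                   # True
print(same_genes(fred, barney) and same_genes(barney, fred))    # True
print(same_genes(barney, george) and same_genes(fred, george))  # True

# ...and still not equality:
print(fred == barney)   # False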
 
Adam Skutt

Useful... maybe, conceptually sound... no.
Conceptually, NaN is the class of all elements which are not numbers,
therefore NaN = NaN.

NaN isn't really the class of all elements which aren't numbers. NaN
is the result of a few specific IEEE 754 operations that cannot be
computed, like 0/0, and for which there's no other reasonable
substitute (e.g., infinity) for practical applications.

In the real world, if we were doing the math with pen and paper, we'd
stop as soon as we hit such an error. Equality is simply not defined
for the operations that can produce NaN, because we don't know how to
perform those computations. So no, it doesn't conceptually follow
that NaN = NaN; what conceptually follows is that the operation is
undefined because NaN causes a halt.

This is what programming languages ought to do if NaN is compared to
anything other than a (floating-point) number: disallow the operation
in the first place or toss an exception. Any code that tries such an
operation has a logic error and must be fixed.

However, when comparing NaN against floating point numbers, I don't
see why NaN == NaN returning false is any less conceptually correct
than any other possible result. NaN's very existence implicitly
declares that we're now making up the rules as we go along, so we
might as well pick the simplest set of functional rules.

Plus, floating point numbers violate our expectations of equality
anyway, frequently in surprising ways. 0.1 + 0.1 + 0.1 == 0.3 is
true with pen and paper, but likely false on your computer. It's even
potentially possible to compare two floating point variables twice and
get different results each time[1]! As such, we'd have this problem
with defining equality even if NaN didn't exist. We must treat
floating-point numbers as a special case in order to write useful
working programs. This includes defining equality in a way that's
different from what works for nearly every other data type.
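
The first of those surprises, and the usual workarounds, in CPython:

print(0.1 + 0.1 + 0.1 == 0.3)   # False with IEEE 754 doubles
print(0.1 + 0.1 + 0.1)          # 0.30000000000000004

import math
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))   # True: compare with a tolerance

from fractions import Fraction
print(Fraction(1, 10) * 3 == Fraction(3, 10))   # True: exact arithmetic instead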

Adam

[1] Due to register spilling causing intermediate rounding. This
could happen with the x87 FPU since the registers were 80-bits wide
but values were stored in RAM as 64-bits. This behavior is less
common now, but hardly impossible.
 
Steven D'Aprano

You're going to have to explain the value of an "ID" that's not 1:1 with
an object's identity, for at least the object's lifecycle, for a
programmer. If you can't come up with a useful case, then you haven't
said anything of merit.

I gave an example earlier, but you seem to have misunderstood it, so I'll
give more detail.


In the Borg design pattern, all Borg instances share state and are
indistinguishable, with only one exception: object identity. We can
distinguish two Borg instances by using "is".
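
For anyone who hasn't met it, the usual sketch of Alex Martelli's Borg,
and the leak in question:

class Borg:
    _shared_state = {}

    def __init__(self):
        # every instance uses the one shared attribute dictionary
        self.__dict__ = self._shared_state

a = Borg()
b = Borg()
a.name = "hive"
print(b.name)    # 'hive': state is fully shared
print(a is b)    # False: identity still tells the instances apart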

Since the whole point of the pattern is for Borg instances to be
indistinguishable, the existence of a way to distinguish Borg instances
is a flaw and may be undesirable. At least, it's exposing an
implementation detail which some people argue should not be exposed.

Why should the caller care whether they are dealing with a singleton
object or an unspecified number of Borg objects all sharing state? A
clever interpreter could make many Borg instances appear to be a
singleton. A really clever one could also make a singleton appear to be
many Borg instances.

Note that this is virtually the same situation as that which John Nagle
objects to, namely that the implementation detail of small ints being
singletons is exposed. There is only ever one 0 instance, but potentially
many 3579 instances.

John's argument is that Python should raise an exception if you compare
"2 is 2", or for that matter "3579 is 3579", which is foolish. If you're
going to change the semantics of "is", why not do something useful and
ensure that "3579 is 3579" returns True regardless of whether they
actually are the same instance or not?

That would be far more useful than raising an exception. It would
complicate the definition of "is", but perhaps that's a price people are
willing to pay for avoiding the (trivial) confusion about object identity.

[...]
How would inheritance work if I did that?

You don't inherit from Borg instances, and instances inherit from their
class the same as any other instance.
 
mwilson

Adam Skutt wrote:
[ ... ]
In the real world, if we were doing the math with pen and paper, we'd
stop as soon as we hit such an error. Equality is simply not defined
for the operations that can produce NaN, because we don't know how to
perform those computations. So no, it doesn't conceptually follow
that NaN = NaN; what conceptually follows is that the operation is
undefined because NaN causes a halt.

This is what programming languages ought to do if NaN is compared to
anything other than a (floating-point) number: disallow the operation
in the first place or toss an exception. Any code that tries such an
operation has a logic error and must be fixed.

There was a time when subtracting 5 from 3 would have been a logic error.
Your phrase "if we were doing the math ..." lies behind most of the history
of math, esp. as it concerns arithmetic. Mathematicians kept extending the
definitions so that they wouldn't have to stop. Feynman's _Lectures on
Physics_, chapter 22, "Algebra" gives a stellar account of the whole
process.

Mel.
 
Steven D'Aprano

On Apr 26, 5:10 am, Steven D'Aprano <steve (e-mail address removed)> wrote:

Again, the fact that you somehow think this absurd family tree is
relevant only shows you're fundamentally confused about what object
oriented identity means. That's rather depressing, seeing as I've given
you a link to the definition.

Perhaps you failed to notice that this "absurd" family tree, as you put
it, consists of grandparent+parent+sibling+in-law. What sort of families
are you familiar with that this seems absurd to you?

I think you have inadvertently demonstrated the point I am clumsily
trying to make. Even when two expressions are logically equivalent, the
form of the expressions make a big difference to the comprehensibility of
the text. Which would you rather read?

for item in sequence[1:]: ...

for item in sequence[sum(ord(c) for c in 'avocado') % 183:]: ...

The two are logically equivalent, so logically you should have no
preference between the two, yes?

In a mathematical sense, you're saying that given f(x) = x+2, using f(x)
is somehow more "direct" (whatever the hell that even means)

I thought that the concept of direct and indirect statements would be
self-evident. Let me try again.

A statement is "direct" in the sense I mean if it explicitly states the
thing you intend it to state. A statement is "indirect" if it requires
one or more logical steps to go from the statement, as given, to the
conclusion intended.

"Queen Elizabeth II is the ruling monarch of the United Kingdom" is a
direct statement of the fact that Queen Elizabeth II is the ruling
monarch of the UK. (Do I really need to explain this?)

"Queen Elizabeth II is the Commander-in-chief of the Canadian Armed
Forces" is an *indirect* statement of the fact that Elizabeth is the
ruling monarch of the UK. It is indirect because it doesn't explicitly
say that she is monarch, but the Commander-in-Chief of the Canadian Armed
Forces is always the ruling monarch of Canada, and the ruling monarch of
Canada is always the ruling monarch of the UK. Hence, Elizabeth being
Commander-in-Chief necessarily implies that she is ruling monarch of the
United Kingdom (at least until there is change to Canadian law).

"a is b" is a direct test of whether a is b. (Duh.)

"id(a) == id(b)" is an indirect test of whether a is b, since it requires
at least three indirect steps: the knowledge of what the id() function
does, the knowledge of what the == operator does, and the knowledge that
equal IDs imply identity.
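
The difference in a couple of lines, plus the extra trap the indirect
form carries (ids are only unique among objects alive at the same time,
and CPython reuses addresses):

a = object()
b = a
print(a is b)            # direct: the same object
print(id(a) == id(b))    # indirect: same answer, three inferences later

# With temporaries the indirect test can lie: each object() dies as soon
# as id() returns, and the next allocation often lands at the same address.
print(id(object()) == id(object()))   # frequently True, yet they were distinct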
 
