OO in Python? ^^


Bruno Desthuilliers

Matthias Kaeppler wrote:
Hi,

sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well surprised about the lack of OOP
capabilities in python.

I beg your pardon ???
Honestly, I don't even see the point at all of
how OO actually works in Python.
For one, is there any good reason why I should ever inherit from a
class?

To specialize it (subtyping), or to add functionality (code reuse,
factoring).
^^ There is no functionality to check if a subclass correctly
implements an inherited interface

I don't know of any language that provides such a thing. At least for my
definition of "correctly".
and polymorphism seems to be missing
in Python as well.

Could you share your definition of polymorphism ?
I kind of can't imagine in which circumstances
inheritance in Python helps. For example:

class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

class Base(object):
    def foo(self):
        raise NotImplementedError("please implement foo()")

class Child(Base):
    pass # works
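
Calling the unimplemented method then fails loudly at the point of use - a
quick interactive sketch:

>>> Child().foo()
Traceback (most recent call last):
  ...
NotImplementedError: please implement foo()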
Does inheritance in Python boil down to mere code sharing?

Yes. Inheritance was originally meant for code sharing (cf Smalltalk). The
use of inheritance for subtyping comes from restrictions of statically
typed languages [1]. BTW, you'll notice that the GoF book (which is still one
of the best references about OO) strongly advises programming to
interfaces, not to implementations. And you'll notice that some patterns
only exist as workarounds for the restrictions enforced by statically
typed languages.


[1] should say : for *a certain class of* statically typed languages.
There are also languages like OCaml that rely on type inference.
And how do I formulate polymorphism in Python?

In OO, polymorphism is the ability for objects of different classes to
answer the same message. It doesn't imply that these objects should
inherit from a common base class. Statically typed languages like C++ or
Java *restrict* polymorphism.


(snip)

You don't need any of this.

class Foo:
    def walk(self):
        print "%s walk" % self.__class__.__name__

class Bar:
    def walk(self):
        print "I'm singing in the rain"

def letsgoforawalk(walker):
    walker.walk()

f = Foo()
b = Bar()

letsgoforawalk(f)
letsgoforawalk(b)
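
Running this prints:

Foo walk
I'm singing in the rain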

Here, the function (BTW, did you know that in Python, functions are
objects too ?) letsgoforawalk expects an object that has the type 'object
that understands the message walk()'. Any object of this type will do -
no need to have a common base class.
I could as well leave the whole inheritance stuff out and the program
would still work (?).

Of course. Why should polymorphism need anything more ?
Please give me hope that Python is still worth learning :-/

It is, once you've unlearned C++/Java/ADA/whatever.
 

Bruno Desthuilliers

Matthias Kaeppler wrote:
(snip)
I stumbled over this paragraph in "Python is not Java", can anyone
elaborate on it:

"In Java, you have to use getters and setters because using public
fields gives you no opportunity to go back and change your mind later to
using getters and setters. So in Java, you might as well get the chore
out of the way up front. In Python, this is silly, because you can start
with a normal attribute and change your mind at any time, without
affecting any clients of the class. So, don't write getters and setters."

Why would I want to use an attribute in Python, where I would use
getters and setters in Java?

Because you don't *need* getters/setters - you already got 'em for free
(more on this later).
I know that encapsulation is actually just
a hack in Python (c'mon, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...),

You're confusing encapsulation with information hiding. The mechanism
you're referring to is not meant to 'hide' anything, only to prevent
accidental shadowing of some attributes. The common idiom for
"information hiding" in Python is to prefix 'protected' attributes with
a single underscore. This warns developers using your code that this is
an implementation detail, and that they're on their own if they start
messing with it. And - as incredible as this can be - that's enough.

True, this won't stop stupid programmers from doing stupid things - but
no language is 'idiot-proof' anyway (know the old '#define private
public' C++ trick ?), so why worry ?
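
To make the two idioms concrete, here's a small sketch: the single
underscore is pure convention, while the double underscore merely mangles
the name to prevent accidental clashes in subclasses:

class Widget(object):
    def __init__(self):
        self._cache = {}      # 'protected' by convention only
        self.__state = "new"  # stored as _Widget__state

class FancyWidget(Widget):
    def __init__(self):
        Widget.__init__(self)
        self.__state = "fancy"  # stored as _FancyWidget__state: no clash

w = FancyWidget()
print w._Widget__state, w._FancyWidget__state  # new fancy - nothing is hidden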
but is that a reason to only write white box classes? ^^

First, what is a 'white box' class ?

public class WhiteBox {
    protected int foo;
    protected int bar;

    public int getFoo() {
        return foo;
    }
    public void setFoo(int newfoo) {
        foo = newfoo;
    }
    public int getBar() {
        return bar;
    }
    public void setBar(int newbar) {
        bar = newbar;
    }
}

Does this really qualify as a 'blackbox' ? Of course not, everything is
publicly exposed. You could have the exact same result (and much less
code) with public attributes.

Now what is the reason to write getters and setters ? Answer : so you
can change the implementation without breaking the API, right ?

Python has something named 'descriptors'. This is a mechanism that is
used for attribute lookup (notice that everything being an object,
methods are attributes too). You don't usually need to worry about it
(you should read about it if you really want to understand Python's object
model), but you can still use it when you need to take control over
attribute access. One of the simplest applications is 'properties' (aka
computed attributes), and here's an example:

class Foo(object):
    def __init__(self, bar, baaz):
        self.bar = bar
        self._baaz = baaz

    def _getbaaz(self):
        return self._baaz

    baaz = property(fget=_getbaaz)

This is why you don't need explicit getters/setters in Python : they're
already there ! And you can of course change how they are implemented
without breaking the interface. Now *this* is encapsulation - and it
doesn't need much information hiding...
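
For instance, a sketch building on the class above: you can later swap in
a validating implementation without touching client code - clients keep
writing foo.baaz either way:

class Foo(object):
    def __init__(self, bar, baaz):
        self.bar = bar
        self._baaz = baaz

    def _getbaaz(self):
        return self._baaz

    def _setbaaz(self, value):
        # new implementation detail, added after the fact
        if value < 0:
            raise ValueError("baaz must be >= 0")
        self._baaz = value

    baaz = property(fget=_getbaaz, fset=_setbaaz)

f = Foo(1, 2)
f.baaz = 3    # transparently goes through _setbaaz
print f.baaz  # 3, transparently through _getbaaz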
 

Mike Meyer

Paul Boddie said:
One classic example of a
weakly-typed language is BCPL, apparently, but hardly anyone has any
familiarity with it any more.

Actually, BCPL is what Steven D'Aprano called "untyped". Except his
definition is only suitable for the era after everyone followed IBM's
footsteps in building general-purpose byte-addressable machines.

In BCPL, everything is a word. Given a word, you can dereference it,
add it to another word (as either a floating point value or an integer
value), or call it as a function.

A classic example of a weakly-typed language would be a grandchild of
BCPL, v6 C. Since then, C has gotten steadily more strongly typed. A
standard complaint as people tried to move code from a v6 C compiler
(even the photo7 compiler) to the v7 compiler was "What do you mean I
can't ....". Of course, hardly anyone has familiarity with that any
more, either.

<mike
 

Mike Meyer

Bruno Desthuilliers said:
I don't know of any language that provide such a thing. At least for
my definition of "correctly".

Well, since your definition of "correctly" is unknown, I won't use
it. I will point out that the stated goal is impossible for some
reasonable definitions of "correctly".

My definition of "correctly" is "meets the published contracts for the
methods." Languages with good support for design by contract will
ensure that subclasses either correctly implement the published
contracts, or raise an exception when they fail to do so. They do that
by checking the contracts for the superclasses in an appropriate
logical relationship, and raising an exception if the contract isn't
met.
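
Python has no built-in DbC support, but the idea can be sketched with a
decorator (check_contract below is hypothetical, not a library facility;
languages like Eiffel also apply a superclass's contracts to every
override automatically):

def check_contract(pre, post):
    # Hypothetical helper: wrap a method so its pre- and
    # postconditions are asserted on every call.
    def decorator(method):
        def wrapper(self, *args):
            assert pre(self, *args), "precondition violated"
            result = method(self, *args)
            assert post(self, result), "postcondition violated"
            return result
        return wrapper
    return decorator

class Account(object):
    def __init__(self, balance=0):
        self.balance = balance

    @check_contract(pre=lambda self, amount: amount > 0,
                    post=lambda self, result: result >= 0)
    def withdraw(self, amount):
        self.balance -= amount
        return self.balance   # overdrawing trips the postcondition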

<mike
 

Mike Meyer

Steven D'Aprano said:
Of course, the IT world is full of people writing code and not testing
it, or at least not testing it correctly. That's why there are frequent
updates or upgrades to software that break features that worked in the
older version. That would be impossible in a test-driven methodology, at
least impossible to do by accident.

That sentence is only true if your tests are bug-free. If not, it's
possible to make a change that introduces a bug that passes testing
because of a bug in the tests. Since tests are code, they're never
bug-free. I will agree that the frequency of upgrades/updates breaking
things means testing isn't being done properly.

<mike
 

Tom Anderson

I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed.

Haskell is strongly and statically typed - very strongly and very
statically!

However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):

def f(x):
    return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant
(using Guido's notation):

def f(x: int) -> int:
    return 1 + x

Note that this still buys you type safety:

def g(a, b):
    c = "{" + a + "}"
    d = 1 + b
    return c + d

The compiler works out that c must be a string and d must be an int, then,
when it gets to the last line, finds an expression that must be wrong, and
refuses to accept the code.
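
(For contrast, the same mistake in Python only surfaces at run time, and
only if that line actually executes - a Python 2 session sketch:

>>> g("a", 1)
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'int' objects
)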

This sounds like it wouldn't work for complex code, but somehow, it does.
And somehow, it works for:

def f(x):
    return x + 1

Too. I think this is due to the lack of polymorphic operator overloading.

A key thing is that Haskell supports, and makes enormous use of, a
powerful system of generic types; with:

def h(a):
    return a + a

There's no way to infer concrete types for h or a, so Haskell gets
generic; it says "okay, so i don't know what type a is, but it's got to be
something, so let's call it alpha; we're adding two alphas, and one thing
i know about adding is that adding two things of some type makes a new
thing of that type, so the type of some-alpha + some-alpha is alpha, so
this function returns an alpha". ISTR that alpha gets written 'a, so this
function is:

def h(a: 'a) -> 'a:
    return a + a

Although that syntax might be from ML. This extends to more complex
cases, like:

def i(a, b):
    return [a, b]

In Haskell, you can only make lists of a homogeneous type, so the compiler
deduces that, although it doesn't know what type a and b are, they must be
the same type, and the return value is a list of that type:

def i(a: 'a, b: 'a) -> ['a]:
    return [a, b]

And so on. I don't know Haskell, but i've had long conversations with a
friend who does, which is where i've got this from. IANACS, and this could
all be entirely wrong!
At least the "What Is Haskell?" page at haskell.org describes the
language as strongly typed, non-strict, and allowing polymorphic typing.

When applied to functional languages, 'strict' (or 'eager') means that
expressions are evaluated as soon as they are formed; 'non-strict' (or
'lazy') means that expressions can hang around as expressions for a while,
or even not be evaluated all in one go. Laziness is really a property of
the implementation, not the language - in an idealised pure functional
language, i believe that a program can't actually tell whether the
implementation is eager or lazy. However, it matters in practice, since a
lazy language can do things like manipulate infinite lists.
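
Python's generators give a rough feel for this kind of laziness - a
sketch, not real Haskell semantics:

import itertools

def naturals():
    # an 'infinite list', produced one element at a time on demand
    n = 0
    while True:
        yield n
        n += 1

squares = (n * n for n in naturals())
print list(itertools.islice(squares, 5))  # [0, 1, 4, 9, 16]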

tom
 

Alex Martelli

Mike Meyer said:
That sentence is only true if your tests are bug-free. If not, it's
possible to make a change that introduces a bug that passes testing
because of a bug in the tests. Since tests are code, they're never
bug-free. I will agree that the frequency of upgrades/updates breaking
things means testing isn't being done properly.

Yours is a good point: let's be careful not to oversell or overhype TDD,
which (while great) is not a silver bullet. Specifically, TDD is prone
to a "common-mode failure" between tests and code: misunderstanding of
the specs (generally underspecified specs); since the writer of the test
and of the code is the same person, if that person has such a
misunderstanding it will be reflected equally in both code and test.

Which is (part of) why code developed by TDD, while more robust against
many failure modes than code developed more traditionally, STILL needs
code inspection (or pair programming), integration tests, system tests,
and customer acceptance tests (not to mention regression tests, once
bugs are caught and fixed;-), just as much as code developed otherwise.


Alex
 

Alex Martelli

Tom Anderson said:
Haskell is strongly and statically typed - very strongly and very
statically!
Sure.


However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):

def f(x):
    return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant

hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a


A very minor point, but since the need to use +. and the resulting lack
of polymorphism are part of what keeps me away from O'CAML and makes me
stick to Haskell, I still wanted to make it;-).


Alex
 

Bengt Richter


[OT] (just taking liberties with your sig ;-)
,<@><
°º¤ø,,,,ø¤º°`°º¤ø,,,,ø¤º°P`°º¤ø,,y,,ø¤º°t`°º¤ø,,h,,ø¤º°o`°º¤ø,,n,,ø¤º°


Regards,
Bengt Richter
 

bruno at modulix

Mike said:
Well, since your definition of "correctly" is unknown, I won't use
it.

!-)

My own definition of 'correctly' in this context would be about ensuring
that the implementation respects a given semantic.

But honestly, this was a somewhat trollish assertion, and I'm afraid I
forgot to add a smiley here.
 

Donn Cave

hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a


A very minor point, but since the need to use +. and the resulting lack
of polymorphism are part of what keeps me away from O'CAML and makes me
stick to Haskell, I still wanted to make it;-).

But if you try
f x = x + 1.0

it's
f :: (Fractional a) => a -> a

I asserted something like this some time ago here, and was
set straight, I believe by a gentleman from Chalmers. You're
right that addition is polymorphic, but that doesn't mean
that it can be performed on any two instances of Num. I had
constructed a test something like that to check my thinking,
but it turns out that Haskell was able to interpret "1" as
Double, for example -- basically, 1's type is Num too.
If you type the constant (f x = x + (1 :: Int)), the function
type would be (f :: Int -> Int). Basically, it seems (+) has
to resolve to a (single) instance of Num.

Donn Cave, (e-mail address removed)
 

Tom Anderson

[OT] (just taking liberties with your sig ;-)
,<@><
°º¤ø,,,,ø¤º°`°º¤ø,,,,ø¤º°P`°º¤ø,,y,,ø¤º°t`°º¤ø,,h,,ø¤º°o`°º¤ø,,n,,ø¤º°

The irony is that with my current news-reading setup, i see my own sig as
a row of question marks, seasoned with backticks and commas. Your
modification looks like it's adding a fish; maybe the question marks are a
kelp bed, which the fish is exploring for food.

Hmm. Maybe if i look at it through Google Groups ...

Aaah! Very good!

However, given the context, i think it should be:

,<OO><
°º¤ø,,,,ø¤º°`°º¤ø,,,,ø¤º°P`°º¤ø,,y,,ø¤º°t`°º¤ø,,h,,ø¤º°o`°º¤ø,,n,,ø¤º°

!

tom
 

Tom Anderson

hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a

But if you try
f x = x + 1.0

it's
f :: (Fractional a) => a -> a

I asserted something like this some time ago here, and was set straight,
I believe by a gentleman from Chalmers. You're right that addition is
polymorphic, but that doesn't mean that it can be performed on any two
instances of Num.

That's what i understand. What it comes down to, i think, is that the
Standard Prelude defines an overloaded + operator:

def __add__(x: int, y: int) -> int:
    <primitive operation to add two ints>

def __add__(x: float, y: float) -> float:
    <primitive operation to add two floats>

def __add__(x: str, y: str) -> str:
    <primitive operation to add two strings>

# etc

So that when the compiler hits the expression "x + 1", it has a finite set
of possible interpretations for '+', of which only one is legal - addition
of two integers to yield an integer. Or rather, given that "1" can be an
int or a float, it decides that x could be either, and so calls it "alpha,
where alpha is a number". Or something.

While we're on the subject of Haskell - if you think python's
syntactically significant whitespace is icky, have a look at Haskell's
'layout' - i almost wet myself in terror when i saw that!

tom
 

bonono

Tom said:
While we're on the subject of Haskell - if you think python's
syntactically significant whitespace is icky, have a look at Haskell's
'layout' - i almost wet myself in terror when i saw that!
Though one doesn't need to use indentation in Haskell - you can write
everything explicitly with {} and ; instead.
 

Magnus Lycka

Welcome to Python, Matthias. I hope you will enjoy it!

Matthias said:
Another thing which is really bugging me about this whole dynamic
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

As an old hardware designer from the space industry, I'm well
acquainted with the idea of adding redundancy to make things
more reliable. I also know that this doesn't come without a
price. All this stuff you add to detect possible errors might
also introduce new errors, and it takes a lot of time and
effort to implement--time that could be spent on better things.

In fact, the typical solutions that are used to increase the
odds that hardware doesn't fail before its Mean Time To
Failure (MTTF), will significantly lower the chance that it
works much longer than its MTTF! More isn't always better.

While not the same thing, software development is similar.
All that redundancy in typical C++ programs has a high
development cost. Most of the stuff in C++ include files
is repeated in the source code, and the splitting of
code between include and source files means that a lot of
declarations are far from the definitions. We know that
this is a problem: That's why C++ departed from C's concept
of putting all local declarations in the beginning of
functions. Things that are closely related should be as
close as possible in the code!

The static typing means that you either have to make several
implementations of many algorithms, or you need to work with
those convoluted templates that were added to the language as
an afterthought.

Generally, the more you type, the more you will mistype.
I even suspect that the bug rate grows faster than the
size of the code. If you have to type five times as much,
you will probably make five times as many typos,
but you also have the problem that the larger amount of
code is more difficult to grasp. It's less likely that
all relevant things are visible on the screen at the same
time etc. You'll make more errors that aren't typos.

Python is designed to allow you to easily write short and
clear programs. Its dynamic typing is a very important
part of that. The important thing isn't that we are
relieved from the boring task of typing type declarations,
but rather that the code we write can be much more generic,
and the coupling between function definitions and function
callers can be looser. This means that we get faster
development and easier maintenance if we learn to use this
right.

Sure, a C++ or Java compiler will discover some mistakes
that would pass through the Python compiler. This is not
a design flaw in Python, it's a direct consequence of its
dynamic nature. Compile-time type limitations go against
the very nature of Python. It's not the checks we try to
avoid--it's the premature restrictions in functionality.

Anyway, I'm sure you know that a successful build with
C++ or Java doesn't imply correct behaviour of your
program.

All software needs to be tested, and if we want to work
effectively and be confident that we don't break things
as we add features or tidy up our code, we need to make
automated tests. There are good tools, such as unittest,
doctest, py.test and TextTest that can help us with that.

If you have proper automated tests, those tests will
capture your mistypings, whether they would have been
caught by a C++ or Java compiler or not. (Well, not if
they are in dead code, but C++/Java won't give you any
intelligent help with that either...)
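
For instance, a minimal doctest - just a sketch, any of the tools above
would do - catches a mistyped name the first time the test runs:

import doctest

def format_price(amount):
    """Return amount formatted in dollars.

    >>> format_price(42)
    '$42.00'
    """
    result = "$%.2f" % amount
    return reslut  # mistyped 'result': the doctest fails with a NameError

if __name__ == "__main__":
    doctest.testmod()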

I've certainly lost time due to mistyped variables now
and then. It's not uncommon that I've actually mistyped
in a way that Java/C++ would never notice (e.g. typed i
instead of j in some nested for loop etc) but sometimes
compile time type checking would have saved time for me.

On the other hand, I'm sure that type declarations on
variables would bring a rigidity to Python that would
cost me much more than I would gain, and with typeless
declarations as in Perl (local x) I would probably
waste more time on adding forgotten declarations (or
removing redundant ones) than I would save time on
noticing the variable mistypings a few seconds before
my unittests catch them. Besides, there are a number
of lint-like tools for Python if you want static code
checks.

As I wrote, the lack of type checking in Python is a
consequence of the very dynamic nature of the language.
A function should assume as little as possible about
its parameters, to be able to function in the broadest
possible scope. Don't add complexity to make your code
support things you don't know of a need for, but take
the chance Python gives you of assuming as little as
possible about your callers and the code you call.

This leads to more flexible and maintainable software.
A design change in your software will probably lead to
much more code changes if you write in C++ than if you
write in Python.

While feature-by-feature comparisions of different
programming languages might have some merit, the only
thing that counts in the end is how the total package
works... I think you'll find that Python is a useful
package, and a good tool in a bigger tool chest.
 

bonono

Magnus said:
The static typing means that you either have to make several
implementations of many algorithms, or you need to work with
those convoluted templates that were added to the language as
an afterthought.
I don't see this in Haskell.
While feature-by-feature comparisions of different
programming languages might have some merit, the only
thing that counts in the end is how the total package
works... I think you'll find that Python is a useful
package, and a good tool in a bigger tool chest.
That is very true.
 

Magnus Lycka

I don't see this in Haskell.

No, I was referring to C++ when I talked about templates.

I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?

I don't doubt that it's possible to make a statically typed
language much less assembly like than C++...
 

bonono

Magnus said:
I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?
I don't know. I am learning Haskell (and Python too), long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.
 

Magnus Lycka

I don't know. I am learning Haskell (and Python too), long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.

Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
    result = copy.copy(args[0])
    for arg in args[1:]:
        result += arg
    return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1, 2, 3)
6
>>> sum('a', 'b', 'c')
'abc'
>>> sum([1, 2, 3], [4, 4, 4])
[1, 2, 3, 4, 4, 4]
>>>

Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?
 

bonono

Magnus said:
I don't know. I am learning Haskell (and Python too), long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.

Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
    result = copy.copy(args[0])
    for arg in args[1:]:
        result += arg
    return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1, 2, 3)
6
>>> sum('a', 'b', 'c')
'abc'
>>> sum([1, 2, 3], [4, 4, 4])
[1, 2, 3, 4, 4, 4]

Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?
Ah, I thought you were talking about DLLs or some external library
stuff. In Haskell, it uses the concept of type classes, conceptually
similar to the "duck typing" thing in Python/Ruby. You just declare the
data type, then add an implementation as an instance of a type class that
knows about +/- or copy. The inference engine then does its work.

I would assume that even in Python, there are different implementations
of +/- and copy for different object types.
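
In Python the equivalent dispatch indeed happens through methods on the
objects themselves. A sketch of a user-defined type that works with the
my_module.sum above, purely because it supports + (and shallow copying):

class Vector(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return "Vector(%r, %r)" % (self.x, self.y)

from my_module import sum
print sum(Vector(1, 2), Vector(3, 4))  # Vector(4, 6)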
 
