overriding __getitem__ for a subclass of dict


Steve Howell

I ran the following program, and found its output surprising in one
place:

    class OnlyAl:
        def __getitem__(self, key): return 'al'

    class OnlyBob(dict):
        def __getitem__(self, key): return 'bob'

    import sys; print sys.version

    al = OnlyAl()
    bob = OnlyBob()

    print al['whatever']
    al.__getitem__ = lambda key: 'NEW AND IMPROVED AL!'
    print al['whatever']

    print bob['whatever']
    bob.__getitem__ = lambda key: 'a NEW AND IMPROVED BOB seems impossible'
    print bob['whatever']

    2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
    [GCC 4.3.3]
    al
    NEW AND IMPROVED AL!
    bob
    bob

In attempting to change the behavior for bob's dictionary lookup, I am
clearly doing something wrong, or maybe even impossible.

Obviously the examples are contrived, but I am interested, on a purely academic level, in why setting __getitem__ on bob does not seem to change the behavior of bob['foo']. Note that OnlyBob subclasses dict; OnlyAl does not.

On a more practical level, I will explain what I am trying to do.
Basically, I am trying to create some code that allows me to spy on
arbitrary objects in a test environment. I want to write a spy()
method that takes an arbitrary object and overrides its implementation
of __getitem__ and friends so that I can see how library code is
invoking the object (with print statements or whatever). Furthermore,
I want spy() to recursively spy on objects that get produced from my
original object. The particular use case is that I am creating a
context for Django templates, and I want to see which objects are
getting rendered, all the way down the tree. It would be pretty easy to just create a subclass of the context class to spy at the top level, but I want to recursively spy on all its children, and that is why I need a monkeypatching approach. The original version had spy
recursively returning proxy/masquerade objects that intercepted
__getitem__ calls, but it becomes brittle when the proxy objects go
off into places like template filters, where I am not prepared to
intercept all calls to the object, and where in some cases it is
impossible to gain control.

Although I am interested in comments on the general problems (spying
on objects, or spying on Django template rendering), I am most
interested in the specific mechanism for changing the __getitem__
method for a subclass of a dictionary. Thanks in advance!
 

Gary Herron

Steve said:
[see original post...]

It's the difference between old-style and new-style classes. dict, and therefore OnlyBob, is new-style; OnlyAl defaults to old-style. If you derive OnlyAl from object, you'll get consistent results.
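
A minimal sketch of Gary's point (Python 2, illustrative): once OnlyAl derives from object, the instance attribute is ignored for it too.

    class OnlyAl(object):                            # new-style now
        def __getitem__(self, key): return 'al'

    al = OnlyAl()
    al.__getitem__ = lambda key: 'NEW AND IMPROVED AL!'
    print al['whatever']                             # 'al': lookup starts at type(al), not al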

Gary Herron


 

Steve Howell

[see original post...]
I am most
interested in the specific mechanism for changing the __getitem__
method for a subclass of a dictionary. Thanks in advance!

Sorry for replying to myself, but I just realized that the last
statement in my original post was a little imprecise.

I am more precisely looking for a way to change the behavior of foo['bar'] (side effects and possibly return value) where "foo" is an instance of a class that subclasses "dict," and where "foo" is not created by me. The original post gives more context and example code that does not work as I expect/desire.
 

Steve Howell

Steve said:
[see original post...]

Gary said:
It's the difference between old-style and new-style classes. dict, and therefore OnlyBob, is new-style; OnlyAl defaults to old-style. If you derive OnlyAl from object, you'll get consistent results.

Thanks, Gary. My problem is that I am actually looking for the
behavior that the old-style OnlyAl provides, not OnlyBob--allowing me
to override the behavior of al['foo'] and bob['foo']. I (hopefully)
clarified my intent in a follow-up post that was sent before I saw
your reply. Here it is re-posted for convenience of discussion:

"I am more precisely looking for a way to change the behavior of foo
['bar'] (side effects and possibly return value) where "foo" is an
instance of a class that subclasses "dict," and where "foo" is not
created by me."
 

Jon Clements

[see original post...]
I am more precisely looking for a way to change the behavior of foo['bar'] (side effects and possibly return value) where "foo" is an instance of a class that subclasses "dict," and where "foo" is not created by me.

[quote from http://docs.python.org/reference/datamodel.html]
For instance, if a class defines a method named __getitem__(), and x
is an instance of this class, then x[i] is roughly equivalent to
x.__getitem__(i) for old-style classes and type(x).__getitem__(x, i)
for new-style classes.
[/quote]
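
That equivalence is easy to check directly for a new-style instance (a quick illustrative sketch):

    d = {'k': 'v'}
    print d['k']                        # 'v'
    print type(d).__getitem__(d, 'k')   # 'v' -- the same lookup the interpreter performs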

A quick hack could be:

    class Al(dict):
        def __getitem__(self, key):
            return self.spy(key)
        def spy(self, key):
            return 'Al'

    >>> a = Al()
    >>> a[3]
    'Al'
    >>> a.spy = lambda key: 'test'
    >>> a[3]
    'test'
    >>> b = Al()
    >>> b[3]
    'Al'

Seems to be what you're after anyway...

hth,
Jon.
 

Steve Howell

[quoting Jon's reply above...]

This is very close to what I want, but the problem is that external
code is defining Al, and I do not seem to be able to get this
statement to have any effect:

a.__getitem__ = lambda key: 'test'

How can I change the behavior of a['foo'] without redefining Al?
 

Steve Howell

[see original post and follow-ups above...]

Ok, thanks to Jon and Gary pointing me in the right direction, I think I can provide an elaborate answer to my own question now.

Given an already instantiated instance foo of Foo, where Foo subclasses dict, you cannot change the general behavior of calls of the form foo[bar]. (Obviously you can change the behavior for specific examples of bar after instantiation by setting foo['apple'] and foo['banana'] as needed, but that's not what I mean.)

This may be surprising to naive programmers like myself, given that it is possible to change the behavior of foo.bar() after instantiation by simply saying "foo.bar = some_method". Also, with old-style classes, you can change the behavior of foo[bar] by setting foo.__getitem__. Even in new-style classes, you can change the behavior of foo.__getitem__(bar) by saying foo.__getitem__ = some_method, but it is a pointless exercise, since foo.__getitem__ will have no bearing on the processing of "foo[bar]". Finally, you can define __getitem__ on the Foo class itself to change how foo[bar] gets resolved, presumably even after instantiation of foo itself (but this does not allow for instance-specific behavior).

Here is the difference:

  - foo.value looks for a definition of value on the instance before looking in the class hierarchy.
  - foo[bar] can find __getitem__ on foo before looking at Foo and its superclasses, if Foo is old-style.
  - foo[bar] will only look for __getitem__ in the class hierarchy if Foo derives from a new-style class.
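
A short sketch making the contrast concrete (Python 2, illustrative names):

    class Foo(dict):
        def bar(self):
            return 'class bar'

    foo = Foo()
    foo['k'] = 'v'

    foo.bar = lambda: 'instance bar'
    print foo.bar()              # 'instance bar': plain attributes check the instance first

    foo.__getitem__ = lambda key: 'instance getitem'
    print foo['k']               # 'v': the [] operator consults type(foo) only
    print foo.__getitem__('k')   # 'instance getitem': explicit attribute access still finds it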

Does anybody have any links that point to the rationale for ignoring instance definitions of __getitem__ when new-style classes are involved? I assume it has something to do with performance or protecting us from our own mistakes?

So now I am still in search of a way to hook into calls to foo[bar] after foo has been instantiated. It is all test code, so I am not particularly concerned about safety or future compatibility. I can do something really gross like monkeypatching the Foo class instead of the foo instance and keeping track of ids to decide when to override behavior, but there must be a simpler way to do this.
 

MRAB

Christian said:
Steve said:
Does anybody have any links that point to the rationale for ignoring
instance definitions of __getitem__ when new-style classes are
involved? I assume it has something to do with performance or
protecting us from our own mistakes?

Most magic methods are implemented as descriptors. Descriptors are only looked up on the type, to increase the performance of the interpreter and to simplify the C API. The same is true for other descriptors like properties. The interpreter invokes egg.__getitem__(arg) as type(egg).__getitem__(egg, arg).
So now I am still in search of a way to hook into calls to foo[bar]
after foo has been instantiated. It is all test code, so I am not
particularly concerned about safety or future compatibility. I can do
something really gross like monkeypatch Foo class instead of foo
instance and keep track of the ids to decide when to override
behavior, but there must be a simpler way to do this.

Try this untested code:

    class Spam(dict):
        def __getitem__(self, key):
            # fall back to dict.__getitem__ unless this instance carries its own hook
            getitem = self.__dict__.get("__getitem__", dict.__getitem__)
            return getitem(self, key)
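
A hypothetical usage sketch (note that the per-instance function must accept self, because Spam passes it explicitly):

    s = Spam()
    s['a'] = 1
    print s['a']    # 1, via the dict.__getitem__ fallback

    s.__getitem__ = lambda self, key: 'hooked'   # lands in s.__dict__
    print s['a']    # 'hooked': Spam.__getitem__ found the per-instance hook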

Because dict is the most important and most speed-critical type in Python, it has some special behaviors. If you are going to overwrite __getitem__ of a dict subclass then you have to overwrite all methods that call __getitem__, too. These are get, pop, update and setdefault.
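
That caveat is easy to demonstrate (an illustrative sketch): dict.get does not route through an overridden __getitem__.

    class Loud(dict):
        def __getitem__(self, key):
            print 'lookup', key
            return dict.__getitem__(self, key)

    d = Loud()
    d['a'] = 1
    d['a']       # prints 'lookup a'
    d.get('a')   # prints nothing: get() bypasses the override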
I wonder whether it's possible to define 2 behaviours, an optimised one for instances of a class and another, non-optimised, one for instances of subclasses. That would make it easier to subclass built-in classes without losing their speed.
 

Steve Howell

Most magic methods are implemented as descriptors. Descriptors are only looked up on the type, to increase the performance of the interpreter and to simplify the C API. The same is true for other descriptors like properties. The interpreter invokes egg.__getitem__(arg) as type(egg).__getitem__(egg, arg).

Is the justification along performance lines documented anywhere?
[...]

Try this untested code:

class Spam(dict):
    def __getitem__(self, key):
        getitem = self.__dict__.get("__getitem__", dict.__getitem__)
        return getitem(self, key)
[...]

Not sure how this helps me, unless I am misunderstanding...

It is the futility of writing lowercase_spam.__getitem__ that is
setting me back. For my use case I do not want to override
__getitem__ for all Spam objects, nor do I even have the option to
modify the Spam class in some cases.
 

Steve Howell

[quoting Christian's untested code above...]

[I originally responded...] Not sure how this helps me, unless I am misunderstanding...

Ok, now I get where you were going with the idea. The following code
runs as expected. Even in pure testing mode, I would want to make it
a little more robust, but it illustrates the basic idea that you can
monitor just particular objects by overriding the class method to look
for an attribute on the instance before doing any special processing.

    class MyDict(dict):
        pass

    dict1 = MyDict()
    dict1['foo'] = 'bar'

    dict2 = MyDict()
    dict2['spam'] = 'eggs'

    dict3 = MyDict()
    dict3['BDFL'] = 'GvR'

    def spy(obj):
        # Hook __getitem__ on the *class* (required for new-style classes),
        # but only trace instances explicitly marked with __SPYING__.
        def mygetitem(self, key):
            value = self.__class__.__old_getitem__(self, key)
            if hasattr(self, '__SPYING__'):
                print 'derefing %s to %s on %s' % (key, value, self)
            return value
        if not hasattr(obj.__class__, '__HOOKED__'):
            setattr(obj.__class__, '__old_getitem__', obj.__class__.__getitem__)
            setattr(obj.__class__, '__getitem__', mygetitem)
            setattr(obj.__class__, '__HOOKED__', True)
        obj.__SPYING__ = True

    dict1['foo']   # not spied yet
    spy(dict1)     # this changes class and instance
    dict1['foo']   # spied
    dict2['spam']  # not spied: class is hooked, but instance is not marked
    spy(dict3)     # this only changes the instance; class is already hooked
    dict3['BDFL']  # spied
    dict2['spam']  # still not spied

Thanks, Christian!
 

Carl Banks

Does anybody have any links that point to the rationale for ignoring
instance definitions of __getitem__ when new-style classes are
involved?  I assume it has something to do with performance or
protecting us from our own mistakes?


"Not important enough to justify complexity of implementation."

I doubt they would have left it out of new-style classes if it had been straightforward to implement (if for no other reason than to retain backwards compatibility), but it wasn't. The way attribute lookups work meant it would have required all kinds of double lookups and edge cases. Some regarded it as dubious to begin with. And it's easily worked around by simply having __getitem__ call another method, as you've seen. Given all this it made better sense to just leave it out of new-style classes.

Unfortunately not all such decisions and justifications are collected
in a tidy place.


Carl Banks
 

Steve Howell

"Not important enough to justify complexity of implementation."

I doubt they would have left if out of new-style classes if it had
been straightforward to implement (if for no other reason than to
retain backwards compatibility), but it wasn't.  The way attribute
lookups work meant it would have required all kinds of double lookups
and edge cases.  Some regarded it as dubious to begin with.  And it's
easily worked around by simply having __getitem__ call another method,
as you've seen.  Given all this it made better sense to just leave it
out of new-style classes.

Actually, the __getitem__ workaround that I proposed earlier only works on subclasses of dict, not on dict itself. So given a pure dictionary object, it is impossible to hook into item lookups after instantiation in debugging/tracing code. I know it's not a super common thing to do, but it is a legitimate use case from my perspective. But I understand the tradeoffs. They seem kind of 20th century to me, but with Moore's Law declining and all, maybe it's a bad time to bring up the "flexibility trumps performance" argument. ;) The backward compatibility argument also seems a little dubious, because if anybody *had* put __getitem__ on a dictionary instance before, it would have already been broken code, and if they hadn't done it, there would be no semantic change, just a performance hit, albeit a pervasive one.
Unfortunately not all such decisions and justifications are collected
in a tidy place.

Yep, that's why I was asking here. I figured somebody might remember
a thread on python-dev where this was discussed, or something like
that.
 

greg

Christian said:
Most magic methods are implemented as descriptors. Descriptors are only looked up on the type, to increase the performance of the interpreter and to simplify the C API.

There's also a semantic problem. Since new-style
classes are also instances (of class 'type') and you
can create subclasses of 'type', if special methods
were looked up on instances, it would be ambiguous
whether an attribute '__getitem__' on a class was
meant to specify the behaviour of the [] operation
on its instances, or on the class itself.

This problem didn't arise with old-style classes,
because classes and instances were clearly separated
(i.e. old-style classes were not old-style instances).
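
A sketch of the ambiguity greg describes (Python 2, illustrative names): a class is itself an instance of its metaclass, so __getitem__ on the metaclass governs subscripting the class, while __getitem__ on the class governs subscripting its instances.

    class Meta(type):
        def __getitem__(cls, key):
            return 'class-level [%s]' % key

    class Thing(object):
        __metaclass__ = Meta
        def __getitem__(self, key):
            return 'instance-level [%s]' % key

    print Thing['x']     # handled by Meta.__getitem__
    print Thing()['x']   # handled by Thing.__getitem__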
 

Steven D'Aprano

Actually, the __getitem__ workaround that I proposed earlier only works on subclasses of dict, not on dict itself. So given a pure dictionary object, it is impossible to hook into item lookups after instantiation in debugging/tracing code.

If you have written your application to avoid unnecessary isinstance and
type checks, i.e. to use duck-typing, then a technique you might find
useful is delegation.


    class DebuggingDict(object):
        def __init__(self, dict_to_wrap, hook=None):
            # write through __dict__ to avoid triggering our own __setattr__
            self.__dict__['_d'] = dict_to_wrap
            self.__dict__['getitem_hook'] = hook
        def __getattr__(self, name):
            return getattr(self._d, name)
        def __setattr__(self, name, value):
            setattr(self._d, name, value)
        def __getitem__(self, key):
            if self.getitem_hook is not None:
                self.getitem_hook(self, key)
            return self._d[key]


And in use:

    >>> def hook(d, key):
    ...     print "Looking for key", key
    ...
    >>> d = DebuggingDict({1: 'a', 2: 'b'}, hook)
    >>> d[1]
    Looking for key 1
    'a'
    >>> d[2]
    Looking for key 2
    'b'
 

Steve Howell

[quoting Steven D'Aprano's reply above...]

Yep, this basically resembles the approach that I originally took for
the broader problem, which was that I wanted to see how a third party
library (the Django templating system) was accessing dictionaries that
referred to objects that my tracing code did not create. Although I
did not instantiate the original objects, I did have the ability to
substitute masquerade objects for the original objects before passing
them along, and my code for the masquerading objects was similar in
spirit to your DebuggingDict. It actually worked pretty well, except
that eventually my masquerade objects went off to places where I was
not fully able to maintain the illusion of being the original object.
My original post on this thread sheds some light on what I'm doing, but basically I was trying to masquerade down the whole tree of calls from Django. That worked fine as long as Django was accessing my objects like dictionaries, which it always does first when rendering template variables, but all bets are off after that (custom filters, etc.).

Eventually, I realized that it was easier to just monkeypatch Django
while I was in test mode to get a more direct hook into the behavior I
was trying to monitor, and then I didn't need to bother with
overriding __getitem__ or creating complicated wrapper objects. I
wrote about it here if anybody is morbidly interested:

http://showellonprogramming.blogspot.com/
 

Steve Howell

[quoting greg's reply above...]

That explanation makes some sense to me. Given the ambiguity and the backward compatibility issues, I would argue that both of the commented lines in the program below should fail hard with a useful warning. Only one of them actually does. The other just silently no-ops.

    class A(dict):
        pass

    a = A()
    a['ignore'] = 'x'
    a.__getitem__ = lambda key: 'foo'  # exercise in futility: silently ignored

    b = dict()
    b['ignore'] = 'x'
    b.__getitem__ = lambda key: 'foo'  # hard failure: raises AttributeError

Tested under 2.6.

It seems like if __getitem__ is truly a special method, it should get
special treatment whenever you try to use it, even if that special
treatment is just an explicit error message that its use makes no
sense in a particular context. If you allow __getitem__ to exist on
a, then you create situations where __getitem__ is basically an
instance method on an instance of a subtype of dict, which sounds
awfully ambiguous to me, given that the whole intent of __getitem__ is
to define the behavior of [] on classes.
 

Carl Banks

Actually, the __getitem__ workaround that I proposed earlier only works on subclasses of dict, not on dict itself. So given a pure dictionary object, it is impossible to hook into item lookups after instantiation in debugging/tracing code. I know it's not a super common thing to do, but it is a legitimate use case from my perspective. But I understand the tradeoffs. They seem kind of 20th century to me, but with Moore's Law declining and all, maybe it's a bad time to bring up the "flexibility trumps performance" argument. ;)

It's not performance, it's code complexity.

Implementing this would have added a lot of extra code to the Python codebase--more than you seem to realize--all in support of a dubious behavior. This would have meant more opportunities for bugs, and an increased maintenance burden.

 The backward compatibility argument also seems a little
dubious, because if anybody *had* put __getitem__ on a dictionary
instance before, it would have already been broken code, and if they
hadn't done it, there would be no semantic change, just a performance
hit, albeit a pervasive one.

Wrong. It's only dubious from your narrow mindset that focuses on
your own problem and ignores the greater problem. When new-style
classes were introduced, they made a decision that magic methods would
no longer work when defined on any instance. It affected a *lot* more
than just your dictionary use-case.

So please spare me any suggestions that the change didn't carry a
significant backwards incompatibility. It did, and they made the
change anyway. That should tell you how difficult it would have been
to implement.


Carl Banks
 

Steve Howell

[quoting Carl's reply above...]

I am sorry for having a narrow mindset, and I apologize for not seeming to realize how much extra code would go into the Python codebase to allow me to hook into item lookups after instantiation in debugging/tracing code.
 

Steve Howell

Scott said:
Steve Howell wrote:
...

Since nobody else has mentioned it, I'd point you at Mock objects:
     http://python-mock.sourceforge.net/
for another way to skin the cat that it sounds like has been biting you. They are surprisingly useful for exploratory and regression testing.

Thanks, Scott. We do indeed use mocks in our development, and they are useful. In a way I am trying to emulate some of what mock objects do, but in more of an integration-testing mode. For testing the rendering of whole templates, it sometimes becomes convenient for at least some of the objects that your code depends on to maintain their implementation under the hood, but you also want to see where they're going, which is why I want to be able to hook into the dictionary lookup mechanism.

Even outside of pure unit testing, mock objects have been pretty
versatile in terms of giving us the lay of the land, even when we
start getting down into the Django template stack. It is mostly when
you start interacting with the ORM that you want to start using real
objects. On the one hand you want to isolate yourselves from the ORM
behavior to validate the rest of your code, but sometimes it is
difficult to emulate the exact semantics of the ORM, or maybe you are
trying to get a handle on where the ORM is hitting the database,
etc.

The so-called "real world" situations are when you start wanting a
messy hybrid of mock objects, merely-spied-on objects, real objects,
etc.
 
