Paolo said:
And above all, with sets the cost of a remove operation should be
sublinear, or am I wrong?
Is it as fast as with lists?
It's faster than with lists... in sets, as with dicts, remove is on
average O(1).
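For what it's worth, a rough sketch of the cost difference (the timings are illustrative and assume CPython; `timeit` is only used to compare the two removal strategies, not to produce precise numbers):

```python
import timeit

# set.remove: average O(1) per operation (hash lookup).
t_set = timeit.timeit("s.remove(50_000); s.add(50_000)",
                      setup="s = set(range(100_000))", number=100)
# list.remove: O(n) linear scan for the value, plus element shifting.
t_list = timeit.timeit("l.remove(50_000); l.append(50_000)",
                       setup="l = list(range(100_000))", number=100)

print(t_set < t_list)  # True: the set wins by a wide margin
```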
Obviously, if I use the ids as hash values, nothing is guaranteed about
the objects' contents being unique, but I don't care.
My work is a self-organizing net, in which the nodes keep a structure to
link other nodes. Given the nature of the net, the links are moved
frequently, so remove and add operations and the contains query should
be optimized.
Why do objects need to be hashable for this? Isn't __hash__ there to
solve the problem?
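For what it's worth, plain instances of a user-defined class already hash by identity (the default __hash__ is derived from id()), so a node object can sit in a set even while its mutable contents change. A minimal sketch (the Node name is illustrative, not from the thread):

```python
class Node:
    """A node in the net; hashable by identity (the default __hash__),
    so it stays usable as a set member while its links mutate."""
    def __init__(self):
        self.links = set()   # adjacent Node objects

a, b = Node(), Node()
a.links.add(b)           # add a link: average O(1)
print(b in a.links)      # membership test: average O(1) -> True
a.links.discard(b)       # remove it again: average O(1)
```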
The idea of a set of mutable sets looks a bit odd to me...
I don't get why the outer container should be a set, since you don't
care about uniqueness... if you are representing a graph (which seems
to be the case to me), I'd use an identifier for each node, and a
dictionary mapping each node id to its adjacency set. For instance,
0 <-- 1 --> 2 --> 3
            |     |
            v     v
            4 --> 5
would be represented as
{0: set([]), 1: set([0, 2]), 2: set([3, 4]), 3: set([5]), 4: set([5]),
5: set([])}
If node ids are consecutive integers, you could also of course use a
list as the outer structure.
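A sketch of that representation, building the edges from the arrow diagram above (the helper names add_edge, remove_edge, and has_edge are illustrative, not from the thread):

```python
# node id -> adjacency set
graph = {n: set() for n in range(6)}

def add_edge(g, u, v):
    g[u].add(v)            # average O(1)

def remove_edge(g, u, v):
    g[u].discard(v)        # average O(1); no error if the edge is absent

def has_edge(g, u, v):
    return v in g[u]       # average O(1)

# Edges from the diagram: 1->0, 1->2, 2->3, 2->4, 3->5, 4->5
for u, v in [(1, 0), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]:
    add_edge(graph, u, v)

print(has_edge(graph, 1, 2))   # True
remove_edge(graph, 1, 2)
print(has_edge(graph, 1, 2))   # False
```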
PS: we could also discuss this in Italian on it.comp.lang.python
