On Jul 13, 2005, at 9:29 PM, Peter Fein wrote:

> Ok, but this is a pretty large cost overhead compared to the constant
> time len() of a PersistentDict or python dict.  If you need to do a lot
> of len()'s, it adds up pretty quickly.

Here is one way to address the issue.  It isn't constant time, but it is
pretty fast if you have your BNodes in RAM, especially if your tree uses
BNodes with a higher degree.  Evaluating it once per request seems
reasonable to me.

Index: btree.py
===================================================================
--- btree.py	(revision 27028)
+++ btree.py	(working copy)
@@ -203,6 +203,11 @@
         self.items = self.nodes[0].items
         self.nodes = self.nodes[0].nodes
 
+    def get_count(self):
+        result = len(self.items)
+        for node in self.nodes or []:
+            result += node.get_count()
+        return result
 
 class BNode2 (BNode):
     minimum_degree = 2
 
 class BNode4 (BNode):
     minimum_degree = 4
@@ -306,3 +311,8 @@
         """() -> (key:anything, value:anything)
         Return the item whose key is maximal."""
         return self.root.get_max_item()
+
+    def get_count(self):
+        """() -> int
+        Compute and return the total number of items."""
+        return self.root.get_count()
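
For anyone who wants to see the recursion without the rest of btree.py, here is a minimal, self-contained sketch of the same idea.  The `BNode` class below is a stand-in with just the two attributes the patch uses (`items`, a list of key/value pairs, and `nodes`, a child list that is None for leaves); it is not the Durus persistent class.

```python
class BNode:
    """Simplified stand-in for a B-tree node: `items` holds key/value
    pairs, `nodes` holds child BNodes (or None for a leaf)."""

    def __init__(self, items, nodes=None):
        self.items = items
        self.nodes = nodes  # None marks a leaf node

    def get_count(self):
        # Count this node's own items, then recurse into any children.
        # `self.nodes or []` makes the loop a no-op at the leaves.
        result = len(self.items)
        for node in self.nodes or []:
            result += node.get_count()
        return result

# A tiny tree: the root holds one item and two leaf children
# holding two items each, for five items total.
leaf_a = BNode([('a', 1), ('b', 2)])
leaf_b = BNode([('d', 4), ('e', 5)])
root = BNode([('c', 3)], nodes=[leaf_a, leaf_b])
print(root.get_count())  # → 5
```

Since every node is visited once, the cost is linear in the number of nodes, which is why higher-degree BNodes (more items per node, fewer nodes overall) make the traversal cheaper.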