durusmail: durus-users: building a large BTree efficiently
building a large BTree efficiently
mario ruggier
2005-10-26
On Oct 26, 2005, at 4:44 PM, mario ruggier wrote:
> On Oct 20, 2005, at 10:04 PM, David Binger wrote:
>> On Oct 20, 2005, at 3:54 PM, mario ruggier wrote:
>>>
>>> I have added this to the processing of each item in the loop:
>>>
>>>             if item._p_is_saved():
>>>                 item._p_set_status_ghost()
>>>
>>> But, even with this, the "size" in the shrink_cache()'s
>>> (len(cache.objects)) grows as fast as the total number of items
>>> processed, as previously reported.
>>
>> Ghosting objects does not remove them from the cache.
>> It does, however, reduce the memory required to hold them
>> in memory.  A ghost object's __dict__ is empty.
>
> Thanks. FWIW, I have tried running the generation process again. As
> the 10,000-object cycles accumulate (shrink_cache() is called at the
> end of each), the virtual memory of the process seems more stable, but
> it nevertheless continues to grow. And, on this machine with limited
> resources, it does eventually run out of memory...

Can I force deletion of an item from the cache myself?
I.e., for long loops over big containers, could I do something
equivalent to the following, but without the havoc it causes:

             if item._p_is_saved():
                 del self._p_connection.cache[item._p_oid]

mario
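
For reference, here is a minimal sketch of the batched build loop
discussed in this thread. It assumes the Durus FileStorage, Connection,
and BTree API, that the stored items are Persistent instances, and that
the Connection exposes the shrink_cache() call mentioned above; the
function name, the pairs argument, and the batch size are illustrative
only, not code from the thread.

    # Sketch only: build a BTree in batches, committing every
    # `batch_size` items, then ghosting the just-saved items and
    # shrinking the cache, as described in the messages above.
    from durus.file_storage import FileStorage
    from durus.connection import Connection
    from durus.btree import BTree

    def build_btree(path, pairs, batch_size=10000):
        connection = Connection(FileStorage(path))
        root = connection.get_root()
        root['tree'] = tree = BTree()
        connection.commit()
        batch = []
        for key, item in pairs:
            tree[key] = item
            batch.append(item)
            if len(batch) >= batch_size:
                connection.commit()        # this batch is now saved
                for saved in batch:
                    if saved._p_is_saved():
                        # Empties saved.__dict__; the object itself
                        # stays in the connection cache.
                        saved._p_set_status_ghost()
                connection.shrink_cache()  # the call mario refers to
                batch = []
        connection.commit()
        return tree

As the thread notes, ghosted objects still occupy cache slots, so the
cache count itself can keep growing even when this pattern is used.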
