Pickling/exporting objects
A.M. Kuchling
2005-03-02
On Wed, Mar 02, 2005 at 12:05:32PM -0500, A.M. Kuchling wrote:
> the export may collide with OIDs used by the current database.  It
> seems to me that a function similar to copy_object() would still be
> required.

Iteration 2: copy_obj() takes an object and cleans off the
._p_connection and ._p_oid attributes.  The resulting object, untied
to any particular connection, can be placed in a different Durus
database.  export_object(obj, filename, key) opens the storage
'filename' and records a copy of the object in the root object using
'key'.  import_object(filename, key) opens the storage and returns a
cleaned copy of the object stored under 'key'.  This interface lets
you export multiple objects into a single storage.
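
The intended round trip would then look something like this (the
filename and key here are just placeholders):

    export_object(model, '/tmp/export.durus', 'model')
    copy = import_object('/tmp/export.durus', 'model')
    assert copy._p_connection is None   # untied from any connection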

(The 'cleaned' terminology isn't particularly descriptive of breaking
the link between an object and its storage; any better suggestions?
"Checked out"?  "Divorced"?)

--amk

import os

from durus import serialize, connection, file_storage
from ACE_model import get_model

m = get_model('small')
print m


def copy_obj (obj):
    """Copies the object 'obj', cleaning it and every object reachable
    from it of the attributes tying them to a particular Durus
    connection.
    """
    conn = obj._p_connection
    if conn is None:
        # Returning the existing object is convenient; will it hide
        # errors, though?
        return obj
        ##raise RuntimeError("Object %r is not stored in a Durus database"
        ##                   % obj)

    # Walk the graph of objects reachable from 'obj' breadth-first,
    # collecting each object by OID.
    ow = serialize.ObjectWriter(conn)
    try:
        root_oid = obj._p_oid
        queue = [root_oid]
        oid_coll = {}
        while queue:
            oid = queue.pop(0)
            if oid in oid_coll:
                continue

            # Get the object for this OID
            obj = conn.get(oid)
            if obj is None:
                ##print 'no object with oid', repr(oid)
                continue

            oid_coll[oid] = obj

            # Ensure the object's state is loaded
            if obj._p_is_ghost():
                conn.load_state(obj)

            # Get OIDs referenced by this object and add them to the queue
            data, refs = ow.get_state(obj)
            refs = serialize.split_oids(refs)
            queue.extend(refs)
    finally:
        ow.close()

    print len(oid_coll), 'objects'

    # Clear _p_ attributes
    for obj in oid_coll.values():
        obj._p_oid = obj._p_connection = None

    return oid_coll[root_oid]


def export_object (obj, filename, key):
    """Write the specified object to the given storage, using the given
    'key' within the root object."""
    fs = file_storage.FileStorage(filename)
    conn2 = connection.Connection(fs)
    root = conn2.get_root()
    copy = copy_obj(obj)
    root[key] = copy
    conn2.commit()
    del conn2
    fs.close()


def import_object (filename, key):
    """Return a copy of the object from the specified storage."""
    fs = file_storage.FileStorage(filename, readonly=True)
    conn2 = connection.Connection(fs)
    root = conn2.get_root()
    obj = root[key]
    result = copy_obj(obj)
    del conn2
    fs.close()
    return result

filename = '/tmp/new-fs'
if os.path.exists(filename):
    os.remove(filename)
export_object(m, filename, 'model')

m2 = import_object(filename, 'model')
m3 = copy_obj(m2)
m3.dump()
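
# Since the imported copy carries no ._p_oid or ._p_connection, it can
# also be placed in a different Durus database -- a sketch (the path is
# just a placeholder):
filename2 = '/tmp/other-db'
if os.path.exists(filename2):
    os.remove(filename2)
fs2 = file_storage.FileStorage(filename2)
conn3 = connection.Connection(fs2)
conn3.get_root()['model'] = m2   # the cleaned copy behaves like a fresh object
conn3.commit()
fs2.close()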


