Graham Fawcett wrote:

> I was caught up in the idea that the data-provider interface had to be
> custom-fitted to the data model (getUser, getPlant, etc.) but your
> findXXX() approach removes that obstacle. Given those "simple" semantics,
> I see how one could indeed abstract, for example, a User data source
> that could be implemented with SQL, LDAP, ZODB,... It reminds me of both
> tuple-spaces and the REST concept: a short list of well-known verbs,
> plus a long list of nouns, equals a resilient and extensible interface.

Indeed. In code, IMO, this translates to moving complexity out of the
interface and into the data, an idea propagated by many. Python makes this
particularly easy by providing keyword arguments, dynamically updateable
classes, etc.

> Transactions are something that I didn't see addressed in your document;
> and I believe this is a service normally provided by a container (which
> you seem to be implementing, whether intentionally or not!).

In my model, a 'connection' corresponds to an instance of a data provider
and contains commit() and rollback() methods. You get an objectclass from
a connection, and all objects of that objectclass are bound by the
transactions of the connection. Of course, if a data provider does not
support transactions, the methods become bogus. But the user knowingly
uses such a provider and should not expect transactional consistency.

>> All data providers have the same interface - this interface is fixed
>> and does not change with the schema. Well, on second thought, the
>> attribute names would change but not any method names.
>
> That raises some questions. Would you specify that certain keyword
> arguments should (only) accept values of certain types? I'm not sure how
> one would handle, for example,
>
> User.findOne(birthdate='1950.12.12')

I conveniently leave type issues out of the interface (makes my life
easier :). This is not to say that portability suffers; rather,
productivity is improved. For example, I have a data provider based on
pyPgSQL (a Python interface to PostgreSQL). The interface simply passes
all objects through, and the underlying library deals with them. If a
string is passed where an int was expected, it is pyPgSQL, and not the
data interface, that raises an exception. For dates, pyPgSQL uses
mxDateTime objects, so the code above would look like:

    User.find_one(birthdate=mx.DateTime.DateTime(1950, 12, 12))

One can say that in the application schema, birthdate is of type
mxDateTime. What happens if you move this application to the Firebird
database, where the client library uses the Python DateTime instead? Used
as is, it raises an exception. To fix it, you simply write an objectclass
wrapper (not a provider) to wrap the user objectclass from the Firebird
provider. This wrapper translates the birthdate objects back and forth.
It may be possible to just assemble such an objectclass without writing
any code, if an mxDateTime-DateTime attribute translator is available.
Such translators, together with data join and attribute mapping
facilities, are what will (hopefully) allow us to adapt the underlying
data to the schema required by the application.
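(To make the wrapper idea concrete, here is a rough sketch. Only find_one
comes from the interface described above; the class name, the
get_objectclass call and the translator helpers are made-up names for
illustration, not actual QLime API.)

    # Hypothetical sketch only -- not the actual QLime API.
    # Translates the 'birthdate' attribute between the mx.DateTime values
    # the application schema expects and the standard datetime values the
    # (assumed) Firebird provider returns.

    import datetime
    import mx.DateTime  # eGenix mx package, as used with pyPgSQL

    def mx_to_datetime(value):
        # mx.DateTime value going *in* to the provider
        return datetime.datetime(value.year, value.month, value.day)

    def datetime_to_mx(value):
        # datetime value coming back *out* to the application
        return mx.DateTime.DateTime(value.year, value.month, value.day)

    class UserWrapper:
        """Wraps a 'user' objectclass whose date type differs from the
        one the application schema expects."""

        def __init__(self, wrapped_objectclass):
            self._wrapped = wrapped_objectclass

        def find_one(self, **criteria):
            # Translate query values on the way in...
            if 'birthdate' in criteria:
                criteria['birthdate'] = mx_to_datetime(criteria['birthdate'])
            obj = self._wrapped.find_one(**criteria)
            # ...and translate the stored value on the way out.
            if obj is not None and getattr(obj, 'birthdate', None) is not None:
                obj.birthdate = datetime_to_mx(obj.birthdate)
            return obj

    # The application keeps passing mx.DateTime values, unchanged:
    #   users = UserWrapper(firebird_connection.get_objectclass('user'))
    #   u = users.find_one(birthdate=mx.DateTime.DateTime(1950, 12, 12))

With a generic mxDateTime-DateTime attribute translator available, the
find_one plumbing above is exactly the part that could be assembled
without writing any code.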
Introducing a set of data types at the data interface level IMO
unnecessarily increases the complexity. Not only does the developer have
to learn a whole new set of data types, but the types have to be mapped
adequately to each and every back-end system as well. In contrast, the
'transparent' approach allows developers to get up and running quickly,
and even lets them take advantage of very special data types of a
specific provider if they want.

> unless a standard date-encoding was a given. I imagine that the schema
> would include such detail?

There is no formal specification of the schema definition. I assume there
will be information in the docs of each application so that users can
configure data providers appropriately. Things like checking the length
of strings etc. are supposed to be done by either the application itself
or the underlying data provider.

> Intriguing stuff, Shalabh! I'm especially interested in seeing QLime
> when it is available.

I am looking at a couple of months to open source it. It will be released
under a BSD-style license. I'm glad at least someone is interested :)

> I'm going to let all of this simmer for a day or two, and will probably
> respond again. With luck there will be an onslaught of commentary from
> the quixote-users community in the meantime!
>
> I would be very interested in seeing/writing a proof-of-concept app that
> used some of your ideas. An issue tracker, perhaps? ;-)

By all means. I've written a few QLime apps myself but would be happier
to see how it works out for others.

> --
> Graham

Shalabh