On 9/20/05, David Binger wrote:
> On Sep 20, 2005, at 5:50 PM, Mike Orr wrote:
>
> >>
> >> PROPFIND is a WebDAV method.  WebDAV is described in RFC 2518.
> >> Maybe this is just a browser trying to mount your server.
> >>
> >
> > But should it be?  If the application is not meant to be WebDAV
> > friendly, is there any reason not to send an error and force them to
> > use GET?
>
> I agree that the unimplemented method response is appropriate here.
> I just meant that these requests might not be attacks.
> Should it be Quixote or an application that restricts the request
> method?

What if your application really wants to implement WebDAV?  It looks
like a job for ._q_traverse().  How about something like this?

    def get_methods(self, path):
        """Return a list of HTTP methods allowed for this traversal path."""
        return ['GET', 'POST']

    def _q_traverse(self, path):
        ...
        method = quixote.get_request().get_method().upper()
        if method == 'OPTIONS':
            # Properly formatted per RFC, of course.
            return self.get_methods(path)
        elif method == 'TRACE':
            # Properly formatted.  request.get_headers() does not exist
            # yet, and I'm not really sure what this method is supposed
            # to do anyway.
            return quixote.get_request().get_headers()
        elif method not in self.get_methods(path):
            quixote.get_response().set_status(501)
            return "ERROR MESSAGE"
        ...

The user can override .get_methods() for WebDAV.  They'll have to
override ._q_traverse() for the action behavior.  If it's reasonable,
we can add a WebDAV mixin class as an example.  The behavior is so
application-specific, though, that I'm not sure there's much we could
put in a mixin.  Maybe just a note in the module docstring suggesting
a pattern.

> >> It will be interesting to see how well that works.
> >> It seems relatively expensive.
> >>
> >
> > It won't work for huge logs.  The last organization I worked for had
> > logs that were 1+ GB compressed.
> > But normalizing the user agent field
> > (or leaving it out) and using a datetime for the date and an int for
> > the size should partly offset the overhead.  I've also set an
> > expiration of one week for images, and that cut the number of
> > requests significantly.
>
> I was thinking that there might be contention at the database level
> since this would, I assume, mean frequent writes to the same table.
> Maybe that's not a problem, though.

MySQL should multiplex them fine; it's designed to do that.  If MySQL
gets overwhelmed and aborts, we'll have to think of another strategy,
but it doesn't seem close to that now.

Actually, the company I last worked for *did* keep their multi-gigabyte
access log in both MySQL and flat files.  They normalized every string
field, including the URL, so any interesting query required joining
several tables.  Each query took twenty minutes!

--
Mike Orr
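P.S. To make the join cost concrete, here is a toy sketch of what that
kind of fully normalized log schema looks like.  All table and column
names are made up, and I'm using sqlite3 just so the example runs
standalone; the real setup used MySQL.

```python
import sqlite3

# Toy sketch: every string field is normalized into its own lookup
# table, so even a simple "hits per URL" report needs one join per
# normalized field.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE urls   (id INTEGER PRIMARY KEY, url   TEXT UNIQUE);
    CREATE TABLE agents (id INTEGER PRIMARY KEY, agent TEXT UNIQUE);
    CREATE TABLE hits (
        time     TEXT,     -- a real datetime, not a raw log string
        size     INTEGER,  -- an int, not a string
        url_id   INTEGER REFERENCES urls(id),
        agent_id INTEGER REFERENCES agents(id)
    );
""")

def log_hit(time, size, url, agent):
    """Record one hit, interning the string fields in lookup tables."""
    cur = conn.cursor()
    cur.execute("INSERT OR IGNORE INTO urls (url) VALUES (?)", (url,))
    cur.execute("INSERT OR IGNORE INTO agents (agent) VALUES (?)", (agent,))
    url_id = cur.execute("SELECT id FROM urls WHERE url = ?",
                         (url,)).fetchone()[0]
    agent_id = cur.execute("SELECT id FROM agents WHERE agent = ?",
                           (agent,)).fetchone()[0]
    cur.execute("INSERT INTO hits VALUES (?, ?, ?, ?)",
                (time, size, url_id, agent_id))

log_hit("2005-09-20 17:50:00", 1024, "/", "Mozilla/5.0")
log_hit("2005-09-20 17:51:00", 2048, "/", "Mozilla/5.0")

# Even "hits per URL" needs a join to recover the URL strings.
rows = conn.execute("""
    SELECT urls.url, COUNT(*) FROM hits
    JOIN urls ON urls.id = hits.url_id
    GROUP BY urls.url
""").fetchall()
print(rows)  # [('/', 2)]
```

Multiply that join by a URL table, an agent table, a referrer table,
and so on, over a multi-gigabyte hits table, and the twenty-minute
queries stop being surprising.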