I'm really targeting the Quixote folks with this message, but Medusa folks may be interested too, so I'm posting to both lists.

I've noticed that upload performance with Medusa and Quixote can be very bad, especially with larger files (more than a couple hundred KB). I dug around and found that medusa_http.QuixoteHandler uses medusa.xmlrpc_handler.collector for its collection dirty work. Unfortunately, medusa.xmlrpc_handler.collector uses the anti-pattern of "string = string + string_part" when collecting data, which accounts for the poor upload performance I've observed. Each such rebinding copies everything collected so far, so the total work grows quadratically with the upload size. I've modified my copy of medusa.xmlrpc_handler.collector to join a list of strings instead (*), and it's much (MUCH!) better.

However, I'm not sure modifying part of Medusa is the best way to go. Maybe we should just provide our own collector class in quixote.medusa_http (they're quite trivial) and leave Medusa alone. OTOH, if anyone uses Medusa for XML-RPC, they'd probably appreciate the better POST performance. A possible third option would be to modify QuixoteHandler to use medusa.script_handler.collector instead, which uses StringIO as its buffer. This might be best (reusing as much existing code as possible, and all that), but I haven't looked into it very hard; some of the work medusa.xmlrpc_handler.collector does would have to be shifted into the QuixoteHandler class.

I'll be happy to provide the patches, whichever way we go, but I'm looking for opinions/preferences on which route to take.

Thanks,
Jason Sibre

* I did some impromptu benchmarking and learned that the list.append()/"".join(list) technique actually outperforms StringIO and cStringIO for large increments, and falls in between for small increments. For Medusa's purposes (I've observed 4 KB chunks going to the collector), the list-joining technique is the fastest.
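For anyone curious what the fix looks like, here's a minimal sketch of the list-append/"".join() approach. The method names (collect_incoming_data, get_data) follow the usual asynchat-style collector conventions, but the real medusa.xmlrpc_handler.collector differs in its details, so treat this as an illustration rather than the actual patch:

```python
class ListCollector:
    """Accumulate POST body chunks in a list instead of rebinding a string.

    Appending to a list is O(1) amortized; the old pattern
    'self.data = self.data + data' copies the entire accumulated
    buffer on every chunk, which is quadratic overall.
    """

    def __init__(self):
        self.parts = []            # was: self.data = ''

    def collect_incoming_data(self, data):
        # Called once per incoming chunk (I've observed 4 KB chunks).
        self.parts.append(data)

    def get_data(self):
        # Single join at the end; one pass over all collected chunks.
        return "".join(self.parts)
```

The join happens exactly once, when the full body has arrived, which is why this wins even though "".join() itself has to walk the whole list.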
I've copied my test results below if anyone cares:

100000 iterations of adding 1 bytes
  Doing list joiner  : 0.45186 seconds (100000 bytes returned)
  Doing str concat-er: 7.30244 seconds (100000 bytes returned)
  Doing StringIO     : 1.03356 seconds (100000 bytes returned)
  Doing cStringIO    : 0.20682 seconds (100000 bytes returned)

20000 iterations of adding 5 bytes
  Doing list joiner  : 0.09181 seconds (100000 bytes returned)
  Doing str concat-er: 1.89697 seconds (100000 bytes returned)
  Doing StringIO     : 0.21069 seconds (100000 bytes returned)
  Doing cStringIO    : 0.03587 seconds (100000 bytes returned)

10000 iterations of adding 10 bytes
  Doing list joiner  : 0.04546 seconds (100000 bytes returned)
  Doing str concat-er: 1.02731 seconds (100000 bytes returned)
  Doing StringIO     : 0.10039 seconds (100000 bytes returned)
  Doing cStringIO    : 0.01793 seconds (100000 bytes returned)

10000 iterations of adding 50 bytes
  Doing list joiner  : 0.04703 seconds (500000 bytes returned)
  Doing str concat-er: 15.52068 seconds (500000 bytes returned)
  Doing StringIO     : 0.11230 seconds (500000 bytes returned)
  Doing cStringIO    : 0.02438 seconds (500000 bytes returned)

1000 iterations of adding 500 bytes
  Doing list joiner  : 0.00532 seconds (500000 bytes returned)
  Doing str concat-er: 1.56356 seconds (500000 bytes returned)
  Doing StringIO     : 0.01097 seconds (500000 bytes returned)
  Doing cStringIO    : 0.00969 seconds (500000 bytes returned)

1000 iterations of adding 1000 bytes
  Doing list joiner  : 0.00734 seconds (1000000 bytes returned)
  Doing str concat-er: 3.49345 seconds (1000000 bytes returned)
  Doing StringIO     : 0.01283 seconds (1000000 bytes returned)
  Doing cStringIO    : 0.02518 seconds (1000000 bytes returned)

These are all 'wallclock' measurements, just what an anxious web-surfer is paying attention to, and the measurements include the "".join(list) or x.getvalue() calls (to be fair).
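If anyone wants to reproduce the comparison, here's a rough reconstruction of the kind of harness I used. It's a sketch, not my original script: I've used io.StringIO (cStringIO only exists on Python 2), and the exact timings will of course vary by machine and Python version:

```python
import io
import time

def bench(n_iters, chunk):
    """Time three accumulation strategies for n_iters appends of chunk.

    Returns {strategy: (wallclock seconds, bytes returned)}.  The final
    "".join()/getvalue() call is included in the timing, to be fair.
    """
    results = {}

    # list.append() + "".join(): one join at the end.
    t0 = time.time()
    parts = []
    for _ in range(n_iters):
        parts.append(chunk)
    out = "".join(parts)
    results["list joiner"] = (time.time() - t0, len(out))

    # Naive 's = s + chunk': copies the whole buffer every iteration.
    t0 = time.time()
    s = ""
    for _ in range(n_iters):
        s = s + chunk
    results["str concat-er"] = (time.time() - t0, len(s))

    # StringIO buffer with a single getvalue() at the end.
    t0 = time.time()
    buf = io.StringIO()
    for _ in range(n_iters):
        buf.write(chunk)
    out = buf.getvalue()
    results["StringIO"] = (time.time() - t0, len(out))

    return results

if __name__ == "__main__":
    for n, size in [(100000, 1), (20000, 5), (10000, 50), (1000, 1000)]:
        print("%d iterations of adding %d bytes" % (n, size))
        for name, (secs, nbytes) in bench(n, "x" * size).items():
            print("  Doing %-13s: %.5f seconds (%d bytes returned)"
                  % (name, secs, nbytes))
```

One caveat worth noting: modern CPython optimizes some in-place string concatenations, so the str concat-er numbers may look less catastrophic today than they did on the interpreter I tested, but the list-join approach remains the portable, reliably fast choice.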