The number of rows actually returned is only on the order of 1,500 at this stage. Running the results query directly against the database is somewhat slow (a second or so per result), but still much more reasonable than what cattrack is showing. The logs attribute something like 30-70% of execution time to the DB portion, but I suspect there is also overhead in how Ruby accesses the database, since the total seems higher than I would expect.
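One way to confirm where the time actually goes would be to wrap the query and the row processing separately in `Benchmark.realtime`. This is only a sketch: `run_results_query` and `process_rows` are hypothetical stand-ins for the real cattrack code, with the query simulated by generating rows in memory.

```ruby
require 'benchmark'

# Hypothetical stand-in for the real results query; the actual code would
# go through the DB connection. Simulated here with ~1500 in-memory rows.
def run_results_query
  (1..1500).map { |i| { id: i, value: i * 2 } }
end

# Hypothetical stand-in for the Ruby-side work done on the returned rows.
def process_rows(rows)
  rows.sum { |r| r[:value] }
end

rows = nil
db_time   = Benchmark.realtime { rows = run_results_query }
ruby_time = Benchmark.realtime { process_rows(rows) }

puts format('DB portion: %.3fs, Ruby-side: %.3fs', db_time, ruby_time)
```

Comparing the two numbers against the 30-70% the logs report would show whether the driver/ORM layer is eating the difference.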
The loops that iterate over the data and draw the image are also completely unoptimized, and I imagine they add significant overhead.
I prototyped adding conditions to limit the query to the last 3 months of changes, and performance was still very poor. It also defeats part of the point of having these graphs, which is to give us a more complete historical view of performance.
We really need to aggregate the older values into a format that is ready for graphing, but I don't have the time to work on this at the moment.
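As a rough illustration of the kind of aggregation I mean (the field names are made up, not the real schema): roll the older per-run values up into one averaged point per month, which the graphing code could then consume directly instead of iterating over every raw row.

```ruby
require 'date'

# Hypothetical raw rows, one value per run; the real schema will differ.
rows = [
  { run_at: Date.new(2007, 1, 3),  value: 10.0 },
  { run_at: Date.new(2007, 1, 20), value: 14.0 },
  { run_at: Date.new(2007, 2, 5),  value: 9.0  },
]

# Collapse to one averaged data point per month.
monthly = rows.group_by { |r| [r[:run_at].year, r[:run_at].month] }
              .map do |(year, month), grp|
  { month: Date.new(year, month, 1),
    value: grp.sum { |r| r[:value] } / grp.length }
end

monthly.each { |p| puts "#{p[:month].strftime('%Y-%m')}: #{p[:value]}" }
```

In practice this rollup would presumably be done once in SQL (or a background job) and stored, rather than recomputed per page view.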
Peter has identified one of the fundamental problems with the current infrastructure: we have a single-threaded app (which is also very slow, being interpreter-only), so if anything goes out to lunch we can't even serve basic pages. The regression report alone requires dozens of requests, usually several at a time.
Once I finish writing (which will take quite a while) I will look at what we can do, including the possibility of changing the implementation to something that more of the project's active members (> 0) have experience with.