On Wed, Jul 25, 2007 at 01:25:52PM +0100, Bob Walker wrote:
> not that i really care but no one else seems to have these issues.
> well boston may have done but then crschimdt implemented mod_perl,
> memcached and better indexes and indeed uses mysql i think.
I've definitely seen problems of this type -- in fact, I still do.
The big things to look for, however, require more knowledge of the usage
pattern. If you can get hold of the Apache logs, the things to look for
are accesses to pages like ?action=index with no index_type/index_value,
and repeated accesses to large categories, like the 'Restaurants' or
'Bars' category.
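A quick sketch of that log check -- the log path and query-string shapes
here are assumptions (a tiny inline sample stands in for the real access
log so the snippet runs as-is), so adjust the regexes to the real vhost:

```python
# Sketch: count the expensive request patterns in an Apache access log.
# The sample lines below are made up; point this at the real log instead.
import re

sample_log = """\
1.2.3.4 - - [25/Jul/2007:13:00:00 +0100] "GET /wiki.cgi?action=index HTTP/1.1" 200 51234
1.2.3.4 - - [25/Jul/2007:13:00:01 +0100] "GET /wiki.cgi?action=index;index_type=category;index_value=Restaurants HTTP/1.1" 200 80123
5.6.7.8 - - [25/Jul/2007:13:00:02 +0100] "GET /wiki.cgi?action=display;id=Home HTTP/1.1" 200 2345
"""

bare_index = 0      # ?action=index with no index_type/index_value at all
category_hits = {}  # how often each index_value (e.g. Restaurants) is hit

for line in sample_log.splitlines():
    if "action=index" not in line:
        continue
    m = re.search(r"index_value=([^;&\s\"]+)", line)
    if m:
        category_hits[m.group(1)] = category_hits.get(m.group(1), 0) + 1
    elif "index_type=" not in line:
        bare_index += 1

print("bare full-index hits:", bare_index)
print("per-category hits:", category_hits)
```

If the bare full-index count dominates, that points straight at the
show_index problem below.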
If it's really a case of 'fix it or kill it', I would bet that looking
at:
    } elsif ($action eq 'index') {
        $guide->show_index(
            type   => $q->param("index_type") || "Full",
            value  => $q->param("index_value") || "",
            format => $format,
        );
    }
and killing show_index (making it return a 'Sorry! You can't do that!'
page instead) would significantly lower the load.
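One quick way to serve that 'sorry' page without touching the Perl at
all would be a rewrite at the Apache level -- assuming the site runs
under Apache with mod_rewrite enabled and the script lives at /wiki.cgi
(both assumptions):

    # Send bare action=index requests (no index_type) to a static page.
    RewriteEngine On
    RewriteCond %{QUERY_STRING} (^|[;&])action=index($|[;&])
    RewriteCond %{QUERY_STRING} !index_type=
    RewriteRule ^wiki\.cgi$ /sorry.html [L]

That keeps the full-index requests from ever reaching the CGI, which is
where the cost is.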
However, I can't guarantee that -- another possibility is that spiders
are hammering the crap out of the site with lots of requests, in which
case blocking spiders and working on that aspect of it might help.
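For the spider side, a robots.txt along these lines would keep
well-behaved crawlers off the expensive actions (the path is a guess,
and Disallow rules with query strings aren't in the original robots.txt
spec, though the big crawlers honour them):

    User-agent: *
    Disallow: /wiki.cgi?action=index
    Crawl-delay: 10

Badly-behaved bots ignore robots.txt, of course, so checking the
User-Agent column of the log for repeat offenders is worth doing too.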
As Bob mentioned, the indexes are Serious Business: if someone can pull
a small section of query log from the site, the queries can be 'explain
analyze'd to check for sequential scans -- especially on the metadata
table, which is likely > 1mil rows if it's anything like boston -- and
indexes added where they turn out to be missing.
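A stand-in demo of that explain-and-index step -- on the real site you'd
run EXPLAIN ANALYZE in psql (or EXPLAIN in mysql); sqlite3 is used here
only so the sketch is self-contained and runnable, and the table/column
names are assumptions modelled on a typical wiki metadata table:

```python
# Demo: show the query plan before and after adding a composite index.
# sqlite3 is a stand-in for the real Postgres/MySQL database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metadata (node_id INTEGER, metadata_type TEXT, metadata_value TEXT)"
)

query = ("SELECT node_id FROM metadata "
         "WHERE metadata_type = 'category' AND metadata_value = 'Restaurants'")

# Before the index: the planner can only do a sequential (full table) scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print("before:", before)  # plan detail mentions SCAN

# The fix: a composite index covering both lookup columns.
conn.execute(
    "CREATE INDEX idx_metadata_type_value "
    "ON metadata (metadata_type, metadata_value)"
)

# After the index: the planner searches the index instead of scanning.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print("after:", after)  # plan detail mentions the index
```

The same before/after comparison with EXPLAIN ANALYZE on the live
metadata table would show whether the sequential scans are really
what's eating the box.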
I guess this is really directed towards Earle rather than towards Paul.
Regards,
--
Christopher Schmidt
Web Developer