Slash Open Source Project

  • by glorat (2431) on Thursday May 23 2002, @09:26AM (#4873) Homepage
    Speaking from experience (I've nearly brought down my own site on occasion due to undue popularity), Slash is very easy to bring down with a /. effect. With a default installation of Slash, this is what happens:

    All the /. users arrive at your site, and to keep up with the requests Apache forks more children to handle them. Because we are talking mod_perl here and Slash is a memory hog, each Apache child takes at least 10 MB of unshared RAM - sometimes less, but often more, because memory gets unshared *fast* under heavy load. So suppose all 256 MB of RAM go to Apache: with 26 or more concurrent children (26 x 10 MB > 256 MB), you go into swap. When that happens, you are in serious shit... the server will spiral to a halt. This was the first time I ever had to call up tech support for a hard reboot - even ssh was taking too long.
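    (To put a number on "unshared RAM" for your own setup, here's a rough sketch using the GTop module, the same libgtop bindings the mod_perl performance guide uses; the 10 MB above is just size minus share on my box:)

        # Rough sketch (assumes GTop / libgtop is installed): report
        # total, shared, and unshared memory for the current process.
        use strict;
        use GTop ();

        my $mem = GTop->new->proc_mem($$);
        printf "size: %.1f MB  shared: %.1f MB  unshared: %.1f MB\n",
            $mem->size  / 2**20,
            $mem->share / 2**20,
            ($mem->size - $mem->share) / 2**20;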

    Solution? The easy answer is to edit your httpd.conf to lower MaxClients, which caps how many children run concurrently. On my 128 MB box, I've had to take this down to 6 or 7. There are a whole load of other mod_perl tricks too; see the performance guide on the mod_perl site for the better solutions.
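    For instance, the relevant prefork knobs in httpd.conf might look like this (a sketch for a 128 MB box; tune the numbers against your own per-child footprint):

        # Cap concurrent mod_perl children so ~7 x 10 MB stays in RAM
        MaxClients          7
        StartServers        3
        MinSpareServers     2
        MaxSpareServers     4
        # Recycle children before gradual unsharing bloats them
        MaxRequestsPerChild 500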

    Once you've limited concurrent connections, any time you hit that ceiling your clients get queued or eventually get a timeout error. That's still the slashdot effect, but at least it hasn't brought down your server!
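    (The queue in question is the kernel listen backlog, which Apache exposes as a directive; raising it is a judgment call, since the kernel may clamp it anyway:)

        # Let more pending connections wait in the accept queue instead
        # of being refused while all children are busy
        ListenBacklog 1024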

    The other point I wanted to raise is that throughput is determined not only by how fast Slash can build an HTML page but also by how long it takes to send that page back to clients over your bandwidth. The lower the bandwidth, the worse the /. effect bites. Worse, while an Apache child is feeding a page to a slow client, 1 of your 6 or 7 available mod_perl children is tied up doing nothing else. The suggested fix is squid acceleration. I'm no expert on this, but it means you have 6 or 7 heavyweight mod_perl children building pages and, say, 10-100 lightweight squid connections trickling the pages back to clients, which increases your throughput and lessens the /. effect.
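    For the curious, a minimal squid 2.x accelerator setup looks something like this (a sketch, assuming Apache has been moved to port 8080 on the same box):

        # squid.conf - squid answers port 80, mod_perl Apache hides behind it
        http_port 80
        httpd_accel_host 127.0.0.1
        httpd_accel_port 8080
        httpd_accel_uses_host_header on

        # httpd.conf - bind Apache where only squid can reach it
        Listen 127.0.0.1:8080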

    Those are some guidelines, but realistically, if you ever get linked from /. and your website runs on a single machine doing everything, you're gonna be toast!
    • Look into using Apache::SizeLimit; this is how we make sure on Slashdot that children are not killed off too quickly - only the ones that actually grow too big get reaped. If you can afford to do it, look into keeping one server for static content and one for dynamic content.
      At least have an image server.
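      A hedged sketch of the Apache::SizeLimit setup under mod_perl 1 (the thresholds here are illustrative, not the values Slashdot runs; check the module docs for your version):

        # startup.pl - reap only the children that actually grow too big,
        # rather than blindly recycling everything via MaxRequestsPerChild
        use Apache::SizeLimit ();
        $Apache::SizeLimit::MAX_PROCESS_SIZE       = 12000; # KB, illustrative
        $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;     # amortize the check

        # httpd.conf - run the check at the end of each request
        PerlCleanupHandler Apache::SizeLimit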
      --
      You can't grep a dead tree.