Gorf Posted November 17, 2007

Greetings. Our company runs a series of download servers for our content; they are all Apache 2.0 on Dell servers (dual Xeon 3.06 GHz with 2 GB of memory). The content resides on a RAID 0 disk array to maximize data throughput. We currently use the worker MPM configuration from the default Apache install on Red Hat Enterprise Server 4. This is our worker configuration:

    ServerTokens Prod
    ServerRoot "/etc/httpd"
    PidFile run/httpd.pid
    Timeout 300
    KeepAlive Off
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15
    EnableSendfile On
    ServerLimit 60
    <IfModule worker.c>
        StartServers        2
        MaxClients          1500
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
        MaxRequestsPerChild 0
    </IfModule>

These servers move a lot of data; 6 servers saw 1400mb of traffic today. I really want to make sure I am maximizing them for performance. Recently I have been noticing that ps shows as many as 50 or 60 httpd.worker processes and the memory is completely consumed. To top it all off, when I use scp to copy content to all the servers it seems to break Apache: it runs the CPU up to 100% (100% wa), and when I try to stop the service it hangs all the processes, which show as "httpd.worker <defunct>" in ps. Any thoughts or advice?
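For context, the process count here follows directly from the worker MPM sizing directives quoted above; this is just the arithmetic, not a measurement from those servers:

    MaxClients          = total simultaneous request threads (1500)
    ThreadsPerChild     = threads per httpd.worker child      (25)
    child processes     = MaxClients / ThreadsPerChild        = 1500 / 25 = 60
    ServerLimit         = hard cap on child processes         (60)

So 50-60 httpd.worker processes in ps is exactly what this configuration produces under load; the memory question then becomes roughly 60 children times the per-process footprint.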
Gorf Posted November 17, 2007

Sorry, I should also mention that this is completely static content.
effigy Posted November 17, 2007

Have you looked through this?
Gorf Posted November 18, 2007

Of course. However, that doesn't really give me any real-world information that is useful. I have since noticed that I am hitting MaxClients, so I changed my worker config to:

    <IfModule worker.c>
        StartServers        2
        ServerLimit         100
        MaxClients          2500
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
        MaxRequestsPerChild 0
    </IfModule>

That seems to have helped with the MaxClients problem, but I'm still a little surprised that at 600-700 concurrent connections per server (and an average output of about 250mb/s per server) I am running the CPU up to 100%. So that's why I am here, looking for some real-world suggestions from people.
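One way to confirm whether the new limits are actually being reached is mod_status, which shows busy versus idle workers in real time. A minimal sketch for Apache 2.0 syntax; it assumes mod_status is loaded (it is in the stock RHEL httpd.conf) and uses 10.0.0.0/8 as a stand-in for a management network, which is an assumption:

    # report per-worker detail, not just totals
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        # placeholder network - restrict to wherever you monitor from
        Allow from 10.0.0.0/8
    </Location>

Note that the revised numbers above are internally consistent: ServerLimit 100 x ThreadsPerChild 25 = 2500, which matches MaxClients.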
jaymc Posted November 21, 2007

KeepAliveTimeout should not be 15. If you need KeepAlive, set the timeout to 2-4 seconds.

I'd put a bet on scripts causing the performance issues. My server was getting throttled, and I found it to be a website using GD image processing for a ton of images, an absolute killer! Check your websites, scripts, etc. What are they running? Apache can cope really well with a ton of traffic, but if the pages are dynamic it takes more CPU and time to process each page.
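If KeepAlive were turned back on for this kind of download traffic, the short timeout jaymc describes would look something like this (a sketch using his suggested values, not something tested on these servers):

    KeepAlive On
    MaxKeepAliveRequests 100
    # hold idle connections only briefly so worker threads free up quickly
    KeepAliveTimeout 2

Worth noting: in the config quoted in the first post, KeepAlive is Off, so the KeepAliveTimeout 15 line has no effect until KeepAlive is switched on.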
steviewdr Posted November 25, 2007

Read: http://www.stdlib.net/~colmmacc/Apachecon-EU2005/scaling-apache-presentation.pdf and http://www.heanet.ie/conferences/2005/presentations/friday%20marina/scaling%20Cmc.pdf

That server (ftp.heanet.ie) ships 3.5TB a day off 1 server! I can't see why ye need 6 servers to ship 1.4GB!?

-steve
jdubs Posted November 26, 2007

If your iowait is sitting at 100%, you have some major, major problems. Have you thought about using rsync instead of scp? It might be a little less strenuous on the procs/disks when updating small chunks of data. How much swap are you using? Have you ever thought about using a reverse Squid proxy to cache the static content?
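A minimal rsync push along those lines might look like the following sketch; the paths and the bandwidth cap are placeholders, not values from this thread:

    # copy only changed files over ssh; --bwlimit (KB/s) keeps the transfer
    # from saturating the disks while Apache is still serving downloads
    rsync -av --delete --bwlimit=10000 -e ssh /local/content/ server1:/var/www/content/

Unlike scp, rsync skips files that have not changed, which should cut the iowait hit considerably when most of the content is already in place on the target servers.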