Greetings.  Our company runs a series of download servers for our content, all Apache 2.0 on Dell servers (dual Xeon 3.06GHz with 2GB of memory).  The content resides on a RAID 0 disk array to maximize data throughput.  We currently use the worker MPM with the default Apache install from Red Hat Enterprise Server 4.  This is our worker configuration:

 

ServerTokens Prod
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 300
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
EnableSendfile On
ServerLimit 60

 

<IfModule worker.c>
StartServers          2
MaxClients         1500
MinSpareThreads      25
MaxSpareThreads      75
ThreadsPerChild      25
MaxRequestsPerChild   0
</IfModule>

 

These servers move a lot of data; 6 servers saw 1400mb of traffic today.  I really want to make sure I am maximizing them for performance.  Recently I have been noticing as many as 50 or 60 httpd.worker processes in ps, with memory completely consumed.  And to top it all off, recently when I use scp to copy content to all the servers it seems to break Apache: it runs the CPU up to 100% (100% wa), and when I try to stop the service it hangs all the processes, which show as "httpd.worker <defunct>" in ps.
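One way to see what those threads are actually doing, rather than counting httpd.worker lines in ps, is mod_status.  A minimal sketch, assuming the stock Red Hat module path (adjust the Allow directive to taste):

LoadModule status_module modules/mod_status.so
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Hitting /server-status from localhost then shows the per-thread scoreboard: how many workers are writing (W), in keepalive (K), or idle.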

 

Any thoughts or advice?

Of course.  However, that doesn't really give me any useful real-world information.  I have since noticed that I am hitting MaxClients, so I changed my worker config to:

 

<IfModule worker.c>
StartServers          2
ServerLimit         100
MaxClients         2500
MinSpareThreads      25
MaxSpareThreads      75
ThreadsPerChild      25
MaxRequestsPerChild   0
</IfModule>
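For reference, under the worker MPM the effective client cap is ServerLimit x ThreadsPerChild.  The old config topped out at 60 x 25 = 1500, which is exactly where the original MaxClients sat; raising ServerLimit to 100 lifts the ceiling to 100 x 25 = 2500, matching the new MaxClients.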

 

That seems to have helped with the MaxClients problem.  But I'm still a little surprised that at 600-700 concurrent connections per server (and an average throughput of about 250mb/s per server) I am running the CPU up to 100%.  So that's why I am here, looking for some real-world suggestions from people.

 

 

KeepAliveTimeout should not be 15.  If you need keep-alive at all, set the timeout to 2-4 seconds.
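Something like this, a minimal sketch of what that looks like in httpd.conf:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2

With a 2 second timeout an idle client hands its worker thread back almost immediately instead of pinning it for 15 seconds.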

 

I'd put a bet on scripts causing the performance issues.

 

My server was getting throttled; I traced it to a website using GD image processing on a ton of images.  Absolute killer!

 

Check your websites, scripts, etc.  What are they running?

 

Apache can cope really well with a ton of traffic, but if the pages are dynamic it will take more CPU and more time to process each page.

If your iowait is sitting at 100%, you have some major problems.  Have you thought about using rsync instead of scp?  That might be a little less strenuous on the procs/disks when updating small chunks of data.
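Something like the following sketch, where the host names and paths are placeholders for your setup:

# push the content tree to each download server over ssh;
# rsync only transfers files that changed, and --bwlimit (in KB/s)
# keeps the copy from monopolizing the disks
for host in dl1 dl2 dl3 dl4 dl5 dl6; do
    rsync -a --delete --bwlimit=10000 -e ssh /srv/content/ $host:/var/www/content/
done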

 

How much swap are you using?  Have you ever thought about putting a Squid reverse proxy in front of Apache to cache the static content?
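On the swap question, free -m shows it at a glance.  For the Squid idea, here is a rough sketch of an accelerator setup (Squid 2.6 syntax; the hostname and ports are placeholders):

# squid.conf: listen on 80 in accelerator mode, pass misses to Apache on 8080
http_port 80 accel defaultsite=downloads.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
acl our_site dstdomain downloads.example.com
http_access allow our_site
cache_peer_access apache allow our_site

Repeat requests for the same files are then served from Squid's cache and Apache only sees misses.  For big download files you would also want to raise maximum_object_size, which defaults to 4 MB.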
