RobertTG

Cache Memory Issue

Recommended Posts

Hello linuxers,

I'm running a CentOS server (specs below) with 32 GB memory. My problem is that 18.49 GB of the 32 GB is used by cache. That seems like a lot. Is that a good thing, or is cache using too much? I'm running a cryptocurrency website. I cache a lot of pages because I'm using APIs to pull cryptocurrency prices like the ones on this page: https://www.cryptozink.io/live-cryptocurrency-coins-prices/.

Is the cache keeping the site from running out of memory, or is it using too much, and should I reduce the number of pages cached? If I don't cache so much, will the site slow down?

Specs:

OS: CentOS Linux 7.7.1908 (Core)

CPU: Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz (4 core(s))

PLSK.05980134.0003

System Uptime: 3 day(s) 09:58

Total memory: 31.17 GB

Used: 20.04 GB

Cache: 18.49 GB


This is what Linux does.  When the OS has lots of extra memory available it will use it to cache the filesystem.

If more memory is needed it will take memory from cache and allocate it.  You can think of it as Linux being proactive in making use of an available resource rather than having it go to waste.  
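You can see the kernel's own accounting of this on any Linux box. As a minimal sketch, `/proc/meminfo` reports `MemAvailable`, the kernel's estimate of memory that new workloads can use without swapping, which already counts reclaimable cache:

```shell
# "Free" memory alone understates what's usable, because the kernel
# reclaims page cache on demand. MemAvailable is the kernel's own
# estimate of memory available to new workloads without swapping.
awk '/^MemTotal|^MemFree|^MemAvailable|^Cached/ {print}' /proc/meminfo

# Same information, human-readable, via procps:
#   free -h    (look at the "available" column, not "free")
```

On the box above, "available" should be close to the 30.8 GB that `top` later reports, despite 18+ GB sitting in cache, which is exactly the point: cached memory is not lost.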

Regarding your site, the main question is what level of CPU usage you have (top is a good tool to start with), and what else the server is being used for.  You may be able to increase the number of processes pulling crypto price data, or processes doing other sorts of data or number crunching, to make more use of the memory and CPU resources you have.

If this server also hosts a website that handles clients and displays data or allows for searching, it might be better to just be satisfied that you have ample headroom, and work on building up your audience.  

Really, nobody can advise you without knowing more about what you are doing, but there is no reason to worry about your cache until your server becomes I/O bound and you are seeing sluggishness and lots of iowait, where the machine sits idle waiting on reads and writes of data.
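If you want to put a number on iowait without installing anything, here's a sketch that reads the cumulative counters straight from `/proc/stat` (Linux only; field order per proc(5)):

```shell
# First line of /proc/stat: "cpu  user nice system idle iowait ..."
# (ticks since boot). A large iowait fraction relative to the total
# suggests the box is I/O bound.
read -r _ user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait: $iowait of $total CPU ticks since boot"

# For an ongoing view, watch the "wa" column of: vmstat 1
# (or the %wa field in top's CPU summary line).
```

This is a boot-to-now average; for a live picture, sample it twice a second apart and compare the deltas, which is effectively what `vmstat 1` does for you.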


These are the results of "top." I was just concerned that cache was using so much memory. 

top - 06:10:48 up 3 days, 12:57,  1 user,  load average: 0.10, 0.13, 0.14
Tasks: 159 total,   1 running, 158 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.7 us,  0.3 sy,  0.0 ni, 98.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32687160 total, 11278224 free,  1208340 used, 20200596 buff/cache
KiB Swap: 16760828 total, 16760828 free,        0 used. 30824124 avail Mem

I have not optimized the server and MySQL yet. I have monitoring scripts running, and when I have enough results, I'll optimize. 
 


Top is something you have to run for a while, and watch while the server is under a typical period of load.  Any single snapshot can be misleading, but if we take this one at face value, it's telling you that your CPUs are doing almost nothing, so you have a lot of CPU power you could harness for additional work.

Since you mention this server also runs a MySQL DB, I assume that when you pull crypto data from the APIs you mentioned, you then process that data and store it in your MySQL DB.  

With all the memory you have, and again assuming your DB will grow over time, I can suggest two things to look at:

  • Make sure all your tables are InnoDB!  If you did not declare them as such originally, you can still use ALTER TABLE to change this.
  • Take a large chunk of your available server memory (anywhere from 40-80%) and allocate it to the MySQL InnoDB buffer pool.  This is InnoDB's data cache, which keeps table and index pages in memory so repeated queries can be served without touching disk.
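As a sketch of the first point, assuming you have the `mysql` client and an account with the needed privileges (the `mydb.prices` name below is a placeholder, not a real table of yours):

```shell
# 1) Find tables still on a non-InnoDB engine (e.g. MyISAM),
#    skipping MySQL's own system schemas:
mysql -e "SELECT table_schema, table_name, engine
          FROM information_schema.tables
          WHERE engine <> 'InnoDB'
            AND table_schema NOT IN
              ('mysql','information_schema','performance_schema','sys');"

# 2) Convert one (this rewrites the whole table, so run it off-peak
#    for anything large):
mysql -e "ALTER TABLE mydb.prices ENGINE=InnoDB;"
```

Run the first query before and after to confirm nothing was missed.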

Again, you have to be somewhat careful that you retain enough memory for whatever else you might be doing.  Here's one article with some formulas for determining what to allocate and configure for your MySQL server:  https://scalegrid.io/blog/calculating-innodb-buffer-pool-size-for-your-mysql-server/
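To make the arithmetic concrete, here's a back-of-the-envelope sizing for this 32 GB box. Every number below is an assumption to adjust against your own monitoring, not a recommendation:

```shell
total_mb=31918            # ~31.17 GB reported total, in MB
os_and_apps_mb=4096       # assumed headroom for OS, web server, etc.
pct=60                    # start mid-range of the 40-80% guideline

pool_mb=$(( total_mb * pct / 100 ))
leftover_mb=$(( total_mb - pool_mb - os_and_apps_mb ))
echo "innodb_buffer_pool_size = ${pool_mb}M (leaves ${leftover_mb} MB spare)"
```

If the leftover goes negative or uncomfortably small, lower `pct`; the page cache will happily use whatever MySQL doesn't claim.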

If you were to run, for example, an AWS-hosted MySQL server via their RDS service, you'd find that by default it allocates about 75% of instance memory to the buffer pool, which gives you a good idea of what works well for a dedicated database server.

Once you reconfigure MySQL to allocate a big chunk of memory, you'll see the OS cache figure go down accordingly (MySQL now holds that memory itself), and MySQL performance should improve, assuming your system does frequent queries of your crypto pricing data in MySQL.

 


You're running MySQL and only have about 1 GB in use out of 32? Increase the buffer pool now, then watch how the server behaves.
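A config-change sketch, assuming the stock `/etc/my.cnf` location on CentOS 7 and picking 16G purely as a starting value (both are assumptions; tune against your own monitoring):

```shell
# Persist the setting, then restart so it takes effect:
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 16G
EOF
systemctl restart mysqld

# On MySQL 5.7+ you can instead resize online, without a restart:
#   mysql -e "SET GLOBAL innodb_buffer_pool_size = 16 * 1024 * 1024 * 1024;"
```

Either way, give it a few days of real traffic before judging the result.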

