
I want to save live data into a file. My script fetches the data (one value at a time) and saves it to the file in append mode. The live data does not arrive at regular intervals, and each time a value comes in I have to check whether it has actually changed or is still the same as the last one. One way to do this is to read the last line of the file and compare it with the fetched value, but that way I have to read the file for every single check.

 

Is there a way to remember the last value and compare it with the incoming value, so that only new values are written to the file?
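If the fetcher runs as a single long-lived script rather than a fresh request each time, the simplest way to remember the last value is to keep it in a variable between iterations. A minimal sketch of that idea, assuming a daemon-style loop; fetch_live_value() and data.log are placeholder names, not from this thread:

<?php
// Minimal sketch: keep the last written value in memory across loop iterations.
// Assumes the script runs continuously; fetch_live_value() is hypothetical.
$file      = 'data.log';
$lastValue = null;

while (true) {
    $value = fetch_live_value();               // hypothetical: returns the latest reading
    if ($value !== $lastValue) {               // write only when the value has changed
        file_put_contents($file, $value . "\n", FILE_APPEND | LOCK_EX);
        $lastValue = $value;
    }
    sleep(1);                                  // polling interval, adjust as needed
}

If the script instead runs once per request, the value has to be remembered somewhere outside the process, which is what the replies below are about.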


A file is not a database.  The most efficient way of doing this is to use fgets() to read through the file to the last line and compare that each time.  There are numerous caching extensions available that might allow you to optimize this if it proves to be a major bottleneck.  You might want to look into memcached and APC.
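For reference, reading to the last line with fgets() and comparing before appending might look roughly like this sketch; data.log and fetch_live_value() are placeholder names, not from this thread:

<?php
// Rough sketch of the fgets() approach: walk the log to its last line,
// then append only if the newly fetched value differs from it.
$file     = 'data.log';
$newValue = fetch_live_value();        // hypothetical: returns the latest reading

$lastLine = '';
if (is_readable($file)) {
    $fh = fopen($file, 'r');
    while (($line = fgets($fh)) !== false) {
        $lastLine = rtrim($line, "\n");    // keep overwriting until only the final line remains
    }
    fclose($fh);
}

if ($newValue !== $lastLine) {
    file_put_contents($file, $newValue . "\n", FILE_APPEND | LOCK_EX);
}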

 

People use databases because they can efficiently seek to individual rows and have provisions for multi-user access.

Yes, I use the append method to add new data as a new line.

 

The problem is that when I read the file to get the last line, PHP walks through the entire file. Since this check repeats regularly, I need to avoid such a heavy task.

 

I already use memcache (with nginx). Another option is a MySQL database, but the problem is that we both read and write data regularly, and MySQL indexing is good for the former and bad for the latter.
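If a memcached server is already running, the PHP Memcached extension (assumed to be available here) could hold the last written value so the file never has to be reread. A rough sketch under that assumption; the key name, server address, and fetch_live_value() are placeholders, not from this thread:

<?php
// Rough sketch: remember the last written value in memcached between requests.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);          // assumed local memcached server

$newValue = fetch_live_value();              // hypothetical: returns the latest reading
$last     = $mc->get('lastline');            // false on a cache miss

if ($last === false || $last !== $newValue) {
    file_put_contents('data.log', $newValue . "\n", FILE_APPEND | LOCK_EX);
    $mc->set('lastline', $newValue);         // remember what was last written
}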

Long, thin tables are a MySQL specialty.  B-tree indexing guarantees that access to the last-entered row will be lightning fast, and readers do not block writers.  Compare that with your file-based approach: as the file grows, reading it will become slower and slower.

SQLite is light, which is to say it has no server; every process that opens an SQLite connection has to allocate memory for anything SQLite is going to do, so I would not use SQLite.  It does remind me, though, that there are name/value-pair databases like Berkeley DB that are extremely fast.  You could supplement your log with a key of "lastline": read that key and write to your log file only when the value you fetched does not match it.  This is basically the same thing you would do with memcached or APC, but it might be easier for you to use a small Berkeley DB.  MongoDB could be a great solution, but you would need to install the MongoDB server, and that seems like overkill for this problem.
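A minimal sketch of that Berkeley DB idea, using PHP's dba extension with the "db4" handler (whether that handler is available depends on how PHP was built); the file paths and fetch_live_value() are placeholders, not from this thread:

<?php
// Rough sketch: keep a 'lastline' key in a small Berkeley DB via the dba extension,
// and append to the log only when the fetched value differs from that key.
$db       = dba_open('/tmp/lastline.db', 'c', 'db4');   // create/open the key/value store
$newValue = fetch_live_value();                          // hypothetical: latest reading

$last = dba_exists('lastline', $db) ? dba_fetch('lastline', $db) : null;

if ($newValue !== $last) {
    file_put_contents('data.log', $newValue . "\n", FILE_APPEND | LOCK_EX);
    dba_replace('lastline', $newValue, $db);             // remember what was last written
}
dba_close($db);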
