
Relinquishing memory in WHILE loop



I get the following error:  PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes)

 

The script is being run on the CLI.  I will be inserting the data into another database in the while loop, and it is a one-time task, so I don't care about performance.  Typical count values are less than 10.

 

How can I relinquish memory to prevent this?

 

Thanks

$stmt = $pdoTemp->query('SELECT timestamp, GROUP_CONCAT(measurement) measurement, GROUP_CONCAT(field) field, GROUP_CONCAT(value) value, COUNT(value) count FROM allpoints GROUP BY timestamp');
while ($rs = $stmt->fetch()) {
    $measurements = explode(',', $rs->measurement);
    $field = explode(',', $rs->field);
    $value = explode(',', $rs->value);
    syslog(LOG_INFO, count($measurements).' '.count($field).' '.count($value).' '.$rs->count);
}

 


Why don't you just run a query to count them instead of concatenating then exploding?

 

Also, GROUP_CONCAT may silently truncate its output if the result exceeds group_concat_max_len.
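For example, a minimal sketch of counting per timestamp directly (assuming the same allpoints table, $pdoTemp connection, and object fetch mode as the original post):

// Count rows per timestamp in SQL instead of concatenating then exploding.
$stmt = $pdoTemp->query('SELECT timestamp, COUNT(*) AS count FROM allpoints GROUP BY timestamp');
while ($rs = $stmt->fetch()) {
    syslog(LOG_INFO, $rs->timestamp.' '.$rs->count);
}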

The number of concatenated values is small, and the length will not be exceeded.  My purpose in grouping is that I will be doing inserts on the common timestamps.

Is the query returning a lot of rows? Use an unbuffered query. Otherwise the memory problem isn't in that code.
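(A minimal sketch, assuming the mysql PDO driver; buffering is a per-connection attribute:)

// Stream rows from the server instead of buffering the whole resultset
// into PHP memory before the first fetch().
$pdoTemp->setAttribute(\PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);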

 

The memory exhausted error occurs on one of the explode() statements.  So you feel it is MySQL memory related?  Could it be that $rs = $stmt->fetch() is too big?  While I am using a while loop, I suspect that all results are first set in $rs, right?

$pdoTemp = new \PDO(
    "mysql:host={$db['host']};dbname=temp;charset={$db['charset']}",
    $db['username'],
    $db['password'],
    array(
        \PDO::ATTR_EMULATE_PREPARES => false,
        \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
        \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION,
        \PDO::ATTR_DEFAULT_FETCH_MODE => \PDO::FETCH_OBJ,
    )
);

$stmt = $pdoTemp->query('SELECT COUNT(value) FROM allpoints');

echo $stmt->fetchColumn();

2757207


The memory exhausted error occurs on one of the explode() statements.

That only tells you where PHP ran out. It doesn't tell you where all the memory went.

 

If $pdoTemp->query() executes a buffered query then all the results will be loaded into memory. Hypothetically, there was a bit less than 128MB of memory used for it and one of the explode()s pushed PHP over the limit.

An unbuffered query avoids loading everything into memory, at the cost of keeping the query's resultset open longer.
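One way to check that hypothesis (a diagnostic sketch, not from the thread; $sql here is a hypothetical variable holding the GROUP_CONCAT query from the first post):

// Log real allocated memory around the query. With a buffered query the
// jump after query() should account for most of the 128MB limit.
syslog(LOG_INFO, 'before query: '.memory_get_usage(true));
$stmt = $pdoTemp->query($sql); // hypothetical: the GROUP_CONCAT query above
syslog(LOG_INFO, 'after query: '.memory_get_usage(true));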

 

While I am using a while loop, I suspect that all results are first set in $rs, right?

$rs is just one row.

 

 

Question: wouldn't all those counts be the same?

$rs is just one row.

 

Ah, of course it must be.

 

When to use buffered and non-buffered makes more sense now.  I found I needed to use closeCursor() to make things work, and now I know exactly why.   Is the cursor closed for a fetchAll() and not closed for a plain fetch() for all major DBs?  Are buffered queries typical of other DBs as well?

  • Solution

When to use buffered and non-buffered makes more sense now.  I found I needed to use closeCursor() to make things work, and now I know exactly why.   Is the cursor closed for a fetchAll() and not closed for a plain fetch() for all major DBs?  Are buffered queries typical of other DBs as well?

In general you want a buffered query: it's much faster, frees up your connection for another query, and releases database resources. An unbuffered query is the opposite of that, but uses far less memory for large resultsets.

 

The second point in there is where the closed cursor comes in. Basically, a resultset needs to reach the end* so that both sides of the connection know the data has been read and the server can do other stuff. IIRC closeCursor()-type functionality guarantees that the resultset was fully read and then immediately releases some client resources (which would eventually have been freed anyway).

fetchAll() reads everything (either from memory or from the server) and gives you an array, so one way or another the resultset is fully used up. fetch() gives you just one row so you're still subject to un/buffered query rules.
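Putting that together, a rough sketch of the unbuffered pattern with closeCursor(), reusing the names from the earlier posts:

// Unbuffered: rows stream from the server one fetch() at a time, so
// memory stays flat regardless of resultset size.
$pdoTemp->setAttribute(\PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$stmt = $pdoTemp->query('SELECT timestamp FROM allpoints');
while ($row = $stmt->fetch()) {
    // ... insert into the other database here ...
}
// Release the open resultset so the connection can run another query.
$stmt->closeCursor();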

 

And yes, a distinction between buffered and unbuffered resultsets is common in client APIs for databases. (The server doesn't know or care either way.)
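For comparison, a sketch of the same distinction in PHP's mysqli API (an illustration, not from the thread): store_result buffers, use_result streams.

// MYSQLI_USE_RESULT is mysqli's unbuffered mode; the default,
// MYSQLI_STORE_RESULT, buffers the whole resultset client-side.
$mysqli = new mysqli($db['host'], $db['username'], $db['password'], 'temp');
$result = $mysqli->query('SELECT timestamp FROM allpoints', MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    // rows arrive from the server as you iterate
}
$result->free(); // must fully read/free before issuing the next query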

 

* Past the end, actually. The client doesn't know it's at the end until it fails to read another row. The row count is actually maintained by the client as a counter incremented as it reads rows - it's not sent by the server with the results.
