
Relinquishing memory in WHILE loop


NotionCommotion


I get the following error:  PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes)

 

The script is being run on the CLI.  I will be inserting the data into another database in the while loop, and it is a one-time task, so I don't care about performance.  Typical count values are less than 10.

 

How can I relinquish memory to prevent this?

 

Thanks

$stmt = $pdoTemp->query('SELECT timestamp, GROUP_CONCAT(measurement) measurement, GROUP_CONCAT(field) field, GROUP_CONCAT(value) value, COUNT(value) count FROM allpoints GROUP BY timestamp');
while ($rs = $stmt->fetch()) {
    $measurements = explode(',', $rs->measurement);
    $field        = explode(',', $rs->field);
    $value        = explode(',', $rs->value);
    syslog(LOG_INFO, count($measurements).' '.count($field).' '.count($value).' '.$rs->count);
}

 


Is the query returning a lot of rows? Use an unbuffered query. Otherwise the memory problem isn't in that code.
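
With pdo_mysql that means setting MYSQL_ATTR_USE_BUFFERED_QUERY to false on the connection. A rough, untested sketch (credentials are placeholders):

```php
<?php
// Placeholder credentials; the buffering attribute is the relevant part.
$pdo = new \PDO(
    "mysql:host=localhost;dbname=temp;charset=utf8mb4",
    'user',
    'pass',
    [
        \PDO::ATTR_ERRMODE                  => \PDO::ERRMODE_EXCEPTION,
        \PDO::ATTR_DEFAULT_FETCH_MODE       => \PDO::FETCH_OBJ,
        // false = unbuffered: rows stream from the server as you fetch
        // them instead of being copied into PHP memory all at once.
        \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
    ]
);

$stmt = $pdo->query('SELECT timestamp FROM allpoints GROUP BY timestamp');
while ($rs = $stmt->fetch()) {
    // process one row at a time
}
$stmt->closeCursor(); // release the resultset before running another query
```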

 

The exceeded memory error occurs on one of the explode statements.  So, you feel it is MySQL memory related?  Could it be that $rs = $stmt->fetch() is too big?  While I am using a while loop, I suspect that all results are first set in $rs, right?

$pdoTemp = new \PDO(
    "mysql:host={$db['host']};dbname=temp;charset={$db['charset']}",
    $db['username'],
    $db['password'],
    [
        \PDO::ATTR_EMULATE_PREPARES         => false,
        \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
        \PDO::ATTR_ERRMODE                  => \PDO::ERRMODE_EXCEPTION,
        \PDO::ATTR_DEFAULT_FETCH_MODE       => \PDO::FETCH_OBJ,
    ]
);

$stmt=$pdoTemp->query('SELECT COUNT(value) FROM allpoints');

echo($stmt->fetchColumn());

2757207


The exceeded memory error occurs on one of the explode statements.

That only tells you where PHP ran out. It doesn't tell you where all the memory went.

 

If $pdoTemp->query() executes a buffered query then all the results will be loaded into memory. Hypothetically, there was a bit less than 128MB of memory used for it and one of the explode()s pushed PHP over the limit.

An unbuffered query would not load everything into memory, at the cost of keeping the query's resultset open longer.

 

While I am using a while loop, I suspect that all results are first set in $rs, right?

$rs is just one row.

 

 

Question: wouldn't all those counts be the same?


$rs is just one row.

 

Ah, of course it must be.

 

When to use buffered and unbuffered queries makes more sense now.  I found I needed to use closeCursor() to make things work, and now I know exactly why.  Is the cursor closed for a fetchAll() but not for a plain fetch() in all major DBs?  Are buffered queries typical of other DBs as well?


When to use buffered and unbuffered queries makes more sense now.  I found I needed to use closeCursor() to make things work, and now I know exactly why.  Is the cursor closed for a fetchAll() but not for a plain fetch() in all major DBs?  Are buffered queries typical of other DBs as well?

In general you want a buffered query: it's much faster, frees up your connection for another query, and releases database resources. An unbuffered query is the opposite of that, but uses far less memory for large resultsets.

 

The second point in there is where the closed cursor comes in. Basically, a resultset needs to reach the end* so that both sides of the connection know the data has been read and the server can do other stuff. IIRC, closeCursor()-type functionality guarantees that the resultset was fully read and then immediately releases some client resources (which would have been freed eventually anyway).

fetchAll() reads everything (either from memory or from the server) and gives you an array, so one way or another the resultset is fully used up. fetch() gives you just one row so you're still subject to un/buffered query rules.
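
To make that concrete, here's an untested sketch of where the cursor matters on an unbuffered pdo_mysql connection:

```php
<?php
// Assumes $pdo is an unbuffered pdo_mysql connection.
$stmt = $pdo->query('SELECT timestamp FROM allpoints');

$first = $stmt->fetch(); // one row read; the resultset is still open

// Another query on the same connection would now fail ("Cannot execute
// queries while other unbuffered queries are active"). Either drain it:
//     while ($stmt->fetch()) {}
// or close the cursor explicitly:
$stmt->closeCursor();

// Now the connection is free again.
$count = $pdo->query('SELECT COUNT(*) FROM allpoints')->fetchColumn();
```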

 

And yes, a distinction between buffered and unbuffered resultsets is common in client APIs for databases. (The server doesn't know or care either way.)

 

* Past the end, actually. The client doesn't know it's at the end until it fails to read another row. The row count is actually maintained by the client, which counts rows as it reads them - it's not sent by the server with the results.
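
mysqli exposes the same distinction directly, if that helps illustrate it (untested sketch, placeholder credentials):

```php
<?php
$link = mysqli_connect('localhost', 'user', 'pass', 'temp');

// Buffered (the default, MYSQLI_STORE_RESULT): the whole resultset is
// copied to the client before mysqli_query() returns.
$buffered = mysqli_query($link, 'SELECT value FROM allpoints');

// Unbuffered (MYSQLI_USE_RESULT): rows are pulled from the server as you
// fetch them, and the connection stays busy until the result is freed.
$unbuffered = mysqli_query($link, 'SELECT value FROM allpoints', MYSQLI_USE_RESULT);
while ($row = mysqli_fetch_row($unbuffered)) {
    // process one row at a time
}
mysqli_free_result($unbuffered);
```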

