
Server segfaults when trying to PDO::rollBack() ...



Hi folks, 

Running:
PHP 7.1.4 
Lighttpd/1.4.32
on a small NAS, which is kinda old... but it's all I've got as a server ;)

I'll try my best to explain my problem... hopefully someone can shed some light on this...

Essentially, I've written a "database wrapper" to handle any DB requests (selects, inserts, etc.). It's using PDO...

At some point in my dbwrapper, I create a wrapperStatement object, passing the PDO connection along by reference. After creating this wrapperStatement object, I call a function within it that does multiple inserts...

 

On my NAS, this multiInserts() function seems to cause a 500 HTTP server error. When I tried running the same script directly on the server's command line with php-cgi, I got a Segmentation Fault.

HOWEVER, I've installed PHP and Lighttpd on my working PC, and the code in multiInserts() just works!
 

class dbwrapper {
  ...
  $con = &$this->getConnection();
  $dbwrapperStmnt = new wrapperStatement($con);
  $dbwrapperStmnt->multiInserts();
  ...
}

class wrapperStatement {

  private $_conn;

  public function __construct(&$conn){
    $this->_conn = &$conn;
  }

  public function multiInserts(){

    // init data
    $data = array (
      array ( "jdoe1", "jdoe1@email.ca", "abc1"),
      array ( "jdoe2", "jdoe2@email.ca", "abc2"),
      array ( "jdoe3", "jdoe3@email.ca", "abc3"),
      array ( "jdoe4", "jdoe4@email.ca", "abc4"),
      array ( "jdoe5", "jdoe5@email.ca", "abc5"),
    );

    // init variables
    $username = null;
    $email = null;
    $passwd = null;

    $sql = "INSERT INTO users (USERNAME, EMAIL, PASSWD) VALUES (:username, :email, :passwd)";

    try
    {
      $con = new PDO("mysql:dbname=somedb;host=somehost;port=3306", "user", "passwd");
      $con->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
      
      //$con = $this->_conn;

      $stmt = $con->prepare($sql);
      $stmt->bindParam(':username', $username, PDO::PARAM_STR);
      $stmt->bindParam(':email', $email, PDO::PARAM_STR);
      $stmt->bindParam(':passwd', $passwd, PDO::PARAM_STR);
    }
    catch ( PDOException $ex )
    {
      die($ex->getMessage());
    }

    $errorList = array ();
    $totalRows = count($data);
    for ( $row = 0; $row < $totalRows; $row ++ )
    {
      $username = $data[$row][0];
      $email = $data[$row][1];
      $passwd = $data[$row][2];

      try
      {
        $con->beginTransaction();
        $stmt->execute();
        $con->commit();
      }
      catch ( PDOException $ex )
      {
        // HERE!!!
        // IF USING $this->_conn AS CONNECTION, SEGFAULT IS PRODUCED ON ROLLBACK!
        if ( $con->inTransaction() ) $con->rollBack();

        // just push the error to an array of failed transactions
        array_push($errorList, array ( $row, $ex->errorInfo[1], $ex->getMessage() ));
      }
    }

    if ( $errorList )
    {
      // print the errors
      foreach ( $errorList as $error )
      {
        echo "Row #:" . $error[0] . "<br>";
        echo "Code :" . $error[1] . "<br>";
        echo "Mesg :" . $error[2] . "<br><br>";
      }
    }
  }
}


Now, I've pinpointed the exact line causing the segfault: it's when I try a rollBack() after an error (e.g. duplicate username, etc.).

What I've also found, running on my NAS, is that if I create a NEW PDO object and use that instead of $this->_conn as my connection object (which is what I really should be using!), it works, no segfaults!!

At first, I thought it might have something to do with passing my $con as a reference to my wrapperStatement constructor (perhaps I had done it wrong?), so I checked with var_dump($con)...

Before creating my wrapperStatement constructor, inside my dbwrapper... 
var_dump($con) = object(PDO)#2 (0) { }

And after, checking $this->_conn inside multiInserts() :
var_dump($this->_conn) = object(PDO)#2 (0) { } 

So, the #2 tells me it's the same object! (or am I mistaken?). It should be good. I even tried passing the $con object to my statement constructor as a 'copied' value instead of a reference, and the problem remains... it changes nothing!

So, I'm very confused here as to what could be happening? As additional info, my original $con object passed to the statement has the following PDO attributes:
 

$pdoAttribs = [
  PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
  PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
  PDO::ATTR_EMULATE_PREPARES => false,
  PDO::ATTR_AUTOCOMMIT => false,
  PDO::ATTR_PERSISTENT => true
];
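
For reference, my getConnection() builds the PDO object more or less like this (a simplified sketch; host, db name and credentials are placeholders, not my real ones):

<?php
// Simplified sketch of the connection setup (placeholder credentials only)
$pdoAttribs = [
  PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
  PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
  PDO::ATTR_EMULATE_PREPARES   => false,
  PDO::ATTR_AUTOCOMMIT         => false,
  PDO::ATTR_PERSISTENT         => true,
];

$con = new PDO("mysql:dbname=somedb;host=somehost;port=3306", "user", "passwd", $pdoAttribs);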


Also, I do not have the exact same versions of PHP/Lighttpd on my server as on my working PC:

Server:
PHP 7.1.4 
Lighttpd/1.4.32

PC:
PHP  7.4.3
Lighttpd/1.4.55

But it's complicated to get the same versions installed everywhere... HOWEVER, I did at least match the lighttpd and php configurations on both.

Here's a link to my phpinfo() : http://corbeauperdu.ddns.net/phpinfo.php

And as for my lighttpd.conf :
 

# Special Configuration file for the DNS-325
# web servers are declared as virtual hosts

# include, relative to dirname of main config file
include "/etc/lighttpd/lighttpd.conf"

# adding extra modules
server.modules += ( "mod_redirect" )
server.modules += ( "mod_compress" ) # PRoy: enable compression module
server.modules += ( "mod_expire" )   # PRoy: used to set expiry time for cache on clients

# add support for new virtual host
$HTTP["host"] =~ "(^|\.)myhost.ddns.net$" {
  server.document-root = "/srv/WWW/html"
  server.errorlog = "/srv/WWW/logs/error.log"
  server.error-handler-404 = "/blog/index.php"
  accesslog.filename = "/srv/WWW/logs/access.log"

  # files to check for if / is requested
  index-file.names = ( "index.php", "index.html", "index.htm" )

  ###########
  # PRoy: compress content to DISK as to save CPU-cycles when requesting these file types (otherwise, CPU will compress it all everytime!)
  compress.allowed-encodings = ("gzip", "deflate")
  compress.filetype  = ("text/plain", "text/html", "text/javascript", "text/css", "text/xml" )
  compress.cache-dir = "/srv/WWW/tmp/wwwcache"

  ###########
  # PRoy: setup CACHING on clients
  # make clients think their content is already up-to-date
  # mod_expire will actually send a Cache-Control / Expires HTTP header on each item it's called upon (css, jpg, or etc) or called URL path
  #
  # Cache based on suffix
  $HTTP["url"] =~ "\.(jpg|gif|png|css|js|eot|svg|ttf|woff|woff2)$" {
    expire.url = ( "" => "access plus 13 months" )
  }
  # Cache based on path
  expire.url += ( "/calchom/css/"  => "access plus 13 months",
                 "/blog/videos/"  => "access plus 13 months",
  )

  # Anything else not caught by mod_expires will be cached on client with these directives
  setenv.add-response-header += (

    # cache for 1 month
    #"Cache-Control" => "public, max-age=2592000, must-revalidate, proxy-revalidate"
    # no cache at all!
    "Cache-Control" => "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0, post-check=0, pre-check=0"
  )

  ###########
  cgi.assign = (".cgi" => "")
  static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".cgi" )
  flv-streaming.extensions = ( ".flv" )

  ##########
  # PHP Server configuration php-cgi vs php-fpm)
  # PHP-CGI (2020-02-02 : works good!)
  # Normally, PHP-FPM should be faster, but not on NAS (?)
  fastcgi.server = ( ".php" => ("localhost" => ("bin-path" => "/ffp/bin/php-cgi", "socket" => "/tmp/php-cgi.socket", "bin-environment" => ( "PHP_FCGI_CHILDREN" => "0" ), "max-procs" => "1") ) )

  ###########
  server.upload-dirs=( "/srv/WWW/tmp/wwwuploads" )
  server.max-request-size = 4194303
  server.network-backend="writev"

}


I know this is a long post, and a long shot... Hoping to hear back from you folks.

Million thanks!!
Pat

Get rid of those references. At best you don't need them, at worst you should not have them. References are about reusing variables, so unless you actually want all those $named things to be the same variable then you don't want references. Note I said "variables", not "values". PHP will already reuse object values for you.

Run it from the CLI and post a backtrace.
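
To illustrate what I mean about objects (a minimal sketch, nothing to do with your actual wrapper):

<?php
// Minimal sketch: objects are passed around as handles, so $a and $b end up
// pointing at the same instance without any & references.
class Conn { public $dsn = 'mysql:...'; }

function takesObject($obj) {   // no & needed
    return $obj;               // the same object handle comes back
}

$a = new Conn();
$b = takesObject($a);

var_dump($a === $b);   // bool(true) -- same instance
$b->dsn = 'changed';
var_dump($a->dsn);     // string(7) "changed" -- the change is visible through $a too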

10 hours ago, requinix said:

Get rid of those references. At best you don't need them, at worst you should not have them. References are about reusing variables, so unless you actually want all those $named things to be the same variable then you don't want references. Note I said "variables", not "values". PHP will already reuse object values for you.

Run it from the CLI and post a backtrace.

Thanks for your reply. 

Okay, I managed to get the coredump, but I'm still trying really hard to get gdb installed and working properly on my NAS server... No luck yet, but working on it.

Meanwhile, digging deeper, I did find exactly what was causing the segfault. It's my PDO Attribute:

PDO::ATTR_EMULATE_PREPARES => false

If I set this to TRUE, then it works... Now, this is an issue for me, since I'm using it to protect against SQL injection... Any ideas?
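
(If I do end up flipping it to TRUE as a workaround, my understanding is that bound parameters are still escaped by the emulation layer as long as the charset is declared in the DSN, so something like this sketch should still be injection-safe:)

<?php
// Hypothetical workaround sketch: fall back to emulated prepares but keep
// binding values; the charset in the DSN lets the emulator escape correctly.
$con = new PDO(
    "mysql:dbname=somedb;host=somehost;port=3306;charset=utf8",
    "user",
    "passwd",
    [
        PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_EMULATE_PREPARES => true,   // avoids the native-prepare code path
    ]
);

$stmt = $con->prepare("INSERT INTO users (USERNAME, EMAIL, PASSWD) VALUES (:username, :email, :passwd)");
$stmt->execute([':username' => 'jdoe1', ':email' => 'jdoe1@email.ca', ':passwd' => 'abc1']);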

P.s.: as soon as I can get gdb working properly, I'll re-post the trace... Is there any way I can just submit the coredump somewhere, or do I absolutely need to 'gdb php-cgi corefile'?

Thanks again. 

38 minutes ago, PatRoy said:

Okay, I managed to get the coredump, but I'm still trying really hard to get gdb installed and working properly on my NAS server... No luck yet, but working on it.

Just as an FYI, I got gdb 7.3.1 installed from a package for my NAS, but when trying to run it with the coredump, I get the following:

root@datahub:/mnt/HD/HD_a2/DATA/NAS_Core_Dumps# gdb /ffp/bin/php-cgi core-php-cgi.12447
GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "arm-ffp-linux-uclibcgnueabi".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
  
Reading symbols from /mnt/HD/HD_a2/ffp/bin/php-cgi...I'm sorry, Dave, I can't do that.  Symbol format `elf32-littlearm' unknown.
....

But, that's beyond the scope of this post... Trying to build another gdb from source for now..

  • Don't set autocommit to false. BeginTransaction will cancel it for the duration of the transaction until a commit or rollback is called.
  • Don't commit every insert, do one commit after all the inserts are executed (or rollback if there was an exception)
  • Don't save plaintext passwords. Use password_hash() and password_verify()

I have rewritten your code (working, for me at any rate)

<?php
class wrapper  
{
    private $conn;
    
    public function __construct($con)
    {
        $this->conn = $con;
    }
    
    public function multiInsert($data)
    {
        
        $this->conn->beginTransaction();
        try {
            
            $stmt = $this->conn->prepare("INSERT INTO user (username, email, passwd) VALUES (?,?,?)");
        
            foreach ($data as $rec) {
                $stmt->execute($rec);
            }
            $this->conn->commit();
        }
        catch (PDOException $e) {
            $this->conn->rollback();
            throw $e;
        }
    }
}

const HOST     = 'localhost';
const USERNAME = '...';
const PASSWORD = '...';
const DBNAME   = '...';

    // PDO database connection
    //
    $dsn = "mysql:dbname=".DBNAME."; host=".HOST."; charset=utf8";

    $db = new pdo($dsn, USERNAME, PASSWORD, 
        [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_EMULATE_PREPARES => false,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
        ]);
        

    $data = array (
          array ( "jdoe1", "jdoe1@email.ca", "abc1"),
          array ( "jdoe2", "jdoe2@email.ca", "abc2"),
          array ( "jdoe3", "jdoe3@email.ca", "abc3"),
          array ( "jdoe4", "jdoe4@email.ca", "abc4"),
          array ( "jdoe5", "jdoe5@email.ca", "abc5"),
        );
        
    $wrap = new wrapper($db);
    
    $wrap->multiInsert($data);
    
    /* RESULTS
        mysql> select * from user;
        +--------+----------+----------------+--------+
        | userid | username | email          | passwd |
        +--------+----------+----------------+--------+
        |      1 | jdoe1    | jdoe1@email.ca | abc1   |
        |      2 | jdoe2    | jdoe2@email.ca | abc2   |
        |      3 | jdoe3    | jdoe3@email.ca | abc3   |
        |      4 | jdoe4    | jdoe4@email.ca | abc4   |
        |      5 | jdoe5    | jdoe5@email.ca | abc5   |
        +--------+----------+----------------+--------+
    */
?>
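
Side note on the password point, this is roughly what I mean (a minimal sketch):

<?php
// Minimal sketch of password_hash()/password_verify() usage
$hash = password_hash('abc1', PASSWORD_DEFAULT);   // store $hash, not the plaintext

// later, when the user logs in:
if (password_verify('abc1', $hash)) {
    // password is correct
}
?>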

 

6 hours ago, Barand said:
  • Don't set autocommit to false. BeginTransaction will cancel it for the duration of the transaction until a commit or rollback is called.
  • Don't commit every insert, do one commit after all the inserts are executed (or rollback if there was an exception)
  • Don't save plaintext passwords. Use password_hash() and password_verify()

Hey Barand,

All valid points, but none that'll help my problem out ;)

To answer you...

  • Okay for the autocommit attribute... though it doesn't change anything in my case.
  • Here's the thing: I WANT to commit on every insert! I already have another function that does it the way you did, but I want to have the choice to commit on every row of data, and if one row fails, just push it to an array of failed ones and keep trying the remaining data... It's a very "generic" DB wrapper I've written, which I hope to use in other projects. Doing it JUST like you did, if one row fails, then everything is rolled back.
  • For the passwords... yep, I know this... but it was for code clarity ;)

Thanks :)

9 minutes ago, PatRoy said:

I want to be able to have a choice to commit on every row of data

Autocommit!

5 minutes ago, PatRoy said:

Doing it JUST like you did, if one fails, then everything is rolled back..

Yes, that's the whole idea behind transactions and committing or rolling back.

 

Try this:

    public function multiInsert($data)
    {
            $errorList = [];
            $stmt = $this->conn->prepare("INSERT INTO user (username, email, passwd) VALUES (?,?,?)");
        
            foreach ($data as $rec) {
                try {
                    $stmt->execute($rec);
                }
                catch (PDOException $ex) {
                    $errorList[] = [ $rec, $ex->errorInfo[1], $ex->getMessage() ];
                }
            }
            return $errorList;
    }

 

Okay, I see what you mean about the autocommit...

I tried what you said: I either removed the ATTR_AUTOCOMMIT => false entirely, or set it to TRUE. I've kept only the $stmt->execute() in my loop, removing the beginTransaction() and commit(), but no luck... I'm still getting a segfault / HTTP 500 error :(

It really is just the "ATTR_EMULATE_PREPARES => FALSE" causing this segfault...  

I've managed to get PHP 7.1.33 on my working PC, although it's not quite the same version as on my server (7.1.4), and it's still working on the PC...

I'm very puzzled here.... :(

 

10 hours ago, requinix said:

That backtrace would really be nice to see...

UPDATE: alrighty, I finally managed to get gdb working on my server. Here's the output of gdb:

root@datahub:/mnt/HD/HD_a2/DATA/NAS_Core_Dumps# gdb /ffp/bin/php-cgi core-php-cgi.11576
GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "armv5tel-unknown-linux-gnueabi".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /mnt/HD/HD_a2/ffp/bin/php-cgi...(no debugging symbols found)...done.
[New LWP 11576]
[Thread debugging using libthread_db enabled]
Core was generated by `/ffp/bin/php-cgi test.php'.
Program terminated with signal 11, Segmentation fault.
#0  0x40d4c494 in pdo_mysql_stmt_param_hook () from /ffp/lib/php/modules/pdo_mysql.so
(gdb) bt
#0  0x40d4c494 in pdo_mysql_stmt_param_hook () from /ffp/lib/php/modules/pdo_mysql.so
#1  0x000ce55c in ?? ()
Cannot access memory at address 0x0
#2  0x000ce55c in ?? ()
Cannot access memory at address 0x0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

 

Not sure what to think about this backtrace line... "Backtrace stopped: previous frame identical to this frame (corrupt stack?)". I don't really understand it all...

Thanks again for your help guys :)

 

57 minutes ago, requinix said:

Hmm, not quite complete. And getting a better backtrace will take some effort.

PHP 7.1 isn't supported anymore. In fact 7.2 is old as well. Can you update to PHP 7.3?

 

It'll take me quite a bit of effort, because I can't find packages for it for my NAS (it's running Fonz Fun_plug 0.7)... It's kinda old, and VERY slow to compile on... I'll still see if I can manage to compile a newer version...

However, I doubt this is a PHP 7.1 issue (?), since I did manage to install PHP 7.1.33 on my PC and my code is working fine there...

I'll repost if/when I manage to compile a newer PHP...

 

 

Just as an update, I haven't yet managed to upgrade PHP on my NAS: it is pretty difficult to compile because of a series of problems, which lead to having to recompile other libs (e.g. wrong PCRE version), which lead to other problems...

For now, I changed my code for that particular function so that it replaces the SQL ':variable' placeholders with PDO::quote()'d values and runs the statement with PDO::exec() instead of using prepare()/bindParam().

This way, I can keep my PDO::ATTR_EMULATE_PREPARES => FALSE, and it works... I don't like it, because it's a little slower, but I don't have a choice for now :(
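
Roughly, the replacement looks like this (a simplified sketch using the same example data as before, not my actual wrapper code; $con is the PDO connection and $data is the array of rows):

<?php
// Simplified sketch of the quote()-based workaround: escape each value with
// PDO::quote() and run the statement with PDO::exec(), so prepare()/bindParam()
// (and the crashing native-prepare code path) are never used.
$errorList = array();

foreach ($data as $row => $rec) {
    $sql = sprintf(
        "INSERT INTO users (USERNAME, EMAIL, PASSWD) VALUES (%s, %s, %s)",
        $con->quote($rec[0]),
        $con->quote($rec[1]),
        $con->quote($rec[2])
    );

    try {
        $con->exec($sql);
    } catch (PDOException $ex) {
        $errorList[] = array($row, $ex->errorInfo[1], $ex->getMessage());
    }
}
?>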

 
