NotionCommotion

Members
  • Content Count

    1,889
  • Joined

  • Last visited

  • Days Won

    8

NotionCommotion last won the day on January 3

NotionCommotion had the most liked content!

Community Reputation

29 Good

About NotionCommotion

Profile Information

  • Gender
    Not Telling

  1. I have a hosted VPS running PHP 7.3.5 and a local RPi running 7.0.33, and will refer to them as VPS and RPi. I also have the following very basic socket client and server. The server on the VPS and the client on the RPi using TCP works great, and so does running both the server and the client on the VPS using TLS. But when I put the server on the VPS and the client on the RPi using TLS, I can only send relatively short messages (approximately 2,600 characters). And there is no hard limit, because one moment 2,628 characters works, but a little later 2,619 will not. Does it look like something I am doing wrong, or maybe upgrading PHP on the RPi will help? Thank you.

    The client:

    <?php
    require 'simple_common.php';

    $loop = React\EventLoop\Factory::create();
    $connector = new React\Socket\Connector($loop);
    logger("simple client is running using ".($tls?'tls':'tcp'));

    if($tls) {
        $sslContextOptions = [
            'peer_name' => get('peer_name'),
            //'verify_peer'=>false,
            //'verify_peer_name'=>false,
            'allow_self_signed' => true,
            'cafile' => get('cafile'),  //Certificate Authority file used with verify_peer. Use public cert since selfsigned?
            //'verify_depth' not set and defaults to none
            //'ciphers' not used and defaults to DEFAULT
            //'capath'=>null,  //Not needed since cafile is given
            //'CN_match' and 'SNI_server_name' are deprecated and peer_name should be used instead.
        ];
        $connector = new React\Socket\SecureConnector($connector, $loop, $sslContextOptions);
    }

    $connector->connect(get('ipPort'))->then(function (React\Socket\ConnectionInterface $connection) use ($loop) {
        logger("onConnection");
        setEvents($connection, 'client');
        $string = str_repeat(substr(str_repeat('0123456789', 10), 0, -1), 26)
            .'023456789023456789023456789023456789023456789023456789';
        logger("write strlen: ".strlen($string));
        logData('write', $string);
        $connection->write($string);
    });

    $loop->run();

    The server:

    <?php
    require 'simple_common.php';

    $loop = React\EventLoop\Factory::create();
    logger("simple server is running using ".($tls?'tls':'tcp'));
    $socket = new React\Socket\Server(get('ipPort'), $loop);

    if($tls) {
        $sslContextOptions = [
            'local_cert' => get('local_cert'),  //local certificate file w/ or w/o private key
            'local_pk' => get('local_pk'),      //Only needed if private key is not in local_cert
            //'passphrase' not set as local_cert is not encoded
            //'capture_peer_cert'=>true,        //a peer_certificate context option will be created containing the peer certificate
            //'capture_peer_cert_chain'=>true,  //a peer_certificate_chain context option will be created containing the certificate chain.
            //'SNI_enabled'=>true,              //server name indication will be enabled. Enabling SNI allows multiple certificates on the same IP address.
            //'disable_compression'=>true,      //disable TLS compression. This can help mitigate the CRIME attack vector.
            //'peer_fingerprint'=>[]            //A hash or array of hashes which the remote certificate digest must match.
        ];
        $socket = new React\Socket\SecureServer($socket, $loop, $sslContextOptions);
    }

    $socket->on('connection', function (React\Socket\ConnectionInterface $connection) use ($loop) {
        logger('onConnection');
        setEvents($connection, 'server');
        $loop->addPeriodicTimer(1, function ($timer) use ($connection) {
            $data = 'Hello!';
            logData('write', $data);
            $connection->write($data);
        });
    });

    $socket->on('error', function (\Exception $e) {
        logger('onServerError: '.$e->getMessage());
    });

    $loop->run();

    simple_common.php:

    <?php
    function setEvents($connection, string $type){
        $connection->on('data', function ($data) use ($type) {
            static $totalLng = 0;
            $curLng = strlen($data);
            $totalLng += $curLng;
            logger("$type onData: current: $curLng total: $totalLng");
            logData('read', $data);
        });
        $connection->on('error', function (\Exception $e) use ($type) {
            logger("$type onError: ".$e->getMessage());
        });
        $connection->on('close', function() use ($type) {
            logger("$type onClose");
        });
        $connection->on('end', function() use ($type) {
            logger("$type onEnd");
        });
        $connection->on('drain', function() use ($type) {
            logger("$type onDrain");
        });
        $connection->on('pipe', function() use ($type) {
            logger("$type onPipe");
        });
    }

    function emptyLog($type){
        $f = @fopen(get($type), "r+");
        if ($f !== false) {
            ftruncate($f, 0);
            fclose($f);
        }
    }

    function logData($type, $string){
        file_put_contents(get($type), $string, FILE_APPEND);
    }

    function logger($msg){
        echo($msg.PHP_EOL);
        syslog(LOG_INFO, $msg);
    }

    function get($name){
        $map = [
            'ipPort' => '184.154.134.91:1338',
            'peer_name' => 'tapmeister.com',
            'cafile' => 'test_ss_crt.pem',      //Use public cert since selfsigned? Why does even test_ss_csr.pem work?
            'local_pk' => 'test_ss_key.pem',
            'local_cert' => 'test_ss_crt.pem',  //Why does even test_ss_csr.pem work?
            'read' => 'raw_read_stream.log',
            'write' => 'raw_write_stream.log',
            'tls' => true,
        ];
        if(!isset($map[$name])) exit("Invalid name $name");
        return $map[$name];
    }

    ini_set('display_startup_errors', '1');
    ini_set('display_errors', '1');
    error_reporting(E_ALL);
    require 'vendor/autoload.php';
    $tls = $argv[1] ?? get('tls');
    emptyLog('write');
    emptyLog('read');
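    One quick way to narrow this down, under the assumption that the data may be arriving split across multiple 'data' events rather than being lost: copy the client's raw_write_stream.log and the server's raw_read_stream.log onto one machine and compare them. If the byte counts match, the message is only being fragmented (TCP/TLS is a byte stream, so the receiving side may need to buffer and reassemble); if the read side is shorter, bytes really are being dropped. A minimal comparison sketch:

    <?php
    // Hypothetical diagnostic: compare what the client wrote with what the server read.
    // Assumes both log files have been copied into the current directory after a test run.
    $written = file_get_contents('raw_write_stream.log');
    $read    = file_get_contents('raw_read_stream.log');
    printf("written: %d bytes, read: %d bytes, identical: %s\n",
        strlen($written), strlen($read), $written === $read ? 'yes' : 'no');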
  2. NotionCommotion

    Why use json_encode() on a string

    The reason to use json_last_error() is obvious when decoding, but it was not so obvious to me when encoding. Looking at the documentation, malformed UTF-8 characters will result in an error. The examples given show encoding a string; however, an array containing a value with malformed UTF-8 characters will result in the same error. Are there other cases in which json_encode() will result in an error? And why would one actually want to use json_encode() on a plain string?
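    A small illustration of the encoding failure (the byte sequence below is just an arbitrary example of invalid UTF-8):

    <?php
    $bad = "\xB1\x31";                        // not valid UTF-8
    var_dump(json_encode($bad));              // bool(false)
    echo json_last_error_msg(), PHP_EOL;      // "Malformed UTF-8 characters, possibly incorrectly encoded"
    var_dump(json_encode(['name' => $bad]));  // also bool(false); the bad value inside an array fails the same way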
  3. NotionCommotion

    How to access symmetric key?

    Is your logger just a simple homespun write to a file, or something more? When monitoring the server, how do you deal with keeping each client separate? Thanks. PS: Sorry for getting off topic.
  4. NotionCommotion

    Dependency Injection via setter method

    Who says so? A container is nice because it allows you to configure your potential objects without creating them, but I think containers are just a means to help implement DI. Or maybe I should say DI is a good means to implement composition, and containers are a good means to implement DI. Just my two bits...
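    A bare-bones sketch of the distinction, with hypothetical class names: the setter injection works with or without a container; a container only automates the same wiring.

    <?php
    class Mailer {}

    class Service {
        private $mailer;
        // Setter injection: the dependency is pushed in from outside.
        public function setMailer(Mailer $mailer): void {
            $this->mailer = $mailer;
        }
    }

    // Plain DI, no container involved.
    $service = new Service();
    $service->setMailer(new Mailer());

    // A container (PSR-11 style) would just do this wiring for you:
    // $service = $container->get(Service::class);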
  5. NotionCommotion

    How to access symmetric key?

    Can you elaborate? I like it! I take it you log the entire message with either a delimiter or a length prefix, right? Have you ever used CBOR (which I am doing) or similar, or compressed JSON? With straight JSON, it should be easy enough to determine message breaks by visually looking for known words, but not so if the content is scrambled. I probably need to log both the pre- and post-CBOR raw message, and maybe take other steps. Any lessons learned would be appreciated.
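    One way to log both forms, sketched with an assumed helper name: hex-encode the raw CBOR bytes so the log stays printable, and write the decoded structure alongside it.

    <?php
    // Hypothetical logging helper: records the raw (binary) message as hex plus its decoded form.
    function logMessage(string $raw, array $decoded, string $file = 'messages.log'): void {
        $entry = sprintf("[%s] raw(hex): %s | decoded: %s\n",
            date('c'), bin2hex($raw), json_encode($decoded));
        file_put_contents($file, $entry, FILE_APPEND);
    }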
  6. NotionCommotion

    How to access symmetric key?

    Was afraid of that. Evidently, Firefox and Chrome both support logging the symmetric session key used to encrypt TLS traffic to a file, and Wireshark can be configured to use this file and then decrypt the TLS traffic. See https://redflagsecurity.net/2019/03/10/decrypting-tls-wireshark/. Well, I am not accessing the connection using FF or Chrome, so that doesn't help me, but maybe there is a different way to do so with some Linux command?
  7. NotionCommotion

    How to access symmetric key?

    If a client connects to a ReactPHP TLS socket server, is it possible to obtain the symmetric key from within the PHP code? I am hoping it will allow me to decrypt and analyze the traffic between the two using Wireshark.
  8. NotionCommotion

    Get variable value to another PHP page using AJAX

    First of all, I am not really sure what this line is all about: echo "$voice_id, . "&". , $voice_name"; I see your "alert" test in your jQuery ajax success. That is good, but do more. Did you use Chrome's console inspector (or the equivalent in other browsers) to ensure the browser client is actually sending data? Next, be really simple at the server and just use exit('<pre>'.print_r($_POST, true).'</pre>'); or var_dump($_POST);.
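    For reference, a minimal debug endpoint along those lines (whatever field names your AJAX call sends will show up in $_POST; nothing here is specific to your code):

    <?php
    // Dump exactly what the AJAX request delivered, then stop.
    header('Content-Type: text/plain');
    var_dump($_POST);   // or: echo '<pre>', print_r($_POST, true), '</pre>';
    exit;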
  9. NotionCommotion

    Returning before complete

    Thanks kicken. When executing sleep 60, I would see a delay of 60 seconds, which definitely exceeds the next loop tick. I just stumbled upon a possible culprit. At the end of the following script (which actually has timeouts, etc., but they were removed in this example; if you think it applicable, I can post the whole thing), you will see me performing back-to-back sends: first to register, and second to make the real request.

    $loop = \React\EventLoop\Factory::create();

    (new \React\Socket\TimeoutConnector(new \React\Socket\Connector($loop), 5, $loop))
        ->connect('127.0.0.1:1337')
        ->then(function (\React\Socket\ConnectionInterface $stream) {
            $lengthPrefixStream = new LengthPrefixStream($stream);
            $lengthPrefixStream->on('data', function($data) use ($stream){
                syslog(LOG_INFO, 'on data: '.json_encode($data));
                if($data['id']===1) {
                    //response that registration was accepted
                }
                else {
                    //real response with data
                }
            });
            $lengthPrefixStream
                ->send($this->makeJsonRpcRequest('register', $this->logon, 1)->toArray())
                ->send($jsonRpcRequest->toArray());
        });

    Under this scenario, I only see the on data syslog when both requests are complete. I've since changed it to first perform the registration request and only perform the real request after receiving the registration response, and now seem to get the results I wanted/expected. While I think my problem is solved, I still question why I wasn't getting the intermediate responses. Any ideas? The LengthPrefixStream class, which you basically wrote, is below. On a related note, if you have any recommendations for the below class, or know why I might previously have had it implement EventEmitterInterface, please let me know. Thanks.

    <?php

    namespace NotionCommotion\SocketServer;

    use Evenement\EventEmitterInterface;
    use Evenement\EventEmitterTrait;
    use React\Stream\DuplexStreamInterface;

    /* Parses stream based on length prefix, and emits data or error if bad JSON is received. */
    class LengthPrefixStream {    // why => implements EventEmitterInterface {

        use EventEmitterTrait;

        private
            $socket = false,
            $debug,              //If null, do not debug. If zero, don't crop message. Else log and crop.
            $buffer = '',
            $encryptCBOR = null,
            $timestamp,
            $client;             //Will be set after object is created

        public function __construct(DuplexStreamInterface $socket, int $debug=null){
            $this->timestamp = time();
            $this->socket = $socket;
            $this->socket->on('data', function($data){
                $this->parseBuffer($data);
            });
            $this->debug = $debug;
            $this->log('on connect with '.$socket->getRemoteAddress());
            //$this->socket->on('error', function($error, $stream){$this->log($error, LOG_ERR);});
        }

        public function setClient(AbstractClient $client):self{
            $this->client = $client;
            return $this;
        }

        public function getClient():AbstractClient{
            return $this->client;
        }

        public function getRemoteAddress():string{
            return $this->socket->getRemoteAddress();
        }

        public function getLocalAddress():string{
            return $this->socket->getLocalAddress();
        }

        public function send(array $msg, string $debug=null):self{
            if($this->isConnected()) {
                $this->log('onSend'.($debug?" ($debug): ":': ').json_encode($msg));
                $msg = $this->encode($msg);
                if(!$this->socket->write(pack("V", strlen($msg)).$msg)) {
                    $this->log("LengthPrefixStream::send() buffer data: $msg", LOG_ERR);
                }
            }
            else {
                $this->log('LengthPrefixStream::send() not connected (should never happen). '.json_encode($msg), LOG_ERR);
            }
            return $this;
        }

        private function parseBuffer(string $data):void{
            $this->buffer .= $data;
            do {
                $checkAgain = false;
                if(($bufferLength = strlen($this->buffer)) > 3){
                    $length = unpack('Vlen', substr($this->buffer, 0, 4))['len'];
                    if(is_int($length)) {
                        if($bufferLength >= $length + 4){
                            try {
                                $msg = $this->decode(substr($this->buffer, 4, $length));
                                $this->log('onData: '.json_encode($msg));
                                $this->emit('data', [$msg]);
                            }
                            catch(\InvalidArgumentException $e) {
                                $this->log('onError: '.$e->getMessage(), LOG_ERR);
                                $this->emit('errorJson', [$e->getMessage()]);
                            }
                            $this->buffer = substr($this->buffer, $length + 4);
                            $checkAgain = strlen($this->buffer) >= 4;
                        }
                        else {
                            $this->log('Buffer is 4 or less bytes. Skip.');
                        }
                    }
                    else {
                        $this->log('LengthPrefixStream - Invalid Prefix', LOG_ERR);
                        $this->emit('errorPrefix', ["Invalid length prefix provided by client: ".substr($this->buffer, 0, 3)]);
                    }
                }
                else {
                    $this->log('Buffer less than 3 bytes. Skip.');
                }
            } while ($checkAgain);
        }

        public function isConnected():bool {
            return boolval($this->socket);
        }

        public function close():void {
            $this->log('LengthPrefixStream::close()');
            $this->socket->close();
        }

        public function getTimestamp():int {
            return $this->timestamp;
        }

        private function log(string $msg, int $format=LOG_INFO):void {
            if(!is_null($this->debug) || $format!==LOG_INFO) {
                $name = $this->client?$this->client->getClientType():'No Client';
                $msg = $this->debug?substr($msg, 0, $this->debug):$msg;
                syslog($format, "Debug ($name): $msg");
            }
        }

        private function encode(array $msg):string {
            return $this->encryptCBOR===true?\CBOR\CBOREncoder::encode($msg):json_encode($msg);
        }

        private function decode(string $msg):array {
            if($this->encryptCBOR===true) {
                if(($rs=\CBOR\CBOREncoder::decode($msg))===false){
                    throw new \InvalidArgumentException('Invalid CBOR: '.substr($this->buffer, 4, $length));
                }
            }
            elseif($this->encryptCBOR===false) {
                $rs = json_decode($msg, true);
                if (json_last_error() !== JSON_ERROR_NONE){
                    throw new \InvalidArgumentException('Invalid JSON: '.json_last_error_msg()." (".json_last_error().'): '.substr($this->buffer, 4, $length));
                }
            }
            else {
                //First time communication
                $rs = json_decode($msg, true);
                if (json_last_error() === JSON_ERROR_NONE){
                    $this->encryptCBOR = false;
                }
                elseif(($rs=\CBOR\CBOREncoder::decode($msg))===false){
                    throw new \InvalidArgumentException('Invalid JSON: '.json_last_error_msg()." (".json_last_error().'): '.substr($this->buffer, 4, $length));
                }
                else {
                    $this->encryptCBOR = true;
                }
            }
            return $rs;
        }
    }
  10. NotionCommotion

    Returning before complete

    I am confusing promises with callbacks. But are not promises and callbacks both design patterns for dealing with asynchronous operations, and while XMLHttpRequest might not directly provide asynchronous functionality, does it not exhibit such behavior through the use of promises or callbacks? It seems to me that the main difference is that if multiple callbacks are used, they must be nested and each must have its own catch(), whereas promises are chained and have a single common catch(). No? My previous post showed something like the following. I would have thought that ['success'=>true] would have been returned to the calling stream before the blocking sleep call, but I am observing otherwise. Do you think I have some error elsewhere and shouldn't be observing this behavior?

    function executeSpecificRequestCommand(\React\Socket\ConnectionInterface $connection){
        syslog(LOG_INFO, 'start');
        $msg = json_encode(['success'=>true]);
        $connection->write(pack("V", strlen($msg)).$msg);
        exec('sleep 60');
        syslog(LOG_INFO, 'save results');
    }

    $loop = \React\EventLoop\Factory::create();
    (new \React\Socket\TcpServer('127.0.0.1:1337', $loop))
        ->on('connection', function (\React\Socket\ConnectionInterface $connection) {
            $connection->on('data', executeSpecificRequestCommand($data));
        });
    $loop->run();

    Assuming that I am observing the expected behavior, I will need to utilize either a callback or a promise, or do the command in the background. Not really sure about the best way to utilize callbacks, and am thinking that maybe adding a background task to a GearmanClient? For using promises, maybe something like the following (non-working) sketch?

    function executeSpecificRequestCommand(\React\Socket\ConnectionInterface $connection, array $data){
        syslog(LOG_INFO, 'start');
        someAsyncOperation($data)    //sleep 60
            ->then(function($result){
                syslog(LOG_INFO, 'save results');
            })
            ->catch(function($error){
                // Handle error
            });
        $msg = json_encode(['success'=>true]);
        $connection->write(pack("V", strlen($msg)).$msg);
        syslog(LOG_INFO, 'end');
    }

    I like your use of the 200 versus 201 responses. Thanks!
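    One non-blocking route worth considering, assuming the react/child-process package is installed (this is a sketch, and 'sleep 60' just stands in for the real long-running command): run the slow job as a child process so the event loop keeps ticking and the success reply goes out immediately.

    <?php
    require 'vendor/autoload.php';

    $loop = React\EventLoop\Factory::create();

    // Launch the slow work outside the event loop.
    $process = new React\ChildProcess\Process('sleep 60');
    $process->start($loop);

    $process->on('exit', function ($exitCode, $termSignal) {
        syslog(LOG_INFO, 'save results, exit code: '.var_export($exitCode, true));
    });

    // The loop is free to service other connections while the child runs.
    $loop->run();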
  11. NotionCommotion

    Returning before complete

    Thanks requinix. Any reason this logic shouldn't be located at the socket server instead of the HTTP server? The benefits seem to be: 1) the HTTP side shouldn't need to know it is a long process, and 2) it provides more information, specifically that the socket server has been reached, without a decrease in UX. Currently, I haven't injected or passed the connection to executeSpecificRequestCommand(), but if I did, I could do something like the following:

    public function executeSpecificRequestCommand(array $data):void {
        $this->connection->send(['success'=>true]);  //returned to the HTTP server's socket client, which then results in an HTTP 201 response
        $result = $this->doSomething($data);         //which takes a long time
        $this->saveInDB($result);
    }

    But what I was trying to get at (granted, my subject title was really crappy) is that "I think" XMLHttpRequest kind of does what a promise/deferred does (I still have to read up on it).

    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            //response from really long process (i.e. the HTTP request between client and server)
        }
    };
    xhttp.open("GET", "bla.php", true);
    xhttp.send();

    ... and I shouldn't need/want to pass the connection, but instead do something like...

    public function executeSpecificRequestCommand(array $data):bool {
        $thing = new XMLHttpRequestLikePromise();
        $thing->onComplete = function() {
            $this->saveInDB($result);
        };
        $thing->open('doSomething', $data);
        return $thing->startAkaSend();
    }

    Granted, the above script is nonsense, but hopefully it communicates what I am trying to convey.
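    For what it's worth, the promise/deferred shape in ReactPHP looks roughly like the sketch below, assuming react/promise is available and that the long-running work is reworked to resolve a deferred when it finishes (doSomethingAsync() is a hypothetical name):

    <?php
    require 'vendor/autoload.php';

    use React\Promise\Deferred;
    use React\Promise\PromiseInterface;

    function doSomethingAsync(array $data): PromiseInterface {
        $deferred = new Deferred();
        // ...kick off the long-running work, and when it completes:
        // $deferred->resolve($result);   or   $deferred->reject($error);
        return $deferred->promise();
    }

    // Caller: handle success and failure without nesting callbacks.
    $data = ['example' => true];  // placeholder payload
    doSomethingAsync($data)->then(
        function ($result) { /* e.g. save $result in the DB */ },
        function ($error)  { /* handle the failure */ }
    );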
  12. NotionCommotion

    Returning before complete

    A browser HTTP client makes a request to an HTTP web server, which makes an HTTP cURL request to an HTTP REST API, which initiates a ReactPHP socket client to make a request to a socket server, and the socket server script eventually executes the following method:

    public function executeSpecificRequestCommand(array $data):bool {
        $status = $this->doSomething($data);
        return $status;    //{success: $status} will be returned to the socket client
    }

    All is good until doSomething() takes a lot of time and results in a cURL error between the HTTP web server and the HTTP REST API. For this particular case, the task isn't meant to provide immediate feedback to the user, but to do some work and update the database; as such, my desire is to return a true status and then perform the work, instead of extending the cURL timeout. One option is to run some process in the background and return the status, but I don't think doing so is really right. Using a queue seems excessive, as I am already decoupled via the socket. As such, I will probably just add some logic between the initial $string->on('data') and this executeSpecificRequestCommand() method to determine whether the success message should be returned before or after the method is complete. Before doing so, however, I would like to know if there is a more appropriate approach to this scenario. It appears that maybe a child process or a deferred might be appropriate, but I am not sure whether I am going down the wrong path.
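    One way to send the acknowledgement first and do the work afterwards, assuming the event loop is available to the class (the $loop property here is an assumption): defer the work with futureTick(). The caveat is that if doSomething() blocks, it still blocks the loop once the tick fires, so a child process or queue is needed for truly non-blocking work.

    public function executeSpecificRequestCommand(array $data):bool {
        $this->loop->futureTick(function () use ($data) {
            $result = $this->doSomething($data);  // long-running work happens after the reply is sent
            $this->saveInDB($result);
        });
        return true;  // {success: true} goes back to the socket client right away
    }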
  13. NotionCommotion

    File permission issues

    I need to do a better job reading the documentation:

    move_uploaded_file(string $filename, string $destination): bool

    This function checks to ensure that the file designated by filename is a valid upload file (meaning that it was uploaded via PHP's HTTP POST upload mechanism). If the file is valid, it will be moved to the filename given by destination.
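    For completeness, a minimal usage sketch (the 'upload' form field name and the destination path are assumptions for illustration):

    <?php
    if (isset($_FILES['upload']) && $_FILES['upload']['error'] === UPLOAD_ERR_OK) {
        $dest = '/var/www/storage/'.basename($_FILES['upload']['name']);
        if (!move_uploaded_file($_FILES['upload']['tmp_name'], $dest)) {
            // Fails if the file was not a real HTTP POST upload or the destination is not writable.
            error_log('Upload could not be moved to '.$dest);
        }
    }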
  14. NotionCommotion

    File permission issues

    I didn't think the setgid bit would be applicable to files but only to directories, but evidently it "kind of" is. It appears a capital S means the setgid bit is set but doesn't do anything. Regardless, I will definitely go with find. Thanks.

    drwxrwsr-x 3 michael michael 37 Jun 7 10:08 bbb
    -rw-rwSr-- 1 michael michael  0 Jun 7 10:08 x
  15. NotionCommotion

    File permission issues

    Thanks kicken. It was my understanding that the set-gid bit would apply to downstream directories, but as you indicate, I was mistaken. I just made a new folder with the bit set, then, as another user, added directories and saw that the bit is set in those directories. But my storage directory only has the bit set on its root directory, so this will obviously not work. Any reason recursively adding the bit won't work for an existing directory? And your point about copied files preserving their original gid makes total sense now that you've said it. I don't like the idea of copying files. I also would rather not use the chgrp solution. But I could create the new files in some directory other than /tmp and have the set-gid bit set on this new directory. Having Apache create uploaded files in this new directory sounds like a bad idea, but it would be simple to have ssphpd create its files there. This would require me to assign ssphpd the apache group instead of how I was originally going to do it, but that doesn't seem like a big deal.