Everything posted by NotionCommotion

  1. The approach to selecting a server-side file will be completely different. One possible strategy would be the following: make a GET request to /files which returns [{"id": 123, "name": "some_file.txt"}, {"id": 321, "name": "some_other_file.txt"}, ...]. The user selects a file, and you store the selected value on the client. If you are using XMLHttpRequest this is easy, since the value can just be stored in the DOM; other options will work as well, and you may want to clear the stored value every time the page is accessed so you don't send stale file IDs. When the user sends the message off, include the ID with the data. A minimal sketch of the server side follows.
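     A minimal sketch of the /files endpoint (the PDO connection details and the files table are hypothetical, purely for illustration):

         <?php
         // GET /files: return the selectable files as JSON.
         $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');
         $files = $pdo->query('SELECT id, name FROM files')->fetchAll(PDO::FETCH_ASSOC);
         header('Content-Type: application/json');
         echo json_encode($files); // [{"id":123,"name":"some_file.txt"}, ...]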
  2. There isn't a single desired output, and sometimes it will be fairly similar to the received data, but most often it will be like the following:

         $series = [
             new Series(new Point(51), new Aggregator('mean'), new DataValues([4.3527550826446255, 4.472219604166665, 4.332343173277662, ..., 4.272219604166665])),
             new Series(new Point(55), new Aggregator('mean'), new DataValues([5.3527550826446255, 5.472219604166665, 5.332343173277662, ..., 5.272219604166665])),
             new Series(new Point(56), new Aggregator('max'), new DataValues([0.6175362252066116, 0.609555860166668, 0.604520167014615, ..., 0.619555604166668]))
         ];
         $timeValues = new TimeValues(["2020-06-13T14:02:02Z", "2020-06-13T14:02:12Z", "2020-06-23T14:02:22Z", ..., "2020-06-23T14:02:22Z"]);
         $seriesCollection = new SeriesCollection($timeValues, ...$series);

     To be flexible, I will likely just drop the raw data in some class:

         class RawDataCollection
         {
             public function __construct(string $name, array $columns, array $values)
             {
                 // ...
             }
         }

     I'm not really sure whether it is best, or whether it even matters, to have some method to create the desired output:

         class RawDataCollection
         {
             public function createDataCollection(TransformerInterface $transformer): DataCollectionInterface
             {
             }
         }

         class SeriesCollectionTransfomer implements TransformerInterface
         {
             // ...
         }

     or to do something like:

         class SeriesCollection
         {
             public function __construct(RawDataCollection $rawData)
             {
                 // ...
             }
         }

     Was I correct in my observation that iterating over a large loop multiple times is often more efficient than iterating over it only once but then iterating over some smaller loop?
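     Whichever class holds the logic, the core extraction a transformer would perform might look like the sketch below, given the $columns/$values arrays from the raw JSON (variable names are illustrative):

         <?php
         // $columns = ["time", "mean_P51", ...]; $values = the decoded rows.
         $times = array_column($values, 0);  // the shared time column
         $seriesData = [];
         foreach (array_slice($columns, 1) as $i => $name) {
             // One full pass over $values per series, per the question above.
             $seriesData[$name] = array_column($values, $i + 1);
         }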
  3. I have an application which takes some time to run and would like to take steps to improve its execution speed. Sample data is provided as JSON as follows, where the values array has few columns and many rows. My desired outcome is three PHP objects for mean_P51, mean_P55, and max_P56 which all have a reference to the time array as well as their own values array.

         {
             "name": "L2",
             "columns": ["time", "mean_P51", "mean_P55", "max_P56"],
             "values": [
                 ["2020-06-13T14:02:02Z", 4.3527550826446255, 5.668302919254657, 0.6175362252066116],
                 ["2020-06-13T14:02:12Z", 4.472219604166665, 5.493282520833331, 0.6095558604166668],
                 ["2020-06-23T14:02:22Z", 4.332343173277662, 5.477678517745302, 0.6014520167014615],
                 ...
                 ["2020-06-23T14:02:22Z", 4.272219604166665, 5.468302919254657, 0.6195558604166668]
             ]
         }

     Originally, I thought it would be more efficient to iterate over the big values array once and process each column on each iteration (I envision myself walking one mile and snapping my fingers three times each foot, versus walking three miles and snapping my fingers once every foot). What I witness, however, is that it is faster to iterate over the big values array multiple times to generate each object. I've done a few simple tests comparing big loops within little loops against little loops within big loops, and get similar results. I guess this makes sense: I am not really stopping when I snap my fingers, and if I were, walking three miles would likely be quicker. Is this expected behavior? If there is too much data, I've needed to use a JSON stream parser and either a generator or an iterator. I haven't tested it yet, but I expect a generator would be more efficient than an iterator, and that using PHP's built-in json_decode() and an array would be more efficient than either. Do I need to test this hypothesis, or is it likely correct? Any other general strategies one should take when working with large datasets? Thanks
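     A minimal benchmark sketch of the two strategies (assuming $values has already been json_decode()d into an array of rows):

         <?php
         // Strategy A: one pass over the rows, filling all three columns as we go.
         $start = microtime(true);
         $a = [[], [], []];
         foreach ($values as $row) {
             $a[0][] = $row[1];
             $a[1][] = $row[2];
             $a[2][] = $row[3];
         }
         printf("one pass:     %.4fs\n", microtime(true) - $start);

         // Strategy B: three passes over the rows, one column per pass.
         $start = microtime(true);
         $b = [];
         for ($col = 1; $col <= 3; $col++) {
             $b[$col - 1] = array_column($values, $col);
         }
         printf("three passes: %.4fs\n", microtime(true) - $start);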
  4. You can redirect using PHP with header("Location: your_desired_url"); or with JavaScript after getting your response. Make sure you understand the difference between server side and client side.
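     One gotcha worth showing, since header() does not stop the script by itself:

         <?php
         // Redirect server side; exit so nothing after the header is executed or sent.
         header("Location: /your_desired_url");
         exit;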
  5. While relative paths certainly work, they are often more trouble than they are worth. If you don't want to hard-code full paths but still want to stay flexible, consider using a PHP variable to define the base path and use it in your URLs.
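     For example, something like this (the $basePath value is purely illustrative):

         <?php
         // Define the base path once; moving the site then means changing one line.
         $basePath = '/myapp';
         echo '<a href="' . $basePath . '/images/logo.png">Logo</a>';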
  6. Thanks kicken, both of your comments helped me. I never realized the implications of the composer.lock file until now, and will definitely keep it under version control. Regarding the FK error, you were right, and it turned out to be utf8 versus utf8mb4.
  7. Yes, I spent some time, but there are 376 tables to sort through. I would rather get to some usable state and migrate the applicable data. PS. My hack solution didn't quite get me there, as there were, as expected, many missing classes.
  8. I just updated a site as follows:

         cd /var/www/concrete5
         composer update

     Among other changes, the following were made (more on this later):

         - Updating concrete5/core (8.5.2 => 8.5.4): Downloading (100%)
         - Updating doctrine/collections (1.6.4 => 1.6.5): Downloading (100%)
         - Updating doctrine/lexer (1.2.0 => 1.2.1): Downloading (100%)
         - Updating doctrine/inflector (1.3.1 => 1.4.3): Downloading (100%)
         - Updating doctrine/cache (1.10.0 => 1.10.1): Downloading (100%)
         - Updating doctrine/annotations (1.10.2 => 1.10.3): Downloading (100%)
         - Updating doctrine/common (2.12.0 => 2.13.3): Downloading (100%)
         - Updating doctrine/instantiator (1.3.0 => 1.3.1): Downloading (100%)
         - Updating doctrine/orm (v2.7.2 => v2.7.3): Downloading (100%)

     When accessing the site, I now have the following error:

         errno: 150 "Foreign key constraint is incorrectly formed"

     I expect I should have first used concrete5's update script, but it's too late for that. So I then changed composer.json's require for concrete5/core from ^8.5 to 8.5.2, hoping to return to the previous state. Concrete5 was downgraded as desired, but now I get the following error:

         Class 'Doctrine\Common\Persistence\Mapping\Driver\MappingDriverChain' not found

     On another concrete5 site which still works, I have the following two files; however, on the broken one I only have the second:

         vendor/doctrine/persistence/lib/Doctrine/Common/Persistence/Mapping/Driver/MappingDriverChain.php:13: class MappingDriverChain extends \Doctrine\Persistence\Mapping\Driver\MappingDriverChain
         vendor/doctrine/persistence/lib/Doctrine/Persistence/Mapping/Driver/MappingDriverChain.php:17: class MappingDriverChain implements MappingDriver

     So, now I will attempt to downgrade doctrine from 2.7.3 to 2.7.2. The base composer.json file has no reference to Doctrine, but there are two other related composer files:

     vendor/concrete5/doctrine-xml/composer.json

         {
             "name": "concrete5/doctrine-xml",
             "description": "Define database structure via XML using Doctrine data types",
             "keywords": ["doctrine", "xml", "structure", "database", "schema"],
             "homepage": "https://github.com/concrete5/doctrine-xml",
             "license": "MIT",
             "autoload": {
                 "psr-4": {"DoctrineXml\\": "src/"}
             },
             "require": {"php": ">=5.3"},
             "require-dev": {"doctrine/dbal": "2.5.*"}
         }

     vendor/concrete5/dependency-patches/composer.json

         {
             "type": "library",
             "license": "MIT",
             "name": "concrete5/dependency-patches",
             "description": "Patches required for concrete5 dependencies",
             "homepage": "https://github.com/concrete5/dependency-patches",
             "authors": [
                 {
                     "name": "Michele Locati",
                     "email": "michele@locati.it",
                     "role": "author",
                     "homepage": "https://mlocati.github.io"
                 }
             ],
             "require": {"mlocati/composer-patcher": "^1.0.0"},
             "extra": {
                 "patches": {
                     "doctrine/annotations:1.2.7": {
                         "Fix access array offset on value of type null": "doctrine/annotations/access-array-offset-on-null.patch"
                     },
                     "doctrine/orm:2.5.14": {
                         "Fix UnitOfWork::createEntity()": "doctrine/orm/UnitOfWork-createEntity-continue.patch"
                     },
                     "zendframework/zend-stdlib:2.7.7": {
                         "Fix ArrayObject::unserialize()": "zendframework/zend-stdlib/ArrayObject-unserialize-continue.patch"
                     },
                     "sunra/php-simple-html-dom-parser:1.5.2": {
                         "Fix minus in regular expressions": "sunra/php-simple-html-dom-parser/minus-in-regular-expressions.patch"
                     },
                     "phpunit/phpunit:4.8.36": {
                         "Avoid each() in Getopt": "phpunit/phpunit/Getopt-each.patch"
                     },
                     "tedivm/jshrink:1.1.0": {
                         "Fix continue switch in Minifier": "tedivm/jshrink/fix-minifier-loop.patch",
                         "Update to upstream version 1.3.2": "tedivm/jshrink/update-upstream-1.3.2.patch"
                     },
                     "zendframework/zend-code:2.6.3": {
                         "Fix continue switch in FileGenerator and MethodReflection": "zendframework/zend-code/switch-continue.patch"
                     },
                     "zendframework/zend-http:2.6.0": {
                         "Remove support for the X-Original-Url and X-Rewrite-Url headers": "zendframework/zend-http/no-x-original-url-x-rewrite.patch"
                     },
                     "zendframework/zend-mail:2.7.3": {
                         "Fix idn_to_ascii deprecation warning": "zendframework/zend-mail/fix-idn_to_ascii-deprecation-warning.patch"
                     },
                     "zendframework/zend-validator:2.8.2": {
                         "Fix idn_to_ascii/idn_to_utf8 deprecation warning": "zendframework/zend-validator/fix-idn_to_-deprecation-warning.patch"
                     }
                 }
             }
         }

     Neither seems to be applicable, but the doctrine version has to be specified somewhere. How does composer determine which version to install, and how can I downgrade the dependency package? Thanks

     PS. As a hack solution, I replaced the entire vendor/doctrine directory with one from another site, and have things working. Still, I want to know how to do this right.
  9. Yes, I see your point, but I interpreted it differently. While I couldn't find it in the docs, maybe [] has a higher priority. Regardless, the C girls and boys coded it exactly how I wanted it to be.

         return is_null($cache[$class]) ? $cache[$class] = new $class : $cache[$class];
  10. It is not as readable as kicken's code, but it isn't horrible. Doesn't it work specifically because ?? has higher precedence than =? If $cache[$class] isn't NULL, return it; else return $cache[$class], which has just been set to new $class.
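     A minimal sketch of that precedence reading, inside a method like the create() discussed above, plus the dedicated operator PHP 7.4 added for this idiom:

         <?php
         // Assignment binds more loosely than ??, so this parses as
         // $cache[$class] ?? ($cache[$class] = new $class):
         $instance = $cache[$class] ?? $cache[$class] = new $class;

         // PHP 7.4+ shorthand for the same null-coalescing assignment:
         $instance = $cache[$class] ??= new $class;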
  11. Thanks kicken, I've used this approach when I have various groups where objects are first assigned to a group and then processed. $grouper->assign($thing) would determine which group $thing belongs to based on $thing's properties, then get an existing or create a new $group and execute $thing->setGroup($group). Thing::setGroup($group) would execute $this->group = $group and $group->registrar($this), and the group would add the thing to its collection. The point is that $grouper needs to pass an existing Group object should one exist. Instead of TimeUnit::usedByTheseTimes, it would be Group::members or something. Yeah, I totally understand why you said "Not entirely sure what you mean by that". Yep, PHP 7, and I like the ?? operator. Just peeked at requinix's reply. Will go with $cache. Thanks both of you!
  12. Would a socket server which runs continuously be considered a "constrained environment", or would PHP clean up unused variables even then? My first static create method accepts "hour", "day", "week", etc. and creates a TimeUnit object which gets injected into each new Time object. After being constructed, a TimeUnit never changes, and each "day TimeUnit" in every instance of Time is the same. Registry/singleton pattern, or create/destroy each TimeUnit as needed? While it doesn't apply to my application, maybe only use a singleton if TimeUnit had some "usedByTheseTimes" property? Thanks for correcting my use of the word "stack". I see your point. I will call it "registry" unless you think I shouldn't.
  13. I know it is often frivolous, but sometimes I can't help trying to optimize things, and I use the following two methods to create either an object injected with a string or one of a set of objects. In both cases, the class has no setters, so there is no chance that the object will later be changed. I first started doing this in continuously running server applications, which I believe makes sense; however, it has leaked over to standard HTTP requests. Please comment, good or bad, on this approach. Thanks

         public static function create(string $unit): self
         {
             static $stack = [];
             if (!isset($stack[$unit])) {
                 $stack[$unit] = new self($unit);
             }
             return $stack[$unit];
         }

         public static function create(string $class): FunctionInterface
         {
             static $stack = [];
             if (!isset($stack[$class])) {
                 $stack[$class] = new $class;
             }
             return $stack[$class];
         }
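     To make the caching behavior concrete, a usage sketch (assuming the first create() lives in the TimeUnit class from the registry discussion above):

         $a = TimeUnit::create('day');
         $b = TimeUnit::create('day');
         var_dump($a === $b); // bool(true): the static $stack reuses the instance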
  14. Maybe I am wrong, but it sure looks like serial communication (most likely RS-485) and not Ethernet.
  15. It is uncharacteristic for me to say so, but I believe you have an XY problem.
  16. I assume the errors are being generated by the API's server, and not yours. True? Have you tried adding lines like syslog(LOG_INFO, $xml); to make sure the requests reflect what you expected?
  17. After creating a core dump and getting a backtrace, I saw that my IDE's debugger somehow prevented getting the info. I disabled the IDE and there were no more segfaults. I then generated a fake segfault and was able to view it with gdb.
  18. Eureka! Set fs.suid_dumpable for setuid or otherwise protected/tainted binaries:

         # vi /etc/sysctl.conf
         fs.suid_dumpable = 2

     The meaning of each predefined value:

     - 0 (default): traditional behaviour. Any process which has changed privilege levels or is execute-only will not be dumped.
     - 1 (debug): all processes dump core when possible. The core dump is owned by the current user and no security is applied. This is intended for system debugging situations only.
     - 2 (suidsafe): any binary which would normally not be dumped is dumped readable by root only. This allows the end user to remove such a dump but not access it directly. For security reasons, core dumps in this mode will not overwrite one another or other files. This mode is appropriate when administrators are attempting to debug problems in a normal environment.
  19. Agreed that SIGSEGV isn't recognized by PHP 7.3 but only 7.4, and I also had to find a different way to generate a segfault. So, no setting of CoreDumpDirectory in httpd.conf was required for you? Also, are you running nginx or apache? Thanks
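     For reference, one way to trigger a segfault from PHP for testing, assuming the posix extension is available (using the raw signal number 11 sidesteps the missing SIGSEGV constant):

         <?php
         // Send ourselves signal 11 (SIGSEGV) to force a crash and, if the
         // rlimit/ulimit settings allow it, a core dump.
         posix_kill(posix_getpid(), 11);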
  20. According to http://httpd.apache.org/docs/current/mod/mpm_common.html#coredumpdirectory, it looks like I must explicitly set CoreDumpDirectory. I tried "CoreDumpDirectory /tmp/coredumps", but got an httpd error that the directory didn't exist. It turns out I needed to set PrivateTmp=false in /etc/systemd/system/httpd.service. I tried that, but no good. Then I went back to PrivateTmp=true and made systemd-private-7c09e14e22c34aaca6abb409f543b500-httpd.service-Txh4xo writable by all (0777), but still nothing. https://cwiki.apache.org/confluence/display/HTTPD/CoreDump shows the following; however, I haven't done it, as I don't know whether it is required.

         sysctl -w kernel.core_pattern=/some/core/pattern
  21. I don't know whether this is relevant. The request which causes the segfault is a POST request. My original thought was to capture the input stream, save it to a file, and then somehow (I haven't figured this part out yet) write it to the body when executing the script on the CLI so that I could generate a core dump. The strange thing is that while $_POST is populated, stream_get_contents(fopen('php://input', 'r')) is empty. Does this make any sense? I ended up trying to just write to $_POST; however, I got some other error, and I think I will abandon this idea and focus on how to properly create a core dump when using Apache and php-fpm.

         <?php
         $server = $_SERVER;
         if (php_sapi_name() == "cli") {
             $request = json_decode(file_get_contents('request.json'), true);
             $_SERVER = $request['$_SERVER'];
             $_GET = $request['$_GET'];
             $_POST = $request['$_POST'];
             $_COOKIE = $request['$_COOKIE'];
         } elseif ($_SERVER['REQUEST_METHOD'] !== 'GET') {
             syslog(LOG_ERR, '$_POST: '.json_encode($_POST));
             // For unknown reasons, $_POST is populated but php://input is not when using Concrete5???
             $stream = fopen('php://input', 'r');
             $content = stream_get_contents($stream);
             syslog(LOG_ERR, 'stream_get_contents(fopen(php://input, r)): '.$content);
             $content = file_get_contents('php://input');
             syslog(LOG_ERR, 'file_get_contents(php://input): '.$content);
             file_put_contents('request.json', json_encode(['$_GET'=>$_GET, '$_POST'=>$_POST, '$_COOKIE'=>$_COOKIE, '$_SERVER'=>$_SERVER]));
         }
         require('concrete/dispatcher.php');
  22. Yes, systemd, but I tried that and no go. Wouldn't there be some sort of error log if an attempt to create a large core dump exceeded some setting? It's like php-fpm isn't even trying, because I didn't have some setting correct instructing it to do so. This file caused a crash first on PHP 7.4 and then on PHP 7.2, but not on PHP 7.3. After the install, I am still having crashes when making a POST request via the web server. I don't think it is related, but I am using the composer version. As an interim fix, I am thinking of writing a script which serializes the request and saves it to a file. Then I could run from the CLI some script which populates the request body, etc. and runs index.php. Pretty sad that I need to resort to doing so. If I can't figure out how to create a core dump using php-fpm, any better options? Related topic: "unlimited" sounds scary. Any risk of some infinite core being created?
  23. Thanks to both of you for your help, I am definitely closer. It turned out that ulimit settings prevented me from generating a core dump. I created a PHP script designed to generate a segfault, ran it from the CLI, and a core dump was created. But when I run the script through the webserver, I still generate the segfault, yet no core dump file is created. I tried using multiple different users but nothing. Any thoughts? Thanks

         [php73]
         prefix = /run/php-fpm
         user = michael
         group = apache
         listen.owner = apache
         listen.group = apache
         listen.mode = 0600
         listen = $pool.sock
         pm = ondemand
         pm.max_children = 50
         ;php_flag[display_errors] = off
         php_admin_value[error_log] = /var/opt/remi/php73/log/php-fpm/error.log
         php_admin_flag[log_errors] = on
         ;php_admin_value[memory_limit] = 128M
         php_value[session.save_handler] = files
         php_value[session.save_path] = /var/opt/remi/php73/lib/php/session
         php_value[soap.wsdl_cache_dir] = /var/opt/remi/php73/lib/php/wsdlcache
         ;php_value[opcache.file_cache] = /var/opt/remi/php73/lib/php/opcache
         ; Used to create core backtraces
         rlimit_core = unlimited
  24. I haven't checked the ulimit settings, but I will. Ah, I needed to add the PHP script to run. Thanks, will give it a try.