Everything posted by kicken
-
Hard to really say, it's just an ever-ongoing process really. I still wouldn't consider myself any kind of expert. I'd say I just have an average knowledge of how it works and how to use it. I also don't use it a lot; I have a few smaller sites that use it and work on them occasionally, but probably at least 80% of my time is spent maintaining/improving the legacy code in a decade-old website.

It was probably at least six months, maybe a year, before I had a decent grasp on how to actually work with Symfony instead of against it. For example, the forms component was annoying when I first started because I had no idea how to use it and still control how the forms were rendered, so I frequently just wrote the form HTML myself and processed the data by grabbing it from the Request object, just like I would in the old site using $_GET/$_POST. Every time I needed a form though I'd try to do it the Symfony way first, and eventually I figured it out and started converting all the old forms. Only in the last year or so have I gotten to the point where I feel like I'm really taking advantage of services and dependency injection by defining my controllers as services and moving more of my logic out of the controllers and into other service classes.

As far as resources go there's nothing in particular, just Google and the source code for the most part. Whenever I get stuck or want to do something I'm unsure of, I start dissecting the source code and/or search Google for help. The official documentation is OK and generally worth looking through, but I find it too shallow; it doesn't go into enough detail. It's also hard to navigate: even if I know something is in the official documentation (i.e. form control constraints), it's far easier to find it via Google than to go to the Symfony site and try to find it. Analyzing the source is a great way to get things figured out, even if it's not the fastest. It has the additional benefit of demonstrating good design solutions as well. A lot of what I've learned over the years hasn't really been Symfony specific, but more general design principles and how to apply them successfully. I've been applying a lot of this to that decade-old app as I revamp old features. For example, newer features get implemented in a dependency-injection-friendly way and the logic is more confined to dedicated and re-usable service classes rather than in each individual page file. My goal is to eventually morph the website into something more Symfony-like and maybe eventually migrate it entirely.

It'll depend mostly on how you store your users and their associated roles. For example, if you wanted you could actually make separate classes like you suggested and just have each class override the getRoles() method to return the appropriate roles. In my case, I have a single UserAccount class with a property called dynamicRoles, which is a list of UserRole objects that define which roles a user has. Then the UserAccount::getRoles() method iterates over that list, gets the actual role names, and returns them.

UserAccount.orm.yml:

AppBundle\Entity\UserAccount:
    type: entity
    ...
    oneToMany:
        dynamicRoles:
            targetEntity: UserRole
            mappedBy: user

UserAccount::getRoles():

/**
 * @return string[]
 */
public function getRoles(){
    $roleStrings = $this->dynamicRoles->map(function(UserRole $userRole){
        return $userRole->getRole();
    })->toArray();

    return $roleStrings;
}

Finding which users have a given role is then a matter of querying the DB for the UserRole objects that have the given role, then getting the associated user:

$userList = $em->getRepository(UserRole::class)->findBy([
    'role' => 'ROLE_RUN_OFFICIAL'
]);
$userList = array_map(function(UserRole $role){
    return $role->getUser();
}, $userList);

I think that the above could be simplified to remove the UserRole object and just store an array of strings directly on the UserAccount object instead, but that kind of echoes back to the previous point of things being an ever-ongoing learning process. When I originally wrote this I didn't know as much and needed that extra object.

I would guess you probably won't need to check your roles much, as you'd likely just configure the different routes to only be accessible by specific roles. I just mentioned the methods so you'd know about them in case you did need them. I mostly just use them to show/hide various buttons/links based on what roles a user has on a few select pages. For a JSON API I don't imagine you'd have much use for them, as you'd just control access to the endpoints in your security configuration, but now you know if you do need them.

** My apps are all Symfony 3/4 based, I've no idea if anything of the above has changed in newer versions. Eventually I'll get mine updated and learn what's new.
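Picking up the simplification mentioned above (dropping the UserRole entity and storing role names directly on UserAccount), a minimal sketch of what that can look like. The property and helper method are hypothetical, and the real entity would still implement Symfony's UserInterface:

// Roles stored directly on the account, e.g. as a Doctrine simple_array/json column.
class UserAccount
{
    /** @var string[] */
    private $roles = [];

    /** @return string[] */
    public function getRoles()
    {
        // Guarantee the baseline role so every account is at least ROLE_USER.
        return array_unique(array_merge($this->roles, ['ROLE_USER']));
    }

    public function addRole(string $role): void
    {
        $this->roles[] = $role;
    }
}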
-
Sounds like Roles is what you want. Define various roles that represent the actions a user can take and check for those in your code. Then you can create meta roles that are combinations of those actions and assign those to the users.

For example, I have a small application to handle student transcripts. Any user that can login can lookup/view students and add notes. Other limited actions include importing/editing student/course data, running official/unofficial transcript reports, and managing users. For that I have a role setup such as:

role_hierarchy:
    ROLE_RUN_UNOFFICIAL:
        - 'ROLE_USER'
    ROLE_RUN_OFFICIAL:
        - 'ROLE_RUN_UNOFFICIAL'
    ROLE_MANAGE_USERS:
        - 'ROLE_USER'
    ROLE_MANAGE_STUDENTS:
        - 'ROLE_USER'
    ROLE_DEVELOPER:
        - 'ROLE_USER'
        - 'ROLE_RUN_OFFICIAL'
        - 'ROLE_RUN_UNOFFICIAL'
        - 'ROLE_MANAGE_USERS'
        - 'ROLE_MANAGE_STUDENTS'
        - 'ROLE_MANAGE_COURSES'
    ROLE_MANAGE_COURSES:
        - 'ROLE_USER'

ROLE_USER is applied to every user; it's your basic "is a user" role and can search/view student data.
ROLE_RUN_UNOFFICIAL applies to users who can run unofficial transcript reports. Adding ROLE_USER below it means that anyone who has this role also has ROLE_USER.
ROLE_RUN_OFFICIAL applies to users who can run official reports. Anyone who can run official reports can also run unofficial ones, so they inherit that role as well (and by extension, ROLE_USER).
ROLE_DEVELOPER is a special role for myself and a few others that basically opens up everything.

Once you setup your role hierarchy as you need it, you can use the roles either in your security configuration to limit access, or check them in your code using the isGranted controller method:

if ($officialButton->isClicked()){
    if (!$this->isGranted('ROLE_RUN_OFFICIAL')){
        throw $this->createAccessDeniedException();
    }

    return $this->processTranscriptRequest(Mode::OFFICIAL, $data);
}

In your templates, you can use the is_granted test to conditionally show things:

<td>
    {% if is_granted('ROLE_RUN_OFFICIAL') %}
        {{ form_widget(form.official, { 'attr': { 'class': 'button' } }) }}
    {% endif %}
</td>
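For the "limit access in your security configuration" part, a minimal sketch of what access_control rules can look like (Symfony 3/4 style security config; the paths here are hypothetical, not from my app):

security:
    access_control:
        - { path: ^/users, roles: ROLE_MANAGE_USERS }
        - { path: ^/transcripts/official, roles: ROLE_RUN_OFFICIAL }
        - { path: ^/, roles: ROLE_USER }

Rules are matched top to bottom and the first matching path wins, so put the most specific paths first.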
-
Interpreting/correcting code for api-platform and JWT
kicken replied to NotionCommotion's topic in PHP Coding Help
That constructor syntax is new to PHP 8 (constructor property promotion). It lets you declare the class properties as constructor arguments and reduces the amount of boilerplate code. I might warm up to it eventually, but at the moment I'm not a big fan and will probably stick to the traditional syntax for a while.

The way the arguments for $pathItem are passed is also a new PHP 8 feature (named arguments). This is useful for functions that take a lot of parameters, as you can specify just the ones you care about and leave the rest at their default values. In the past you'd generally pass a single array with keys to achieve the same thing, but that meant giving up support for things like IDE autocomplete, type checking, etc.
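For anyone unfamiliar with the two features, a minimal sketch (the class and parameters are made up for illustration, not from the code being discussed):

class ApiMessage
{
    // Constructor property promotion: each promoted parameter is automatically
    // declared as a typed property and assigned from the constructor argument.
    public function __construct(
        private string $routingKey,
        private int $priority = 0,
        private ?string $replyTo = null
    ) {}
}

// Named arguments: pass only the parameters you care about, by name.
$message = new ApiMessage(routingKey: 'images.generate', priority: 5);
-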
Composer with different versions of PHP
kicken replied to NotionCommotion's topic in PHP Coding Help
composer install will install whatever the composer.lock file says to install. If there is no composer.lock file then it behaves like composer update does.

composer update will parse the composer.json file and calculate all the dependencies based on the given requirements and the current operating environment. Once it determines what versions to install and verifies the platform requirements, it will write the final configuration to composer.lock so that it's quick and easy to install all those dependencies again if needed.

So if you move your project, including the composer.lock file, to a different platform and just run composer install, then you could end up with compatibility issues because it might install the wrong dependencies for the platform.

The reason for doing things this way, rather than always resolving dependencies, is so you can install a known-good set of dependencies. If you just got the latest versions every time, then 6 months from now when you want to install your application on a new server you might get an updated version of some library that ends up not working, rather than the working version you last tested with. Guzzle/promises, for example, recently released a new minor version that has a bug which broke one of my hobby projects. Had it been a real project being deployed somewhere, it would have been very bad to have something like that happen.

It's different in that you don't have to keep separate composer.phar files around. You can just have one that is kept up to date, but run it with the appropriate PHP version. If you'd rather just have separate composer.phar files, that's fine too.
-
Composer with different versions of PHP
kicken replied to NotionCommotion's topic in PHP Coding Help
What's important is that your development environment matches your production environment, at least to the minor version level. If you do something like develop with PHP 7.4 but production runs PHP 7.3, then you might have issues. The reason is that when you update your packages with composer, it will calculate the dependencies based on the running PHP version (7.4 in the example) then save those dependencies to the composer.lock file. When you then run composer install on production, it will try to install the locked 7.4-based dependencies and potentially cause issues.

If you have different projects using different versions of PHP, then you'll have to have those different versions of PHP available in your development environment as well and make sure you use the correct version when working on a project. You don't (or at least shouldn't) need separate composer installs; just make sure you run it with the correct PHP version for that project. A simple wrapper script for each project could be used to help with this.

If you don't want to deal with multiple PHP versions, then an alternative solution is to fake your production platform using composer's config.platform option in your composer.json file:

{
    "config": {
        "platform": {
            "php": "7.3.0"
        }
    }
}

That tells composer to calculate the dependencies while assuming that the current PHP version is 7.3.0 rather than looking at what's actually running. Whenever you update your production PHP version you'll need to update the version here to match.
-
Some sort of version control is worth the effort to learn. Git isn't too bad, and is where most things are going these days. There are others available you could try, though. I still make heavy use of Subversion, and it's fairly easy to use on Windows with a tool like TortoiseSVN.
-
How can I save and publish my changes in codepen.io item ?
kicken replied to mstdmstd's topic in Miscellaneous
-
In general, and by default, HTTPS is not required to use sessions. However, your host may have set session.cookie_secure to On in their PHP configuration, which would make HTTPS required.
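A quick way to check what your host's configuration actually resolves to (just a sketch):

// "1" (or "On") means the session cookie is only sent over https,
// so sessions will not persist across plain http requests.
var_dump(ini_get('session.cookie_secure'));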
-
Sounds like you are doing:

unserialize($data, false);

But what you need to do is:

unserialize($data, ['allowed_classes' => false]);
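A minimal sketch of what the second argument buys you (SomeValueObject is a hypothetical class name):

// With allowed_classes => false, any serialized objects come back as
// __PHP_Incomplete_Class instead of being instantiated.
$value = unserialize($data, ['allowed_classes' => false]);

// Or allow only specific, known-safe classes:
$value = unserialize($data, ['allowed_classes' => [SomeValueObject::class]]);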
-
Nope, polling the process status is what I was speaking of. With proc_open such a thing is possible, whereas with exec you're just stuck until the process is done. Polling for the status isn't an ideal solution, but it's a solution if that timeout command isn't available (i.e. on Windows).

function run_with_timeout($cmd, $timeout = 30){
    $handle = proc_open($cmd, [['pipe', 'r'], STDOUT, STDERR], $pipes);
    if (!$handle){
        throw new \RuntimeException('Unable to launch process');
    }

    fclose($pipes[0]);
    $start = time();
    try {
        do {
            usleep(500 * 1000);
            $status = proc_get_status($handle);
        } while ($status['running'] && time() - $start < $timeout);

        if ($status['running']){
            proc_terminate($handle);
            throw new \RuntimeException('Timeout');
        }

        return $status['exitcode'];
    } finally {
        proc_close($handle);
    }
}
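Hypothetical usage of the helper above ("convert" just stands in for whatever command you actually need to run):

try {
    $exitCode = run_with_timeout('convert input.png -resize 50% output.png', 10);
} catch (\RuntimeException $e) {
    // Either the process could not be launched or it hit the timeout.
}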
-
The timeout command would probably be the easiest way to handle this. I didn't know that was a thing; good to know. If for some reason that isn't an option, you should be able to use proc_open and proc_terminate to accomplish the same task. I don't have time ATM to try it and provide an example, but if I do later I may post one.
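A minimal sketch of the timeout-command approach, assuming GNU coreutils timeout is available (so Linux, not Windows); the command being wrapped is hypothetical:

exec('timeout 30 /usr/local/bin/generate-report', $output, $exitCode);
if ($exitCode === 124){
    // GNU timeout exits with 124 when the time limit was hit.
}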
-
You can run procedures using either EXEC:

$sql = "exec exampleProcedure :data";
$stmt = $db->prepare($sql);
$stmt->bindValue(':data', $data);
$stmt->execute();

or the ODBC-style CALL:

$sql = "{call exampleProcedure(:data)}";
$stmt = $db->prepare($sql);
$stmt->bindValue(':data', $data);
$stmt->execute();

I've done them both ways without issue. The SQLSRV driver by default runs all queries as prepared queries (even when not using ->prepare), which has the effect of isolating state between queries. For example, you can't do something like:

$db->query('CREATE TABLE #tmp (Id INT);');
$stmt = $db->prepare('INSERT INTO #tmp (Id) VALUES (?)');
foreach ($list as $id){
    $stmt->execute([$id]);
}

You'll get an error on the INSERT that table #tmp does not exist. This is probably why your isolated SET NOCOUNT ON query did not work. I personally always put the SET NOCOUNT ON line inside the stored procedure itself, usually as the first line of the procedure. This way it will always be in effect when the procedure is run.
-
Not really. Think about how much time you're spending now trying to figure out why your destructor is not being run vs. just refactoring the code to do:

$storage->detach($client);
$client->cleanup();

My view on the matter is that one should, for the most part, limit destructors to things that are good to do but don't necessarily need to be done with specific timing/urgency. I rarely ever use a destructor in most of my code. When I do, it's usually just for cleaning up resources (file handles, curl handles, image handles, etc).
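A minimal sketch of that resource-cleanup style of destructor (the class is hypothetical):

class LogFile
{
    private $handle;

    public function __construct(string $path)
    {
        $this->handle = fopen($path, 'ab');
    }

    public function write(string $line): void
    {
        fwrite($this->handle, $line . "\n");
    }

    public function __destruct()
    {
        // Good to do, but not timing-critical: PHP closes handles at shutdown anyway.
        if (is_resource($this->handle)){
            fclose($this->handle);
        }
    }
}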
-
Trying to get data from form with Repeatable fields into MySQL...
kicken replied to Jim R's topic in MySQL Help
Yes, the variables map to the question-mark placeholders; you need one variable for each question mark. So when you need to place the same value into a query in different places, you need to use multiple ? markers and bind the variable multiple times accordingly:

$stmt2 = $con->prepare("INSERT INTO wp_terms (name, slug) VALUES (concat(?,' ',?), lower(concat(?,'-',?)))");
$stmt2->bind_param('ssss', $name, $slug, $name, $slug);

The bound variables line up with the question marks in order: the first and third ? get $name, the second and fourth ? get $slug.

The variables should probably be $nameFirst / $nameLast instead of $name / $slug, as in this example you're combining the names and generating the slug in the query using the lower/concat functions.
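With the renaming suggested above, a minimal sketch of how that would read:

$stmt2 = $con->prepare("INSERT INTO wp_terms (name, slug) VALUES (concat(?,' ',?), lower(concat(?,'-',?)))");
$stmt2->bind_param('ssss', $nameFirst, $nameLast, $nameFirst, $nameLast);
$stmt2->execute();
-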
That's quite the rant for something that wasn't really ever brought up. Nobody is saying that you need to make your site accept whatever someone wants to throw at it. You can absolutely have your laws/rules about what you will or won't accept. The point of the advice is that you shouldn't try to manipulate someone's input to conform to your rules. Either their input is valid, or it's not. Don't try to "fix it"; doing so might just cause you a whole new class of problems. There are numerous cases of people figuring out how to craft input such that it would be fine before the filter, but the filter then transforms it into something that's no longer fine. The problems introduced by such input fix-ups may not even be technical problems; they might be social/business problems caused by the person thinking they input one thing, but the system ends up "fixing it" into something else.

Say for example you had a field where someone was supposed to enter an amount rounded to the nearest whole dollar, and you applied filter logic that just removed non-digit characters. Someone isn't paying attention and enters 12.25, which gets filtered and interpreted by the system as 1,225. Now your filter has created a huge problem.

In your followup you say you only do this kind of thing on login attempts. That's somewhat more permissible, and apparently some places actually do this kind of thing out of convenience. Your original post did not have this kind of constraint; it instead suggested that one should just filter out any "bad characters" then save the result to the database. That is terrible advice in general.
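A minimal sketch of the kind of "fix-up" filter being described and how it mangles the input:

// Stripping non-digits silently turns a typo into a very different number.
$amount = preg_replace('/\D+/', '', '12.25');
echo $amount; // "1225"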
-
Trying to get data from form with Repeatable fields into MySQL...
kicken replied to Jim R's topic in MySQL Help
Your query syntax is wrong. The syntax of an INSERT query is:

INSERT INTO table (ColumnNameA, ColumnNameB[, ...])
VALUES (ValueOfColumnA, ValueOfColumnB[, ...])

In your prepared query the question marks represent the values you want to insert into the table. For the column names you currently have some dynamic expression, which isn't right. Instead you need the names of the columns into which you want to store the name and slug values.
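Given the column names used elsewhere in this thread (name and slug), a minimal sketch of the corrected shape:

$stmt = $con->prepare("INSERT INTO wp_terms (name, slug) VALUES (?, ?)");
$stmt->bind_param('ss', $name, $slug);
$stmt->execute();
-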
If you want to track where someone has been on your site and what path they took, the simplest thing to do is have some code that runs on every request which captures the current page and stores it in the current session. For example:

$_SESSION['history'][] = $_SERVER['REQUEST_URI'];

Make sure you've started the session, either here or elsewhere. Then when you want that information, such as in your error handler, just read the current $_SESSION['history'] value; it should contain a list of all the URLs they visited, in order from oldest to newest. You might want to enforce a limit on the history so it doesn't grow too large, say maybe only keep the last 50 URLs or so.

Personally I've never really needed this kind of information for troubleshooting. Knowing which page the error occurred on and the $_GET/$_POST values is generally enough to resolve any issues.
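Putting the capture and the suggested size cap together, a minimal sketch:

session_start();

// Record the current URL and keep only the most recent 50 entries.
$_SESSION['history'][] = $_SERVER['REQUEST_URI'];
$_SESSION['history'] = array_slice($_SESSION['history'], -50);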
-
There are really only two conditions you need to be concerned with, thus you only need two rewrites. Both of these rewrites should send the user directly to the preferred URL so there's only one redirect.

1. Someone requests the page via http://. Whether they used www or not isn't relevant here; you just redirect them straight to https://www.
2. Someone requests the page via https:// without www. Redirect them to https://www.

If you're just googling for code to drop in, you might find implementations that chain instead, i.e. http://$domain -> https://$domain -> https://www.$domain. While that works, it's inefficient and unnecessary.
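A minimal .htaccess sketch of those two rules, assuming Apache with mod_rewrite (example.com is a placeholder for your real domain):

RewriteEngine On

# Anything over http:// goes straight to https://www.
RewriteCond %{HTTPS} off
RewriteRule ^ https://www.example.com%{REQUEST_URI} [R=301,L]

# https:// without the www prefix also goes straight to https://www.
RewriteCond %{HTTPS} on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.example.com%{REQUEST_URI} [R=301,L]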
-
This concept has to do with Canonical URLs: you should choose which URL format you want and then redirect everything else to that one format, through either your code or your server configuration. In simple cases the search engines may figure it out, but it's best if you control the process yourself. There are a number of resources out on the web that can show you how to redirect your requests appropriately for whatever software you're using. For example: Apache force www.
-
There is a pre-release (5.9.0) that claims support for the 8.0 release candidate; you could try that. Otherwise you'll just have to wait or try building it yourself. I personally don't really plan on doing any PHP 8 work for a while yet.
-
Cooperative multitasking, promises, and future ticks
kicken replied to NotionCommotion's topic in PHP Coding Help
PHP is single-threaded*, so yeah, each tick callback is run in series and must finish before the next tick callback can be run. Using event-based processing lets you make things that seem like they can multi-task, but they don't really.

A promise can be used to do work as well; it doesn't have to just wait for a result. Conceptually they are something that is used to wait for something to finish, but that doesn't mean it has to be some external process. You could create a promise that just does work within PHP, such as that emailer queue in the previous thread. That could be implemented as a promise if one desired.

From what I've seen, one of the biggest things people generally don't understand about promises is that they are not something that can just magically wait until another thing is done. They need to periodically check if that other thing is done. Since PHP is single-threaded, they can't just start up a background thread to do that. Even if they could, the main thread would still need to know not to exit while the promise is pending. What is needed, then, is a way to hook into the main loop so the promise can do its work from time to time, and this is what functions like futureTick and addPeriodicTimer are for. These functions provide a way for the promise to register a callback with the main loop so that it will occasionally gain control of the main thread and be able to do whatever needs to be done. For promises that are just waiting on something, that is likely just a simple check to see if the thing is ready yet and nothing more. For something like the email queue, that would be sending out a small batch of emails.

If you don't need the extra functionality of the promise interface, then futureTick / addPeriodicTimer can still be used to just hook into the event loop for whatever reason. That's what the original email queue does: it hooks the loop to allow processing of emails but doesn't go so far as to provide a full promise implementation.

As for your task, the react project has a package for running commands that you can use: reactphp/child-process. I think using that would be your best bet to handle your image generation; the shell thing you linked seems unnecessary and I'm not sure if it'd handle running commands in parallel or not. With the child-process package you'd just have to create a small function to execute your process and return a promise. That promise would be resolved when the process emits an exit event. For example:

function executeCommand($loop, $command){
    $deferred = new React\Promise\Deferred();
    $process = new React\ChildProcess\Process($command);
    $process->on('exit', function ($code) use ($deferred){
        if ($code === 0){
            $deferred->resolve();
        } else {
            $deferred->reject();
        }
    });
    $process->start($loop);

    return $deferred->promise();
}

With that, you can generate all your images in parallel using something like:

$promiseList = [];
foreach ($imageCommand as $cmd){
    $promiseList[] = executeCommand($loop, $cmd);
}

React\Promise\all($promiseList)->done(function(){
    //All images generated, send your emails.
});

* pthreads and pht provide some threading capability, but I've never tried using them.
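Going back to the "hook the loop without a full promise" idea above, a minimal sketch of draining a queue in small batches with addPeriodicTimer ($emailQueue and sendOne() are hypothetical):

$emailQueue = [/* ... queued messages ... */];

// Every second the callback briefly gains control of the loop and sends a small batch.
$loop->addPeriodicTimer(1.0, function ($timer) use (&$emailQueue, $loop){
    $batch = array_splice($emailQueue, 0, 10);
    foreach ($batch as $message){
        sendOne($message);
    }

    // Stop hooking the loop once the queue is empty.
    if (!$emailQueue){
        $loop->cancelTimer($timer);
    }
});
-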
If you only need HTML and CSS I like CSS Desk. Otherwise you can just turn off the console in jsfiddle to make your footer visible.
-
Well, this would be one of those times. You can't do simple math like $now - $last with the value returned by microtime(). You need to use microtime(true) for that. For example:

<?php
$last = microtime();
sleep(5);
$now = microtime();

$diff = $now - $last;
printf("%0.4f seconds have passed", $diff);

One might expect, since the script sleeps for 5 seconds, to get a result like "5.xxxx seconds have passed", but what you actually get is:

Notice: A non well formed numeric value encountered in W:\expired.php on line 7
Call Stack:
    0.0002     391984   1. {main}() W:\expired.php:0

Notice: A non well formed numeric value encountered in W:\expired.php on line 7
Call Stack:
    0.0002     391984   1. {main}() W:\expired.php:0

0.0044 seconds have passed

Due to how microtime() returns its result, $diff would never be greater than 1 and could potentially be negative.
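A minimal sketch of the corrected version using microtime(true):

$last = microtime(true);
sleep(5);
$now = microtime(true);

// microtime(true) returns a float, so plain subtraction works as expected.
printf("%0.4f seconds have passed", $now - $last);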
-
Since you're dealing in seconds, I would suggest using time() rather than microtime(). Microtime has an unusual return value by default.