Everything posted by kicken
-
Like Barand, it times out for me also. The remote host does not respond to the connection attempt. Wireshark shows all the outgoing SYN packets but no replies. Sounds like you have a networking issue. Either your port forwarding/firewall configuration is not set up properly or your ISP may be blocking the port. If you're fairly sure your forwarding/firewall is correct, try different port numbers.
-
Is the server that is running the code hosted somewhere outside your network? Maybe you need to set up some port forwarding rules on your router or enable the port in a firewall somewhere.
-
Sounds like you may be calling your function too early, before the stuff gets added to the DOM. What's your Ajax function?
-
When it comes to the body content of your email, you just need to make it clear which timezone you're referring to. There's no way to automatically change it to whatever the receiver's timezone is. So instead of saying 'Your session starts at 6:00 AM', you should say: 'Your session starts at 6:00 AM Eastern Standard Time'. Then the end-user can convert the time on their own if necessary. If you can get them to tell you their timezone on your site before sending the email, then you can send the message using that timezone, but still include the timezone indicator in case they selected the wrong timezone or have changed time zones since the message was sent.
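If you do collect the recipient's timezone, converting is straightforward with DateTime. A minimal sketch, assuming the timezone identifier was saved with their profile (the values here are made up):

    // The session time in your own timezone.
    $sessionStart = new DateTime('2024-05-01 06:00:00', new DateTimeZone('America/New_York'));

    // Hypothetical value the user selected on your site.
    $recipientTz = 'America/Los_Angeles';
    $localized = (clone $sessionStart)->setTimezone(new DateTimeZone($recipientTz));

    // Still include the timezone name so the reader can double-check it.
    echo 'Your session starts at ' . $localized->format('g:i A T') . "\n";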
-
The size/quality variations come down to the different compression techniques and settings one uses. When the image is decompressed, which it will be when you load it with something like imagecreatefromXXX to do work on it (i.e., resize it), it will use a certain amount of memory per pixel, so the image size in pixels will give you an idea of how much RAM you need to load it. A full-color image, for example, may need 32 bits (4 bytes) per pixel when loaded into memory, so a 1920x1080 image would need about 8MB of memory even though the compressed file size may only be a few hundred KB.
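As a rough back-of-the-envelope check, you can estimate that load cost from the pixel dimensions before decompressing anything. A sketch assuming about 4 bytes per pixel (actual GD overhead varies a bit by version):

    // 'photo.jpg' is just a placeholder filename.
    [$width, $height] = getimagesize('photo.jpg');

    $estimatedBytes = $width * $height * 4;
    printf("Roughly %.1f MB of memory to load a %dx%d image\n", $estimatedBytes / 1048576, $width, $height);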
-
Doctrine\Common\EventSubscriber is the interface I am referencing. Doctrine\Bundle\DoctrineBundle\EventSubscriber\EventSubscriberInterface doesn't exist in my setup, so I'm not sure whether there's a difference. It might just be a change in the newer version; it's been a while so my memory is fuzzy.

preFlush is there to handle the encoding process just prior to the changes being committed to the DB. The whole postLoad thing rather than preUpdate is because preUpdate doesn't get triggered if no mapped field is changed. Since the $newPassword field isn't tracked by Doctrine, it won't notice when it's been changed and will ignore the entity if that is the only field that has changed. As such, something like this would fail:

    $user = $em->find(User::class, 1);
    $user->setNewPassword('test1234');
    $em->flush();

The solution I came up with at the time was just to watch for any entities that get loaded which implement the interface and keep track of them. Then, when a flush event occurs, call their encodeOnPersist method so they can get updated if necessary. Specifically for user passwords, another way around this might have been to set $password to null when setting $newPassword, but I didn't think of that at the time and this seems like a better generic method anyway.

The preFlush basically just does the encoding process. First it looks for any newly created entities and adds them to the list. Then it runs through all the known entities and runs their encodeOnPersist method. The last line tells Doctrine to re-examine the entities for changes. The way Doctrine handles flushing the data is it first examines all the entities and their mapped fields and determines which ones have changed. The preFlush event is run after that change detection process, so if you make a change to any entities it won't be detected unless you tell Doctrine about it by having it re-run the change detection.
-
The idea is to defer the actual encoding to the entity because different entities might want to encode different things. The user hashes the password, but maybe something else needs to base64 something or generate an HMAC signature or whatever. Passing the EncoderFactoryInterface as a parameter lets it tie into Symfony's encoding configuration in the security.yml file easily. The entity can use it or not.

My knowledge on that is probably outdated. When I started with Symfony the only real magic/scanning was that it'd scan the src/AppBundle/Command folder for *Command.php files and register them as console commands. A quick look at the documentation suggests that it doesn't do that anymore either. The nice thing about Symfony is most of its "magic" is either stuff you explicitly configure or it's easy to override if necessary. For the most part everything is just driven with the YAML configuration files and not by scanning files.

I know the newer versions have a feature where you can let it scan for services and register them, but it's not something I've looked into much. I use it to register my controllers in one project, but my other services I declare manually. The feature is probably fine to use, but my code all comes from before that feature existed. Without digging into the code implementing that feature I'm not sure exactly how it works, but the manual says it just uses a standard glob pattern to locate files and shows the default pattern being essentially src/*, so that'd imply every file regardless of extension. I'd guess it probably takes every file, transforms it into a fully-qualified class name, then does a class_exists() on it, which will trigger the autoloading mechanism to try and load that class. If it can, then it'd register that class. If it fails, it probably just skips that file and moves on. You could check the code if you're curious about the details and let me know if I'm right or wrong.

The autoconfigure / autowire features are what rely on interfaces/type-hints to determine what to do. For example, with the console commands, instead of scanning the Command folder it now just checks if $class instanceof Symfony\Component\Console\Command\Command and if so registers it as a console command. These features I do take advantage of, which means most of my service definitions end up just being a single line in the services.yml file:

    ### Full Auto-wire Services
    AppBundle\Form\UserRoleType: ~
    AppBundle\Form\UserAccountType: ~
    AppBundle\Service\Search\StudentSearch: ~
    AppBundle\Service\Search\CourseSearch: ~

It's possible to implement your own "magic" if you want. I haven't done it with the newer setup, but with the older versions (2.7) I had some code that would check for services with a tag of app.background_worker, gather them all up into an array and then pass it as a constructor argument to my app.run_background_workers service, which was a console command. I then created a systemd service that would run that console command. The web app could then submit jobs via gearman which would be picked up by that service and run independently of the web app.
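Roughly speaking, the compiler pass side of that tag setup looked something like the sketch below. This is reconstructed from memory rather than the actual code, so the class name is just illustrative; the service id and tag name are the ones mentioned above:

    use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
    use Symfony\Component\DependencyInjection\ContainerBuilder;
    use Symfony\Component\DependencyInjection\Reference;

    class BackgroundWorkerPass implements CompilerPassInterface
    {
        public function process(ContainerBuilder $container)
        {
            if (!$container->hasDefinition('app.run_background_workers')){
                return;
            }

            // Gather every service tagged app.background_worker...
            $workers = [];
            foreach ($container->findTaggedServiceIds('app.background_worker') as $id => $tags){
                $workers[] = new Reference($id);
            }

            // ...and hand them to the runner command as a constructor argument.
            $container->getDefinition('app.run_background_workers')->addArgument($workers);
        }
    }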
-
I do something similar for the password hashing. I'd probably have tied in the other account creation via a different method, but it's fine either way I imagine. Those are two separate tasks though, so I would make them into two separate classes to make the code cleaner. The password hashing bit could easily be made more generic so it can be reused in other areas if needed. That is what I did for mine. It looks like this:

    class EncodeOnPersist implements EventSubscriber {
        private $factory;
        private $entityList;

        public function __construct(EncoderFactoryInterface $factory){
            $this->factory = $factory;
            $this->entityList = new ArrayCollection();
        }

        public function getSubscribedEvents(){
            return [Events::postLoad, Events::preFlush];
        }

        public function postLoad(LifecycleEventArgs $event){
            $entity = $event->getEntity();
            $this->addEntity($entity);
        }

        public function preFlush(PreFlushEventArgs $event){
            $em = $event->getEntityManager();
            foreach ($em->getUnitOfWork()->getScheduledEntityInsertions() as $entity){
                $this->addEntity($entity);
            }
            foreach ($this->entityList as $entity){
                $this->encodeEntity($entity);
            }
            $em->getUnitOfWork()->computeChangeSets();
        }

        private function addEntity($entity){
            if ($entity instanceof EncodeOnPersistInterface && !$this->entityList->contains($entity)){
                $this->entityList[] = $entity;
            }
        }

        private function encodeEntity(EncodeOnPersistInterface $entity){
            $entity->encodeOnPersist($this->factory);
        }
    }

Each entity that needs a value hashed/encoded then just implements this interface:

    interface EncodeOnPersistInterface {
        /**
         * @param EncoderFactoryInterface $factory
         * @return void
         */
        public function encodeOnPersist(EncoderFactoryInterface $factory);
    }

For example:

    class UserAccount implements AdvancedUserInterface, \Serializable, EncodeOnPersistInterface {
        // [...]
        private $password;
        private $newPassword;
        // [...]

        public function encodeOnPersist(EncoderFactoryInterface $factory){
            if ($this->newPassword !== null){
                $encoder = $factory->getEncoder($this);
                $this->password = $encoder->encodePassword($this->newPassword, $this->getSalt());
                $this->eraseCredentials();
            }
        }
    }

The same encoder class gets reused for the forgot password tokens and could be reused again for anything else down the road that might need it.
-
Doctrine's event system is specifically designed to let you hook into the entity processing/database management system. That's the only thing it is really good for. If that's what you want to do, then you'll probably want to use Doctrine's events to accomplish your goal.

Symfony's event system is just a generic event system that lets a person define events in one area and then subscribe to them in another area. You'd use these whenever you want to design a system where you want or need to be able to extend the functionality in some way, or just keep track of what's going on. For example, I have a system where a user creates a committee of other users which then has to be approved later on by an administrator. The code that handles the approval just sets the approved timestamp then dispatches a "Committee Approved" event using the Symfony system. I then have a separate class which subscribes to that event and handles sending email notifications that the committee was approved. I split these tasks by using an event for a couple of reasons:
- The code is cleaner this way.
- Maybe someday I'll want to do other things "when a committee is approved" and can just add another subscriber rather than mess with the approval code.

Just the way someone decided to do it, I'd imagine. You could do it differently if you want. I don't use generic namespaces/folders like that and prefer to be more specific when possible. For example, that committee approval notification is in src/Notification/CommitteeApprovedEmailer.php.

The Doctrine documentation seems to suggest that you can listen for multiple events in a single class. How you'd accomplish such a thing via Symfony's configuration file I don't know. Symfony might limit it to a single event, or maybe you can pass an array to event: to define multiple. One would have to experiment or dig into the source code to find out for sure.
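The subscriber end of that committee example looks roughly like this. It's simplified and the class/event names are stand-ins rather than the exact code:

    use Symfony\Component\EventDispatcher\Event;
    use Symfony\Component\EventDispatcher\EventSubscriberInterface;

    // Event object carrying the committee that was just approved.
    class CommitteeApprovedEvent extends Event
    {
        const NAME = 'committee.approved';

        private $committee;

        public function __construct($committee)
        {
            $this->committee = $committee;
        }

        public function getCommittee()
        {
            return $this->committee;
        }
    }

    class CommitteeApprovedEmailer implements EventSubscriberInterface
    {
        public static function getSubscribedEvents()
        {
            return [CommitteeApprovedEvent::NAME => 'onCommitteeApproved'];
        }

        public function onCommitteeApproved(CommitteeApprovedEvent $event)
        {
            $committee = $event->getCommittee();
            // Build and send the "committee approved" notification emails here.
        }
    }

The approval code then just dispatches that event after setting the approved timestamp and doesn't need to know anything about the emails.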
-
Errors that make no sense to me. What am I doing wrong?
kicken replied to guymclarenza's topic in PHP Coding Help
A query can have more than one parameter (e.g., multiple WHERE conditions), so PDOStatement::execute() needs to be able to accept more than one value to bind to those parameters. The way it does that is by taking in an array of values (even if it only needs one). Pass an array to execute with $site as an element of that array to fix that problem:

    $stmt->execute([$site]);

Your foreach error is due to the variable you provide ($table) not being an array (or other Traversable object). You don't actually define $table anywhere in your code; it just magically appears. If you want to loop over the results of your query, then you have a couple of options:
- PDO statements can be used directly in foreach, so just loop over $stmt. If you do this, you do not call $stmt->fetchAll() first.
- Grab the results using $stmt->fetchAll() and loop over that variable ($fcid in your code).
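A quick sketch of both options; the table and column names here are made up since I don't know your schema:

    $stmt = $pdo->prepare('SELECT cid, url FROM links WHERE site = ?');
    $stmt->execute([$site]);

    // Option 1: iterate the statement directly (no fetchAll first).
    foreach ($stmt as $row){
        echo $row['url'], "\n";
    }

    // Option 2: fetch everything into an array, then loop over that.
    $stmt->execute([$site]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row){
        echo $row['url'], "\n";
    }
-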
The change you need to apply is to change that line so it's setting that variable up as an array rather than a string.

    $V0f14082c['parser-options']="";

changes to

    $V0f14082c['parser-options']=array();
-
Yes, as indicated on the home page. The code is over on Github if you're interested.

Only the basics, if you stick to what the library can natively do given its options. The more JS you know, the more fancy/custom stuff you can do with it, but for basic chart rendering you only really need to include the library and write:

    new Chart(document.getElementById('where'), jsonConfiguration);

Here's something to make it even easier if you stick to the basics:

    window.addEventListener('DOMContentLoaded', function(){
        var chartElements = document.querySelectorAll('div[data-chart]');
        chartElements.forEach(function(div){
            var configuration = JSON.parse(div.dataset.chart);
            var canvas = document.createElement('canvas');
            div.appendChild(canvas);
            new Chart(canvas, configuration);
        });
    });

That will look for any div tags with a data-chart attribute and initialize a chart in that element using the configuration in the value of the data attribute.

Not really. For the most part you can just generate the configuration in PHP as an array and json_encode it. So you first would generate your configuration as a PHP array using the data from your database:

    $configuration = [
        'type' => 'bar',
        'data' => [
            'labels' => ['January', 'February', 'March', 'April', 'May', 'June', 'July'],
            'datasets' => [
                [
                    'label' => 'Group A',
                    'backgroundColor' => '#84999a',
                    'borderColor' => '#ff0000',
                    'borderWidth' => 1,
                    //your values from the DB go here
                    'data' => [81072, 14498, 20460, 14651, 34036, 20056, 27270]
                ]
            ]
        ],
        'options' => [
            'responsive' => true,
            'legend' => [
                'position' => 'top'
            ],
            'title' => [
                'display' => true,
                'text' => 'Income'
            ]
        ]
    ];

Then, if you're using the function above to initialize the charts, just attach it to a div tag in a data-chart attribute using json_encode (and htmlspecialchars for protection).

    <div data-chart="<?=htmlspecialchars(json_encode($configuration));?>"></div>
-
Different types of returns using gethostbyaddr() and REMOTE_ADDR
kicken replied to ajetrumpet's topic in PHP Coding Help
More or less. A reverse DNS entry (IP to name) is not required, so it may or may not exist for any given IP. If it does exist, the IP owner can make it whatever they want it to be. For the majority of IPs the reverse entry is controlled by the host or ISP. An ISP will generally either not have a reverse entry or it will be something dynamically generated. A hosting service will generally be similar to an ISP, but some do allow the end-user to control the reverse entry if they have a dedicated IP. With the ubiquity of shared hosting these days, though, the reverse entry is generally just something generic rather than mapping to a domain the way a forward lookup maps to an IP.
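If you just want to see what you get for a visitor, a quick check looks like this:

    $ip = $_SERVER['REMOTE_ADDR'];
    $host = gethostbyaddr($ip);

    // Returns the hostname on success, the IP unmodified when no reverse
    // entry exists, or false if the input isn't a valid address.
    if ($host === false || $host === $ip){
        echo "No reverse DNS entry for {$ip}\n";
    } else {
        echo "{$ip} resolves to {$host}\n";
    }
-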
If you do decide to step things up a level I highly suggest just finding a good library to do your chart rendering. Chart.js is my tool of choice currently. It offloads the chart drawing to the client so all you have to do on the server side is generate a json structure.
-
You're going to have to spend time trying to debug it or contact the author. There's no obvious error that can be seen looking at the site. It makes the request to send messages just fine. You need to figure out what's going on after that and why it's either not saving the sent messages or not delivering them to the users.
-
Rule #3 is about avoiding something like this:

    function blah($list){
        foreach ($list as $a){
            if ($a['flag']){
                foreach ($a['list'] as $b){
                    if ($b['flag']){
                        //something
                    } else if ($b['flag2']){
                        foreach ($b['list'] as $c){
                            if ($c['x']){
                                //something
                            } else {
                                //other thing
                            }
                        }
                    }
                }
            }
        }
    }

It's hard to read, particularly when the //something's are longer than one line, and it wastes horizontal space. It'd be better to pull chunks out into other functions, for example:

    function blah($list){
        foreach ($list as $a){
            if ($a['flag']){
                processFlagList($a['list']);
            }
        }
    }

    function processFlagList($list){
        foreach ($list as $b){
            if ($b['flag']){
                //something
            } else if ($b['flag2']){
                processFlag2List($b['list']);
            }
        }
    }

    function processFlag2List($list){
        foreach ($list as $c){
            if ($c['x']){
                //something
            } else {
                //other thing
            }
        }
    }

Much easier to follow, especially if you only want a high-level overview of the code and don't need the details. This can be valuable in some big if/else chain like the one described above. Imagine you were just quickly trying to get an idea of what some function does and you see this:

    function blah($list){
        foreach ($list as $a){
            if ($a['flag1']){
                //something 20 lines long
            } else if ($a['flag2']){
                //something else 20 lines long
            } else if ($a['flag3']){
                //yet another thing 30 lines long
            } else {
                //and finally the last thing 8 lines long
            }
        }
    }

Remember, imagine all those //something's are big blocks of code that prevent you from seeing the whole tree like you can here. That'd be a whole lot of code to read through, figure out what it does, and keep track of it all. What if instead you saw this:

    function blah($list){
        foreach ($list as $a){
            if ($a['flag1']){
                sendToProcessor($a);
            } else if ($a['flag2']){
                markCompleted($a);
            } else if ($a['flag3']){
                notifyTheCEO($a);
            } else {
                setEverythingOnFire();
            }
        }
    }

Much easier, and the names of the functions let you know what's happening without having to dig deeper. If you decide you need to know exactly how the code sets everything on fire you can easily* go find out, but if you don't care then that code isn't cluttering up your view.

* With a good IDE, going and finding out what a function does is easy. In PHPStorm for example I could just CTRL+Click on (or CTRL+B with the text cursor in) the function name and it'll take me to it. Without a good IDE it might be harder.
-
The beginning would be when you click the submit button to start the upload. You can use Javascript to make your loading GIF visible at that point; it will remain visible while the browser uploads the files to the server. Once the upload is complete and your script returns a response, it'll replace the current page and the loading gif.

    document.getElementById('upload-form').addEventListener('submit', function(){
        document.getElementById('loading-gif').style.display='block';
    });

    <img id="loading-gif" src="loading.gif" style="display:none;">
    <form id="upload-form">
    </form>
-
That's far too restrictive and either something that was misunderstood or came from a crazy person. I came across a rule of thumb some time ago that I try to live by which goes a little something like this:

1. A function should fit entirely on screen (no vertical scrolling required). How many lines this is will vary based on resolution/fonts and such, but ends up being around 65 lines for my environment.
2. A single line shouldn't be longer than the width of the screen (no horizontal scrolling required). That ends up being around 120 characters for me.
3. A function shouldn't have statements nested more than 5 levels deep.
4. Use a descriptive name for the function, even if it's verbose.

As with many things, those are rules of thumb and not everything will conform to them, but I find they are generally pretty easy to follow. If you end up breaking one of them, then it's time to analyze the situation and decide if you need to break the rule or if it can be refactored. For rule #1, some things I only consider to be one line because they can be easily code-folded away. For example, I have many functions that are way more than 65 lines, but a lot of those lines are a large SQL statement that I can just fold down to a single line. Rule #3 helps make rule #2 easier to hit as you don't waste a bunch of screen real estate on indentation. Something nested 5 levels with a 4-space indent would be wasting 20 characters on the indentation.

In my experience, refactoring old code to help fit the above rules is relatively simple. Start by just moving blocks of code; for example, move the body of a loop into a new function, then call that function from the loop. Alternatively, just move the entire loop and replace it with a function call.

Having a bunch of functions in a single file isn't necessarily bad. It'd be best to at least group them in some way rather than just dump everything in one file. For example, I have a few functions that deal with manipulating arrays which are all grouped together in a file. Some other functions that handle validating input are in a separate file. If you want to do one function, one file like with classes, that's fine as well but not generally necessary imo.
-
I'm guessing the gap you're referring to is the upload process. When you upload files, PHP handles receiving the incoming data and saving it into temporary files on the server. That all happens internally before your code runs. Only after that upload process is complete does your script run and then have an opportunity to do something with the uploaded files. The files should be uploaded in the order of your file input elements. If you're using a single input element with the multiple attribute then I don't believe there is any way to control the order. Not that any of that should matter anyway. By the time your code can do anything, all the files will be uploaded and available, so you can just do what you will with them.
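For reference, once your script does run, looping over a multi-file input is just a matter of walking the $_FILES arrays. The input name 'documents' here is only an example:

    // Assumes <input type="file" name="documents[]" multiple> in the form.
    foreach ($_FILES['documents']['tmp_name'] as $index => $tmpName){
        if ($_FILES['documents']['error'][$index] !== UPLOAD_ERR_OK){
            continue; // skip anything that failed to upload
        }

        $originalName = $_FILES['documents']['name'][$index];
        move_uploaded_file($tmpName, __DIR__ . '/uploads/' . basename($originalName));
    }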
-
For reference, the example code I posted in your other thread was able to crawl all 127 pages of my site in about 20 seconds.
-
Not very. If you want to go more OOP then you'd break things out into classes a bit more and get rid of your global variables. Each class would be responsible for some portion of the overall task and you then combine them together to accomplish the task. I can see at least three potential classes:
- Crawler, which is responsible for downloading the URLs that are in the queue.
- LinkExtractor, which is responsible for finding links in the downloaded document.
- LinkQueue, which is responsible for tracking which links need to be downloaded.

Linked above are some re-writes of your code in more of an OOP style. If you have any specific questions, feel free to ask. For the most part I just re-arranged your code, but your createLink function (now LinkExtractor::resolveUrl) needs some work to actually resolve a relative URL properly. I fixed it up a little, but it's still far from perfect.

Once you get a basic version working, what you'd do is update your Crawler class so it can download several URLs in parallel using something like the curl_multi_* functions or a library such as Guzzle. Don't worry about this at all until you have your crawler working as you want it first, though. Debugging a sequential process is much easier than a parallel process.

Ultimately though, crawling a site is something that is going to take time. If you want to have a service where users can submit a URL to crawl, then you'd want the crawling task done by a background worker and notify the user when it's done, either by sending them to a page that can monitor the progress or by sending them an email with a link containing the results.
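To give you an idea of the overall shape, here's a bare-bones sketch of how those three classes could fit together. The method names are my own for the example (not the rewrites linked above) and the URL resolution is deliberately naive:

    class LinkQueue {
        private $pending = [];
        private $seen = [];

        public function add($url){
            if (!isset($this->seen[$url])){
                $this->seen[$url] = true;
                $this->pending[] = $url;
            }
        }

        public function next(){
            return array_shift($this->pending); // null once the queue is empty
        }
    }

    class LinkExtractor {
        public function extract($html, $baseUrl){
            $doc = new DOMDocument();
            @$doc->loadHTML($html);

            $links = [];
            foreach ($doc->getElementsByTagName('a') as $anchor){
                $links[] = $this->resolveUrl($anchor->getAttribute('href'), $baseUrl);
            }

            return $links;
        }

        private function resolveUrl($href, $baseUrl){
            // Absolute URLs pass through; everything else gets glued to the base.
            // A real implementation needs to handle ../, query strings, fragments, etc.
            if (parse_url($href, PHP_URL_SCHEME)){
                return $href;
            }

            return rtrim($baseUrl, '/') . '/' . ltrim($href, '/');
        }
    }

    class Crawler {
        private $queue;
        private $extractor;

        public function __construct(LinkQueue $queue, LinkExtractor $extractor){
            $this->queue = $queue;
            $this->extractor = $extractor;
        }

        public function crawl($startUrl){
            $this->queue->add($startUrl);
            while ($url = $this->queue->next()){
                // A real crawler would restrict this to the starting host.
                $html = file_get_contents($url);
                foreach ($this->extractor->extract($html, $url) as $link){
                    $this->queue->add($link);
                }
            }
        }
    }

Once that sequential version works, swapping file_get_contents for curl_multi_* or Guzzle inside Crawler is the only part that needs to change.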
-
Retrieving a request parameter from Symfony's Request
kicken replied to NotionCommotion's topic in PHP Coding Help
If your post body is JSON then it seems you need to either use getContent to grab the JSON and parse it manually, or use toArray to grab the parsed array. $request->request->get() is how you would get normal post parameters, but it seems that it doesn't process JSON documents and make the keys available via it.
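A minimal sketch of both approaches, assuming a JSON body like {"name":"Widget","qty":5} (the keys here are made up):

    // Parse it yourself from the raw body...
    $data = json_decode($request->getContent(), true);
    $name = $data['name'] ?? null;

    // ...or, on Symfony versions that have it, let the Request do the decoding.
    $data = $request->toArray();
    $qty = $data['qty'] ?? 0;
-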
Uploading a file with JSON to a REST API
kicken replied to NotionCommotion's topic in PHP Coding Help
If you do a typical multipart/form-data post then certainly you can send both. It'd be just like submitting a normal HTML form with a file input. Most of your frameworks and such should probably handle this just fine.

If you want to do a two-part request where part 1 is just a JSON document and part 2 is the file content then you certainly can do that, but you may not find built-in support for it in your frameworks because, as far as I know, it's not something that's usually done. As such, you'd have to spend time creating your own code to handle such a request in whatever way you need to.

As far as I know the more common way things are done is to use separate requests, so you'd first POST your metadata then PUT the file content to a certain URL. For example, if you were adding a document to some project X you might POST /api/project/X/documents to create a new document, and the response of that would give you a document ID number Y. Then you'd PUT /api/project/X/documents/Y/content with the binary content of the file.

So I guess the question mostly is: do you want to spend the time writing the code to make your nice-looking single request, or uglify your request to take advantage of existing code?
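For what it's worth, the two-request flow described above might look something like this from the client side with curl; the host, field names, and response shape are all made up for the example:

    // Step 1: POST the metadata as JSON and read the new document's ID from the response.
    $ch = curl_init('https://api.example.com/api/project/X/documents');
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS => json_encode(['title' => 'Quarterly report']),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $document = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // Step 2: PUT the raw file content to the new document's content URL.
    $ch = curl_init("https://api.example.com/api/project/X/documents/{$document['id']}/content");
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST => 'PUT',
        CURLOPT_HTTPHEADER => ['Content-Type: application/octet-stream'],
        CURLOPT_POSTFIELDS => file_get_contents('report.pdf'),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
-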
Sports? 🤷♂️ I don't really follow football (or much of any sport for that matter) so meh. I'll be half-watching the game with friends but have no stake in it. You?
-
Can one user be both an owner of one project and a vendor of a different project? If so, it'd seem like a single user class with a quality access control system would be the best way to handle things. For example, rather than give ROLE_OWNER directly to the user, give it to a ($user, $project) pair so in your code you can do something like $project->isOwner($user) to check (perhaps with a voter to integrate it into Symfony's default access control system).

My Symfony system doesn't allow students to log in, so there's only one kind of user. That decade-old application does have separate student/staff users which, if converted directly to a Symfony setup, would result in two distinct classes. If I were going to move the system over I'd probably combine them, as at the login/user account level there's really not much difference between the two. The things that are different I'd probably try and factor out into something separate. I'd have to spend a lot more time thinking about it and maybe experimenting some before making that decision, though.
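For illustration, a voter along those lines might look roughly like this. The Project class, its isOwner/isVendor methods, and the attribute names are placeholders for whatever your entities actually provide:

    use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
    use Symfony\Component\Security\Core\Authorization\Voter\Voter;

    class ProjectVoter extends Voter
    {
        protected function supports($attribute, $subject)
        {
            return $subject instanceof Project
                && in_array($attribute, ['PROJECT_OWNER', 'PROJECT_VENDOR']);
        }

        protected function voteOnAttribute($attribute, $subject, TokenInterface $token)
        {
            $user = $token->getUser();
            if (!is_object($user)){
                return false;
            }

            switch ($attribute){
                case 'PROJECT_OWNER':
                    return $subject->isOwner($user);
                case 'PROJECT_VENDOR':
                    return $subject->isVendor($user);
            }

            return false;
        }
    }

In a controller you'd then check access with $this->denyAccessUnlessGranted('PROJECT_OWNER', $project); instead of checking roles directly.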