JacobSeated

Members
  • Posts

    26
  • Joined

  • Last visited

Profile Information

  • Gender
    Male

JacobSeated's Achievements

Member (2/5)

Reputation: 1

  1. Wordpress websites can be very difficult to optimize, but it depends on which theme you are using. Some theme developers have no idea what they are doing and just load all their CSS in one file, using extremely inefficient selectors.

     I have a client who asked me to optimize his site, and what I basically established was that little could be done. He relies on external JavaScript files, such as chat widgets, Analytics, and Google Maps. Such things tend to slow down a site a lot. Luckily they typically load asynchronously, so users will not notice it much, although the PageSpeed tool might still complain about them.

     Another thing you really should look into is the size of your CSS. If you use Divi or similar themes, the CSS tends to grow extremely large. A good CSS file is around 10-15kb max; I have seen examples of CSS approaching 1MB in size! The best solution is to stop using Divi and make your own custom-coded designs.

     I do not know of any good plugins to optimize CSS. They are not "intelligently aware" of the CSS, which means they might actually make things worse. Autoptimize will just take all of your CSS and combine it into one big file, which is very inefficient and might cause other problems. Code included in the CSS file should generally only concern stuff that is used sufficiently often, on multiple pages. If you can identify your "essential" (shared) CSS, then you can embed it in the <head> of your site to speed up the load time; see the sketch after this post.

     Images will typically not be the hardest thing to optimize, as you can resize and compress them. There are even plugins that will auto-convert to .webp. But try running your site through Lighthouse in developer tools or PageSpeed Insights; that is how the rest of us try to optimize our pages.
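     A minimal sketch of what inlining that "essential" CSS could look like, assuming a small, hand-picked critical.css file exists next to the template (the file name and path are just placeholders):

     <?php
     // Inline the small, shared CSS so the first render does not wait for an extra request
     $critical_css = file_get_contents(__DIR__ . '/critical.css');
     ?>
     <head>
         <style><?php echo $critical_css; ?></style>
         <!-- the larger, page-specific stylesheet can still be loaded normally -->
         <link rel="stylesheet" href="/my_css_file.css">
     </head>

     The point is to keep the inlined part tiny; everything that is not needed for the first paint stays in the external file.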
  2. Just a few tips if you are coding vanilla PHP. As others have said, you can create "pretty URLs" with mod_rewrite; but a better way to go about it would be to point all requests for non-existent files to PHP, since it is much easier to prettify your URLs from PHP than it is with the horrific syntax of .htaccess. Here is an example for .htaccess:

     RewriteEngine on
     RewriteCond %{REQUEST_FILENAME} !-f
     RewriteCond %{REQUEST_FILENAME} !-d
     RewriteRule ^.*$ / [QSA]

     Then, from your index.php file:

     $parsed_url = parse_url($_SERVER['REQUEST_URI']);

     $routes = [
         '/^\/(blog)\/([a-z0-9_-]+)$/',
         '/^\/(forum)\/([a-z0-9_-]+)$/'
     ];

     foreach ($routes as $url_pattern) {
         // preg_match() returns 1 on a match, 0 on no match, and false on error,
         // so move on to the next pattern unless this one actually matched
         if (1 !== preg_match($url_pattern, $parsed_url['path'], $matches)) {
             continue;
         }
         // The requested path matched a pattern, try to call the related feature
         $requested_feature = $matches[1];
         $feature_path = $matches[2];
         if (is_callable('feature_' . $requested_feature)) {
             call_user_func('feature_' . $requested_feature, $feature_path);
         }
     }
     // No route matched, or the matched feature did not exist
     http_response_code(404);
     echo 'Page not recognised...';
     exit();

     function feature_blog($feature_path)
     {
         http_response_code(200);
         echo 'Showing the blog';
         exit();
     }

     function feature_forum($feature_path)
     {
         http_response_code(200);
         echo 'Showing the forum';
         exit();
     }

     This is of course not an ideal way to handle it, but a pretty good starting point. Modern applications also use templates; again, a good starting point, if you do not use a template engine, would be to store your HTML in heredoc, inside separate .php files:

     <?php
     $template = <<<LOADTEMPLATE
     <!DOCTYPE html>
     <html lang="en">
       <head>
         <title>{$tpl_content['title']}</title>
         <link rel="stylesheet" type="text/css" href="/my_css_file.css">
       </head>
       <body>
         <article>
           <h1>{$tpl_content['title']}</h1>
           {$tpl_content['content']}
         </article>
       </body>
     </html>
     LOADTEMPLATE;
     // Comment to preserve required "\n" character after heredoc-end delimiter on editor saves

     You can easily load this file from whatever location you want, and then have it filled out automatically with the contents of the $tpl_content array; just remember to define the array elements to avoid undefined notices. To output the template, filled out with contents, you could do this:

     function feature_blog($feature_path)
     {
         // Define template content
         $tpl_content['title'] = 'Hallo World';
         $tpl_content['content'] = '<p>Hallo World</p>';
         // Include the relevant template
         require_once('templates/default.php');

         http_response_code(200);
         echo $template;
         exit();
     }

     Keep in mind, this is just an example, but you could easily use it as a base for something more mature. A decent system would also allow you to set HTTP response headers, implement caching mechanisms, and allow you to restrict HTTP request methods on a per-feature basis. But if you use a framework or a CMS like Wordpress, some of this should be handled automatically.
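     Just to make that last point a bit more concrete, here is a minimal sketch of per-feature request-method restrictions that could sit in front of the dispatch above; the $allowed_methods map is made up for illustration:

     // Hypothetical map of which HTTP methods each feature accepts
     $allowed_methods = [
         'blog'  => ['GET'],
         'forum' => ['GET', 'POST'],
     ];

     // Reject the request before dispatching to the feature function
     if (!in_array($_SERVER['REQUEST_METHOD'], $allowed_methods[$requested_feature], true)) {
         http_response_code(405);
         header('Allow: ' . implode(', ', $allowed_methods[$requested_feature]));
         echo 'Method not allowed...';
         exit();
     }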
  3. You do not need a database at all. In fact, it is sometimes faster not to use a database, since it takes extra time to establish a database connection (unless it is hosted on the same server). I already tried this. The code is on GitHub under the Apache license; maybe you can use it in your project. It only uses built-in features of PHP and the GD library. Available here: https://github.com/beamtic/php-photo-gallery

     The project has not been updated for a while, but I still have some features planned, including a Wordpress plugin.
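     For what it is worth, the database-free idea can be as simple as reading the image directory at request time; this is just a rough sketch assuming the images live in an images/ folder, not code from the linked project:

     <?php
     // List image files straight from the filesystem instead of querying a database
     $images = glob(__DIR__ . '/images/*.{jpg,jpeg,png,webp}', GLOB_BRACE);

     foreach ($images as $image) {
         $file_name = basename($image);
         echo '<img src="/images/' . htmlspecialchars($file_name) . '" alt="">' . "\n";
     }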
  4. In my experience, .htaccess (HTTP-based authentication) is actually pretty bad, because the htaccess syntax is so exotic that even experienced developers sometimes struggle to figure it out. But I often use it on client test-sites to easily prevent access and indexing by search engines.

     For my personal projects, I much prefer to just redirect all requests to PHP, even for files that exist, because I find it much easier to do what I need from PHP. In a purely PHP based app, you could configure your VHOST like this:

     RewriteEngine on
     RewriteRule ^.*$ index.php [L,QSA]

     That should basically rewrite all requests to your PHP application. The downside is that you need to manually handle caching headers for static files, as well as the Range header (range requests) if you want to properly support streaming of video files. I am working on a file handler for this: https://github.com/beamtic/Doorkeeper/tree/master/lib/file_handler

     The good thing about redirecting everything is that it is much easier to secure your application, since it only has a single point of entry. This is nice when using form-based authentication with cookies. Alternatively, if you still want to support some public static files and have them delivered by your web server, you can use the following configuration:

     RewriteEngine on
     RewriteCond %{REQUEST_FILENAME} !-f
     RewriteCond %{REQUEST_FILENAME} !-d
     RewriteRule ^.*$ / [QSA]

     This would still allow you to keep some private static files outside of your web root, and then have a file handler script serve them to the user after the user has authenticated; see the sketch after this post. The configuration should also work from .htaccess, but I recommend just throwing it in your VHOST config. It should also be more efficient, although that is probably insignificant in most cases.
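     As a rough illustration of such a file handler (a much reduced sketch, nothing like the Doorkeeper one; the session check and the private_files directory are placeholders):

     <?php
     session_start();

     // Only authenticated users may fetch private files (placeholder check)
     if (empty($_SESSION['user_id'])) {
         http_response_code(403);
         exit('Access denied');
     }

     // Private files live outside the web root, so the web server cannot serve them directly
     $base_dir = realpath(__DIR__ . '/../private_files');
     $requested = realpath($base_dir . '/' . basename($_GET['file'] ?? ''));

     // Make sure the resolved path really is inside the private directory
     if ($requested === false || strpos($requested, $base_dir) !== 0) {
         http_response_code(404);
         exit('File not found');
     }

     header('Content-Type: application/octet-stream');
     header('Content-Length: ' . filesize($requested));
     readfile($requested);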
  5. That makes sense, thanks. I am now wondering if I should use preg_replace_callback or if it is safe to use addslashes. Guess I have to read up on those.
  6. I recently fixed a bug on my own website involving preg_replace. The thing is, each time I was editing an article in the front-end, preg_replace would be called on the HTML to replace certain HTML elements. What I suspect is that preg_replace has a bug that causes it to remove certain characters from the replacement string, leading to corruption of the output. Obviously only the HTML (haystack) should be modified, and the replacement string dropped in place of the needle without being modified in any way; this is not what happens if the replacement string contains backslashes.

     I have tried to figure out what exactly is going on here, and have come up with a fix by using str_replace instead. But I wonder if there is a solution that would allow me to keep using preg_replace? I also wonder if there are other characters that might be removed during the replacement operation?

     I know that you must escape backslashes when declaring variables, but the replacement string is obtained directly from a MySQL database, and I know the data is OK. The fact that you need to escape literal backslashes in PHP scripts makes it harder to debug the problem. For example, if you just try my solution directly, without addslashes, you will be missing backslashes. I guess you either have to escape those, or load the data from a file. This is my current solution (I do not use addslashes in the live version):

     $html = '<div>REPLACEMENT_ID</div>';
     $replacement_id = 'REPLACEMENT_ID';
     $replacement = addslashes('<pre>\\\\</pre>');

     // $html = preg_replace("|{$replacement_id}|", $replacement, $html);
     // $html = str_replace($replacement_id, $replacement, $html);

     $pos = strpos($html, $replacement_id);
     if ($pos !== false) {
         $html = substr_replace($html, $replacement, $pos, strlen($replacement_id));
     }

     print_r($html);

     If you comment out the substr_replace test, and instead uncomment the preg_replace one, then you will get an inaccurate number of backslashes, similar to the result I got when using data directly from my database. Hope someone can help shed some light on this 😄
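     For what it is worth, a small sketch of the usual workarounds; the assumption here is that the behaviour described above comes from preg_replace parsing the replacement string for backreferences (so backslashes and dollar signs are interpreted), while a preg_replace_callback return value is inserted verbatim:

     <?php
     $html = '<div>REPLACEMENT_ID</div>';
     $replacement = '<pre>\\\\</pre>'; // two literal backslashes

     // Option 1: escape backslashes and dollar signs before handing the string to preg_replace
     $escaped = addcslashes($replacement, '\\$');
     $result1 = preg_replace('/REPLACEMENT_ID/', $escaped, $html);

     // Option 2: return the replacement from a callback, which is not parsed for backreferences
     $result2 = preg_replace_callback('/REPLACEMENT_ID/', function ($matches) use ($replacement) {
         return $replacement;
     }, $html);

     print_r($result1); // <div><pre>\\</pre></div>
     print_r($result2); // <div><pre>\\</pre></div>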
  7. Hallo again, it turns out I was on a completely wrong track. If the theme is object oriented, then it may have something like this at the top of each PHP class:

     if ( ! class_exists( 'TravShortcodes' ) ) :

     If that is the case, then it will only apply the parent class if it has not already been included. This means that you should just copy the file over to the child theme folder, and then include it from your functions.php:

     include_once get_stylesheet_directory() . '/shortcodes.php';

     This way you can easily modify any aspect of the parent that you want. It is important you remember to include the file, since simply copying it is not going to be enough. Thanks. This is now solved.
  8. I am making some progress in debugging this. Of course I did not mean "filter", as I am trying to override an existing shortcode. Anyway, I managed to verify that the shortcode has been added. I noticed the parent theme is using the init hook to add shortcodes. I then tried print_r($shortcode_tags) again after using the init hook in my child theme, with a priority of 20, and I was finally able to verify that the shortcode is loaded. I.e.:

     add_action('init', 'replace_parent_theme_features', 20);

     function replace_parent_theme_features() {
         global $shortcode_tags;
         // Debug output; note that exit() stops execution before the shortcode is replaced
         print_r($shortcode_tags);
         exit();
         remove_shortcode('shortcode_search_group');
         add_shortcode('shortcode_search_group', 'child_shortcode_search_group');
     }

     Apparently the priority is also important. I have a feeling I am getting close, but I still cannot get remove_shortcode to work for some reason.
  9. Hallo PHP geeks. I need to override a parent-theme shortcode in Travelo from a child theme; I can edit the parent theme files just fine, but when I try to override the shortcode function from the child theme, nothing seems to happen, regardless of which hook I use to do so.

     I can tell that my filter function (add_new_child_shortcodes) is called, because when I echo directly from it, I do get my output. This indicates that remove_shortcode is called at the wrong time, and the parent function is never removed. From experimenting I know that the child-theme functions.php file is called before the parent's, but according to users on stackoverflow, you should still be able to override the shortcode by using the right hook. I already tried various hooks proposed on stackoverflow and other websites, but none of them worked in my case. This includes the following hooks: after_setup_theme, init, wp_loaded.

     Also, I cannot seem to figure out how to get a list of the loaded shortcodes, to see if the shortcode I am trying to override has actually been loaded. I tried print_r($shortcode_tags) within my filter, but this is empty when called from within a function, and I am also not sure it is the right variable. It does output a bunch of stuff when used outside a function, but obviously not the relevant stuff - presumably because it has not yet been loaded. This is my code so far:

     add_action('after_setup_theme', 'add_new_child_shortcodes');

     function add_new_child_shortcodes() {
         remove_shortcode('shortcode_search_group');
         add_shortcode('shortcode_search_group', 'child_shortcode_search_group');
     }

     function child_shortcode_search_group() {
         echo 'hallo';
         exit();
     }

     I specifically need to replace the shortcode_search_group shortcode, because I need to make modifications to the search HTML. If it is of any help, when I examine the relevant file in the parent theme (inc/functions/shortcodes.php), I can tell that the shortcodes themselves are added with the following code:

     function add_shortcodes() {
         foreach ( $this->shortcodes as $shortcode ) {
             $function_name = 'shortcode_' . $shortcode;
             add_shortcode( $shortcode, array( $this, $function_name ) );
         }
         // to avoid nested shortcode issue for block
         for ( $i = 1; $i < 10; $i++ ) {
             add_shortcode( 'block' . $i, array( $this, 'shortcode_block' ) );
         }
         add_shortcode( 'box', array( $this, 'shortcode_block' ) );
     }

     The $this->shortcodes variable is an array of shortcode names, corresponding with the method names in the class. Help will be much appreciated!
  10. On second look, this pre-configured box is not usable for me, since I would need to learn Bitnami as well, and the paths are non-standard, so I have no idea where to find the files I need to edit. An old-fashioned LAMP install is just much easier for me to work with. I wasted a lot of time trying to get it to work with a domain name instead of the bare IP address. Guess that settles my doubts; I will go for the latest version, with a manually installed Elasticsearch. Strange that I find this much easier than all this auto-installer stuff...
  11. Hallo people, this forum has been a help and inspiration before, so let's see about this one. I am in the situation that I promised my boss to try and learn Magento, possibly with the aim of helping out with development in the future. I have just depleted my mental energy trying to get the Elasticsearch Service offered by AWS to work; I was unable to connect from my EC2 instance (even with the WAN IP approved in the security group). Then I noticed today that AWS has a pre-configured Magento 2.3 option for Lightsail, which is really tempting for me at this point, but that is without Elasticsearch. My understanding is that Elasticsearch will be required from now on, so the question is if I should just go ahead and install 2.4 manually on Lightsail, with a localhost ES? The idea was to launch a Lightsail instance and then install Elasticsearch on it, either as an external service, or together with Apache and MySQL. My existing EC2 instance already has too much stuff running on it, and I also fear it has too little memory for all these services to run reliably without swapping. For now, I will just try their 2.3 setup, as I am really just interested in getting something to work before tomorrow.
  12. I own a website/blog where I write about different technical subjects, mainly web development and coding. No reason to post it here, since I am not trying to advertise. Over the years I have observed some really weird and interesting things that people do relating to my site.

     One time, someone took my entire website and placed it in an iframe. The site that did this was total spam, and I never found out what they wished to accomplish by doing it. Luckily I have not had major problems with plagiarism yet, but I do have scrapers stealing my content and posting it together with spam on other sites. The exact purpose of this is unclear to me, since they mainly post gibberish, and it seems they gain nothing from doing it. What is strange is that some of the sites are using AdSense, and it seems they are not even banned for these activities. It does not seem like this has any negative impact on my rankings; only recently one of my articles, ranked on the first page, got de-indexed, and I had to request re-indexing, after which it quickly returned to the same spot as before.

     Perhaps the strangest thing, and something that is still happening, is that one of my articles is getting blasted with HTTP requests from countless different IP addresses, and this has probably been going on for more than a year now. Most of these are simple GET requests, and I can tell it is not legitimate traffic, since typically the same IP will be using a lot of different user agent strings, and occasionally it might even post a spam comment (POST request). I have now started to ban the most aggressive ones, since they have been making hundreds of requests to the same article, and it was starting to mess with my internal statistics. If I look up the IPs, they will usually have been reported for malicious activity, so I am fine with blocking them. I just do not want to be manually blocking IPs, so maybe it would be better to install a server module to deal with it.

     I am just really curious about these activities, since some of it has been going on for so long, and it just seems completely pointless. Maybe someone here has found out what some of this is about? I also get a lot of spam comments without any links in them, so it is also a mystery to me why someone would want to post all this junk on someone's website with nothing to gain from it. Presumably it is some sort of attempt to influence search rankings indirectly, maybe by getting people's websites de-indexed because of spam comments, but that is just pure speculation on my part.
  13. I used h1's in my section elements for a while a few years back, on my tutorial site, without noticing any negative effect on my traffic. I changed back to h2-h6 + section when it was made known that some assistive tech did not support it yet, and also the document outline got messed up since browsers did not support it either. I still hope that it gets implemented, but I guess I have no real use for it right now. Interesting that you actually tried doing that and it resulted in a 400 error; I was wondering how to test that.
  14. I think those SEO plugins tend to cause more confusion than they add value. I have personally never used them, and I do not care if some of my title tags are too long, duplicated, or if my descriptions are too short. If some of my descriptions are short, then it is probably intentional, and I do not want notifications about it.

     Having said that, chances are that you can enter: https://example.com./ with a "dot" in front of the slash; this is also a useful trick to circumvent certain cookie-walls on websites. Very few websites seem to normalize such requests, since developers are often not aware of it. Afaik, search engines will perceive it just the same, dot or no dot. But I still prefer to redirect such requests; see the sketch after this post.

     I think requinix is correct. Even if a misbehaving client leaves out the path part in an HTTP request, a leading slash will, as a minimum, be "presumed" by the server. But I am not sure about that, or if it is even possible to forge such a request. Normally, the client should always add a leading slash, since an empty path is not valid. See this: http://tools.ietf.org/html/rfc7230#section-5.3.1

     I have also encountered SEO folks/software that erroneously claimed you could not use multiple h1 headings, needed to include the keywords meta tag, and so on, but that is another story... The same goes for <a href=""> links basically: it has absolutely no significance whether we include the slash or leave it out on the bare domain name.

     Another configuration flaw that most websites seem to suffer from is the fact that we can use non-existent URL parameters, i.e. ?something=blah; ideally, doing that should simply result in a 404 error. I do not think these SEO plugins will inform you about these niche cases. Regardless, Google is quite good at choosing the canonical URL, so I will not worry too much about it unless I start having problems, such as seeing the main pages de-indexed and the bugged pages taking their place. Another thing to keep in mind is that we also need to whitelist certain parameters, such as ?fbclid, since we might accidentally block people from sharing links on Facebook and other social media sites.

     Personally, I prefer leaving out trailing slashes on article pages and files, since they indicate a directory or index rather than a page; but it really does not matter. IMHO, it is just ugly to see URLs like this:

     https://www.example.com/some-article-name/
     https://www.example.com/robots.txt/
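     A minimal sketch of normalizing the trailing-dot host from PHP, assuming all requests already pass through a single PHP entry point; the https scheme and 301 status are just what I would pick, adjust as needed:

     <?php
     $host = $_SERVER['HTTP_HOST'] ?? '';

     // Redirect requests where the host name ends with a dot (e.g. "example.com.")
     // to the canonical host, keeping the requested path and query string
     if (substr($host, -1) === '.') {
         header('Location: https://' . rtrim($host, '.') . $_SERVER['REQUEST_URI'], true, 301);
         exit();
     }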
  15. Ahh, it seems PHP uses "**" for exponentiation, while "^" is the bitwise XOR operator. https://www.php.net/manual/en/language.operators.arithmetic.php Somehow that slipped past me. Thanks anyway though 🙂
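     A tiny example of the difference, just to make the gotcha concrete:

     <?php
     // ** is the exponentiation operator
     echo 2 ** 8; // 256

     // ^ is bitwise XOR, not "to the power of"
     echo 2 ^ 8;  // 10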