Posts posted by gizmola

  1. On 12/11/2024 at 5:46 AM, oslon said:

    It's coming to my notice that a powerful machine is required for practicing this big data stuff like Elasticsearch (ELK stack) on Kubernetes. I have an i5 7th-gen laptop and I am only able to just start the cluster; the other stuff just doesn't run properly.

    Certainly you are pushing the limits on a workstation with a processor generation that was released 7+ years ago.  With that said, my guess is that you might have limited memory available?

    Rather than trying to use Kubernetes, you should try this image:  https://github.com/deviantony/docker-elk

    It comes with a docker-compose.yaml file, so you basically just "docker compose up".

    They indicate that you need 1.5 GB available, which is not an unusually large amount of memory, considering you are running 3 orchestrated containers.  If your workstation is highly constrained on memory, then virtualization of any kind is probably going to be somewhat slow and potentially not very enjoyable.  With that said, I did an awful lot of work in VMs using VMware Workstation, VirtualBox, Vagrant, etc., on machines back in the i5/i7 era.

  2. I don't see an issue, based on what you've shown. 

    The only other thing I can think is that the file /home/cycles/opcache-gui/opcache_blacklist.txt has permissions that don't allow it to be read by the php-fpm process user.  

    Just to be clear, this simple test script is in a webpage and not just a script run from CLI php, correct?

    Opcache is basically useless for PHP CLI because the PHP process that holds the cache is disposed of each time the script ends.

    I'd be interested to know what you currently have in your opcache_blacklist.txt file, and I'd also like to have you add a call to opcache_get_status() (see https://www.php.net/manual/en/function.opcache-get-status.php).
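    As a minimal sketch (the variable names are just illustrative), something like this dropped into the same test page would show both the effective blacklist setting and which scripts are actually cached:

    <?php
    // Configuration as PHP sees it, including opcache.blacklist_filename
    $config = opcache_get_configuration();
    
    // Full status, including the list of currently cached scripts
    $status = opcache_get_status(true);
    
    echo '<pre>';
    var_dump($config['directives']['opcache.blacklist_filename']);
    var_dump(array_keys($status['scripts'] ?? []));
    echo '</pre>';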

  3. When you are using opcache gui, does the "Ignored files" tab appear?  What if anything is in that tab?

    For entire directories you most certainly need a glob like this:

    /home/cycles/public_html/wp-admin/*
    

    With that said, it appears you are trying to set the blacklist file in an .htaccess file?  opcache.blacklist_filename is a PHP_INI_SYSTEM directive, which means it can only be set in php.ini or httpd.conf.

    A page with phpinfo(); would help you verify the settings.
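    If digging through the full phpinfo() output is tedious, a quick sketch like this (purely illustrative) shows what PHP actually has for the directive, which will confirm whether your .htaccess value was ignored:

    <?php
    // An empty string here means the blacklist file was never applied
    var_dump(ini_get('opcache.blacklist_filename'));
    
    // Or limit phpinfo() to the module configuration section
    phpinfo(INFO_MODULES);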

    I would agree, unless there is some objective you have in mind that JavaScript would facilitate and a plain HTML link would not.

    However, since you are on the topic, one often neglected aspect of links is the rel attribute.

    Of particular interest are "nofollow", "ugc" and "canonical".

    There's some solid information about these on this page

    Let me just say that they are very important to search engines and SEO.

    I'll start with "canonical".

    You might want to refer to this google page on the issue, and possible solutions.

    Let's say that you have a site where you have a shoe product, and the main way to get to that product page is to use the direct product url:

    <a href="/product/75">Rockport Men's Wingtip shoe</a> 

    Let's also assume that you have categories and search in your site.  You might have a category page for "shoes" like

    /category/shoes

    And then you can get to the same shoe detail page with a category-scoped product link such as:

    <a href="/category/shoes/75">Rockport Men's Wingtip shoe</a> 

    The search engine crawls your site and, unless you take steps to stop it, will crawl the category pages and find your wingtip shoe page there, as well as finding it in the product catalog at /product/75, which is exactly the same content.  Search engines can thus downgrade your resulting score on the basis of the duplicate content.

    One way to fix this is to declare the canonical URL for the product page.  Note that search engines look for this on a <link> element in the page's <head>, rather than as a rel attribute on the <a> tag itself:

    <link rel="canonical" href="/product/75">
  5. 14 hours ago, oslon said:

    Really nothing lol. That's why I am here. My job is to check logs of application and find out the errors. It's tough to automate it. I just deliver logs to developers. 

    Yes, well, that is often both a system administration/DevOps problem and potentially something you outsource (log analysis).

    There are an incredible number of SaaS companies that offer log processing and analysis products. This general area is often referred to as "telemetry", and it can cover things like error/exception analysis.   Is your company using any of these products or services?

    There are tools/platforms like Grafana, Splunk or New Relic that you should investigate, just as a starting point.

    Grafana is a service built upon open source components, so I'd suggest you start by making a free Grafana cloud account and explore the product through their demos, sandboxes and tutorials page.

    Often these products provide a platform to develop reporting, dashboards and analysis.    If your company is not doing that, it is a great opportunity for you to lead them to that solution.

    There are also options to self host you can explore as in for example the ELK stack ( Elasticsearch, Logstash, and Kibana).

     

    As an aside, you should evaluate re-building with Symfony.  Laravel and Symfony are by far the best PHP frameworks at this point, and they have many similarities.   Both are Dependency Injection (DI) frameworks, so you want to spend some time getting comfortable with what DI is and how you would use that pattern in the code you write.   DI also enables features like autowiring and lazy loading (loading classes intelligently when they are needed, rather than kitchen-sink loading of classes you might never use in a request), which the framework will handle for you, so long as you understand it.
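    To make the DI idea concrete, here is a minimal sketch of constructor injection; the Mailer and WelcomeController classes are made up for illustration, but this is the shape of code that both frameworks can autowire for you:

    <?php
    // A hypothetical service class
    class Mailer
    {
        public function send(string $to, string $message): void
        {
            // ... deliver the message
        }
    }
    
    // The controller simply declares what it needs; the framework's
    // container builds and injects the dependency (autowiring), and only
    // when this controller is actually used in a request (lazy loading).
    class WelcomeController
    {
        public function __construct(private Mailer $mailer)
        {
        }
    
        public function register(string $email): void
        {
            $this->mailer->send($email, 'Welcome aboard!');
        }
    }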

    CodeIgniter is a fine, but very old and minimal, framework.  Depending on the features of your app, you may have had to write your own code to implement features that either of these frameworks would have provided support for.

    They are also built upon component libraries, so there is a bit of mixing and matching you'll see; for example, some Laravel developers prefer Symfony's Twig templating system and will integrate that instead of Laravel's Blade.  I personally prefer the Doctrine ORM for working with relational database code, as it's designed around the "Data Mapper" pattern rather than Active Record, which is what Laravel's Eloquent uses.   In either case, CI doesn't come with an ORM, so that is new territory.  You don't have to convert to the use of an ORM, but in most cases you will want to.

    5 hours ago, LaraDavies said:

     

    • Does Laravel offer advantages in scalability, modern features, or real-time functionality like chat and notifications?
    • How do they compare in terms of security and performance for high-traffic platforms?

    Scalability is achieved through architecture.  No monolithic framework is scalable on its own.  With that said, you have to make decisions as to how you will achieve scalability.  As both Symfony and Laravel have been used to develop high-traffic consumer sites and applications that are architected to support high transaction rates and the features you listed, there is ample support for implementing a scalable architecture.  On the flip side, experience in these areas is harder to come by, and entire books have been written about it.  These days, scalability of the required infrastructure typically involves the expertise of DevOps engineers who, along with developers, create deployment infrastructure and features that allow for this scalability.  For example, one of the first issues you can hit (outside of an uncached database with too much data and too many queries hitting it) is having enough frontend application servers to handle the request load.  So you need frontend application code that was designed to be 1-of-N, and that was not designed or configured as a monolith.

    The simplest example of this is session handling.  Using PHP sessions, the session files will by default be written to local disk.  When user requests exceed the capacity of a single server, what do you do?  Let's say you add a second server; this requires some form of load balancing or reverse proxying.  How do you handle sessions then?  There are a variety of approaches you can take architecturally.  The code will likely be the same, but the configuration may need to be different to allow for scalability.
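    As a rough sketch of one common approach, assuming the phpredis extension is installed and a shared Redis instance is reachable from every application server (the host name below is just a placeholder), the change can be as small as this, normally made in php.ini rather than in code:

    <?php
    // Store sessions in a shared Redis instance instead of local files
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://redis.internal:6379');
    
    session_start();
    $_SESSION['user_id'] = 42; // now visible to every frontend server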

    As for security, both Symfony and Laravel have been built with security in mind, and have features that encourage and support it.  With that said, one can always go around best practices or features that enable additional security.  If however, you work with what the frameworks provide you have a solid foundation.

    Chat: it can be done different ways, but HTTP is inherently not designed for long-running persistent socket connections.  WebSockets is the leading alternative to support this, but it requires separate infrastructure.  For that reason, Platform-as-a-Service (PaaS) companies like Pusher exist to support this.  You could also self-host something like a Mercure server (see https://mercure.rocks/) or use a company like Pusher (https://pusher.com/websockets/).

    Both Symfony and Laravel have community options (component libraries) that make working with these WebSocket wrapper platforms straightforward, and they have been used by many companies.  There is also Twilio, which provides both text messaging and general messaging APIs, as well as telephony features like masked calling.  I feel like I'm getting far afield here, but I've worked on several projects for a consumer service company that made extensive use of Twilio, although that company was profitable and well funded, so the inherent costs were the subject of contract negotiations and not a concern at the time.  Pusher and hosted Mercure both have free development tiers and, in my opinion, are reasonably priced when you consider the costs you would incur to self-host this additional infrastructure, so that is what I would recommend.

    This should also tell you a lot about the need for architectural planning and for building to that plan.  Again, I can only say that Symfony and Laravel have been used to create services and backends for applications that serve large numbers of simultaneous users, while offering asynchronous processing of messaging, email, etc.

    6 hours ago, LaraDavies said:

     

    • Is Laravel's code structure easier for maintenance and future updates?

     

    Yes, although Symfony has a release process that offers LTS.  See https://symfony.com/doc/current/contributing/community/releases.html

    With Laravel the window is 12 months (24 for security releases) so to be completely supported, you are looking at a fairly constant cycle of at very least minor version upgrades.  See https://laravelversions.com/en

    My last comment on Laravel:  

    What many people like about Laravel is that it has facades.  Initially, facades are attractive to new developers because they are "magical" and appear to make things simple. They are a foundation of Laravel and probably one of the reasons it gained popularity so rapidly.  Essentially, what Laravel facades do is add glue that makes a class look like its public methods are static, when in fact this is an illusion created by glue code that is part of Laravel.

     

    You can read more about Facades here:  https://laravel.com/docs/11.x/facades

    As I mentioned earlier, Laravel and Symfony are both Dependency Injection frameworks, so facades are something different, as the Laravel documentation plainly discusses.  In practical use, facades and the associated "helper" functions are ways to just use services that are built into Laravel.  So they encourage you not to use Dependency Injection, which makes the code less maintainable, and also harder to write tests for, because you can't mock the dependencies.  If you read the facades page you'll notice that their answer for this is that facades have a built-in method to get around the issue, so you essentially write a test that works around what you would have written had you just used the DI pattern and a constructor parameter in the first place.

     

    So in their documentation they have this example where a Route handler (also a facade being used here, but never mind that) first returns a Response object using the json() method, and then the same thing is done using the global "helper" function response()->json().

     

    use Illuminate\Support\Facades\Response;
     
    Route::get('/users', function () {
        return Response::json([
            // ...
        ]);
    });
     
    Route::get('/users', function () {
        return response()->json([
            // ...
        ]);
    });

     

    So whipping something out fast might appeal to someone, because they don't need to understand the details of how it all works, although of course every feature like this comes with a runtime cost.  But the longer-term question that has to be asked is: when you can write standard DI code, skip facades and helpers entirely, and get code that is easier to understand and easier to test, why not do that instead?  Since Symfony doesn't have facades, it is not something you would ever concern yourself with there, although you can also just use Laravel and inject the dependencies.
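    For comparison, here is a sketch (not taken from the Laravel docs) of the same route written with injection instead of the facade; Laravel will resolve a type-hinted dependency like the response factory contract from its container:

    use Illuminate\Contracts\Routing\ResponseFactory;
    use Illuminate\Support\Facades\Route;
     
    Route::get('/users', function (ResponseFactory $response) {
        // The container injects the factory; no facade involved, and the
        // dependency is easy to swap or mock in a test.
        return $response->json([
            // ...
        ]);
    });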

    The reality is that if you go with Laravel, you will be encouraged through the documentation to make use of facades and or helper functions, and you should know that going into the project.  

  7. 16 hours ago, oslon said:

    I mean I want project ideas.

     

    Scripting is typically employed to solve problems that you have.  What is something you do that is repetitive and you'd like to automate?  Is there information you would like to gather, or data to convert or visualize?  These are questions you need to be asking yourself, and project ideas will come to you.

  8. On 11/26/2024 at 1:38 PM, requinix said:

    :psychic:

    "White and orange" kinda sounds like Xdebug's output.

     

    Yeah, thought the same thing.

    Symfony provides a really nice package that makes it simple to add a debugging and profiling toolbar for development; however, as far as I am aware, it depends on the Symfony core framework.

    @PeterBubresko

    Laravel has something similar, that is built upon this component:  http://phpdebugbar.com/

    As a separate component, it can be used in any application you are developing, and it has some support for a number of component libraries you might also benefit from, like Monolog or Doctrine.

    You do need to know some of the basics of using Composer to install PHP Debugbar, but it is otherwise quite easy to use, and it has decent documentation.
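    Roughly, once it is installed via Composer, the quick-start usage looks like this (reconstructed from memory of their docs, so double-check the current README):

    <?php
    use DebugBar\StandardDebugBar;
    
    require __DIR__ . '/vendor/autoload.php';
    
    $debugbar = new StandardDebugBar();
    $debugbarRenderer = $debugbar->getJavascriptRenderer();
    
    // Collectors are accessed like array entries
    $debugbar['messages']->addMessage('Hello from Debugbar');
    ?>
    <html>
        <head>
            <?php echo $debugbarRenderer->renderHead(); ?>
        </head>
        <body>
            ...
            <?php echo $debugbarRenderer->render(); ?>
        </body>
    </html>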

  9. 5 hours ago, oslon said:

    I am looking to get into shell scripting writing scripts. And I was really out of ideas. So, I am looking into python and php shell scripting project ideas.

    Yes, well, shell scripting is typically done with features that are built into the Bourne shell or Bash.  It is quite typical for Linux packages to come with scripts that use these features.  There are also the popular Unix utilities sed and awk, which both provide a modicum of programming features.  Long before Python, Perl was a popular language for scripting, and it is still highly performant and feature rich.   Entire sites of yesteryear (the original Slashdot being the prime example) were written in Perl.

    I do think that learning Python is a worthwhile pursuit, and in terms of its use in the scripting world, it is the foundation for the popular DevOps tool Ansible, which I use frequently to automate any number of DevOps tasks in my professional work.

  10. Did you mean XAMPP?  

    Quote

    In general, no one can figure out this PHP, as I understand it, errors pop up on their own on elementary things, is this solvable, or is there another language where this does not happen?

    I'm sure a competent and experienced PHP developer could figure out whatever the issue is.

  11. You should be able to use the unique constraint in the validator.

    return Validator::make($data, [
        'fname' => ['required', 'string', 'max:255', 'alpha'],
         'username' => ['required', 'string', 'max:255', 'alpha_num', 'unique:User,email'],
    
    etc...

    What this does in Laravel is that, at validation time, Laravel will query the table to test for an existing row with the same email the user is trying to provide for this new registration.

     

    With that said, it is never a bad idea to enforce your desired database rules in the database itself, as Barand advised.  Assuming this is MySQL, the way to do that for the email column of the user table is to create a unique index on that column.  Given that the system will be looking up users by the email column (as the username), there should be an index on that column anyway, but in this case the index should be unique.

    With MySQL, KEY and INDEX are two names for exactly the same thing.
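    In Laravel, a migration sketch for adding that unique index might look like this (assuming the conventional users table; adjust the table and column names to your schema):

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;
    
    return new class extends Migration {
        public function up(): void
        {
            Schema::table('users', function (Blueprint $table) {
                // Enforce uniqueness at the database level as well
                $table->unique('email');
            });
        }
    
        public function down(): void
        {
            Schema::table('users', function (Blueprint $table) {
                $table->dropUnique(['email']);
            });
        }
    };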

     

     

  12. I am getting to this question late, and I see that people have been diligently working on it, but I do want to interject that the environment is very important here.

    Are you testing something locally, or even running a local app?

    Hopefully you are aware that the default behavior for php sessions is to set a cookie.

    Hopefully you are also aware that cookies are the responsibility of the client/browser to accept and once accepted, to return to the server in the HTTP header of every subsequent HTTP request.

    However, cookies also are relevant to a domain (and in some cases) to a subdomain.  

    For security reasons, browsers have historically required a domain to contain at least one dot before they would set a cookie for it.

    Because there is so much confusion regarding the use of localhost as an alias, and how it may or may not work as a domain, I never utilize it in development.  

    I will use the .test TLD, as it (along with a few others) has been reserved and will never be delegated as a real public TLD.  You are then free to add domains and subdomains as you like, which you can put into your /etc/hosts file.

    I tend to use a project name, so I might set something like this up for project foo:

    // /etc/hosts file
    127.0.0.1	localhost
    255.255.255.255	broadcasthost
    ::1             localhost
    127.0.0.1	foo.test www.foo.test

    You can of course have as many of these aliases as you want added to each line.

    There are also tools people commonly use, like dnsmasq, to manage a home DNS setup, and they can even be used on your workstation, but that's more complication than I want to bring up here.

     

    In summary, again without full context here, this could be environmental, depending on where the server is running (and whether this is a local/intranet server).

  13. 19 hours ago, us66mo said:

    Excellent, thank you! Interesting that

    foreach (array_keys($tokenArr) as $val)

    I used elsewhere for similar code to be replaced did not work, but your code did.

     

    Well, yes, because array_keys() will give you all the keys at the first dimension of the array, which is probably not what you wanted.

    If you want access to the key in the array you can use:

    foreach($tokenArr as $key => $val) {
    
    }

     

    Example:

    $fruits = array('a' => 'apple', 'b' => 'banana', 'o' => 'orange');
    
    foreach ($fruits as $key => $val) {
        echo "The $key is one $val \n";
    }

    Output is:

    The a is one apple 
    The b is one banana 
    The o is one orange 

     

  14. On 10/20/2024 at 10:39 AM, forum said:

    Of course I tried, but for me what is written here is not all clear. In general I solved this problem in only one way: in the file httpd-vhosts.conf I registered the document root and then everything worked.

     

    Right, well, things in "web space", i.e. under the document root, should always be referenced relative to "/", which in web space is equivalent to the document root.

    This is because client browsers can only see what you make available to them in web space via url's.

    What this means is that things like CSS and JavaScript, or images, will exist in web space.

    Document Root relative (web space)

    Assume you have a header file you want to reference in your html (even if rendered by php) at:

     /var/www/your_project/images/header.jpg

    Assume that your documentRoot for your vhost is /var/www/your_project

    Then your header.jpg file should be referenced relative to the document root:

    <img src="/images/header.jpg">

    The same goes for any css or js files, or any other assets you want available via a direct url.

    NOTE: you should always include the leading "/", which is equivalent to the document root.

    This also makes sure that your code is always portable and not hard coded to a specific file system structure.

    File System Paths

    PHP commands like require_once/include/fopen etc. work with files on the file system.  In other words "web space" is not relevant to PHP.  PHP works with files, so it must know where the files exist. 

    With that said, it is typical that the way to make sure you don't need to hard-wire paths all over the place is to utilize one of the PHP magic constants, and then build relative paths from it for including files.

    The best way to do this is to establish a base variable in a known location, and then to include that file in any script that needs to use it.  This can be problematic depending on the way you have structured your project, but the general idea is to establish this base variable using the __DIR__ magic constant.

    Here is a pretty good blog post covering the functions available to implement these techniques:  https://www.gavsblog.com/blog/move-up-directory-levels-relative-to-the-current-file-in-php

    So let's assume the path is:

    /var/www/your_project/config

    In this directory you create a script named path.php

    <?php
    $baseDir = realpath(__DIR__ . '/..');

    The important thing to understand is that $baseDir will be /var/www/your_project

    It does not matter where the file that includes the path.php script lives; $baseDir will always be that filesystem path.

    So the only thing that is important, in order to build off $baseDir, is to include path.php relative to the script that needs it.  At that point you can construct all other paths, so long as you know them relative to your_project.

    So let's say that you have a bunch of scripts in a directory named "utility", one of them named file.php.  The real path of this would be:

    /var/www/your_project/utility/file.php

    Assume that I have a script that needs to include a function defined as  "function myFile($path)".  

    We'll also assume that you have some files stored outside the webroot that you want to deliver via PHP.  One in particular will be at the actual real path of:

    /var/www/your_project/files/pdf/info.pdf

    Assume you have an index.php file that is in a project subdirectory:

    /var/www/your_project/admin/index.php

    <?php
    // your_project/admin/index.php
    
    // this gets the $baseDir variable
    require_once(__DIR__ . '/../config/path.php');
    
    // Now we need access to utility/file.php
    require_once($baseDir . '/utility/file.php');
    
    $result = myFile($baseDir . '/files/pdf/info.pdf');

     

    What this all does is ensure that you never have to configure or hard-code directory paths into variables or into your scripts with fragile relative paths.  For the most part, the major frameworks use techniques like these, along with front controllers, to ensure that you never have to hard-code a specific directory path structure into your application.

  15. On 10/31/2024 at 6:34 PM, PHP5000 said:

    20000 thanks to each of you.  I am not sure if my hosting company allows access to such settings mac_gyver this would be very helpful.  I am yet to find a good hosting company with proper tech support and services.  Barand it is not just for being pretty.  On what planet we start the date with year?  It is like starting an address with the country.  That said, you been very helpful and you can start the date with seconds as far as I am concerned.

     

    You are free to display the date however you choose.  I want to go one step further with this and explain that a date/datetime is relative to the timezone setting for the server.

    There really is only one right option in my opinion, and that is for the server to use UTC/GMT, so when a value gets stored it is assumed to be UTC.  Most likely your hosting company has configured the MySQL server to use UTC as its timezone, but you should probably check.

    What should you display to an end user?  What they should see is relative to THEIR timezone and locale.

    Now, you may very well want to hardcode this into your application if all your users are in one place (let's say they are in New York), which means that when you get a date value from the database, you then need to convert it to the date/time for the timezone you want to present it in.

    The PHP DateTime class has methods that do all of this for you nicely, and there are also some great PHP libraries out there that will do all sorts of fancy date-to-string manipulations for you, depending on what you want.

    One that I'm pretty familiar with, and that is widely used, is Carbon.

    At minimum you want to take a look at the PHP DateTime class and learn how to create and use DateTime objects.
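    A minimal sketch of the store-as-UTC, display-in-local-time idea (the New York timezone and the format string are just examples):

    <?php
    // Value as retrieved from the database, assumed to be UTC
    $stored = '2024-11-01 14:30:00';
    
    $utc = new DateTime($stored, new DateTimeZone('UTC'));
    
    // Convert a copy for display in the user's (or site's) timezone
    $local = clone $utc;
    $local->setTimezone(new DateTimeZone('America/New_York'));
    
    echo $local->format('m/d/Y g:i A'); // 11/01/2024 10:30 AM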

    With modern PHP sites, you don't want your PHP class and source files to be in the webroot.  For the most part, modern websites use a "front controller" pattern where bootstrap, config and routing are done through one script (index.php).  There are many reasons for this, but even if you don't have a project that uses a front controller, you still want to move any config and include directories outside of the web root for security reasons.  It also makes finding the path to those scripts simpler, but I won't go into the techniques to do that.

    This is all (and hopefully should be) facilitated by setting up a composer.json file for each of your projects.

    You can set one up for code you already have by running "composer init" in the project directory and answering a few questions (most of which you will just answer no to) so that it generates the composer.json file for you. You can then issue other commands or hand-edit the composer.json file as your project requires.

    For all these reasons, and as many people also use Composer to generate the autoloader, you put your classes under project_name/src.

    Hopefully you understand PHP namespaces.  You can namespace inside the src directory if you want, or not.  Typically this is just a decision of having a subdirectory named for your company/organization, etc., but you can also forgo that and create directories for individual types of components.

    So for a simple Symfony project you might find in project_name/src:

    Controller/
    Twig/
    Doctrine/
    Repository/
    Kernel.php
    Form/
    Entity/

    For any classes or function libraries you write, you should put those under the src directory, either in their own directory or (as with the Kernel.php script in this example) just at its root. The organization in there is up to you.  By default, initializing a composer.json file in the project directory will configure the autoloader to load anything inside the src/ directory so that it can be referred to via the App namespace.

    This is via a composer.json setting like this:

     

        "autoload": {
            "psr-4": {
                "App\\": "src/"
            }
        },

      

    Another reason for doing this is that, when using Composer, component libraries are placed in a vendor directory, and beyond that there might be tests and other artifacts.  If your site uses Docker, even if it is just for development, you want a place for project files and directories to go that is not in web space.

    So the de facto standard is to have a project/public directory, which is where files that should be in web space go. Then you map your webroot to that.
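    As a tiny illustration (the Greeting class is made up), with the PSR-4 mapping shown above, a front controller in public/index.php can reach anything under src/ through the Composer autoloader:

    <?php
    // src/Greeting.php
    namespace App;
    
    class Greeting
    {
        public function hello(string $name): string
        {
            return "Hello, {$name}!";
        }
    }

    <?php
    // public/index.php  (the only PHP script that lives in web space)
    require_once __DIR__ . '/../vendor/autoload.php';
    
    use App\Greeting; // resolves to src/Greeting.php via the PSR-4 mapping
    
    echo (new Greeting())->hello('world');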

     

    There are different ways to run and configure PHP today, but the important thing is that the process that needs access to the PHP files is able to read them.  This might be the apache user, but it also might be a different user if you are using php-fpm, or you might not even be using Apache as the web server, as with nginx.

    This structure has been formalized and I found this helpful repo which documents the structure and the purpose of each directory.

    So when developing websites this is what you typically will want to use, or will see with a framework skeleton:

     

    project_name/
    ├── config
    ├── public (web root)/
    │   ├── js
    │   └── css
    ├── bin
    ├── docs
    ├── src/ (your classes here)
    ├── tests
    └── vendor (composer will create)

     

    So essentially, things in webspace go in the public directory.

    Anything else stays out of webspace.  That should be all files that are required/included.

    If you have any command line utility scripts put them in bin.

    For deployment on Linux, there is often a default mapping of /var/www for Apache, and what you can then do for any project is to move/copy/git-checkout the project to /var/www/project_name.

    You remove the default setup and, for each site, you have a vhost that sets the documentRoot to /var/www/project_name/public.

    This standard structure works very well from local development to eventual production deployment.

  17. 7 hours ago, iexit said:

    Want Host Website but PHP8 & MySQL8 was not work, which is best, Xampp is out to date like many other same panels

     

    Your question is not clear.  

    Are you looking for a hosting company, or trying to do local development?

    XAMPP has been updated to the latest PHP version.  It does not use MySQL at all, but rather MariaDB.  It appears that both are up to date.

    What I am saying is that this might be a data problem, not a code problem.

    It is designed to show you events in the future.

    If the date of the events is not in the future, those rows will not be returned. 

    That is not an error or a bug, that is working as expected.

    Based on your GitHub, I would agree that this is where things may or may not be working.

     

            <h2 class="headline headline--small-plus t-center">Upcoming Events</h2>
    
            <?php 
              $today = date('Ymd');
              $homepageEvents = new WP_Query(array(
                'posts_per_page' => 2,
                'post_type' => 'event',
                'meta_key' => 'event_date',
                'orderby' => 'meta_value_num',
                'order' => 'ASC',
                'meta_query' => array(
                  array(
                    'key' => 'event_date',
                    'compare' => '>=',
                    'value' => $today,
                    'type' => 'numeric'
                  )
                )
              ));
    
              while($homepageEvents->have_posts()) {

     

    So notice what is actually happening in this code.  There is a comparison being done '>=' to event_date meta data.

    So, what this indicates, is that even if you have rows in the table, and there is a meta key by that name, the values must be in the future, or there will be no results returned.

    Assuming you copied all this code from some source, the data still must qualify, or the query can return an empty set, which is not an error and would explain a blank page, since the while loop will never be entered because $homepageEvents->have_posts() will return false.
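    A quick way to confirm that is to guard the loop and print something when the query comes back empty; a sketch (the message text is arbitrary):

    <?php
    if ($homepageEvents->have_posts()) {
        while ($homepageEvents->have_posts()) {
            $homepageEvents->the_post();
            // ... your existing markup for each event
        }
        wp_reset_postdata();
    } else {
        echo '<p>No upcoming events found. Check the stored event_date values.</p>';
    }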
