Everything posted by gizmola

  1. The web is a complicated and confusing interdependent set of technologies, and is often underestimated. Most people get confused about how specific things actually work. There are better tools now, like the Chrome dev tools for example, which are great aids to figuring out how a web application actually works. Javascript, with its frameworks and SPAs, continues to add to the complexity and role of client-side code running in the browser, but I do believe that if you really understand the fundamentals, like where code actually runs and the overall architecture of the various processes involved, you have a better chance of avoiding confusion. HTTP is very important to understand, and the dev tools Network tab is a great way to explore it. A lot of times people just assume that something described in the abstract, like cookies for example, is obvious, but if you don't understand where cookie data lives, in what circumstances the server has access to cookie data, and what restrictions might exist for that data, it can just seem like magic, which leads to confusion. A good understanding of HTTP helps demystify things. When you look at HTTP, it then helps to understand that HTTP is built on top of the TCP protocol. So you can continue to delve into the intricacies of how things work, and gain a deeper understanding as you see fit.
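     To make the cookie flow concrete, here's a minimal sketch (the cookie name and value are made up for illustration). The server asks the browser to store a cookie via a Set-Cookie response header; the browser sends it back on every subsequent request in a Cookie header, which is where PHP's $_COOKIE superglobal gets its data:

     <?php
     // Response side: setcookie() emits a Set-Cookie header asking the browser to store the value
     setcookie('theme', 'dark', ['expires' => time() + 86400, 'path' => '/', 'httponly' => true]);

     // On the NEXT request, the browser includes "Cookie: theme=dark" in its headers,
     // and PHP exposes that server-side in $_COOKIE
     if (isset($_COOKIE['theme'])) {
         echo 'Stored theme: ' . htmlspecialchars($_COOKIE['theme']);
     }

     You can watch exactly these headers go back and forth in the Network tab.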
  2. There is no reason to be writing obsolete/insecure php mysql code in 2022. Literally every veteran member of this forum will offer you the same advice: the mysqli_ interface to mysql is annoying to use. Learn/use PDO instead! There is a fast, easy-to-read guide to using PDO that will teach you everything you need to know, as well as offer best practices. Whether it be mysqli_ or PDO, all variables should be prepared/bound, as Barand advised. It's just cleaner/easier to write your code with PDO. Now, every single person like yourself who is learning has the same reaction when they receive this suggestion: "thanks, I will have to learn PDO ... sometime in the future (aka never)", because they would have to learn PDO and rewrite some of their existing code. I understand that this is a natural reaction to a learning curve and the unknown. With that said, aside from understanding how to set up a PDO connection (sketched below), you really will thank us all later if you make the change now.
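     Here's a minimal sketch of what that looks like, following the practices from that PDO guide (host, database and credential names are placeholders):

     <?php
     // Connection with sane defaults: exceptions on error, assoc fetches, real prepared statements
     $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'dbuser', 'dbpass', [
         PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
         PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
         PDO::ATTR_EMULATE_PREPARES   => false,
     ]);

     // Every variable is bound via a placeholder -- never concatenated into the SQL
     $stmt = $pdo->prepare('SELECT id, username FROM users WHERE email = ?');
     $stmt->execute([$email]);
     $user = $stmt->fetch();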
  3. You already have some useful answers, but I'll throw in my 2 cents, which includes a bit of history. When the world wide web was first evolving, all HTML pages were static. The HTTP protocol, which enables the WWW, describes how a client (browser) can use a URL to reach a server and request a resource (html document). I would hope that at this point you are familiar with the HTTP methods available -- GET, POST, PUT, PATCH & DELETE. So a browser made a GET request via HTTP, and the first servers would resolve the location of a static html document and return it to the client. An html document can reference a number of other pieces, which in the early days meant images, as Javascript and css had not been invented/formalized yet. The job of the browser is to parse and assemble the html document, display it to the user, and handle hyperlinks.

Eventually, people wanted to provide a more interactive experience. A simple example would be a page that showed all the latest news stories of the day. Either someone had to manually update the index page and add static html pages for all the stories, or a program running on the server could be enlisted to create html on the fly. Sophisticated software had existed for a long time to do all of this, and in many ways the WWW was a huge step backwards. Client/server and terminal applications had existed for years prior to the first browser and web server, and people were used to online services like Compuserve and AOL, which had their own client/server protocols and content rendering client software. The WWW was an attempt to democratize this and provide standards that anyone could use. HTTP itself came from scientific organizations who wanted a way to easily share research papers. Much of what we know as the Internet also came from this process, where people would publish standards in the form of "Request for comments", ie. RFCs.

A few groups that had been involved in the creation of the first web servers got a working group together and published RFC 3875, which describes the "common gateway interface" (CGI). CGI standardized the way a web server could invoke a program: the program accepts input from the web server in the form of environment variables with specific names and purposes, and returns its results (in the form of html) to the web server, which then returns that "computed" html to the user. This was the birth of "server side" web programming, and the basic structure of it continues to this day:

   - The web server accepts an HTTP request with a URL.
   - If the web server detects that the URL requires a "CGI program" to be run, it passes variables (user input and standardized CGI web server variables) to the program through the operating system environment.
   - The local server program runs, and returns a "response type" data structure + the actual data to the web server.
   - The web server returns html (or any other valid resource type) to the client.

I mention the resource type because the URL might have been an image, which could be generated on the fly by a server side program, as in the case of a computed graph image.

So this brings us to the various popular server side languages. In the early days of the web, people would often write their CGI programs in C, as the server operating system of choice for most people running servers in the early days of the WWW was Unix. As C is low-level and requires compilation, early web developers started looking for ways to simplify coding. A sketch of the CGI contract follows.
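     This is a hedged sketch of a CGI-style script in PHP, just to make the contract above concrete (environment variable names per RFC 3875; the output format is a header block, a blank line, then the body):

     #!/usr/bin/env php
     <?php
     // Input arrives through environment variables set by the web server
     $method = getenv('REQUEST_METHOD') ?: 'GET';
     $query  = getenv('QUERY_STRING') ?: '';

     // Output is a header block terminated by a blank line, followed by the body
     echo "Content-Type: text/html\r\n\r\n";
     echo '<html><body><p>Method: ' . htmlspecialchars($method) .
          ', query: ' . htmlspecialchars($query) . '</p></body></html>';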
CGI allowed any "program" to be used, which meant that early developers could use something as simple as a bash script chaining together various unix commands. The Perl scripting language was extremely popular at the time, due to its interpreted nature, advanced data structures and many available libraries. For example, it had a relational database client interface (DBI/DBD) that allowed a developer to write simple code to make SQL calls to a relational database. Commercial databases like Sybase SQL Server and Oracle were popular at the time, and the open source MySQL database was seeing a lot of adoption, as the commercial databases were expensive and hard to obtain. Perl, like other interpreted languages, is evaluated when the script is run (at runtime). Most developers found this highly preferable to developing in a compiled language like C or C++, which required a compilation/build process anytime you changed so much as a single line of code.

It was at this time that PHP first appeared, and it was intended to be a language/toolkit specifically aimed at making the development of server-side web scripting simpler. So one of its features was the ability to intermix HTML and PHP scripting blocks in an html document. This is somewhat in the spirit of a competing standard at the time, called "Server Side Includes", which had a similar objective of taking something that was mostly pure HTML and augmenting it with some markup that the server would act upon when the page was requested.

With that said, PHP, like Perl, is interpreted, and thus requires a runtime process to parse the PHP code and execute it. Java also has a runtime, although the main difference in the case of Java is that the runtime engine (the JVM) was designed to be a persistent server process/daemon. In the case of Perl & PHP, a web server running a Perl or PHP script needed to invoke the perl or php runtime program, which would then load the script code and run it. This script code could of course also load/include Perl or PHP libraries and do all sorts of things while running, but unlike Java, it was not intended to have any persistence. This is very different from Java, where you can create objects and have them persist in the JVM (or, in the case of Java EE, a Java application server). PHP programs are run when requested, and all the variables and objects that might have been created are disposed of when the script ends and has returned the response data to the web server.

As PHP became popular, people wanting more performance and less overhead began to look for ways to reduce the cost of starting up the PHP process for each request. By this time, the open source Apache web server had become hugely popular. Apache was designed to be a collection of modules, with a specification for anyone who wanted to add a new feature to the server via a custom module. Developers within the PHP project community created an apache module (mod_php) which essentially made PHP a part of the Apache web server. This meant that instead of the web server having to fork a child program and pass all the variables through the environment, a PHP script could receive that data and access web server variables directly from Apache's shared memory structures. The problem with this idea is complicated and has to do with how Apache services requests. I wrote about this issue on my blog if you want to read more about the underlying issues.
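     As an aside, here's a minimal sketch of the HTML/PHP intermixing mentioned above ($stories is an assumed array, e.g. fetched from a database earlier in the script):

     <html>
       <body>
         <h1>Latest stories</h1>
         <ul>
         <?php foreach ($stories as $story): ?>
           <li><?= htmlspecialchars($story['title']) ?></li>
         <?php endforeach; ?>
         </ul>
       </body>
     </html>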
Around this same time, servers had appeared to challenge Apache, most notably the NGINX web server. NGINX has a fundamentally different architecture from Apache, and had no support for mod_php or anything similar. NGINX was designed to be a highly performant proxy server, so it supports a variety of ways to proxy a request to a server process for handling. NGINX implements "FastCGI", which was developed as an alternative to CGI. Essentially, fastcgi sought to get around the same problem that apache modules were designed to get around -- the overhead of having to create and destroy an OS process for every request to a server side script. NGINX and other servers that support fastcgi (which even includes apache now, via an optional fastcgi module) assume that there is a persistent operating system process that speaks the fastcgi protocol. This was ideal for languages with a persistent runtime application server process, like Java. In order to make this work for PHP, a persistent PHP application server that supports fastcgi was developed within the PHP project: php-fpm. Internally, php-fpm maintains a number of persistent php processes which can be used to run a php script. A proxy web server like Nginx can be configured to send requests for a php script to php-fpm via the fastcgi protocol, and return the results to the client (see the config sketch below).

I hope this somewhat long and verbose history of web servers helps answer your question and gives you some background to clarify how PHP works. At the end of the day, a PHP script is a PHP script. There aren't a lot of different types of PHP files -- there is only .php. Frameworks have introduced template languages, and PHP can parse files of various flavors, but in all cases the end result is the execution of PHP code. Unlike Java, there was never an effort to define specific file formats to do different things, as in the case of Java servlets vs JSP vs JSTL. PHP was designed with the intention of being a language that would work with and be integrated with a web server via CGI. It has many design features focused on that, and is not generally considered a general-purpose scripting language like Perl, Ruby, Java, Node.js or, more recently, Python. Any one of those languages (including PHP) could be used to create a "socket server" that can run on a server and handle HTTP requests. There are other types of protocol servers (Websockets in particular) that have become popular as the web has evolved. The performance of PHP interpretation has improved in recent years, and there are also projects that add valuable features like event-driven non-blocking IO, which are beneficial if you are developing a server process in PHP, but that is not the typical reason people use PHP. For the most part, people use it to create server-side web applications.
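     For reference, this is roughly what the NGINX-to-php-fpm hookup looks like -- a sketch only, as the socket path and document root are assumptions that vary by distribution and php version:

     # Pass requests for .php scripts to the php-fpm pool over the fastcgi protocol
     location ~ \.php$ {
         root            /var/www/html;
         fastcgi_pass    unix:/run/php/php-fpm.sock;
         fastcgi_index   index.php;
         fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
         include         fastcgi_params;
     }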
  4. You need to provide some code for people to look at.
  5. Web pages are just a combination of html, css, static resources like images, video and sound, and javascript. Curl is not some magic scraper tool -- it is just a client library like many client libraries that can simulate the same conversation with a webserver (HTTP protocol) as a browser does. There are numerous PHP client libraries that can also do what curl does. Guzzle, Httpful and Httplug are a few of the most used ones. Depending on what you want to do with code from the pages you are trying to scrape, you might be better off using one of those rather than the wrapper around Curl. Your results depend greatly on what pages you are trying to scrape and the relative complexity of those pages, as well as what you want to try and do with the scraped results.
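     As an illustration, fetching a page with Guzzle (one of the client libraries mentioned above) looks something like this -- the URL is a placeholder; install the library with composer require guzzlehttp/guzzle:

     <?php
     require 'vendor/autoload.php';

     $client = new \GuzzleHttp\Client();
     $response = $client->request('GET', 'https://example.com/some-page');
     $html = (string) $response->getBody();   // the raw markup, ready to parse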
  6. And your account was made in 2005, and you made 4 posts in 2007. The forum member table was cracked in 2015.
  7. If you look at the actual code, it's pretty simple. This is the api call from google:

     $json = file_get_contents("https://www.googleapis.com/youtube/v3/videos?part=statistics&id=" . $videoID . "&key=xxxxxxxxxxxxxxxxxxxxxxxx");

Apparently, if the id= parameter includes a comma-delimited list of videos, this api will still work (up to 50 videos):

     [[youtube_view_count id="UKuYgIBnqEA,xxxxxxxxx,xxxxxxxx"]]

There is an entirely different analytics api you could use, which allows for statistics by "dimension", where your dimensions can be a channel, playlist, video or analytics "group". You maintain the groups through the youtube studio, and you can have up to 500 videos in a group. That would be easier to use and maintain long term. See https://support.google.com/youtube/answer/3529123?hl=en for more details on groups. Obviously you could get the statistics for the whole channel, or add the videos to one or more playlists. This page lists some sample queries that could be used to get stats for a channel or a group: https://developers.google.com/youtube/analytics/sample-requests It looks to me like that is the way you want to go, but I don't have the time to look into it much further, other than to suggest a look at: https://developers.google.com/youtube/analytics
  8. I'll add to Barand's suggestions:

     - A great resource of best practices: https://phptherightway.com/
     - Projects should utilize composer for dependency management and library autoloading: https://getcomposer.org/
     - If you like video courses, this guy has a channel of free youtube videos that are as good or better than a lot of online courses you might pay for: https://www.youtube.com/c/ProgramWithGio
     - In terms of coding standards, this document is a community standard adopted in whole or part by most frameworks and components: https://www.php-fig.org/psr/psr-12/

Most professional web development also involves the use of a framework. The 2 most popular and active frameworks are:

     - https://symfony.com/
     - https://laravel.com/
  9. This is the php function you are trying to use: http_build_query. So as Barand stated, clearly you should not be json encoding the array. Instead:

     $data = array('currency' => 'USD', 'sort' => 'rank', 'order' => 'ascending', 'offset' => 0, 'limit' => 2, 'meta' => false);

I can't say for sure if that is going to work with your API, but at least you'll get past your error. Take a look at the php manual for a function whenever you have issues with one.
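     For example, this is roughly what http_build_query() produces for that array -- note that booleans are cast to ints, so 'meta' => false becomes meta=0:

     <?php
     $data = ['currency' => 'USD', 'sort' => 'rank', 'order' => 'ascending',
              'offset' => 0, 'limit' => 2, 'meta' => false];
     echo http_build_query($data);
     // currency=USD&sort=rank&order=ascending&offset=0&limit=2&meta=0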
  10. You'll use the $_GET superglobal to get the url parameter.

     if (empty($_GET['id']) || (int)$_GET['id'] < 1) {
         exit;
     }
     $id = (int)$_GET['id'];

     // Add your code to make the mysql connection. Since this is a common thing, you would be better off
     // putting that code in a script you require_once in any script that needs the mysql connection.

     $stmt = $mysqli->prepare("SELECT * FROM users WHERE id = ?");
     $stmt->bind_param('i', $id);
     $stmt->execute();
     $result = $stmt->get_result();    // get_result() (requires mysqlnd) returns a result set you can fetch from
     $row = $result->fetch_assoc();
     if ($row) {
         // Display the user data
     }
  11. My educated guess is that requinix is right. I also think the code snippets reflect that the order object should be available in plugin2 via $this->object. So I would suggest you try adding this code to plugin2:

     $this->placeholders['{order_service_time}'] = get_post_meta($this->object->get_id(), '_wfs_service_time', true);
  12. I want to 2nd what ginerjm wrote. It's very important to exit the script, because sending a Location header does not guarantee the client will actually act upon it, so in some cases this can be a security flaw unless you exit. It is also possible for a PHP script to continue to run beyond the Location header, and for example make changes to a session variable or the database.
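     In other words, the pattern should always be (the redirect target here is a placeholder):

     <?php
     header('Location: /login.php');
     exit;  // nothing after this can run, even for a client that ignores the redirect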
  13. Condolences on your loss. I don't believe that script is a core part of wordpress. Depending on how old the wordpress install is, this could be something that was orphaned some time in the past, or it might have been something custom. Clearly it isn't used as the script it is attempting to include doesn't exist. You might consider just commenting that line of code out at line 210. Simply add two forward slashes at the start of the line, and save the file again. This will eliminate the error, and as the script doesn't exist, you won't experience any functional change. // include ABSPATH . etc.
  14. There is no reason to use parent/child directory traversal in paths to web resources, and plenty of ways doing so can be the source of problems. If your webroot is /foo, then have a directory under /foo like /foo/js, and put your .js files in the /foo/js folder. In your example you have a script named add.js. At that point you can refer to add.js inside your html via the root-relative path:

     <script src="/js/add.js"></script>

Notice the leading '/' which indicates the webroot directory is the parent.
  15. In addition to Barand's comment, what you can do is inner join the result back to quote by quote.id in order to pick up the version, as well as any other data specific to the max quote row of the group.
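     A hedged sketch of that join-back, with assumed table/column names based on the thread (quote rows grouped here by an assumed customer_id, with an existing $pdo connection):

     <?php
     // The derived table finds the max quote id per group; joining it back to
     // quote by id picks up version and the rest of that row's columns
     $sql = "SELECT q.*
             FROM quote q
             INNER JOIN (SELECT customer_id, MAX(id) AS max_id
                         FROM quote
                         GROUP BY customer_id) latest ON latest.max_id = q.id";
     foreach ($pdo->query($sql) as $row) {
         echo $row['id'] . ' version ' . $row['version'] . PHP_EOL;
     }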
  16. Actually that is rarely the case. If I understand you, part of the data you are looking up is "user" data from a user table. That data certainly is not always changing, and in most system designs you absolutely know/control when it is changing. Let's say, for example, there is user data + profile data, perhaps in 2 related tables. What a memcache based cache provides is this: your code checks for the existence of the data in the cache. You have to figure out what the cache key is going to be in order to resolve it, and usually that is some sort of string like '/user/{userid}'. If the key exists, you just read the data from memcache, which eliminates the need to query the database. If the key doesn't exist, you perform the query and save the result under a new memcache key, optionally with a ttl value. This query could also be a query that joins to the user profile table. In the routines that change/update the user or profile data, you invalidate the memcache data for that user when you save it to the database; the next read/select of the data will create a new cache entry.

The main trick memcache does is allow for clustering, which is why facebook originally used it, given their enormous graph. Very few systems have that level of scalability issue, but most applications get a significant performance and scalability boost out of using some sort of in-memory cache. A lot of projects I've worked on have used Redis. You do need to do some thinking about the type of data in your system, and whether or not portions of it are relatively static. It also tends to show you if you've made significant design mistakes. An example might be putting some sort of de-normalized counter in a table, making something that is relatively static, like a user table, non-static. A "last login" column, or summary columns like "friend_count" or "topics", all reduce database concurrency, and then cause people to move away from caching because they put these types of frequently updated fields into the main user table.

To conclude, let's say you have something that does get updated with some frequency but requires a join to another table for context. An example might be a message system, where you need to join to the user table to get the to/from usernames associated with messages. Even though messages could be added rapidly, that doesn't mean there aren't users in the system who could have a cached version of the message data, and it doesn't mean you can't use the cached user data to decorate the messages. You can also cache a query that includes a join, as in a query meant to return "all private messages sent to user 5"; in doing so you will reduce load on your database. You just need to understand which queries like this need to be invalidated from the cache. So long as you have cache name schemes that make sense, it's not that hard to understand which caches to remove when data is added or changed. Most of those schemes are similar to the way you might design a rest api.
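     Here's a minimal cache-aside sketch using the Memcached extension; the key scheme, ttl and query are illustrative assumptions, and it assumes an existing $pdo connection and a $userId in scope:

     <?php
     $cache = new Memcached();
     $cache->addServer('127.0.0.1', 11211);

     $key  = "/user/{$userId}";
     $user = $cache->get($key);

     if ($user === false) {                        // miss: fall through to the database
         $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
         $stmt->execute([$userId]);
         $user = $stmt->fetch();
         $cache->set($key, $user, 3600);           // prime the cache with a 1 hour ttl
     }

     // ...and in the routine that updates the user, invalidate after saving:
     // $cache->delete("/user/{$userId}");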
  17. MySQL uses a scheme where every user has a name & host component, for example root@localhost. It is possible to have wildcards for the hostname component, but typically accounts start out constrained to localhost. It's important to understand how this works, because permissions are granted to a specific user@host combination, and if someone later adds a wildcard account (for example, 'user'@'%'), that user will not inherit/adopt the permissions of the existing user@localhost account. There will be a "root" level account for mysql that has all rights; by default this is usually root@localhost. This user can create new users, change passwords, create databases, and grant permissions on those databases to new or existing users. Typically, what is done for security purposes is:

     - the root level account creates a database
     - the root level account creates/GRANTs access to the database for an application user
     - the application uses this user to work with the database

Some of the specifics of the SQL syntax used to create a new user or assign it a password have changed over time between mysql and mariadb versions, so tools that depend on a particular syntax or password scheme may no longer work with a given version. What your screenshot shows is that laragon is logging you in as a specific configured user that was created and given access to a database. That user does not have access to create a new database, which is a typical setup. You need to log in to mysql as the "root" user that laragon set up initially. I think it is probably still 'root'@'localhost', and apparently by default the root password is unset. This SO thread talks about this, although it is a 4 year old thread: https://stackoverflow.com/questions/50214540/laragon-never-use-mysql-password
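     The typical setup described above looks something like this in SQL (run as the root-level account; the database, user and password names are placeholders):

     CREATE DATABASE myapp;
     CREATE USER 'myapp_user'@'localhost' IDENTIFIED BY 'a-strong-password';
     GRANT ALL PRIVILEGES ON myapp.* TO 'myapp_user'@'localhost';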
  18. I'm not that interested in looking at a defunct project, but the obvious place to start would be, for example, this line:

     $_list_archives = db_get_archives();

Have you dumped the result of that call and examined the data structure? Using this technique, follow through the code, and anywhere you expect archive or user processing to occur, do a print_r or var_dump and see what you can find. Alternatively, if you have xdebug set up, you could add a breakpoint and step through the code interactively.
  19. All the tables involved must be InnoDB. InnoDB is now the default engine for MySQL, although in the past myisam was the default engine. Make sure that all your tables are InnoDB, regardless of whether or not they are used in relations. The InnoDB engine has a number of other features that are important (data caching, row level locking, transactions, Clustered Indexes...).
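     A quick way to check and convert (the table name is a placeholder):

     -- The Engine column shows MyISAM vs InnoDB
     SHOW TABLE STATUS WHERE Name = 'mytable';
     -- Convert in place if needed
     ALTER TABLE mytable ENGINE=InnoDB;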
  20. An inline style should be the last resort. Why are you looking at doing that rather than using css classes?
  21. I'm not sure what the benefit of admitting such a thing would be. I will say that there are people who first joined this site shortly after it was first registered, so you can take from that what you will.
  22. We are not psychic. Most likely you have a pathing problem with your code. Paste relevant portions or the script in its entirety so that we can see what you've done. Make sure to use the code button <> when you paste the code so formatting is preserved.
  23. To add a bit to this: namespaces solve one major problem, which is class and function name conflicts. Here's a simple example:

     namespace Gizmola;

     function substr($string)
     {
         return \substr($string, 0, 3);
     }

     $test = '12345678910';
     echo substr($test);

With a namespace, I'm able to redefine a built-in function, as well as use the built-in function in my customized version of substr. I can then include my function and use it as a library in other scripts:

     use function Gizmola\substr as substr;

     echo substr('Now is the time for us to use namespaces');

This becomes even more valuable when you are dealing with class libraries. Without namespaces, the use of a library would mean that every single class, across all the libraries you might want to use, would need a unique name. Namespaces solve this issue, allow you to organize your code, and let you refer to other classes in an unambiguous way. I can reference any class definition by including its namespace.

     // Logger.php
     namespace Gizmola;

     class Logger
     {
         public function log()
         {
             // ...
         }
     }

Some other code that wants to use the Logger class:

     namespace MyApp;

     require_once('path/to/gizmola/Logger.php');

     use Gizmola\Logger;

     $logger = new Logger();
     $logger->log();

If for some reason I end up using a class library that also has a Logger class, like Seldaek\Monolog, I can do that via the ability to alias a class or function when I use it:

     namespace MyApp;

     use Gizmola\Logger;
     use Monolog\Logger as AppLogger;

     $logger = new Logger();
     $appLogger = new AppLogger('main');
     $logger->log();
     $appLogger->warning('Problem with MyApp');

Alternatively, I could reference each class directly via its fully qualified namespace to get around the conflict (note the leading backslash: inside the MyApp namespace, an unqualified Gizmola\Logger would otherwise resolve to MyApp\Gizmola\Logger):

     namespace MyApp;

     $logger = new \Gizmola\Logger();
     $logger->log();
     $appLogger = new \Monolog\Logger('main');
     $appLogger->warning('Problem with MyApp');

The other benefit of namespaces is that, by applying a convention to the way you map a namespace to a directory structure, a class autoloader can determine the location of a class and load it at runtime. This is where PSR-0 and now PSR-4 come into play. Library authors who conform their directory structure and the location of their code to these standards allow their library to be easily integrated into any project. These standards were designed to incorporate the organizational structure of as many pre-existing libraries as possible, so there is some interesting code in there, but for most people, creating your own class is as simple as putting it into a directory structure that more or less maps to the namespace. Composer is able to install a component library, make it available to your app alongside your own libraries, and provide an autoloader and static class map for you to use if you want it, relieving you of having to require classes or be concerned about placing them in specific includable directories on your server, as you had to do in the olden days before namespaces were introduced.
  24. I've written about using the Timestamp type. It would be appropriate for your application, and a timestamp only requires 4 bytes. You want to turn off the automatic timestamp features when you define the column, but that is covered in my article: https://www.gizmola.com/blog/archives/93-Too-much-information-about-the-MySQL-TIMESTAMP.html As Barand stated, a Datetime is not much worse at 5 bytes as of MySQL version 5.6.4, when some storage mechanics were changed. Prior to that a Datetime used 8 bytes, but now it's much more efficient, so long as you don't need fractional seconds. For a scheduling app either one is fine, and they are basically interchangeable as far as PHP and the SQL statements and functions you can use them with are concerned.
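     For illustration, a column definition with the automatic init/update behavior turned off might look like this (table and column names are placeholders; see the linked article for the details):

     CREATE TABLE appointment (
         id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
         -- an explicit NULL DEFAULT NULL suppresses the automatic
         -- DEFAULT CURRENT_TIMESTAMP / ON UPDATE CURRENT_TIMESTAMP behavior
         starts_at TIMESTAMP NULL DEFAULT NULL
     );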
  25. Do you have an Entity Relationship diagram we can look at? Every table needs a primary key. The key is used to tell the difference between rows, so it must be unique across all rows in the table. I'm assuming you are using InnoDB for all the tables. You need InnoDB for referential integrity constraints, row level locking and transactions to work in MySQL. With InnoDB, the data is actually stored in primary key order, which means that when you read a row by PK, whether that be from a direct query or join, the entire data of the row is also read. This is because the table itself is used as the PK index. Sometimes you will see this referred to as a "clustered index". This design adds efficiency to many normal operations because MySQL only has to read the data directly via the indexing process and doesn't have to read an index first and then use that to locate the data. Not all searches are based on keys, but many are. When you say "MySQL would not allow me to create Table3" what error are you receiving?