Everything posted by gizmola

  1. The "best way" requires a subjective opinion. Hopefully you have looked at the Twitter API documentation. I'd suggest you take a look at this PHP component library, which wraps the authentication details in an easy-to-use PHP class and provides helpers to make calls to the Twitter API and get the result back in a PHP script. The documentation literally has an example of getting the list of followers for a user. There are numerous other libraries you can find by googling or searching GitHub that go even further in wrapping the Twitter APIs, for example this library. I would certainly start with a library to avoid reinventing the wheel, but you will also learn a lot about how to interact with Twitter, or for that matter any REST API, by looking at the code of one or two of these libraries.
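As an illustration, here is a minimal sketch using the widely used abraham/twitteroauth package (an assumption on my part about which wrapper you'd pick; install it with composer require abraham/twitteroauth and substitute your own app credentials and screen name):

<?php
require 'vendor/autoload.php';

use Abraham\TwitterOAuth\TwitterOAuth;

// The four OAuth credentials come from your Twitter developer app settings
$connection = new TwitterOAuth($consumerKey, $consumerSecret, $accessToken, $accessTokenSecret);

// followers/list is the REST endpoint that returns a page of a user's followers
$result = $connection->get('followers/list', ['screen_name' => 'someuser', 'count' => 200]);

foreach ($result->users as $follower) {
    echo $follower->screen_name, PHP_EOL;
}

The library handles signing the request and decoding the JSON response into PHP objects, which is exactly the plumbing you don't want to reinvent.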
  2. Your topic is a bit misleading. You don't need a "listener" script. A web server is already a listener that listens for connections. It is also possible to use websockets, but then you need a websocket server, which comes with a lot of baggage and system administration setup that complicates your environment. For a couple of message counters, I wouldn't recommend going down that road.

You simply need a PHP script that takes some GET or POST parameters and returns the required numbers. My "pro tip" is to return the data in a JSON structure, as this makes it easy to work with in your client JavaScript. On the client side, you should probably use fetch. It's also possible to use jQuery.ajax if you are already using jQuery in your project, but the modern way of doing ajax calls is fetch, as it works with Promises. I'm not going to regurgitate the examples from the link I provided.

Start with writing the script and returning the JSON data based on the provided parameters. You can use session variables to establish user identity just as you would in any other PHP script, or pass parameters from the client side. Test that it returns the correct JSON structure (and make sure that your PHP script sets the HTTP Content-Type header to JSON).

Once the script works for one request, you simply need your client to do some sort of "polling". This SO question has an implementation you can look at for writing a polling routine from which you can call your fetch function. You want to carefully consider the typical number of concurrent users you might have, as well as how often you want to call your routine. Keep in mind that you will generate those requests for as long as a user sits on the page that shows these counts. Given the overhead, you might want to combine the requests into one PHP script that returns both counts at the same time. Look at the answer from allenyilee:

let cancelCallback = () => {};

var sleep = (period) => {
    return new Promise((resolve) => {
        cancelCallback = () => {
            console.log("Canceling...");
            // send cancel message...
            return resolve('Canceled');
        }
        setTimeout(() => {
            resolve("tick");
        }, period)
    })
}

var poll = (promiseFn, period, timeout) => promiseFn().then(() => {
    let asleep = async (period) => {
        let respond = await sleep(period);
        // if you need to do something as soon as sleep finished
        console.log("sleep just finished, do something...");
        return respond;
    }

    // just check if cancelCallback is an empty function;
    // if yes, set a timeout to run cancelCallback()
    if (cancelCallback.toString() === "() => {}") {
        console.log("set timeout to run cancelCallback()")
        setTimeout(() => {
            cancelCallback()
        }, timeout);
    }

    asleep(period).then((respond) => {
        // check if sleep was canceled; if not, continue to poll
        if (respond !== 'Canceled') {
            poll(promiseFn, period);
        } else {
            console.log(respond);
        }
    })

    // do something1...
    console.log("do something1...");
})

poll(() => new Promise((resolve) => {
    console.log('Hello World!');
    resolve(); //you need resolve to jump into .then()
}), 3000, 10000);

// do something2...
console.log("do something2....")

You can run this script on the SO page and debug it, so that you are clear on how the pieces work. It is a bit more complicated than it needs to be, due to the implementation of a cancellation capability, which you might not need; however, you might want a feature that stops the polling requests after some reasonable period of time.
Using these routines, or similar ones, in combination with your fetch function and whatever JavaScript you need to update the relevant count link(s) should be everything you need. The nice thing about this example is that the console.log statements show you where the routines that make the fetch (ajax) requests should go. A sketch of the server-side script follows.
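To make the server side concrete, here is a minimal sketch of the counter script, assuming a PDO connection and made-up table and column names (message, notification, to_user, is_read, seen); adjust everything to your own schema:

<?php
session_start();
header('Content-Type: application/json');

// Establish user identity from the session, as described above
if (empty($_SESSION['user_id'])) {
    http_response_code(403);
    echo json_encode(['error' => 'Invalid session']);
    exit;
}

$pdo = new PDO('mysql:host=localhost;dbname=yourapp;charset=utf8mb4', 'dbuser', 'dbpass');

// Combine both counters into one response to cut the polling overhead in half
$stmt = $pdo->prepare('SELECT COUNT(*) FROM message WHERE to_user = ? AND is_read = 0');
$stmt->execute([$_SESSION['user_id']]);
$messages = (int)$stmt->fetchColumn();

$stmt = $pdo->prepare('SELECT COUNT(*) FROM notification WHERE to_user = ? AND seen = 0');
$stmt->execute([$_SESSION['user_id']]);
$notifications = (int)$stmt->fetchColumn();

echo json_encode(['messages' => $messages, 'notifications' => $notifications]);

On the client, the promiseFn you hand to poll() would be a fetch() call that reads this JSON and updates the counters in the page.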
  3. The original query would have worked if the proper MySQL column-name delimiter had been used. You delimit column or table names with the backtick character, not single or double quotes.

Wrong:

$SQL_INSERT = "INSERT INTO 'Answers' ('QuestionID', 'PlayerID', 'Answer', 'Option') VALUES (?,?,?,?)";

Right:

$SQL_INSERT = "INSERT INTO `Answers` (`QuestionID`, `PlayerID`, `Answer`, `Option`) VALUES (?,?,?,?)";

Of course, you do not need to delimit table or column names in most cases, but there are cases where you must, in particular when the name is a MySQL keyword. Example:

CREATE TABLE USER_LIST (id INTEGER UNSIGNED, CURRENT_USER varchar(40));

This won't work, because CURRENT_USER is a MySQL keyword. This works:

CREATE TABLE USER_LIST (id INTEGER UNSIGNED, `CURRENT_USER` varchar(40));
  4. A "key" is not a constraint. There are two types of keys: primary keys and foreign keys. In a table, the primary key is one or more columns that, in combination, uniquely identify a single row. A foreign key is the primary key of another table, and it establishes a relationship to a single row in that foreign table.

Constraints are restrictions on the overall "domain" of acceptable values allowed in a particular column. For example, you could have a constraint defined for a gender varchar that allows only ('male', 'female', 'unspecified').

In practice there are inherent constraints for primary and foreign keys. For a primary key, a unique constraint is assumed, as no two rows in a table can have the same primary key. For a foreign key, there is an implied referential integrity constraint enforcing the rule that any value in the foreign key column (or set of columns) must have a corresponding primary key in the foreign table. The association of these implied constraints is typically referred to as "declarative referential integrity," in that when you specify the primary and foreign keys, the constraints are created automatically. With MySQL, referential integrity requires a supported engine, typically InnoDB. Pluggable (optional) engines are one of the things that sets MySQL apart from the other major RDBMSs.

Indexes provide performance for relational joins and searching. In practice, all RDBMSs use indexes for performance and for some constraints. For example, when you define a key, an associated index is automatically created for you. This is true of all the major RDBMSs, so MySQL is no different from Oracle, Sybase, etc. You certainly can index a column or set of columns without that index being associated with another table as a foreign key. So in that way, I agree that keys and indexes are not the same thing, so long as you understand that when you define a key, an index is created. The RDBMS will inherently use the key indexes it creates in numerous ways: for performance, for constraints, and for primary key uniqueness.

The authority for this terminology is E.F. "Ted" Codd, the IBM research fellow who invented the relational model for databases and the rules of database normalization, upon which all RDBMSs (and SQL itself) are based.
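To make the terminology concrete, here is a hypothetical DDL sketch (all names invented) showing a primary key, a foreign key with its implied referential integrity constraint, and a domain constraint on gender. MySQL historically parsed but ignored CHECK constraints, so an ENUM stands in for the domain constraint here:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'dbuser', 'dbpass');

$pdo->exec("
    CREATE TABLE person (
        id INT UNSIGNED AUTO_INCREMENT,
        gender ENUM('male', 'female', 'unspecified') NOT NULL, -- domain constraint
        PRIMARY KEY (id) -- implied unique constraint; index created automatically
    ) ENGINE=InnoDB
");

$pdo->exec("
    CREATE TABLE person_address (
        id INT UNSIGNED AUTO_INCREMENT,
        person_id INT UNSIGNED NOT NULL,
        PRIMARY KEY (id),
        FOREIGN KEY (person_id) REFERENCES person (id) -- referential integrity; index created automatically
    ) ENGINE=InnoDB
");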
  5. +1 for Let's Encrypt. A much better solution, and a *real* cert that browsers recognize and accept.
  6. When looking for help with a problem, it is important to include the diagnostics you already have. You stated you have errors that you know of, but you didn't provide them in your post. Let's start with something a little more standard for an edit script:

<?php
session_start();

$id = $_SESSION['id'];
if (!$id) {
    die('Invalid Session');
}

include("connectdb.php");

$lyricsId = (int)$_POST['lyricsId'];
$lyrics = trim(strip_tags($_POST['lyrics']));

if ($lyricsId && !empty($lyrics)) {
    $query = "UPDATE lyrics SET lyrics = ? WHERE lyricsId = ?";
    $stmt = mysqli_prepare($dbc_form, $query);
    mysqli_stmt_bind_param($stmt, 'si', $lyrics, $lyricsId);
    mysqli_stmt_execute($stmt);
    # Should go back to your lyrics display page, to show the updated lyrics for that $lyricsId
    header("Location: index.php");
} else {
    $_SESSION['error_msg'] = "No Lyrics provided";
    header("Location: index.php");
}

Some things to note:

  • Check that there is a session id.
  • Cast lyricsId to an integer to keep people from screwing around with id parameters.
  • Trim and strip HTML tags from the posted lyrics.
  • Properly use bound parameters. Do not interpolate strings, as that opens you up to SQL injection. User input should never be trusted, and that includes all GET and POST parameters. Binding parameters means you do not need to escape input used in SQL statements, and it prevents SQL injection.
  • Whether the edit works or fails, you should go back to the lyrics detail page, not back to index.php.
  7. In my opinion, your issue is easily solved by allowing NULL in your sequence column. NULL is also theoretically correct, in that it indicates the absence of a value. MySQL allows multiple NULLs in a column covered by a unique constraint or index.
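As a quick demonstration (table and column names invented for the example):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'dbuser', 'dbpass');

$pdo->exec('CREATE TABLE item (
    id INT UNSIGNED PRIMARY KEY,
    sequence INT NULL,
    UNIQUE KEY uq_sequence (sequence)
) ENGINE=InnoDB');

// Two NULLs in the uniquely indexed column: no duplicate-key error
$pdo->exec('INSERT INTO item (id, sequence) VALUES (1, 10), (2, NULL), (3, NULL)');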
  8. There are any number of things it could be. Your post is the equivalent of going on a car mechanic's site and making a post that says: "There's a weird sound coming from the right front corner of my '79 Subaru and the car won't stop properly now. Oh yeah, I drove by the DMV and they said that only people who exclusively work on '79 Subarus could fix it, assuming it could be fixed. Tell me how to fix this, and btw I'm not in any way mechanical!" What do you actually expect us to do here, other than make the educated guesses that have already been made? We have some standards and practices here, and pretty much anywhere else people help others with code problems or questions, and the first of those is that we need to see some relevant code.
  9. Relational databases were designed for normalization and joining of tables. Don't make the mistake far too many software developers make of jumping to optimization conclusions before doing any actual tuning or profiling. You don't know what your bottlenecks will be until you have a working application. Don't assume that joining tables together will perform poorly.
  10. I would certainly have the likes/dislikes in a table as Barand suggested, along with an FK to the user who made the like/dislike rating. This allows you to do a number of important things, such as preventing a person from liking or disliking the same person repeatedly. There is a cost to summarizing data like that in a sub-query, so frequently people will implement a de-normalized summary, or better yet, put the summary in a caching system like Redis. The summarized (denormalized) totals could live in the person table if you really wanted them there, but you should still have the detail person_rating table to store the individual like/dislike rows.
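A sketch of what the detail table might look like, assuming an existing PDO connection in $pdo and a person table with an id primary key (names hypothetical). The composite primary key is what prevents a user from rating the same person twice:

<?php
$pdo->exec("
    CREATE TABLE person_rating (
        person_id INT UNSIGNED NOT NULL, -- the person being rated
        rater_id  INT UNSIGNED NOT NULL, -- FK to the user who made the rating
        rating    TINYINT NOT NULL,      -- 1 = like, -1 = dislike
        PRIMARY KEY (person_id, rater_id),
        FOREIGN KEY (person_id) REFERENCES person (id),
        FOREIGN KEY (rater_id) REFERENCES person (id)
    ) ENGINE=InnoDB
");

// The summary query whose repeated cost is mentioned above
$stmt = $pdo->prepare('SELECT SUM(rating = 1) AS likes, SUM(rating = -1) AS dislikes
                         FROM person_rating WHERE person_id = ?');
$stmt->execute([42]);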
  11. Agree with chhorn: put it in the person table, unless you want a historic record of avatars. Hopefully you are using the InnoDB engine for all your MySQL tables; if not, you can run ALTER TABLE on them to change the engine from the default MyISAM. With InnoDB, when you SELECT a row by its primary key, the read includes all the row data anyway, since InnoDB has clustered indexes. This type of micro-optimization isn't really that useful to consider when thinking about DB design, but you might as well sleep well at night knowing there's no significant performance cost to putting the avatar in the person table.
  12. There are many different ways to handle this. Personally I would probably use Redis hashes for this, or a MongoDB collection, but if you don't have those options, then one of the simplest and cleanest options is SQLite3. There is full support for SQLite via a node library, and it has essentially no footprint other than the underlying C libraries. Tables can be persisted to the filesystem or used in memory; I see no reason not to persist them to a file.

You would need a simple table keyed by robot name with the following fields:

  • robot (text)
  • user (text)
  • from (integer)
  • to (integer)

The from and to columns would be timestamp values. Personally, I think it would be nicer to use websockets rather than polling, but either way, the client code will query the db for a particular robot where from < current timestamp < to. If any rows come back, you check whether the user is the current user. If not, display to the user that "user x currently has this robot". The entire table can also be queried to show which robots are currently assigned to which users.

If a user wants to assign themselves an available robot, then simply insert a new row with the current time as from, and for to, some reasonable future time that syncs with your client "checkin" ajax code. 5 or 10 minutes might be a good starting point. The "checkin" code would query the current user's robot row and update the to column, extending it by 5 or 10 minutes at each checkin. The advantage of this is that any allocation will expire if the user doesn't release the robot properly or their page dies. SQLite will handle concurrency and locking for you via transaction support (you wrap your select and insert/update activity in a transaction). To release an allocation, you'll just need a routine that deletes the row for that robot. A sketch of the schema and the claim transaction follows this post.

You might also want another table for the robots themselves, and even use a standard relational 1-M model with an id for a key, but that is up to you. Somehow the code needs to know the full list of available robots, and you might as well use SQLite for both problems.

Just to keep the database pruned and small, I'd suggest writing a small program that runs from cron and cleans up any expired robot allocation rows (current timestamp > to). You can run this on whatever schedule you like. There might be some value in having a historic record of robot allocations, and cleaning out old rows removes that whenever you actually run the pruning.
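The original context is a Node client, but the schema and the check-and-claim transaction are the same in any SQLite binding; here is a sketch using PHP's PDO SQLite driver (the robot name, user, and 10-minute lease are placeholders; from and to are quoted because they are SQL keywords):

<?php
$pdo = new PDO('sqlite:robots.db');

$pdo->exec('CREATE TABLE IF NOT EXISTS allocation (
    robot  TEXT PRIMARY KEY,
    user   TEXT NOT NULL,
    "from" INTEGER NOT NULL,
    "to"   INTEGER NOT NULL
)');

$now = time();

// Wrap the check and the claim in one transaction so two users cannot
// grab the same robot at the same moment
$pdo->beginTransaction();

$stmt = $pdo->prepare('SELECT user FROM allocation WHERE robot = ? AND "from" <= ? AND "to" >= ?');
$stmt->execute(['robot-7', $now, $now]);
$holder = $stmt->fetchColumn();

if ($holder === false) {
    // Robot is free: claim it with a 10-minute lease; each checkin would
    // UPDATE the "to" column to extend the lease
    $claim = $pdo->prepare('INSERT OR REPLACE INTO allocation (robot, user, "from", "to") VALUES (?, ?, ?, ?)');
    $claim->execute(['robot-7', 'alice', $now, $now + 600]);
}

$pdo->commit();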
  13. What's missing from your list of tables is the one that describes the relationship between a member/user and a plan version. Barand is a SQL expert bar none. Do what he said, with the only caveat that the table relating a member to a plan version is required to determine whether a user should be able to access an article. That should 100% be done with a single query that uses JOINs; a hypothetical version is sketched below. As we discussed previously, with an adequate InnoDB buffer pool, repeated queries will be served from the buffer pool cache, so there would be essentially no reading of data from disk by the MySQL server.
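Since the actual schema wasn't posted, here is a hypothetical shape of that single access-check query, with invented table and column names, assuming an existing PDO connection in $pdo:

<?php
$sql = "
    SELECT 1
      FROM member m
      JOIN member_plan mp ON mp.member_id = m.id
      JOIN plan_version pv ON pv.id = mp.plan_version_id
      JOIN plan_version_article pva ON pva.plan_version_id = pv.id
     WHERE m.id = ?
       AND pva.article_id = ?
       AND CURRENT_DATE BETWEEN mp.starts_on AND mp.ends_on
";
$stmt = $pdo->prepare($sql);
$stmt->execute([$memberId, $articleId]);
$hasAccess = (bool)$stmt->fetchColumn();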
  14. All the DBs I listed use the same basic indexing technology. Indexing = query performance.

In general, no traditional business would do what you plan to do, even though it is very common for small businesses running internet sites. Monolithic servers just aren't used with any of the commercial DBs. The RDBMS world assumes the database will run on a dedicated server with attached storage, or perhaps a NAS or SAN. The majority of available server RAM is allocated to the DB, and nothing else runs on it. You do not want a scenario where the database engine itself gets swapped to disk so that some other set of programs can run. In your monolithic (one-server LAMP?) setup, you will not be insulated from that happening without careful tuning and configuration; most probably it will happen at some point.

The other rule of thumb for system tuning is that initially your scalability issues will bottleneck at application server RAM. With PHP that is either Apache/mod_php or php-fpm. In other words, the client connections/PHP threads will bottleneck before the DB queries will. This is typically why you want a cluster of application/web servers and a dedicated DB server. If you find you need to add application server #2, does your application even work? How will load be balanced? This affects a number of application-level configuration and code decisions. For example, to start with, how will you scale sessions? (One approach is sketched after this post.) It is much easier to start with a basic architecture that has a degree of scalability than it is to figure out how to fix things when your business no longer works, or is inaccessible due to load that's crashing it, and your only option is to move everything to a larger server with more resources.

Business system design has often depended on moving some portion of the business rules into the database as trigger/sproc code. Oracle and Sybase/MS SQL Server (SQL Server is a licensed fork of Sybase) built their reputations on the capability and performance of sprocs and triggers. Since you haven't mentioned those, this seems like a non-factor.

One of the other things important to you, since your codebase is in PHP, is client library support. MySQL connections are very lightweight and performant compared to something like Oracle. This works very well with typical PHP models, where connection pools are not important. The Oracle connection process is not lightweight: once an Oracle connection is made, typically one or more sessions are used, so it's a very different paradigm that doesn't match PHP well. This is why you usually see Oracle paired with enterprise Java server technology, where a pool of DB connections is maintained in the Java application server(s).

I don't think your assessment of PostgreSQL is correct. It is certainly very active and growing. ISPs prefer MySQL because of its simplicity of installation and small footprint, which is the same reason it was used by Facebook and Wikipedia initially. With that said, I'm by no means suggesting you switch to PostgreSQL without a strong driving reason. I would stay with MySQL, in whatever flavor you like. The three that people are using (Oracle MySQL, MariaDB or Percona) will all work for you, and on the latest version of each there isn't a huge advantage right now for one over the others from a performance or technology standpoint.

The other things that often concern businesses are backup/recovery and fault tolerance. Another important reason to use InnoDB involves recovery from crashes. What happens when MySQL with MyISAM crashes? How do you get things back up and running? I recommend scanning the answers to this dba.stackexchange question. The key fundamental to understand is the transaction log, something that Oracle, SQL Server, PostgreSQL and InnoDB share. How much data loss can your business tolerate? Hosting companies often provide you limited fault tolerance. I have had, for example, an entire AWS EBS volume die with no recovery possible. Over the holiday weekend, a Linode VPS had its storage die and the entire server had to be migrated. If that had been important business data, there could very well have been data loss.

From what you've alluded to, you have a SAAS business of some sort, with access to features via paid memberships. I personally would not advocate launching this on a monolithic server. Instead, I would build it on a small cluster. You could start with the smallest two-machine cluster available: rather than running on one "large" server with 8 GB, start with one medium server allocated to MySQL and one medium application server. You can use DNS round robin to spread load across additional application servers as needed, and migrate to real load balancing in the future.

Monolithic servers are ok for hobby or marketing sites, blogs and other non-transactional sites that are 99% read-only. If you have a transactional system with a mix of read/write transactions, you are starting out with an essentially non-scalable architecture. The only way to know for sure how much capacity you have, or how your system will break, is to load test, but rarely is this task performed.
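On the session question raised above: one concrete approach is to move PHP sessions out of local files and into a shared store, so any application server can service any request. A minimal sketch, assuming the phpredis extension and a Redis host at 10.0.0.5 (both assumptions; in practice these two settings belong in php.ini on every application server):

<?php
// Store sessions in Redis instead of the local filesystem
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://10.0.0.5:6379');

session_start();
$_SESSION['user_id'] = 42; // now visible to every server in the cluster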
  15. MySQL/InnoDB is highly performant, which is one of the reasons (along with simplicity) it continues to be used. All RDBMSs depend on indexing, and the fundamental index type they all provide is the btree index. FWIW, I have extensive experience with Sybase/SQL Server, Oracle and MySQL. MySQL is unique in that it has alternative engines, as well as forks (MariaDB, Percona). PostgreSQL is often preferred by people coming from Oracle, as it is very similar, down to the database objects, datatypes, and a stored procedure/trigger language (PL/pgSQL) closely modeled on the PL/SQL that has been a foundation of Oracle use through the years. Sybase/SQL Server were also popularized through their stored proc/trigger language, T-SQL. Comparatively, MySQL didn't even have stored procedures/triggers until version 5.x, beta'd in 2005. The bigger question in the open source world is not MySQL vs. the commercial DBs, but rather MySQL vs. PostgreSQL. More recently, many RDBMS alternatives have come into the mainstream under the NoSQL or document DB monikers, most notably MongoDB and CouchDB. Those address concerns about scalability, and you have no reasonable expectation of a scalability problem that can't be solved with a bigger server.
  16. With unit testing, mocks are really a foundation of the practice, with the ultimate goal of improving code coverage. The individual tests themselves have intrinsic input/output, or in some cases a mock is loaded with essential input. I guess the bigger question to ask yourself is: what is being mocked, and why?
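As a minimal illustration of "what is being mocked and why": you mock the collaborator, not the class under test, so the test exercises only the class's own logic. A PHPUnit sketch with invented class names, assuming PHP 8 and a current PHPUnit:

<?php
use PHPUnit\Framework\TestCase;

// Hypothetical collaborator and class under test, kept minimal for the sketch
interface Mailer
{
    public function send(string $to): bool;
}

class Registration
{
    public function __construct(private Mailer $mailer) {}

    public function register(string $email): bool
    {
        // ...validation would go here...
        return $this->mailer->send($email);
    }
}

class RegistrationTest extends TestCase
{
    public function testRegisterSendsWelcomeMail(): void
    {
        // No mail is actually sent, and the test fails if send() is not
        // called exactly once with the expected address
        $mailer = $this->createMock(Mailer::class);
        $mailer->expects($this->once())
               ->method('send')
               ->with('user@example.com')
               ->willReturn(true);

        $this->assertTrue((new Registration($mailer))->register('user@example.com'));
    }
}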
  17. In phpMyAdmin, when you look at the list of tables for a database, the "Type" column shows you the engine. If you did not explicitly set the engine to InnoDB, you likely have all tables using the default MyISAM engine. You'll need to alter each table with a SQL statement:

ALTER TABLE TableName ENGINE = InnoDB;

MyISAM is not ACID compliant. It also will have thrown away any foreign key constraints, so if you had or wanted those, you'd need to recreate them individually after you alter the tables to InnoDB.

Correct. For InnoDB tables. What that means from a performance standpoint is that when a query uses the primary key index, the entire row is read in the same operation that searched the index. I only bring this up because you asked about performance, and it is one of the many reasons MySQL DBAs are fairly unanimous in favoring InnoDB.
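A quick way to find every table still on MyISAM and generate the ALTER statements, assuming an existing PDO connection in $pdo (substitute your own schema name):

<?php
$stmt = $pdo->prepare(
    "SELECT TABLE_NAME
       FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = ? AND ENGINE = 'MyISAM'"
);
$stmt->execute(['your_database']);

foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $table) {
    echo "ALTER TABLE `$table` ENGINE = InnoDB;", PHP_EOL;
}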
  18. If your structure is normalized correctly, you should be fine. Make sure that your tables use the InnoDB engine. Any query that can be satisfied by the primary key index returns all the data in the record by reading the index itself; this is because InnoDB has "clustered indexes," which is a fancy term meaning the row data is actually stored in the primary key index. All primary and foreign keys are inherently indexed. So the main concern, with regard to other required indexes, is queries whose WHERE clauses use columns that aren't keys in the table definition. This is the single most important detail: your queries need to use a relevant index. An obvious example would be the username and password columns in a user table. Essentially you just need to focus on the WHERE clauses of your queries and make sure you have an index to handle each one. For example, if you always query for a user by username and password, have one index on (username, password), not individual indexes on username and password; see the sketch below. If you have a monolithic server (webserver, PHP, and MySQL running on one box), then you have to plan for contention. MySQL needs some tweaking to provide a sufficient InnoDB buffer pool allocation; if the DB is as small as you describe, your entire dataset can be in the buffer pool cache the majority of the time.
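Illustrating the composite-index advice with hypothetical names, assuming an existing PDO connection in $pdo (and noting that real code should store and compare password hashes, not plaintext):

<?php
// One index covering both WHERE-clause columns, not two separate ones
$pdo->exec('CREATE INDEX idx_user_login ON user (username, password)');

// This query can now be satisfied by the single composite index
$stmt = $pdo->prepare('SELECT id FROM user WHERE username = ? AND password = ?');
$stmt->execute([$username, $passwordHash]);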
  19. Probably one of the most common misunderstandings for people starting out with the concept of "SEO friendly" URLs. You are definitely not alone, but kudos for figuring it out on your own.
  20. Phi11w provided an excellent and thorough analysis. In particular, consider this data for $admin:

<?php
$admin = ['fullname' => 'Sally Smith', 'gender' => 'F'];

function getAdminName()
{
    global $admin;
    return ($admin['gender'] ? 'Mr' : 'Mrs') . ' ' . $admin['fullname'];
}

echo getAdminName();

Do you get what you would expect for a gender 'F' person named 'Sally Smith'? (No: any non-empty gender string is truthy, so Sally comes out as 'Mr Sally Smith'.) While all of Phi11w's points are valid, I think this is the main thrust of the issue. Even without knowing the scheme to be used, it is unlikely that gender would be a boolean value. Another issue being glossed over is that 'Mrs' has the connotation of marriage, as well as Barand's point about 'Dr' and other salutations, but that makes things more complicated, and I don't think it was factored into the question, because the existing code has an obvious data typing issue. So, an improved function (assuming PHP 7):

<?php
$admin1 = ['fullname' => 'Sally Smith', 'gender' => 'F'];
$admin2 = ['fullname' => 'Randy Jones', 'gender' => 'm'];
$admin3 = ['fullname' => 'Pat Samuels'];

function getAdminName($admin)
{
    // trim() never returns null, so guard the missing-key case first
    $admin['fullname'] = trim($admin['fullname'] ?? '') ?: 'unknown';
    $admin['gender'] = $admin['gender'] ?? '';
    $pre = '';
    switch (strtolower($admin['gender'])) {
        case 'f':
            $pre = 'Mrs ';
            break;
        case 'm':
            $pre = 'Mr ';
            break;
        default:
            $pre = '';
    }
    return $pre . $admin['fullname'];
}

echo getAdminName($admin1) . PHP_EOL;
echo getAdminName($admin2) . PHP_EOL;
echo getAdminName($admin3) . PHP_EOL;
  21. First query, as ginerjm pointed out:

SET status = ?, comment = ?, department = ? WHERE id = ?

Four '?' placeholders, but your bind string 'ssi' only has three type characters.
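Assuming status, comment, and department are strings and id is an integer (a guess based on the column names), the bind needs one type character per placeholder:

<?php
// 'sssi' = string, string, string, integer: one per '?'
mysqli_stmt_bind_param($stmt, 'sssi', $status, $comment, $department, $id);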
  22. Another valuable free tool for testing your server-side API calls is Postman. Here's an introduction video I scanned that can teach you the basics of using it effectively in a short amount of time. There are complete courses available on YouTube that cover all aspects of using it, but for your purposes an investment of 30 minutes or less should have you up and running and testing your login.php script independently of the Unity client code.
  23. Absolutely. The primary options for getting up-to-date versions of PHP and the various extensions are:

  • Extra Packages for Enterprise Linux (EPEL)
  • Remi's RPM repo

I have used one or both at various times over the years. One other repo that has had a focus on PHP is https://ius.io/. The other option is to run your setup in containers using Docker/Kubernetes/rkt, etc.
  24. I'm fine with disabling replies. It fits our intentions perfectly. Only thing it doesn't allow is the poster to reply to the original post stating the opportunity is filled, but people rarely if ever have done that anyways. The constant flow of dummies who don't read the instructions and reply anyways is annoying.