gizmola last won the day on February 21

gizmola had the most liked content!

Community Reputation

174 Excellent


About gizmola

  • Rank
    Prolific Member

Profile Information

  • Location
    Los Angeles, CA USA


  1. There are any number of things it could be. Your post is the equivalent of going to a car mechanic's site and posting: "There's a weird sound coming from the right front corner of my '79 Subaru and the car won't stop properly now. Oh, and I drove by the DMV and they said that only people who work exclusively on '79 Subarus could fix it, assuming it can be fixed at all. Tell me how to fix this, and by the way, I'm not in any way mechanical!" What do you actually expect us to do here, other than make the educated guesses that have already been made? We have some standards and practices here, as does pretty much anywhere else where people help others with code problems or questions: we need to see some relevant code.
  2. Relational databases were designed for normalization and for joining tables. Don't make the mistake, as far too many software developers have, of jumping to optimization conclusions before doing any actual tuning or profiling. You don't know what your bottlenecks will be until you have a working application. Don't assume that joining tables will perform poorly.
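As a sketch (the table and column names here are hypothetical), this is the kind of normalized design and join being described: detail rows live in their own table and are joined back and summarized in one query.

```sql
-- Normalized detail table joined back to its parent in a single query;
-- with indexes on the join columns this is usually fast.
SELECT p.name, COALESCE(SUM(r.rating), 0) AS net_rating
FROM person p
LEFT JOIN person_rating r ON r.person_id = p.id
GROUP BY p.id, p.name;
```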
  3. I would certainly keep the likes/dislikes in a table as Barand suggested, along with an FK to the user who made the rating. This lets you do a number of important things, such as preventing a person from liking or disliking the same person repeatedly. There is a cost to summarizing that data in a sub-query over and over, so frequently people implement a de-normalized summary, or better yet, put the summary in a caching system like Redis. The summarized (denormalized) totals could live in the person table if you really wanted, but you should still have the detail person_rating table to store the individual like/dislike rows.
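A minimal sketch of that detail table and the "one rating per rater" rule. It uses an in-memory SQLite database via PDO so it runs standalone; in MySQL you would use InnoDB with real FOREIGN KEY clauses, and every table and column name here is an assumption.

```php
<?php
// Sketch: person_rating detail table with a unique key so a given
// user can rate a given person only once.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec("
    CREATE TABLE person_rating (
        person_id INTEGER NOT NULL,   -- FK to the person being rated
        rater_id  INTEGER NOT NULL,   -- FK to the user who rated them
        rating    INTEGER NOT NULL,   -- 1 = like, -1 = dislike
        UNIQUE (person_id, rater_id)  -- one rating per rater per person
    )
");

$ins = $pdo->prepare(
    'INSERT INTO person_rating (person_id, rater_id, rating) VALUES (?, ?, ?)'
);
$ins->execute([1, 42, 1]);
try {
    $ins->execute([1, 42, -1]);       // same rater, same person: rejected
} catch (PDOException $e) {
    echo "duplicate rating blocked\n";
}

// Summarize on demand instead of storing a running total:
$sum = $pdo->query(
    'SELECT COALESCE(SUM(rating), 0) FROM person_rating WHERE person_id = 1'
)->fetchColumn();
echo "net rating: $sum\n";            // net rating: 1
```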
  4. Agree with chhorn: put it in the person table, unless you want a historic record of avatars. Hopefully you are using the InnoDB engine for all your MySQL tables; if not, you can run ALTER TABLE to change the engine from the MyISAM default. With InnoDB, when you SELECT a row by the primary key, the read includes all the row data anyway, since InnoDB has clustered indexes. This type of micro-optimization isn't really worth dwelling on when designing a schema, but you can sleep well at night knowing there's no significant performance cost to putting it in the person table.
  5. There are many different ways to handle this. Personally I would probably use Redis hashes for this, or a MongoDB collection, but if you don't have those options, then one of the simplest and cleanest options is SQLite3. There is full support for SQLite via a node library, and it has essentially no footprint other than the underlying C library. Tables can be persisted to the filesystem or kept in memory; I see no reason not to persist them to a file. You would need a simple table keyed by robot name with the following fields: robot (text), user (text), from (integer), to (integer). The from and to columns would be timestamp values. Personally I'd find websockets nicer than polling, but either way the client code will query the db for a particular robot where from < current timestamp < to. If any rows come back, you check whether the user is the current user; if not, display "user x currently has this robot". The entire table could also be queried to show which robots are currently assigned to which users. If a user wants to assign themselves an available robot, they simply insert a new row with the current time as from, and a to some reasonable time in the future that syncs with your client "checkin" ajax code; 5 or 10 minutes might be a good starting point. The "checkin" code would query the current user's robot row and update the to, extending it by 5 or 10 minutes at each checkin. The advantage of this is that any allocation expires if the user doesn't release the robot properly or their page dies. SQLite will handle concurrency and locking for you via transaction support (wrap your select and insert/update activity in a transaction). To release an allocation, you'll just need a routine that deletes the row for that robot. You might also want another table for the robots themselves, and even use a standard relational 1-M model with an id for a key, but that is up to you.
Somehow the code needs to know the full list of available robots, and you might as well use SQLite for both problems. To keep the database pruned and small, I'd suggest a small program run from cron that cleans up any expired robot allocation rows (current timestamp > to), at whatever interval you like. There might be some value in keeping a historic record of robot allocations, which pruning old rows would remove.
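The claim-and-check flow above can be sketched as follows. The post suggests a Node SQLite library; this illustration uses PHP with PDO/SQLite (the language used elsewhere in this thread), and the table name, function name, and timeout are assumptions.

```php
<?php
// Sketch of the robot allocation scheme: one row per held robot,
// with "from"/"to" timestamps bounding the allocation.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE robot_allocation (
    robot  TEXT PRIMARY KEY,
    user   TEXT NOT NULL,
    "from" INTEGER NOT NULL,
    "to"   INTEGER NOT NULL
)');

// Claim a robot for 10 minutes. The availability check and the insert
// run inside one transaction so two users cannot both claim the robot.
function claimRobot(PDO $db, string $robot, string $user, int $now): bool {
    $db->beginTransaction();
    $q = $db->prepare('SELECT user FROM robot_allocation WHERE robot = ? AND "to" > ?');
    $q->execute([$robot, $now]);
    if ($q->fetchColumn() !== false) {        // an active allocation exists
        $db->rollBack();
        return false;
    }
    // prune any expired row for this robot, then take it
    $db->prepare('DELETE FROM robot_allocation WHERE robot = ?')->execute([$robot]);
    $db->prepare('INSERT INTO robot_allocation (robot, user, "from", "to") VALUES (?, ?, ?, ?)')
       ->execute([$robot, $user, $now, $now + 600]);
    $db->commit();
    return true;
}

$now = time();
var_dump(claimRobot($db, 'r2d2', 'alice', $now));  // bool(true)
var_dump(claimRobot($db, 'r2d2', 'bob',   $now));  // bool(false): alice holds it
```

The "checkin" code would simply UPDATE the "to" column for the current user's row, and the cron pruner would DELETE rows where "to" is in the past.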
  6. What's missing from your list of tables is the one that describes the relationship between a member/user and the plan version. Barand is a SQL expert bar none; do what he said, with the only caveat that the table relating a member to a plan version would be required to determine whether a user should be able to access an article. That should 100% be done with a single query that involves JOINs. As we discussed previously, with an adequate InnoDB buffer pool, repeated queries will come from the buffer pool cache, so there would be essentially no disk reading by the MySQL server.
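As a sketch (every table and column name here is an assumption, since the actual schema isn't shown), the single-query access check might look like:

```sql
-- Does member ? have a plan version that grants access to article ??
SELECT COUNT(*) AS has_access
FROM member m
JOIN member_plan mp           ON mp.member_id = m.id
JOIN plan_version pv          ON pv.id = mp.plan_version_id
JOIN plan_version_article pva ON pva.plan_version_id = pv.id
WHERE m.id = ? AND pva.article_id = ?;
```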
  7. All the DBs I listed use the same basic indexing technology, and indexing = query performance. In general, no traditional business would do what you plan to do, even though it is very common for small businesses running internet sites: monolithic servers just aren't used for any of the commercial DBs. The RDBMS world assumes the RDBMS runs on a dedicated server with attached storage, or perhaps a NAS or SAN. The majority of available server RAM is allocated to the DB and nothing else runs on it. You do not want a scenario where the database engine itself gets swapped to disk so that some other set of programs can run; in your monolithic (one-server LAMP?) setup, you will not be insulated from that without careful tuning and configuration, and most probably it will happen at some point. The other rule of thumb for system tuning is that initially your scalability issues will bottleneck at application server RAM. With PHP that is either Apache/mod_php or php-fpm; in other words, the client connections/PHP threads will bottleneck before the DB queries will. This is typically why you want a cluster of application/web servers and a dedicated DB server. If you find you need to add application server #2, does your application even work? How will load be balanced? This affects a number of application-level configuration and code decisions you might make; for example, to start with, how will you scale sessions? It is much easier to begin with a basic architecture that has a degree of scalability than to figure out how to fix things when your business no longer works, or is inaccessible due to load that's crashing it, and your only option is to move everything to a larger server with more resources. Business system design has often historically depended on moving some portion of business rules into the database as trigger/sproc code.
Oracle and Sybase/MS SQL Server (SQL Server is a licensed fork of Sybase) built their reputations on the capability and performance of sprocs and triggers. Since you haven't mentioned those, this seems like a non-factor. One of the other things important to you, since your codebase is in PHP, is client library support in PHP. MySQL connections are very lightweight and performant compared to something like Oracle. This works very well with typical PHP models where connection pools are not important. The Oracle connection process is not lightweight: once an Oracle connection is made, typically one or more sessions are used, so it's a very different paradigm that doesn't match PHP well. This is why you usually see Oracle paired with enterprise Java server technology, where a pool of db connections is maintained in the Java application server(s). I don't think your assessment of PostgreSQL is correct; it is certainly very active and growing. ISPs prefer MySQL because of its simple installation and small footprint, which is the same reason it was initially used by Facebook and Wikipedia. With that said, I'm by no means suggesting you switch to PostgreSQL without a strong driving reason to do so. I would stay with MySQL, in whatever flavor you'd like: the three that people are using (Oracle MySQL, MariaDB or Percona) will all work for you, and with the latest of each there isn't a huge advantage right now for one over the others from a performance or technology standpoint. The other things that often concern businesses are backup/recovery and fault tolerance. Another important reason to use InnoDB involves recovery from crashes. What happens when MySQL with MyISAM crashes? How do you get things back up and running? I recommend scanning the answers to this dba.stackexchange question. The key fundamental to understand is the transaction log: something that Oracle, SQL Server, PostgreSQL and InnoDB share.
How much data loss can your business tolerate? Hosting companies often provide only limited fault tolerance. I have had, for example, an entire AWS EBS volume die with no recovery possible; over a holiday weekend, a Linode VPS had its storage die and the entire server had to be migrated. If this were important business data, there could very well have been data loss. From what you've alluded to, you have a SAAS business of some sort, with access to features via paid memberships. I personally would not advocate launching this on a monolithic server; instead, I would build it on a small cluster. You could start with the smallest 2-machine cluster available: rather than running on one "large" 8 GB server, start with one medium server allocated to MySQL and one medium application server. You can begin with DNS round robin to balance load across additional application servers as needed and migrate to a real load balancer in the future. Monolithic servers are ok for hobby or marketing sites, blogs and other non-transactional sites that are 99% read-only. If you have transactional systems with a mix of read/write transactions, you are starting out with an essentially non-scalable architecture. The only way to know for sure how much capacity you have, or how your system will break, is to load test, but rarely is this task performed.
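On the session-scaling question raised above, one common approach is to move PHP sessions out of the web server's filesystem so that any application server in the cluster can serve any request. A sketch using the phpredis session handler (the handler and settings are real phpredis features; the host, port, and choice of Redis are illustrative assumptions):

```ini
; php.ini fragment: store sessions in a shared Redis instance
; instead of local files, so requests can hit any app server.
session.save_handler = redis
session.save_path = "tcp://10.0.0.5:6379"
```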
  8. MySQL/InnoDB is highly performant, which is one of the reasons (along with simplicity) it continues to be used. All RDBMSs depend on indexing, and the fundamental index type they all provide and implement is the B-tree index. FWIW, I have extensive experience with Sybase/SQL Server, Oracle and MySQL. MySQL is unique in that it has alternative engines, as well as forks (MariaDB, Percona). PostgreSQL is often preferred by people coming from Oracle, as it is very similar down to the database objects, datatypes and the stored procedure/trigger language (PL/SQL) that have been foundations of Oracle use through the years. Sybase/SQL Server were also popularized through their stored proc/trigger language, T-SQL. Comparatively, MySQL didn't even have stored procedures/triggers until version 5.x, beta'd in 2005. The bigger question in the open source world is not MySQL vs. the commercial DBs but MySQL vs. PostgreSQL. More recently, many RDBMS alternatives have come into the mainstream under the NoSQL or document DB monikers, most notably MongoDB and CouchDB. Those address concerns about scalability, and you have no reasonable expectation that scalability will be an issue for you that can't be solved with a bigger server.
  9. Mocks are really the foundation of unit testing, with the ultimate goal of improving code coverage. The individual tests themselves have intrinsic input/output, or in some cases a mock is loaded with essential input. I guess the bigger question to ask yourself is: what is being mocked, and why?
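As a minimal hand-rolled illustration of the idea (no mocking framework; the interface and class names are invented for this example): the mock stands in for a dependency so the unit under test runs in isolation, and it records its inputs so the test can assert on them.

```php
<?php
// The dependency we don't want to exercise for real in a unit test.
interface Mailer {
    public function send(string $to, string $body): bool;
}

// Unit under test: we verify it calls the mailer correctly,
// not that mail is actually delivered.
function notifyUser(Mailer $mailer, string $email): bool {
    return $mailer->send($email, 'Your account is ready.');
}

// The mock records its input and returns a canned result.
class MockMailer implements Mailer {
    public array $sent = [];
    public function send(string $to, string $body): bool {
        $this->sent[] = [$to, $body];
        return true;   // canned "success": no real mail involved
    }
}

$mock = new MockMailer();
assert(notifyUser($mock, 'a@example.com') === true);
assert($mock->sent === [['a@example.com', 'Your account is ready.']]);
echo "ok\n";
```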
  10. In phpMyAdmin, when you look at the list of tables for a database, the "type" column shows you the engine. If you did not explicitly set the engine to InnoDB, you will likely have all tables using the default MyISAM engine. You'll need to alter each table with a SQL statement: ALTER TABLE TableName ENGINE = InnoDB; MyISAM is not ACID compliant. It also will have thrown away any foreign key constraints, so if you had or wanted those, you'd need to recreate them individually after altering the tables to InnoDB. Correct, for InnoDB tables: what that means from a performance standpoint is that when a query uses the primary key index, the entire row is read in the same operation that searched the index. I only bring this up since you asked about performance, and it is one of the many reasons MySQL DBAs are fairly unanimous in favoring InnoDB.
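For example, converting a table and then recreating a discarded foreign key (the table, column, and constraint names here are hypothetical):

```sql
ALTER TABLE order_item ENGINE = InnoDB;

-- Recreate a foreign key that MyISAM silently discarded:
ALTER TABLE order_item
  ADD CONSTRAINT fk_order_item_order
  FOREIGN KEY (order_id) REFERENCES orders (id);
```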
  11. If your structure is normalized correctly you should be fine. Make sure your tables use the InnoDB engine. Any query that can be satisfied by the primary key index returns all the data in the record by reading the index itself; this is because InnoDB has "clustered indexes", a fancy term meaning the data is actually stored in the primary key index. All primary and foreign keys are inherently indexed, so the main concern regarding other required indexes is queries whose WHERE clauses aren't on keys in the table definition. This is the single most important detail: your queries need to use a relevant index. An obvious example would be the username and password columns in a user table. Essentially, focus on the WHERE clauses of your queries and make sure you have an index to handle each one. For example, if you always query for a user by username and password, have one composite index on (username, password), not individual indexes on username and password. If you have a monolithic server (webserver, PHP and MySQL running on one box), you have to plan for contention: MySQL needs some tweaking to provide a sufficient InnoDB buffer pool allocation. With a small DB like you describe, your entire dataset can be in the buffer pool cache the majority of the time.
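The composite index advice above, as a sketch (table and index names are illustrative):

```sql
-- One composite index covering the whole WHERE clause,
-- rather than separate indexes on each column:
CREATE INDEX idx_user_login ON user (username, password);

-- This query can then be satisfied by idx_user_login:
SELECT id FROM user WHERE username = ? AND password = ?;
```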
  12. Probably one of the most common misunderstandings for people starting out with the concept of "SEO friendly" URLs. You are definitely not alone, but kudos for figuring it out on your own.
  13. Phi11w provided an excellent and thorough analysis. In particular, consider this data for $admin:

```php
<?php
$admin = ['fullname' => 'Sally Smith', 'gender' => 'F'];

function getAdminName() {
    global $admin;
    return ($admin['gender'] ? 'Mr' : 'Mrs') . ' ' . $admin['fullname'];
}

echo getAdminName();
```

Do you get what you would expect for a gender 'F' person named 'Sally Smith'? While all of Phi11w's points are valid, I think this is the main thrust of the issue: even without knowing the scheme to be used, it is unlikely that gender would be a boolean value. Another issue being glossed over is that 'Mrs' carries the connotation of marriage, and there is also Barand's point about 'Dr' and other salutations, but that makes things more complicated, and I don't think it was factored into the question, because the existing code has an obvious data typing issue. So an improved function (assuming PHP 7):

```php
<?php
$admin1 = ['fullname' => 'Sally Smith', 'gender' => 'F'];
$admin2 = ['fullname' => 'Randy Jones', 'gender' => 'm'];
$admin3 = ['fullname' => 'Pat Samuels'];

function getAdminName($admin) {
    $admin['fullname'] = trim($admin['fullname'] ?? 'unknown');
    $admin['gender'] = $admin['gender'] ?? '';
    switch (strtolower($admin['gender'])) {
        case 'f':
            $pre = 'Mrs ';
            break;
        case 'm':
            $pre = 'Mr ';
            break;
        default:
            $pre = '';
    }
    return $pre . $admin['fullname'];
}

echo getAdminName($admin1) . PHP_EOL;   // Mrs Sally Smith
echo getAdminName($admin2) . PHP_EOL;   // Mr Randy Jones
echo getAdminName($admin3) . PHP_EOL;   // Pat Samuels
```
  14. gizmola

    PHP update

    First query, as ginerjm pointed out: SET status = ?, comment = ?, department = ? WHERE id = ? has four '?' placeholders, but your bind string is 'ssi' (only three type characters).
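A quick runnable check of that mismatch (the table name 'records' is an assumption, since the original query's table isn't shown; the fix is a four-character type string such as 'sssi'):

```php
<?php
// The bug: four '?' placeholders but a three-character bind string.
$sql   = 'UPDATE records SET status = ?, comment = ?, department = ? WHERE id = ?';
$types = 'ssi';                        // the original, broken type string

$needed = substr_count($sql, '?');     // count the placeholders
echo $needed . "\n";                   // 4
var_dump(strlen($types) === $needed);  // bool(false): mysqli will reject this

// Fix: one type character per placeholder, e.g.
// $stmt->bind_param('sssi', $status, $comment, $department, $id);
```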
