
falltime

New Members
  • Posts

    4
  • Joined

  • Last visited

    Never

Profile Information

  • Gender
    Not Telling

falltime's Achievements

Newbie (1/5)

Reputation: 0

  1. It is sort of a strange question, but I'm wondering what a few gurus would consider the best (efficiency-wise) way to write a DB abstraction class for MySQLi. I have a number of DB abstraction classes that I've written in the past that are loaded dynamically by a class factory. Although I've generally just used MySQL, it's comforting to know I could switch to most other DBs without much of a problem. My original MySQL abstraction was pretty basic: abstracted common methods were used to call the MySQL procedural functions. Nothing real fancy, simply because there is only one obvious approach. With MySQLi, however, I'm a bit perplexed, mainly because of the different approaches one can take when creating the DB class:

     1. I could create a class that simply calls the MySQLi procedural functions and stores the link identifier within the class - essentially the exact same way I created the MySQL class.
     2. I could extend the MySQLi class and translate the methods so that the abstracted common methods basically recall the corresponding parent MySQLi class methods.
     3. I could write a class that calls the OO methods instead of the procedural methods (similar to option 2, but with no extension). A sketch of this option follows below.

     For some reason options 2 and 3 just seem a bit off to me - I'm under the slight impression that the MySQLi class itself is just doing what I would do in option 1, and it feels like I'd be going through two layers.
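     A minimal sketch of option 3, composing the mysqli object rather than extending it. The class and method names here are hypothetical, not from the original post:

        class MysqliDriver {
            private $link; // the composed mysqli instance

            public function __construct($host, $user, $pass, $db) {
                $this->link = new mysqli($host, $user, $pass, $db);
                if ($this->link->connect_errno) {
                    throw new Exception('Connect failed: ' . $this->link->connect_error);
                }
            }

            // Abstracted method shared by every driver the factory can load.
            public function query($sql) {
                $result = $this->link->query($sql);
                if ($result === false) {
                    throw new Exception('Query failed: ' . $this->link->error);
                }
                return $result;
            }

            public function escape($value) {
                return $this->link->real_escape_string($value);
            }
        }

     Composition keeps the abstracted interface separate from mysqli's own method names, so the factory can hand back any driver with the same query()/escape() surface. Extending mysqli (option 2) also works, but the wrapper then exposes the full mysqli API alongside the abstracted one.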
  2. For several of my clients, I've always just reused a great session class that I wrote awhile ago, and it has always served me well. It implemented standard PHP session management and set basic cookies for long-term persistent access. I've now moved on to a much larger web app project that must be scalable, particularly in terms of performance. I've been doing a bit of reading on the common problems with PHP sessions when scaling up to large server farms: because each server stores its session data locally by default, additional expensive software is required to centralize persistent session data, and it adds quite a bit of performance overhead. So I've considered rewriting my session class to drop PHP sessions altogether and just work with cookies. It sounds like this is a fairly common alternative, although I'm a bit wary of the obvious potential security issues (cookie theft, etc.) - a signed-cookie sketch follows below. I know you can set a cookie expiration to 0 so that it expires as soon as the browser is closed, but from my experience it doesn't quite work the same way as a session: every browser window must be closed (at least in FF) for the cookie to fully expire, as opposed to just the specific website's browser window (in the case of sessions).
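     A minimal sketch of the cookie-only approach with an HMAC signature, assuming a server-side secret key (the key and cookie name are hypothetical). Signing makes the cookie tamper-evident, but it does nothing against theft, so sensitive data still shouldn't go in it:

        $secret = 'some-long-random-server-side-key'; // never sent to the client

        // Issuing the cookie; expiration 0 = expires when the browser closes.
        $payload = base64_encode(serialize(array('username' => 'falltime')));
        $sig = hash_hmac('sha256', $payload, $secret);
        setcookie('auth', $payload . '|' . $sig, 0, '/');

        // Verifying it on a later request, before trusting any of the data.
        if (isset($_COOKIE['auth'])) {
            list($payload, $sig) = explode('|', $_COOKIE['auth'], 2);
            if (hash_hmac('sha256', $payload, $secret) === $sig) {
                $data = unserialize(base64_decode($payload));
            }
        }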
  3. I've been coding in PHP for over 6 years now, and I've always just instantiated the class object and assigned it to a variable within the same PHP file. I never had any issues with this approach, and up until very recently I never considered there to be any issue. However, after stumbling upon the Singleton design pattern I've become a bit confused, simply because while I understand the "what" and "how" of it, I've failed to fully grasp the "why." I guess all I am trying to ask is what the advantages are of implementing the Singleton design pattern over my current method. Assume this is session.php:

        class Session {
            var $username; // Username given on sign-up
            var $time;

            function Session() {
                $this->time = time();
                $this->username = "falltime";
            }
        }

        $session = new Session;

     I then use require_once("../include/session.php"); in scripts where I need it, and it is automatically instantiated and assigned. Frankly, I don't see how and where I would ever have a problem with multiple class objects being created, since require_once would (of course) only be called once.
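     For comparison, a minimal Singleton version of the same class (PHP 5 syntax). The difference is that the class itself guards against a second instance, instead of relying on require_once and a shared global variable:

        class Session {
            private static $instance = null;
            public $username;
            public $time;

            private function __construct() { // private: "new Session" fails outside the class
                $this->time = time();
                $this->username = "falltime";
            }

            public static function getInstance() {
                if (self::$instance === null) {
                    self::$instance = new Session();
                }
                return self::$instance;
            }
        }

        $session = Session::getInstance(); // every caller gets the same object

     The practical advantage shows up in scope: Session::getInstance() can be called from inside any function or class method without a "global $session;" declaration, which is where the require_once-plus-global approach tends to break down.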
  4. I have a database that is used to store information for a gaming league - particularly game-by-game statistics. The purpose of this database is to:

     1. Store information and statistics for each game played in the league.
     2. Store individual game-by-game statistics for each player in the league - stats such as kills, deaths, hero, level, creep kills, etc.

     I have two tables for the statistics. A game table:

        CREATE TABLE `games` (
          `HOST` varchar(16) NOT NULL default '',
          `GID` mediumint(8) unsigned NOT NULL auto_increment,
          `LOBBYID` int(10) unsigned NOT NULL default '0',
          `REALM` tinyint(2) unsigned NOT NULL default '0',
          `team1Kills` tinyint(3) unsigned NOT NULL default '0',
          `team2Kills` tinyint(3) unsigned NOT NULL default '0',
          `TYPE` tinyint(3) unsigned NOT NULL default '0',
          `GAMETIME` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
          `lastReport` mediumint(10) unsigned NOT NULL default '0',
          UNIQUE KEY `GID` (`GID`),
          KEY `TYPE` (`TYPE`),
          KEY `HOST` (`HOST`),
          KEY `LOBBYID` (`LOBBYID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=468;

     A player table:

        CREATE TABLE `players` (
          `GID` mediumint(8) unsigned NOT NULL default '0',
          `PLAYER` varchar(16) NOT NULL default '',
          `kills` tinyint(3) unsigned NOT NULL default '0',
          `deaths` tinyint(3) unsigned NOT NULL default '0',
          `ckills` smallint(3) unsigned NOT NULL default '0',
          `cdenies` tinyint(3) unsigned NOT NULL default '0',
          `color` tinyint(2) unsigned NOT NULL default '0',
          `hero` varchar(7) NOT NULL default '',
          `level` tinyint(2) unsigned NOT NULL default '0',
          `REALM` tinyint(2) unsigned NOT NULL default '0',
          KEY `GID` (`GID`),
          KEY `PLAYER` (`PLAYER`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

     When a game is created, a row is inserted into the game table with all of the pertinent game information. Then a row for each player in the game (generally 10) is inserted into the player table. The GID key is used to link the two tables. (A sample query against this structure follows below.)

     Initially this appeared to be the best way to structure the database - I didn't see any other way to easily track statistics for each player on a game-by-game basis for efficient queries later. However, it has occurred to me that the player table is going to balloon extremely quickly. I can assume that at least 100 games get played a day in the league. Generally there are 10 players in each game, so that comes out to about 1,000 new rows in the player table a day - 7,000 a week. And 100 games a day is a rather light estimate.

     I have a lot of experience designing databases for small companies that store information on a rather simple, small-scale basis, but I have never dealt with anything on such a large scale, and I have absolutely no idea how the database will hold up. Maybe it will be totally fine; maybe the structure is as efficient as it can be. If anyone has any advice or pointers to effectively optimize this structure, keeping my initial purposes in mind, I would love to read them. If the structure is fine, and 7,000-10,000 new rows a week for a single table is perfectly manageable, I would like to know that too. Thanks for any help!
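     For reference, a hypothetical example of the kind of aggregate query this structure has to serve - career totals per player, run through mysqli (the credentials and stat choices are illustrative, not from the original post):

        $mysqli = new mysqli('localhost', 'user', 'pass', 'league'); // hypothetical credentials

        $sql = "SELECT p.PLAYER,
                       COUNT(*)      AS games_played,
                       SUM(p.kills)  AS total_kills,
                       SUM(p.deaths) AS total_deaths
                FROM players AS p
                JOIN games AS g ON g.GID = p.GID
                GROUP BY p.PLAYER
                ORDER BY total_kills DESC
                LIMIT 25";

        $result = $mysqli->query($sql);
        while ($row = $result->fetch_assoc()) {
            echo $row['PLAYER'] . ': ' . $row['total_kills'] . " kills\n";
        }

     With KEY `GID` on players and UNIQUE KEY `GID` on games, the join condition can be resolved from the indexes rather than by scanning the full player table as it grows.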