Everything posted by gizmola

  1. PHP has sessions. You can store an array in a session. In order to start using sessions, you have to start them at the beginning of a script:

     session_start();

     You can then store data in a session variable by assigning it to the $_SESSION superglobal. An HTML form that makes a PHP script its target with method POST will have any form variables available in the $_POST superglobal. So if you had, let's say, an input named "tableinput", then you would get its value from $_POST['tableinput']. So your script could do something like this:

     // Output form here
     if (isset($_POST['tableinput'])) {
         $_SESSION['tabledata'][] = $_POST['tableinput'];
     }

     // Output table here:
     foreach ($_SESSION['tabledata'] as $row) {
         // output $row inside a tr
     }

     That is the framework for what I'd suggest. Self contained, and each time you submit, it will add an element to the session variable, which will then drive the output of your table. A fuller sketch follows this post.
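     A minimal self-contained sketch of that pattern, assuming the form posts back to the same script and "tableinput" is the field name (both assumptions):

     <?php
     // Minimal sketch: accumulate submitted values in the session and redisplay them.
     // Assumes the form posts back to this same script; "tableinput" is a hypothetical field name.
     session_start();

     if (!isset($_SESSION['tabledata'])) {
         $_SESSION['tabledata'] = array();
     }

     if (isset($_POST['tableinput']) && $_POST['tableinput'] !== '') {
         $_SESSION['tabledata'][] = $_POST['tableinput'];
     }
     ?>
     <form method="post" action="">
         <input type="text" name="tableinput">
         <input type="submit" value="Add row">
     </form>
     <table>
     <?php foreach ($_SESSION['tabledata'] as $row): ?>
         <tr><td><?php echo htmlspecialchars($row); ?></td></tr>
     <?php endforeach; ?>
     </table>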
  2. It's very simple:

     Warning: Invalid argument supplied for foreach() in /home/birdneto/public_html/rebelfrogv2/p_createaccount.php on line 169

     That indicates that the foreach loop on that line expects an array to iterate over, but what it was given is not an array. This is not necessarily an error, depending on the nature of your code. However, if you want to catch it in advance, you could use is_array() (see the sketch below). Also, in a production environment people will typically set the error level so that warnings are not triggered. In the future you should identify for people which line the error is pointing to. See error_reporting.
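     A minimal sketch of guarding the loop with is_array(); $accounts is a hypothetical stand-in for whatever your code builds before line 169:

     <?php
     // Guard the foreach so a non-array value does not raise the warning.
     // $accounts is hypothetical; here it simulates a lookup that failed and
     // returned false instead of an array.
     $accounts = false;

     if (is_array($accounts)) {
         foreach ($accounts as $account) {
             echo $account, "\n";
         }
     } else {
         // Handle the failure case instead of letting foreach emit a warning.
         echo "No accounts to display.\n";
     }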
  3. felito: Is it too hard for you to type in www.php.net/microtime? Maybe you don't realize that the PHP manual works that way --- you put the name of the function you want to read about in the URL. There is a uniqid() function. That function uses microtime to seed a unique id generation algorithm. Also... read the responses to your question. Xyph took the time to reply to you with an example that answered your question about formatting of a time value. Did you read it? If you want an id, then you are going to get something that has the property of uniqueness. It is going to be a string, not a date/time value. Do you want a date/time value or a unique string?
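     A quick illustration of the difference, assuming the goal is either a unique string or a formatted date/time:

     <?php
     // uniqid() gives a unique string (seeded from the current microtime), not a date.
     $id = uniqid();
     echo $id, "\n";             // e.g. "5f2b8c3e4d1a7" -- a string, no readable date in it

     // If what you actually want is a formatted date/time value, use date() instead.
     echo date('Y-m-d H:i:s'), "\n";

     // microtime(true) returns the current Unix timestamp with microseconds as a float.
     echo microtime(true), "\n";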
  4. Not clear on what you're asking for. You can change the value of the data in $_POST:

     $_POST['name'] = ucwords($_POST['name']);

     or you can just echo it out where you need it:

     echo ucwords($_POST['name']);

     The best approach depends on what you're doing with the data.
  5. You never set $logged_in to be anything other than false. I'm not really sure why you are creating a class variable at all, given that the session class's get/set strategy relies on session variables. You also have no login method.

     include("includes/functions.php");

     class Session
     {
         // set session key
         public function set($key, $value)
         {
             $_SESSION[$key] = $value;
         }

         // getting session
         public function get($key)
         {
             if (isset($_SESSION[$key])) {
                 return $_SESSION[$key];
             } else {
                 return false;
             }
         }

         // check if logged in
         public function confirm_logged_in()
         {
             if (!$this->get('logged_in')) {
                 redirect_to("login.php"); // a tailored method
                 exit();
             }
         }

         public function login()
         {
             // Do whatever you need here to validate login -- if true
             $this->set('logged_in', true);
         }

         public function logout()
         {
             session_start();
             session_unset();
             session_destroy();
         }
     }

     $session = new Session();
     ?>
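     A short usage sketch of the class above, assuming session_start() has already been called and redirect_to() is defined in includes/functions.php as the original code implies:

     <?php
     // Hypothetical usage of the Session class above.
     session_start();

     $session = new Session();

     // After validating a username/password however you like:
     $session->login();              // sets $_SESSION['logged_in'] = true

     // On any page that requires authentication:
     $session->confirm_logged_in();  // redirects to login.php if not logged in

     // When the user signs out:
     $session->logout();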
  6. This thread is comedy gold.
  7. When you use an array with single quotes around the key name inside an interpolated (double-quoted) string, you have to put curly braces around the variable:

     mail('memzenator@gmail.com', 'maybearobot: ' . $_GET['organization'], "Name: {$_GET['name']}\n\nReturn Email: {$_GET['email']}\n\n{$_GET['message']}");
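     A small illustration of the rule, using hypothetical values in place of the $_GET data:

     <?php
     // Hypothetical data standing in for $_GET, just to show the interpolation rule.
     $data = array('name' => 'Jane Doe', 'email' => 'jane@example.com');

     // Without braces, a quoted key inside a double-quoted string is a parse error:
     // echo "Name: $data['name']";   // syntax error

     // With curly braces, the array element interpolates correctly:
     echo "Name: {$data['name']}\nEmail: {$data['email']}\n";

     // Concatenation works too, if you prefer to avoid interpolation entirely:
     echo 'Name: ' . $data['name'] . "\n";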
  8. I'm basically with Requinix and KingPhilip on this. I don't recommend serializing data in most cases. People should only do it when they have a clear design objective. One example I can think of is in the Joomla CMS world: there is an extension called K2 that basically grafts a custom article system onto Joomla. They wanted to provide people with the ability to store user-defined fields. Because it's a CMS, they needed to avoid adding tables or altering table structure, so when you define custom fields, that simply drives the display of the forms, and any data actually recorded for the custom-defined fields gets stored as a serialized array. That's a specialized case where the tradeoff makes sense. They don't really make it clear to people that these user-defined fields come with a price, but for most people, the application pretty much guarantees that the entire table will never be larger than a few hundred rows. I should also add that there are well-known NoSQL databases like MongoDB that can offer you this same sort of flexibility. If serializing data is looking really good to you, maybe you would be better off with a NoSQL db.
  9. I have developed lots of CLI programs that run from cron. There is also at. However, when you use the word "background", this has a special meaning. I have lots of tasks that are started by cron and then use exec. The secret is to use the *nix shell's built-in ability to background a process using &:

     exec('/usr/bin/php -f somescript.php &');
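     A slightly fuller sketch of the same idea; redirecting output is a common addition so the calling script does not sit waiting on the backgrounded process. The script path and log path are hypothetical:

     <?php
     // Launch a long-running PHP script in the background from a web request or cron job.
     // somescript.php and the log path are hypothetical examples.
     $cmd = '/usr/bin/php -f /path/to/somescript.php';

     // Redirect stdout/stderr so exec() returns immediately instead of waiting for the
     // child process's output stream to close; & backgrounds it in the shell.
     exec($cmd . ' > /tmp/somescript.log 2>&1 &');

     echo "Background job started.\n";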
  10. CMSs are designed for people who have a publishing application. There is nothing a CMS is going to offer you that is useful given your description and requirements. If I understand you, this is a scheduling application? The two most used LAMP CMS systems are Joomla and Drupal. They both have user login systems and have commerce and subscription management modules available. For the scheduling of the machine pool, you're going to need something custom with either one. In other words, you'll need code, and understanding how to develop your own module in either CMS is non-trivial even for experienced PHP developers.
  11. Serializing data and storing that is fine so long as you KNOW you will never query for any of the serialized data -- in other words, when you have a bunch of data you only need to store, read out, and display as a unit when presenting it to a user.

      Here's an example of when it would be really bad. Let's say you have a list of flags for user communication preferences:

      Newsletter: 1
      Weekly_Newsletter: 0
      Can_PM: 1
      Show_email: 1

      So you store this as an array, perhaps serialized as a JSON string, in a database column named "flags". Now your system needs to run the weekly email newsletter process. Oops, the query can't reference the Weekly_Newsletter value because it's packed into flags, so the database will have to table scan (read every single row).

      The meta table design suggested by Requinix is advantageous in this case, because even with a separate column you will probably hit the "low cardinality" issue: even though you have an index on the column, because there are only 2 values (0, 1 or T, F) it's possible that MySQL will table scan anyway, deciding that use of the index is not efficient. The meta design makes it more likely that an index will be employed because there are, ironically, many more rows in the table for each of the different meta types you're storing. I have found a meta design works particularly well when only a percentage of the total universe will have a particular meta type stored. So if you ultimately end up with 100k users and only 10% actually update their facebook account, that is where the advantages of a meta design outweigh the disadvantages of having a separate table.

      The other advantage of a meta type design is when you want to be flexible in terms of the information you will be storing. With a meta design you are able to develop a system where you can literally add a new meta type in the metatype table and the application can instantly start storing that information without you changing a single line of code. Your form code becomes more complicated because the fields on the form have to be driven by the metatype table, but it can be done. With a traditional subtype table (user_profile) you have to alter the table structure and related queries when you want to change something. (See the sketch below this post for what the meta tables might look like.)

      This does not mean, however, that applying KISS doesn't make sense for you. You have to weigh the importance of this against your other priorities, and implementing a meta table design is more complex.
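      A rough sketch of what the meta tables and the weekly newsletter query might look like, via PDO; all table and column names here are hypothetical illustrations, not the design Requinix proposed verbatim:

      <?php
      // Hypothetical meta-style tables and a query against them, via PDO.
      // Table and column names are illustrative only.
      $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');

      // user_metatype: one row per kind of attribute (e.g. 'weekly_newsletter', 'facebook').
      // user_meta: one row per user per attribute actually set, so sparse data stays small.
      $pdo->exec("
          CREATE TABLE IF NOT EXISTS user_metatype (
              metatype_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
              name VARCHAR(64) NOT NULL UNIQUE
          )");
      $pdo->exec("
          CREATE TABLE IF NOT EXISTS user_meta (
              user_id INT UNSIGNED NOT NULL,
              metatype_id INT UNSIGNED NOT NULL,
              value VARCHAR(255) NOT NULL,
              PRIMARY KEY (user_id, metatype_id),
              KEY idx_metatype_value (metatype_id, value)
          )");

      // The weekly newsletter run can now use an index instead of unpacking a serialized blob.
      $stmt = $pdo->prepare("
          SELECT m.user_id
          FROM user_meta m
          JOIN user_metatype t ON t.metatype_id = m.metatype_id
          WHERE t.name = 'weekly_newsletter' AND m.value = '1'");
      $stmt->execute();
      $subscribers = $stmt->fetchAll(PDO::FETCH_COLUMN);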
  12. select m.member_id
      from members m
      join purchases p on (m.member_id = p.member_id AND p.product_id not IN (14,42,29,86,43,9,44,12,45))
      where level_id = 3 and is_active
      group by member_id;
  13. This topic has been moved to Miscellaneous. http://www.phpfreaks.com/forums/index.php?topic=345786.0
  14. People often do what you are proposing when they expect that they will have a large number of rows. A large number of rows is when you get up past tens of millions of rows. Will you have 10 million users in your user table? Yes, MyISAM uses table locking; however, this table locking is extremely fast. There is, however, nothing preventing you from using InnoDB instead, which does row-level locking and also has performance-related improvements like clustered indexes and results caching similar to what other well-known commercial RDBMSs like Oracle and SQL Server have. InnoDB also has real recovery, so if your server locked up or lost power, you'd recover near instantaneously, versus MyISAM, which has to check the tables and indexes for consistency in a process that can take hours. You need to be brutally honest with yourself about the prospects you have for the number of users you can expect.
  15. It comes with a config.sample.inc.php file. You can use that as the basis for a config by making a copy and naming it config.inc.php. You will probably want to go with the cookie auth type, which prompts you for a password and saves it as an encrypted cookie. This is most likely what you want to have in the config file:

      $cfg['blowfish_secret'] = 'some_phrase_here'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
      //$cfg['ForceSSL'] = TRUE;

      /*
       * Servers configuration
       */
      $i = 0;
      $i++;

      /*
       * First server
       */
      $cfg['Servers'][$i]['verbose'] = 'servername';
      $cfg['Servers'][$i]['host'] = 'localhost';
      $cfg['Servers'][$i]['port'] = '';
      $cfg['Servers'][$i]['socket'] = '';
      $cfg['Servers'][$i]['connect_type'] = 'tcp';
      $cfg['Servers'][$i]['extension'] = 'mysqli';
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      $cfg['Servers'][$i]['user'] = 'username';
      $cfg['Servers'][$i]['password'] = 'userpw';

      For security purposes I use ForceSSL and host this under https, but that may be beyond your ability to set up, so I commented out the relevant config line. You can use any mysql user you want, but I personally use the root user. The important thing is to have a valid username + password configured for this to work.
  16. phpMyAdmin is a set of php scripts. You simply need to put it into your webroot at some location, edit the configuration file and it is ready to use.
  17. Well yes, I'm in the US, but "loss" would not account for slowness like that unless there are major routing problems, or problems with your ISP. Do a bandwidth test using a site like dslreports.com or speedtest.net to determine what the situation is with your test connection. This sounds like something you'd need to take up with your ISP. Also make sure you are not confusing KBytes with Kbits: 60 KBytes/sec is actually 480 kbps. While that's nowhere near what your server clearly can deliver, it's better. Regardless, it doesn't appear to be an issue with your host's network or server, but something on the network you're using to connect to the server.
  18. Here's a wget as well:

      [root@penny david]# wget http://74.63.79.124/games/files/Combat3.zip
      --2011-10-11 15:28:42--  http://74.63.79.124/games/files/Combat3.zip
      Connecting to 74.63.79.124:80... connected.
      HTTP request sent, awaiting response... 301 Moved Permanently
      Location: http://www.hardgamez.com/games/files/Combat3.zip [following]
      --2011-10-11 15:28:42--  http://www.hardgamez.com/games/files/Combat3.zip
      Resolving www.hardgamez.com... 74.63.79.124
      Reusing existing connection to 74.63.79.124:80.
      HTTP request sent, awaiting response... 200 OK
      Length: 6664312 (6.4M) [application/zip]
      Saving to: `Combat3.zip'

      100%[=======================================================>] 6,664,312   1.88M/s   in 3.6s

      2011-10-11 15:28:46 (1.75 MB/s) - `Combat3.zip' saved [6664312/6664312]
  19. I just downloaded your test file at about 700k/sec. You are on a wild goose chase.
  20. This topic has been moved to Other Libraries and Frameworks. http://www.phpfreaks.com/forums/index.php?topic=345560.0
  21. I used a tool called Dezign for Databases to make the diagram. If you find the DDL useful, you can edit it to change the names to be whatever works best for you.

      Whether or not you use an event_id is really a pragmatic decision. Keys are needed to guarantee uniqueness; that is the first and most important thing. Whether you should use a defining relationship that results in a concatenated key depends on a lot of different things. Having done these types of applications a lot, I can tell you that getting a key collision on Start_DateTime is not that valuable. What is important is that your application takes into account both start and end ranges and ensures that, for a single venue, you are not going to overlap. Consider:

      Start_DateTime 1:00:00 pm
      Start_DateTime 1:00:01 pm

      No key problem there, but your application most certainly doesn't want to allow that 1:00:01 row to be added. So you are going to have procedural code that determines where there are holes in a schedule, requiring start + end regardless (see the sketch below this post). Given that situation, I'd suggest you opt for keeping the nuts and bolts of generating a key simple. The other reason to do so is that if you end up adding related tables to an event row, you'll save a lot of space by not having to repeat the entire key in order to make that relationship.

      The other practical matter is that you'll most likely be doing queries by venue and queries by show. Thus you will need indexes that cover your queries, and one single index will probably not be good enough for everything you need.

      With that said, you can certainly opt for the dependent relationships and have those be a concatenated key. It won't stop you from inserting rows that are injected into the time holes improperly, and your code will need to be entirely different from the code you would use to insert rows with auto_increment columns. You will need more complicated logic for the event table in either case, so my advice is to avoid the complexity of the concatenated key, but either approach is valid and frequently used.
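      A minimal sketch of the overlap check described above, via PDO; the event table and column names (event, venue_id, start_datetime, end_datetime) are hypothetical:

      <?php
      // Check whether a proposed booking overlaps an existing event at the same venue.
      // Table and column names (event, venue_id, start_datetime, end_datetime) are hypothetical.
      $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');

      function venueHasConflict(PDO $pdo, $venueId, $start, $end)
      {
          // Two ranges overlap when each one starts before the other ends.
          $sql = "SELECT COUNT(*)
                  FROM event
                  WHERE venue_id = :venue
                    AND start_datetime < :end
                    AND end_datetime > :start";
          $stmt = $pdo->prepare($sql);
          $stmt->execute(array(':venue' => $venueId, ':start' => $start, ':end' => $end));
          return $stmt->fetchColumn() > 0;
      }

      // Only insert the new event (with a simple auto_increment event_id) if no conflict exists.
      if (!venueHasConflict($pdo, 7, '2011-10-15 13:00:00', '2011-10-15 15:00:00')) {
          $stmt = $pdo->prepare("INSERT INTO event (venue_id, start_datetime, end_datetime)
                                 VALUES (:venue, :start, :end)");
          $stmt->execute(array(
              ':venue' => 7,
              ':start' => '2011-10-15 13:00:00',
              ':end'   => '2011-10-15 15:00:00',
          ));
      }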