Everything posted by gizmola

  1. Take out all of your suppressions (the @ sign before functions) and run it on your server. What error(s) are you getting?
  2. Yes, there are many ways to do what you want. One of the simplest, and at the same time highly performant, is to use a caching product. Seems like I've written volumes on this in the past, but the main ones to look at are APC and Redis. I'd probably recommend you start with Redis, because it has so many features that can come in handy, and it can use memory frugally. With the small amount of data you need to cache, it should be a very good fit for you. So, in summary, your code will become:

1. Request the serverlist data from Redis.
2. If found, display it. (Coming right out of memory, this will be lightning fast.)
3. If not found, get the list of game servers, then store this data to the cache with a time-to-live (TTL) setting you find acceptable. You can experiment with this setting to see what the sweet spot is for you in terms of freshness of data versus load on the game servers. Start with 60 seconds, probably.

The only fly in the ointment (other than installation and configuration of Redis on your server(s)) is that you need a PHP client. Fortunately there are 2 options: a pure PHP version (predis) and a PHP extension written in C (phpredis). In your case, go with phpredis if you can, though that can become an issue if you don't have the environment or sysadmin experience to navigate pecl, or any issues pecl might have compiling phpredis. I feel obligated to say that you could also just dump this stuff in a file on the filesystem, use filesystem calls to look at the age of the file, and build your own cache that way, but that's old school, and once you have Redis available you may find that some of its other features let you quickly get new information up and running for your users.
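The cache-aside flow above can be sketched as one small function. This is a sketch only: $redis is any object exposing phpredis-style get()/setEx() methods, and the $fetchServers callable is a hypothetical stand-in for whatever call retrieves the live game-server list.

```php
<?php
// Cache-aside sketch. $redis is expected to expose phpredis-style
// get()/setEx(); $fetchServers stands in for the expensive "get list
// of game servers" call (hypothetical, not from the original post).
function getServerList($redis, callable $fetchServers, int $ttl = 60): array
{
    $cached = $redis->get('serverlist');
    if ($cached !== false) {
        return unserialize($cached);      // hit: straight out of memory
    }
    $servers = $fetchServers();           // miss: do the expensive lookup
    $redis->setEx('serverlist', $ttl, serialize($servers));
    return $servers;
}
```

With phpredis you would pass a connected Redis instance; predis offers an equivalent client object.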
  3. Thanks for letting us know about your problems. We pretty much are aware of this issue, but it doesn't hurt to get feedback from people like yourself about how it's actually functioning (or not).
  4. There is an entire product category ("Business Intelligence tools") with a number of products that will work with structured data sets, typically your SQL data. Just a few off the top of my head: Crystal Reports, Cognos, Tableau, Hyperion, and Business Objects. Like Barand, I've been involved in the design and development of integrated query tools, but these were for large corporations that had very specific business needs and the budget to accommodate the development of these tools. There are 2 fairly mature ORMs in the PHP world: Propel, which is an Active Record pattern implementation, and Doctrine2, which is a Data Mapper. Each has its own query builder, which would provide a nice foundation for a custom query tool. At the end of the day, it's typically the UI that holds all the complexity, as you're providing a thin layer above SQL. You really have to have a significant amount of time, some fairly good requirements, and an interest in investing in a sophisticated UI to pull something like this off successfully. Also, as Barand pointed out in his typical pithy manner, if you're giving them the ability to query "everything", then why not just point them at one of the reporting tools and get them set up. I'd also point out that many a company has pointed some analysts at a production transaction database and lived to regret it, when a report they'd just designed took down the entire transaction system. This is the reason data marts and data warehouses exist.
  5. From a conceptual standpoint, you are showing us one table when in fact there are several different related tables. It seems your main interest is not having to determine whether the permissions are stored for a group or a user. What you have solves that problem, but you will still be differentiating those in code, so I'm not sure what the win is, as you've described your use case. What you have done is similar to the ACL systems you can find around. The problem with all these systems is that inevitably you hit the following scalability conundrum: does this user have the ability to "EDIT" this object? If you have thousands of users who could each have hundreds of objects, pretty soon you have a table with lots of these rows, and that eventually becomes a scalability limitation. The alternative answer is that you have some code that does a series of checks:

- Does this user have a ROLE that gives them super powers?
- Is this user in a GROUP that has a ROLE that gives them super powers?
- Is this user the OWNER of this item?
- And possibly: has this user been DELEGATED OWNERSHIP of the item, so that they have similar powers?

This can get as complex and nuanced as you want, depending on your application. Ultimately, it comes down to running some code that looks at a number of different possibilities, each of which needs to be resolved in a minimum amount of time. The article below talks about this conundrum and the confusion surrounding permission systems and ACLs, and how people might solve the problem. It discusses this in the context of the Symfony framework; as it happens, Symfony has an "ACL" solution not too far from the one you're tinkering with (you would have to reverse-engineer the actual tables the Symfony ACL system creates, but they take a somewhat similar approach to yours), and it then goes on to explain why ACLs are typically not the solution to the problems people are trying to solve.
Instead, Symfony offers the alternative of "voters", an engine allowing you to write simple rules for making these types of permission decisions. The article is here: http://knpuniversity.com/screencast/question-answer-day/symfony2-users-menu-cms
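The "series of checks" approach can be sketched as a single voter-style function. Every method and property name here (hasRole(), inGroupWithRole(), ownerId, delegates) is a hypothetical stand-in for your own user/item model, not from Symfony or any other framework.

```php
<?php
// Voter-style permission check, running the cheap checks first.
// All model names here are hypothetical stand-ins.
function canEdit($user, $item): bool
{
    if ($user->hasRole('SUPER')) {                      // role with super powers
        return true;
    }
    if ($user->inGroupWithRole('SUPER')) {              // group grants the role
        return true;
    }
    if ($item->ownerId === $user->id) {                 // owner of the item
        return true;
    }
    return in_array($user->id, $item->delegates, true); // delegated ownership
}
```

The win over an ACL table is that no per-user-per-object rows need to be stored; the answer is computed on demand.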
  6. Hey Rizmah. Why are you using hosting at all? If you have a decent workstation, use VirtualBox and run CentOS inside it, and you have access to the same OS you would find at most hosting companies, not to mention full control and the ability to install any software you want in there. Add Vagrant, and there are loads of pre-defined Vagrant boxes out there with the full LAMP stack already set up. It makes working on code very easy as well. Work in git and you will be doing what the pros do. Here's a link to Vagrant: https://www.vagrantup.com A really nice Vagrantfile with a full LAMP dev environment: http://r8.github.io/vagrant-lamp/
  7. It seems to me that you are making this needlessly confusing. Your script that handles the submission should already have the results of the INSERT query. Then, as Psycho already stated, you can simply SELECT the new user by ID and use that information to populate your response. You should not be writing new code using the mysql_ functions; use either mysqli or PDO/MySQL. I prefer PDO, as it's easier to use in my experience, but either one will do. This page should help you understand what you need to do after your INSERT: http://php.net/manual/en/pdo.lastinsertid.php
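A minimal sketch of the INSERT-then-SELECT flow with PDO. It uses an in-memory SQLite database purely so the example is self-contained; against MySQL only the DSN and credentials change, and lastInsertId() behaves the same. The table and column names are illustrative.

```php
<?php
// INSERT a row, grab its auto-generated id, and SELECT it back to
// populate the response. SQLite in-memory DSN used for portability.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)');

$stmt = $pdo->prepare('INSERT INTO users (name) VALUES (?)');
$stmt->execute(['newuser']);

$id = $pdo->lastInsertId();        // id of the row just inserted

// Re-select the new user by ID to build the response from
$sel = $pdo->prepare('SELECT id, name FROM users WHERE id = ?');
$sel->execute([$id]);
$row = $sel->fetch(PDO::FETCH_ASSOC);
```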
  8. Correct. I personally see no reason to force a computation there when it's a constant, but what you have will work. 900 would be better, with a comment above to explain the constant. Even better, you can use an actual constant for this and eliminate the magic number entirely:

define('OFFLINE_SECONDS', 900); // 15 minutes

// comparison
if (time() - filemtime($filename) > OFFLINE_SECONDS) {
    // Show offline
} else {
    // Show online
}
  9. You really can't pick out the part that compares the time? 2 * 3600 look familiar to you? 3600 = seconds in an hour. You should be able to figure out how to come up with the number of seconds in 15 minutes right?
  10. MySQL is an odd database because it has pluggable engines that can work entirely differently. MyISAM is the base engine, and InnoDB is a pluggable engine that became very popular. Just about all DBs have some form of caching for queries. In the case of MyISAM this extends to the parsing of the queries themselves, but not the actual data. With the buffer pool in InnoDB, the actual data retrieved is cached. So, in terms of what you were asking: when you do a LIKE '%...' query, the table has to be scanned starting at row 1 and continuing to the last row in order to return a result. With InnoDB and a large enough buffer pool, the data will essentially already be in memory, and the table-scan process will be much faster than it would be reading the data from disk. Unfortunately, with shared hosting you are not in control of those parameters, as you're sharing a MySQL server your hosting company provides, either on your server or on some secondary server they host. In the shared-host scenario, I would advise you to implement Barand's suggestion.
  11. There are other reasons to utilize InnoDB besides the ones mentioned; I highly recommend you stay with it. You could use Barand's suggestion -- have a denormalized search table, where you simply replicate the values into a small MyISAM table for the benefit of the search. The coding and interruption required would be minimal: you'd simply need to write the replication script and schedule it in cron. To your original question, one of the benefits of InnoDB is that it has a true data cache, called the "buffer pool". If you have sufficient resources on your server, you could increase the size of the buffer pool to ensure that data is coming from cache. With a small, fairly stable database like the one you describe, where you are almost entirely READ/SELECT based, it is often possible to have a pool where the data and indexes for the entire table fit in memory. At that point a SELECT with LIKE '%...' will be far less disruptive than normal, because it will be served from memory. An 8-10k row table is tiny in the database world. You would have to invest some time figuring out the size of your overall database, and the tables in particular, and you'd need to understand your overall memory usage to determine whether you could allocate more RAM to MySQL than you currently do. You can start by looking at http://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html The innodb_buffer_pool_size and innodb_buffer_pool_instances settings are the only parameters you really need to understand and possibly change. Whatever you do, these are good parameters to understand. Although it might be a stretch if you are a novice sysadmin, the free tool innotop is fantastic for monitoring the effectiveness of your caching and figuring out your cache hit ratio. Based on your description, I would think you should be aiming for close to 100% cache hit with your InnoDB tables.
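If you do control the server, the two settings mentioned above live in my.cnf. This fragment is illustrative only; the right pool size depends on your data set and the RAM you can spare.

```ini
# my.cnf fragment (illustrative values; size the pool to your RAM)
[mysqld]
innodb_buffer_pool_size      = 512M  # big enough to hold hot data + indexes
innodb_buffer_pool_instances = 1     # >1 mainly pays off for multi-GB pools
```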
  12. Many people utilize virtualization. It allows you to run a virtualized server on your workstation and interact with that server the same way you would if you were hosting a site. Even better, you can achieve rapid iteration using shared folders, so your edits show up near instantly. There is also a popular wrapper system called Vagrant (see https://www.vagrantup.com) that offers portability and ease of environment setup. Vagrant boxes are pre-built base OS installations for all the popular Unix distros, and are then supplemented by Vagrantfiles, which can include provisioning setup that can do pretty much anything you can imagine. There are many already created that use Chef or Puppet to do, in some cases, most or all of the work of setting up the development environment, starting up databases and services and configuring them for initial use. Out of the box Vagrant works with Oracle (formerly Sun) VirtualBox, which is free, but it also has support for other providers like VMware. From the point that you have the ability to instantiate, replicate, and bring up and down independent local server environments, you can be unplugged and still work away on your projects. You probably want to download the PHP documentation locally, as you'll want to have that for reference, along with offline access to docs for any other server or language development you want to do. I'm not going to get into an IDE discussion, because there are entire threads on that, easily found. Zend, Eclipse PDT, NetBeans, and what has been very popular in recent years, PhpStorm, among others, are all products that can improve your development experience. You can see a relatively recent poll done by SitePoint here: http://www.sitepoint.com/best-php-ide-2014-survey-results/
  13. Make sure all the static resources (css, javascript, images etc) are using a fully qualified url, rather than a relative path and that will go away.
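For example (www.example.com is a placeholder for your own domain):

```html
<!-- Relative path: resolves against whatever URL the browser is on -->
<link rel="stylesheet" href="css/style.css">

<!-- Fully qualified URL: resolves the same from every page -->
<link rel="stylesheet" href="https://www.example.com/css/style.css">
```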
  14. The first thing I see is that you could improve the variation in the uniqid by using the optional parameters to increase entropy: change uniqid() to uniqid('', true). The more variation in the input, the better the distribution of hashes. There is always a chance, albeit infinitesimally small, that you will have a collision (2 different inputs hashing to the same value). A collision on random data requires something around 2^64 hashes for md5. You will never approach that number of files in your system, short of your system being at Google/Facebook-like scale. You could decrease this probability even further by using a larger hash, sha1 being the most likely alternative. With that said, and as everyone has already pointed out: if this is really important to you, you can simply take the random name generated and do a simple filesystem check to see whether a file of the same name exists, and keep iterating until one does not. Practically speaking, the chances of it happening in your system are close to nil.
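The check-and-retry idea in the last paragraph can be sketched like this. $uploadDir and the .jpg extension are assumptions for the example; the hash and entropy choices follow the discussion above.

```php
<?php
// Generate a random name and retry until no file of that name exists.
// With md5 over uniqid('', true) a collision is practically nil, so
// the loop almost always runs exactly once.
function uniqueFilename(string $uploadDir, string $ext = 'jpg'): string
{
    do {
        $name = md5(uniqid('', true)) . '.' . $ext;
    } while (file_exists($uploadDir . '/' . $name));
    return $name;
}
```

Swap md5 for sha1 if you want the longer (40-character) name discussed above.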
  15. Smarty is primarily HTML markup with some logic blocks. Data gets injected and the logic blocks work and you get output. What else is there to it that is confusing? They have solid documentation as well. You need to clarify exactly what you are "stuck" on.
  16. Cronix is right. You should use firebug to determine if the ajax call is happening. In your code snippet, I don't see you doing a session_start() at the top either.
  17. Or better still, use a prepared statement and bind the parameters. See http://php.net/manual/en/mysqli-stmt.bind-param.php

$stmt = $con->prepare("INSERT INTO mxit (ip, time, user_agent, contact, userid, id, login, nick, location, profile)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)") or die(mysqli_error($con));
$stmt->bind_param("ssssssssss", $ip, $post_time, ..... etc);
$stmt->execute();
printf("%d row inserted.\n", $stmt->affected_rows);
  18. And it's still 3 years late to the game, and way behind the alternatives.
  19. I don't think it helps people to make hyperbolic statements about sniffing and raise the level of paranoia without a healthy understanding of exactly how and when your packets might be intercepted. Most people who hook up to their ISP will find that they can't sniff anyone's traffic. Now if we're talking about a cloud server, or something like that, then I would agree it's a much more likely scenario. I'd actually encourage people to try out some of the common sniffing packages, as they have great utility when debugging network applications. Just to be clear, I'm not saying people shouldn't implement TLS, or use VPNs, or anything like that. But by the same token, they need to understand some networking basics and ideally have explored the tools that create the problem in the first place, so that they're clear about the circumstances under which they might be exposed.
  20. Richard, I am not sure I understand either your original point or your train of thought. I agree that sniffing is a potential problem; it is far less of a potential problem than people think, however. In order to sniff someone's packets, you need to be able to technically intercept their packets. With the advent of high-speed switching, there are far fewer places for people to sniff, although the pervasive use of wifi hotspots is a problem. What I don't understand is your assertion that the use of TLS/SSL doesn't secure your communication, when in fact it does, via strong encryption. If there's some misunderstanding here, then you should probably respond with the specifics.
  21. @timneu22, you got a complete answer from both Kicken and myself. You never bothered to reply to those answers or ask questions. Instead you seemed more intent on picking arguments with people, for reasons I can't really understand. Shared cache? Yes: APCu, Redis, or Memcache. They all have client support in PHP, and I've used all of them in different projects in the past. They provide exactly what you were inquiring about. With that said, it seems pretty clear you don't have the application load or number of users where you are being forced to do something to ensure performance. Nobody is twisting your arm or demanding that you stop putting a load of static data into sessions (which by default get stored per session, in files on the web server filesystem). Things that work fine for an intranet or small business system won't scale to significant user load, but if you aren't facing that scenario, then doing something inefficient and resource-intensive may never be a problem you actually have to confront. That doesn't mean there isn't a better solution, and for some reason you seem to have dismissed the suggestions made in this thread, with the implication that you never received a solution, when in fact you did.
  22. Congrats on making the effort to figure this out on your own. Posting your final solution is good form as well, and helps give back to the community.
  23. I'm not sure what "awkward results" is supposed to mean; I'm interpreting your question as "it doesn't appear in the contact email the script sends to me". This script is "self-posting", in that it has 2 parts:

1. If it was the target of a POST request, it looks at the contents of $_POST and, if acceptable, sends the email.
2. If it was not POSTed to, OR there was an error, it displays the HTML form.

You added a form element to section 2 of the code. You did not make any changes to section 1, so even if the new form field is filled out, it's simply discarded. Section 1 of the code starts with this logical condition:

if ($_POST['contact_form_submit']) {

Hopefully you understand that this is checking whether the $_POST superglobal includes the submit button, which will happen when the form is submitted. You might be wondering why the script submits to itself? It's because a specific target is not included in the form tag's action= attribute. When this happens, the browser assumes the target should be the same URL as the form:

<form method="post" action="">

In summary, you need to look at the code that comes directly after the 'if' statement I quoted, and figure out where to account for the existence of the new form variable you created. Since it's a "required" field in your form, you actually need to change both the code that sends the email and the code before it, which checks for the existence of all the required fields on the form. Needless to say, that variable will be named:

$_POST['contact_form_socialid']
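One way to express the "check all the required fields" step is a small helper. This is a sketch: the missingRequired() function and the other field names in the usage comment are hypothetical; only contact_form_socialid comes from your actual form.

```php
<?php
// Return the names of required fields that are absent or blank in the
// submitted data. Pure function so it can be tested without a browser.
function missingRequired(array $post, array $required): array
{
    $missing = [];
    foreach ($required as $field) {
        if (!isset($post[$field]) || trim($post[$field]) === '') {
            $missing[] = $field;
        }
    }
    return $missing;
}

// In section 1 of the script, after the submit-button check, something like:
// $missing = missingRequired($_POST, ['contact_form_socialid' /* , ...other required fields */]);
// if ($missing) { /* redisplay the form with errors instead of sending the email */ }
```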