Everything posted by kicken

  1. I'd go with a solution that identifies the site by domain, either fully or via a subdomain of your generic domain. There's no need to replicate the entire code base for this; just point all the domains to the same folder and inspect the $_SERVER['HTTP_HOST'] variable to identify which domain is being used to access the site. You can map that to a configuration file that stores the information about which database to use, or whatever other information you need on a per-domain level.

     $host = $_SERVER['HTTP_HOST'];
     $host = preg_replace('/[^a-z0-9.-]/i', '', $host);
     $configFile = 'config/' . $host . '.conf';
     if (file_exists($configFile)){
         $configuration = json_decode(file_get_contents($configFile));
     } else {
         $configuration = []; //Some generic config.
     }

     This kind of setup would also allow your customers to have their own custom domain name if they want, which you could charge them for if you wanted to. With independent domain names the search engines would view them as independent sites and index them accordingly. If you're going to duplicate the database for each site then I'd consider just duplicating the entire thing rather than trying to duplicate only some of it and running the site with a mix of two different databases. Allowing your users some control over the actual product list, and not just the pricing, seems like it could be desirable, but without knowing more about your business and who you're marketing this toward I can't say for sure. I also can't speak much toward SEO, as most of the stuff I work on is internal apps where SEO is irrelevant. I know having the same content at multiple URLs is generally a bad thing, but having different pricing/contact info may help avoid problems with that.
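     For illustration, a per-domain config file (e.g. config/example.com.conf) could be a small JSON document the code above decodes; the keys here are just hypothetical examples:

     {
         "dbHost": "localhost",
         "dbName": "example_site",
         "dbUser": "example",
         "dbPass": "secret",
         "siteName": "Example Store"
     }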
  2. PHP will enter the first branch that is true. 344 is greater than 1, so that condition is true and that is the branch that will be taken. You either need to add a second condition to specify a range as shown, or you need to put your higher numbers first.

     $gplus = 344;
     if ($gplus >= 3000){
         echo " Get Answer 7.";
     } else if ($gplus >= 2000){
         echo " Get answer 6.";
     } else if ($gplus >= 1000){
         echo "Get answer 5.";
     } else if ($gplus >= 500){
         echo "Get answer 5.";
     } else if ($gplus >= 200){
         echo "Get answer 4";
     } else if ($gplus >= 100){
         echo " Get answer 3.";
     } else if ($gplus >= 50){
         echo " Get answer 2.";
     } else if ($gplus >= 1){
         echo "Get answer 1.";
     } else {
         echo "We didn't find any answers.";
     }
  3. I think what you have is fine. Depending on the length of the keys, or whether I re-use them, sometimes I will store them in separate variables, eg:

     $phase = $row['phaseNum'];
     $task = $row['taskNum'];
     $projects[$phase]['tasks'][$task] = 'task information';

     If the array starts getting deep I will alias via references. For example:

     $stuId = $row['studentId'];
     $sprId = $row['programEnrollmentId'];
     if (!isset($MasterList[$stuId][$sprId])) continue;

     $ses = &$MasterList[$stuId][$sprId]['sessionList'][$row['sessionName']];
     if ($ses){
         $ses['enrollments'] += $row['numEnrollments'];
         $ses['isStartSession'] = max($row['isStartSession'], $ses['isStartSession']);
     } else {
         $ses = [
             //blah
         ];
     }
  4. I was not a fan of Laravel when I tried it, particularly their facades. They made PHPStorm's helpful features all but useless unless you installed a stub library to define all the facades. I also found their Eloquent ORM to be less useful than Doctrine. I was comparing Laravel side-by-side with Symfony at the time; Symfony won for me fairly early on, so I never got very far into Laravel. This was also a couple of years ago (Symfony 2.x vs Laravel 4.x), so the experience is probably not as relevant now. I find with a lot of projects, particularly large ones, you generally have to be willing to dive into the code and figure things out rather than rely solely on the documentation. Many times the documentation is missing some small detail that you can find by digging into the code. That's one of the reasons I preferred Symfony: it was easy to dig into the code using PHPStorm's code navigation tools.
  5. The options are supposed to come first, but putting --prefer-dist after works fine for me, so maybe your version of Composer is pickier? Are you using the latest version?
  6. If you really want to use method_exists rather than create a map of methods, create an interface that lists the valid methods, then check for the method on that. That way your implementation can have other methods without them being possible targets for your dynamic call, and you can enforce the signature of each possible call.

     interface CommandsInterface {
         public function doAFunction($data);
         public function doBFunction($data);
         public function doCFunction($data);
     }

     class Commands implements CommandsInterface {
         public function execute($name, $data){
             if (method_exists(CommandsInterface::class, $name)){
                 $this->{$name}($data);
             } else {
                 echo 'No such method';
             }
         }

         // ... implementations of doAFunction, doBFunction, doCFunction ...
     }
  7. I can't think of any reason why one would want some 60/30 split (or whatever) between objects and arrays. Would you mind explaining the logic behind why you'd want an array vs an object in different situations? Fetching a single column is occasionally useful. I doubt there's much difference performance-wise, though I haven't tested anything. The main reason to use the default is just to keep things consistent across the code base. I've worked on some code bases before where it seemed like half of it fetched as an object and the other half as an array. It was damn annoying switching between styles. Even more annoying was that some of the arrays were fetched numerically rather than associatively. As I mentioned above, I cannot think of any reason why you'd want to switch between the two. Pick a style and stick to it. Make it the default to save some typing. If you really, really want to, you can always override the default for a specific result set.

     Whatever value is given to an if statement will be cast to a boolean value. The code then branches based on that typecast result. So when there's no explicit condition you just need to look at which values would cast to true and which would cast to false. In the case of an integer value, zero is false and everything else is true.
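     For example, the default can be set once on the connection so every query in the code base behaves the same way. This is a minimal sketch; the DSN, credentials, and query are placeholders:

     $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');
     $pdo->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

     // Every fetch now returns an associative array without specifying it each time.
     $row = $pdo->query('SELECT id, name FROM users')->fetch();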
  8. In your code you are attempting to use the variable $User_id but no such variable exists (you named it $id).
  9. You could try QuaggaJS. Their demos didn't really work for me, but maybe you'll have better luck.
  10. If you care what the values of bl and context are, which it sounds like you do, then you need to query against them. It sounds like you're defining a "basic item" to be an item where those two fields are blank. As such, if you want to query for basic items you need to use the conditions:

      WHERE item_id=? AND locale=? AND context='' AND bl=''

      If my understanding is incorrect then maybe try expanding on what the idea behind the table/data is and how things are represented (ie, what exactly makes something a "basic item" vs other types of items). P.S. bl is not a very descriptive name; I'd consider changing it.
  11. Based on a quick scan of the WordPress documentation, having both may be unnecessary but it shouldn't cause any issues. Having the 5-minute one defined might cause WordPress to check for work every 5 minutes, but since that script only schedules work on the 30-minute schedule (wp_schedule_event), nothing would be found (unless some other plugin has work to do) and the job would simply exit.
  12. I have no knowledge of WordPress or whatever plugins you may be using, so the best I can really offer is this:

      function my_cron_schedules($schedules){
          if (!isset($schedules["5min"])){
              $schedules["5min"] = array(
                  'interval' => 5*60,
                  'display' => __('Once every 5 minutes')
              );
          }
          if (!isset($schedules["30min"])){
              $schedules["30min"] = array(
                  'interval' => 30*60,
                  'display' => __('Once every 30 minutes')
              );
          }
          return $schedules;
      }
      add_filter('cron_schedules', 'my_cron_schedules');
      add_action('thirty_minute_event', 'update_avg_points');

      function my_activation(){
          if (!wp_next_scheduled('thirty_minute_event')){
              wp_schedule_event(current_time('timestamp'), '30min', 'thirty_minute_event');
          }
      }
      add_action('wp', 'my_activation');

      function update_team_avg_points($teamName){
          foreach (ecfit_get_teams($teamName) as $team){
              $team_points = $team->get_average_points();
              $teamId = $team->get_id();
              update_post_meta($teamId, '_ecfit_team_avg', $team_points);
          }
      }

      function update_user_points(){
          $users = get_users(array(
              'blog_id' => 1,
          ));
          foreach ($users as $user){
              $userPoints = get_all_points_for_user($user->ID);
              update_user_meta($user->ID, '_ecfit_user_points', $userPoints);
          }
      }

      function update_avg_points(){
          // THIS UPDATES TOTAL USER POINTS.
          update_user_points();

          // THIS UPDATES TEAM AVG POINT VALUE.
          update_team_avg_points('staff');
          update_team_avg_points('2k2/Contractor/Intern');
          update_team_avg_points('family');
      }

      All your team updates use the same basic code, so I split that code into a separate function that just takes the team name. I also split the user update into its own function. Separating them out in this way allows the variables to go out of scope sooner, letting PHP reclaim their memory and hopefully use less. For example, the script previously required enough memory to hold all three team lists at once, whereas now it should only need enough for one at a time. You may be able to do some of this by just issuing direct UPDATE queries to the database rather than going through the WordPress API. You'll need to spend some time looking at the table structure and see what can be done.
  13. The url parameter to the event source can be any url; passing a GET parameter is fine. Your processing script needs to guard against being executed multiple times, especially if it's web-accessible. You can do this by checking/setting a flag in the database before processing begins, as in the sketch below. You'll also need to guard against user disconnects if your script is going to be sending data directly to the browser; use ignore_user_abort to let the script keep going even if the browser disconnects.
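      A minimal sketch of that database flag guard, assuming a hypothetical jobs table with id and status columns; the conditional UPDATE atomically claims the job:

      // Attempt to claim the job; only one request can flip pending -> processing.
      $stmt = $pdo->prepare("UPDATE jobs SET status = 'processing' WHERE id = ? AND status = 'pending'");
      $stmt->execute([$jobId]);
      if ($stmt->rowCount() !== 1){
          exit('This job is already being processed.');
      }

      ignore_user_abort(true); // keep working even if the browser disconnects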
  14. Please surround your code in tags when posting it to preserve the indentation and make it more readable. Why do you believe memory usage is an issue? What sort of symptoms or problems are you seeing? You need to provide some specific details if you want help, not just a brief theory and code dump.
  15. It'd probably help if you provided queries, table definitions and sample data. It's not clear what you're attempting to do or trying to ask. Provide your CREATE TABLE statement, some sample data, and your SELECT queries along with your expected results.
  16. It's not ideal, but it's not that uncommon of a setup, since cron is generally the best option in a shared hosting environment and it's relatively easy. If you have control over the server and want to put in the extra effort, there are more efficient ways like I mentioned. However wasteful it might seem, if there is no work to be done then your cron job won't take much processing power or resources. Just make sure to code it in such a way that the script does very little until it has determined whether there is work to be done; that way it can quickly start, check, and exit when there is nothing to do. You can also reduce the frequency, but that'd reduce how quickly it notices new work.

      Being able to stream a response has been a possibility for a long time, but in the past it's been unreliable due to potential buffering beyond your control. I'm not sure how well it works these days. The event source API is something I've not heard of before. Years ago I created a progress bar type thing by streaming out a nearly full page then slowly outputting some javascript to update the display. For example:

      ... nearly complete page ...
      <?php
      for ($i=0; $i<100; $i++){
          echo '<script type="text/javascript">updateProgress(' . ($i/100) . ');</script>'.PHP_EOL;
          sleep(1);
      }

      From what I recall it worked in some browsers, but others would not process the scripts as they came in. Since it was just a personal thing I didn't care about compatibility; it worked fine for my general use.

      The key thing, though, is that you need to separate the processing into a background task. Whether you use polling or the newer events API for reporting the progress doesn't matter. If you try to do the processing at the same time as the upload, the request won't end until the processing is complete, meaning the user's browser will sit there acting like it's still loading the page the entire time. The exception would be if you're using PHP-FPM: you could use fastcgi_finish_request to end the request but let the script keep working, as sketched below. Separating it out as a background task also means you're not tying up web server threads with work, which would otherwise reduce your ability to handle requests.
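      A minimal sketch of that PHP-FPM approach; processUploadedFiles() is a hypothetical stand-in for the real work:

      ignore_user_abort(true); // don't die when the client goes away
      echo 'Upload received, processing has started.';
      fastcgi_finish_request(); // flush the response; the browser stops loading here

      processUploadedFiles(); // hypothetical: the long-running work continues server-side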
  17. You need to separate the processing from the browser request. Your progress table is one way to do it, but you need to take it a step further. Your browser request needs to handle uploading the zip and CSV files and stashing them away somewhere. You then create an entry in the progress table that indicates where the files were stashed and that they are in a "need to be processed" state. After that your script is done and gives the user their response. The response could be a page with some Javascript that periodically polls the server for the status of the job by checking back in with the progress table.

      You'd then have a separate script to do the actual processing of the files. The easiest way to do this is to set it up as a cron job and have it run every minute or so. Each time it runs it will check the progress table for work to do and, if any is found, do it. As it progresses through the job it can provide feedback by updating the progress table (see the sketch below).

      Another, more complicated, way is to run background tasks with the help of a job/message server such as Gearman, beanstalkd, or redis. In such a setup you'd have worker scripts connect to the server and wait for new tasks. Your upload script would then submit a task to the server after an upload is complete. You'd still use the progress table to handle sharing of status and other details. The advantage of this type of setup is you can kick off processing immediately rather than having to wait until the next cron tick.
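      A minimal sketch of the cron-side worker, assuming a hypothetical progress table; the column names and the processFiles() helper are illustrative only:

      // Check for a pending job and exit quickly if there's nothing to do.
      $job = $pdo->query("SELECT * FROM progress WHERE status = 'pending' LIMIT 1")->fetch();
      if (!$job){
          exit;
      }

      $update = $pdo->prepare("UPDATE progress SET status = ?, percent = ? WHERE id = ?");
      $update->execute(['processing', 0, $job['id']]);
      processFiles($job['file_path'], function($percent) use ($update, $job){
          // Report progress back to the table as the work advances.
          $update->execute(['processing', $percent, $job['id']]);
      });
      $update->execute(['complete', 100, $job['id']]);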
  18. You can only initialize a static variable to a constant value. Initialize it to null, then check in the function if it's null and if so create the object.

      function dm(){
          static $logger = null;
          if ($logger == null){
              $logger = new \Monolog\Logger('my_logger');
              $logger->pushHandler(new \Monolog\Handler\StreamHandler(__DIR__."/../logs/client.log"));
          }

          $logger->addInfo('Hello');
      }
  19. Basically it just lets you prime the fetch style of the resulting statement object. I'd say it's really not that useful, as generally you'd just use the same fetch style throughout the application, and that should be set on the original PDO object using PDO::setAttribute with PDO::ATTR_DEFAULT_FETCH_MODE. If for some reason I want to use an alternate fetch style, I prefer to just call PDOStatement::fetch directly with the appropriate style, as shown below.
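      For the occasional one-off, something like this works; the queries are illustrative only:

      $stmt = $pdo->query('SELECT id, name FROM users');
      $user = $stmt->fetch(PDO::FETCH_OBJ); // an object just this once, despite the default
      $name = $pdo->query('SELECT name FROM users LIMIT 1')->fetchColumn(); // or grab a single column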
  20. Nothing about ACID prevents duplicates, and if you have a UNIQUE constraint then there's no need to first check if the ID you generated exists. For example, if you had two clients working at the same time and their requests got interleaved such as:

      Client A connects
      Client B connects
      Client A issues SELECT for ID#abcd (to check if it exists)
      Client B issues SELECT for ID#abcd (to check if it exists)
      Client A gets no result
      Client B gets no result
      Client A issues INSERT ID#abcd
      Client B issues INSERT ID#abcd

      Now you have two ID#abcd records if there were no constraint. A transaction does not prevent this from happening, only a UNIQUE constraint.
  21. Don't try to check then insert; use a UNIQUE constraint on the column for your ID. A transaction won't prevent two separate clients from potentially inserting the same ID. The purpose of a transaction is to let you roll back many changes if something fails and to provide a consistent view of the data across multiple queries. Note that you don't necessarily have to generate your own IDs just to prevent having a sequential ID show up in the URL. You could still use an auto-increment ID but encode it in some way and use that encoded version in your URLs.
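      A minimal sketch of the insert-and-retry approach, assuming PDO::ERRMODE_EXCEPTION is enabled; the records table and generateId() are hypothetical:

      $stmt = $pdo->prepare('INSERT INTO records (id, data) VALUES (?, ?)');
      do {
          try {
              $stmt->execute([generateId(), $data]);
              break; // the generated ID was unique
          } catch (PDOException $e){
              if ($e->getCode() != '23000'){ // 23000 = integrity constraint violation
                  throw $e; // some other failure, don't swallow it
              }
              // Duplicate ID; loop around and try a new one.
          }
      } while (true);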
  22. The tools work essentially the same way composer does. You use them to install what you need via a few commands in a terminal. Once the libraries are installed, you'd link them in your code like normal (script or link tags).

      Install NodeJS to get the npm tool, then install Bower using the command:

      npm install -g bower

      Create a bower.json file in your project root such as:

      {
          "name": "example",
          "private": true,
          "ignore": [
              "**/.*",
              "node_modules",
              "bower_components",
              "test",
              "tests"
          ],
          "dependencies": {
              "jquery": "^3.1.1"
          }
      }

      Then use bower to install your libraries via bower install. It'll stick all your libraries under the bower_components directory, which you can then link to, for example:

      <script type="text/javascript" src="/bower_components/jquery/dist/jquery.min.js"></script>

      Gulp is not necessary, but can be useful to combine/copy/minify the library files. I'll generally use it to combine the libraries into a single vendor.css / vendor.js file and copy them to my web root.
  23. I use Bower and Gulp. The tools are based on NodeJS rather than PHP but work well and are fairly easy to use.
  24. define creates constants, which you reference without quotes elsewhere in the script, such as:

      require(MYSQL);

      You would have to define your MYSQL constant to include the specific file you need, however, or concatenate the filename onto the constant in the require, as you cannot require an entire directory.

      define('MYSQL', 'd:/wamp/www/includes/Connect_login.php');
  25. The server listens on the address specified in the listen call. Since you didn't specify an address, it uses the default, which is 127.0.0.1. You can instead specify the address you want it to listen on, or use the special address 0.0.0.0 to mean all addresses. If you specify a specific address, it has to be one that exists on one of your network interfaces.
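      For illustration, with PHP's stream server API the same idea looks like this (the port is arbitrary):

      $server = stream_socket_server('tcp://0.0.0.0:8080', $errno, $errstr);
      if (!$server){
          die("Failed to listen: $errstr ($errno)");
      }
      // Accept connections arriving on any interface, not just loopback.
      while ($conn = stream_socket_accept($server, -1)){
          fwrite($conn, "hello\n");
          fclose($conn);
      }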