Everything posted by thehippy

  1. Coming to phpStorm after Eclipse PDT and then NetBeans has been a pleasure. NetBeans took a great deal of effort for me to set up the way I wanted it to work: debugger, command-line PHP tools, et cetera. phpStorm, on the other hand, is about as ready to go as an IDE can be; notably it has integration with PHPUnit, Phing, phpDocumentor and GitHub, has good XML tools, and works as a JS editor. If you find NetBeans a bit of a memory hog, phpStorm is a bit lighter and more responsive; not as big a jump as Eclipse to NetBeans, but noticeable nonetheless. The code inspection is much more robust in phpStorm: not only will it autocomplete down to the array key, it does a fantastic job in mixed-code situations (PHP, HTML and JS all in the same view, for instance). Another thing not everyone will take into account is that the devs are fairly responsive with bugs; their tracker is public and things tend to get attention quickly. Feature requests, on the other hand, seem to take a major version release, or someone writing a plugin as a workaround. The downside is that the extra functionality phpStorm provides might not be worth it: NetBeans is free and phpStorm is $100-200 USD, so you'll really have to give the trial a shot and see if it's worth it to you. Having my IDE be less of a headache made purchasing a license worth it to me, though I haven't upgraded to 3.0 yet, which was only released a couple of days ago.
  2. I think some people have a misconception about quick websites. There are multiple distinct timings that come into play, not just how fast the page is generated. While generation speed is important, it's just the first step: how fast a client can fetch all of the page elements and how fast the client's browser can render the page are just as important. Generating the page quickly is usually the easy part for most programmers, with profiling and caching being the tools for optimizing it. Getting the content to the client mostly means having your website hosted on a high-availability server, perhaps distributed through a content delivery network, but delivering the content in the fewest network requests also matters. Making the page render quickly is where you end up doing most of the work: image sprites, optimized and minified JavaScript, HTML and CSS, and perhaps multiple optimized views per browser. The list goes on and on. Some resources: a very interesting talk about what goes into making Google's map service responsive, and another relevant talk from Google I/O; the Yahoo Developer Network's "Best Practices for Speeding Up Your Web Site", a fairly extensive list of things you can do; HTML5 Boilerplate, a great starting point following many modern conventions for an optimized page; Google's articles directory on ways to improve website speed; and the tools YSlow, Firebug, Xdebug, XHProf and Siege. I'll stop there, this is a pretty large subject.
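To make the "profiling and caching" point concrete, here is a minimal sketch of a file-based page cache in PHP; the function name, cache path and TTL are made up for the illustration:

```php
<?php
// Minimal sketch of a file-based page cache (hypothetical naming and TTL).
// Serving a pre-generated copy skips the "generate the page" step entirely.
function cached_page(string $key, int $ttl, callable $generate): string
{
    $file = sys_get_temp_dir() . '/cache_' . md5($key) . '.html';
    // Serve the cached copy if it exists and is still fresh.
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);
    }
    // Otherwise generate the page and store it for next time.
    $html = $generate();
    file_put_contents($file, $html);
    return $html;
}

echo cached_page('home', 60, function () {
    return '<h1>Hello</h1>'; // stand-in for expensive page generation
});
```

Real setups would layer this with opcode caching and something like memcached, but the principle is the same: do the expensive work once.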
  3. Mobile browsers have much smaller limits, as little as 255 characters in some instances. A note: the HTTP/URI specifications set no limit on length; the limits are in the various implementations. Apache has a limit (about 4,000 characters), IIS has a limit (about 16,000), IE has a limit (2,083), and I'm not aware that Opera and Firefox have one at all.
  4. This is a bit askew to your questions, but I found a "5% 10, net 30" policy helps a great deal in getting paid quickly: a 5% discount if paid within 10 days, full payment due in 30 days on my invoices. Business owners love discounts, perhaps even more than coupon-toting grandmothers. You can vary the discount to suit your needs and possibly how much you like your client. Also, this is a great talk about getting paid and protecting yourself and your work with contracts. Phone up a couple of business lawyers in your area and ask them about service contracts, employment contracts, their experience with the industry, and their fees. Many of them will gladly sit down with you for a free half hour to discuss what they can do for you.
  5. Your script will eat up memory as it currently sits: one small botnet sending requests will have that script firing off, reading that file and writing to it, and as the file grows your script reads the entire thing into memory, so it won't be long until you're getting out-of-memory errors. It's admirable to try to fix this problem as you see it, but the solution is just not suited to a PHP script. Even if your script didn't have that problem, I would suggest moving your flood protection elsewhere. PHP sits atop Apache, Apache sits on HTTP, and HTTP relies on the network layer; it's better to nip the problem down in the network layer. DDoS (typically SYN flooding) protection is best done by your provider with a packet-inspecting firewall appliance. Sometimes that's not available or the service costs too much; the server's own firewall could be a solution, but it's not ideal, and a proxy server or the switch it's connected to would be better.
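For contrast, here is a sketch of what bounded, in-memory flood protection could look like at the application level. The class and method names are invented, and a real deployment would use the firewall or at least a shared store such as APC or memcached rather than per-process state; the point is that old entries are discarded, so memory stays bounded:

```php
<?php
// Hypothetical sliding-window rate limiter, kept in memory for illustration.
// Timestamps outside the window are dropped on every check, so the data
// structure never grows the way an append-only log file does.
class RateLimiter
{
    private array $hits = [];

    public function __construct(private int $limit, private int $window) {}

    public function allow(string $ip, int $now): bool
    {
        // Discard timestamps that fell out of the window.
        $this->hits[$ip] = array_filter(
            $this->hits[$ip] ?? [],
            fn ($t) => $t > $now - $this->window
        );
        if (count($this->hits[$ip]) >= $this->limit) {
            return false; // over the limit: reject this request
        }
        $this->hits[$ip][] = $now;
        return true;
    }
}
```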
  6. OO design is not something you can instantly adopt; it's a much more iterative process. I started with simple encapsulation and separation of concerns, then slowly integrated and adopted design patterns, architectural patterns and so on. Since code speaks louder than words, have a look at the source of Phing and Zend Framework as examples of good OO in PHP. Phing is well put together and fairly compact, while Zend Framework is complex and broad. They both have their flaws, to be sure, but are still good educational sources. On a small scale this can be acceptable, but on a larger scale it is often impractical, so the answer is to find what will work for you and the application.
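As a toy illustration of the encapsulation and separation of concerns mentioned above (every name here is invented for the example), storage details sit behind an interface while presentation lives in a separate class, so either can change without touching the other:

```php
<?php
// Hypothetical example: the repository hides *where* users come from,
// the formatter only knows *how* to present one.
interface UserRepository
{
    public function find(int $id): ?array;
}

class ArrayUserRepository implements UserRepository
{
    public function __construct(private array $users) {}

    public function find(int $id): ?array
    {
        return $this->users[$id] ?? null;
    }
}

class UserFormatter
{
    // Knows nothing about storage; swapping the repository for a
    // database-backed one would not affect this class at all.
    public function greeting(array $user): string
    {
        return 'Hello, ' . $user['name'] . '!';
    }
}

$repo = new ArrayUserRepository([1 => ['name' => 'Ada']]);
$formatter = new UserFormatter();
echo $formatter->greeting($repo->find(1)); // Hello, Ada!
```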
  7. Random number generators are algorithms. If you call a random number generator function without a seed, it falls back on its 'defaults,' typically values pulled from the system that constantly change (e.g. time, process ID, mouse position). Those can differ from implementation to implementation, but if someone knew what the default might be, or a range of possible values, they could greatly reduce the number of guesses they would have to make if, for instance, the random number was part of an encryption scheme. Depending on the implementation, the algorithm will use the provided seed instead of the defaults, or use the seed in addition to the defaults. This is a slight distinction, but 'seed' was also once used to refer to the 'set' of numbers a generator would use (e.g. 1, 4, 7 and 9), so there may be some cross usage in older documentation. The term 'pseudo-random' stems from the fact that you cannot create a truly random number through computation; it's a purely procedural process. Probably a bad way to explain it, but I can't come up with better at the moment. My cryptography knowledge comes from Bruce Schneier's Applied Cryptography; he co-authored a newer book, Cryptography Engineering, which looks like a good updated alternative.
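The seed behaviour is easy to demonstrate in PHP with mt_srand(): the same seed reproduces the same "random" sequence, which is exactly why predictable seeds are dangerous in encryption schemes. The helper function here is made up for the demo:

```php
<?php
// Hypothetical helper: draw $count numbers from the Mersenne Twister
// generator after seeding it with $seed.
function seeded_sequence(int $seed, int $count): array
{
    mt_srand($seed); // fix the generator's starting state
    $out = [];
    for ($i = 0; $i < $count; $i++) {
        $out[] = mt_rand(0, 999);
    }
    return $out;
}

$a = seeded_sequence(42, 5);
$b = seeded_sequence(42, 5);
var_dump($a === $b); // bool(true): same seed, same "random" numbers
```

An attacker who can narrow the seed down to, say, a one-second timestamp range only has to try a handful of seeds to reproduce every number you generated.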
  8. The link given points to Bitbucket, a Mercurial-based code hosting provider similar to GitHub. Mercurial is a revision control (or source code management) tool, similar to Git. If no package such as an archive is provided, you'll need the appropriate revision control software to clone or export a local copy of the code. As with most software, reading the README file or related documentation will tell you how to install it.
  9. From what I understand, the XPCOM interface (Firefox) for printer settings isn't sophisticated enough. You'll notice most of the print options that affect the document are visual, things that can be done before it's passed on to the underlying OS print facilities.
  10. Are you talking about prepared statements (having bound parameters)? Prepared statements are pre-parsed by the SQL server, so if you're doing a lot of INSERTs, say, they cut down on the server's overhead of parsing each individual INSERT that would normally happen. I was doing Java before PHP, where this was common practice with JDBC; PHP didn't have any support for prepared statements until ADOdb implemented it, then the database extensions slowly began implementing it. It's very handy. PHP Manual: PDO Prepared Statements ... Adam beat me to it.. oh well
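A minimal PDO prepared-statement sketch, using an in-memory SQLite database so it runs without a MySQL server; the table and data are invented, but the pattern is identical for MySQL:

```php
<?php
// Prepare once, execute many times with different bound values.
// SQLite in-memory DB keeps the example self-contained.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

// The statement is parsed by the database once...
$stmt = $pdo->prepare('INSERT INTO users (name) VALUES (:name)');

// ...then executed repeatedly; only the bound value changes.
foreach (['alice', 'bob', 'carol'] as $name) {
    $stmt->execute([':name' => $name]);
}

$count = $pdo->query('SELECT COUNT(*) FROM users')->fetchColumn();
echo $count; // 3
```

Besides the parsing savings, binding parameters this way also sidesteps SQL injection, since values are never spliced into the query string.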
  11. There look to be a few open tickets regarding this name resolution behavior in the tracker already. But please add another or bump/vote up an existing one. I wonder if Zend_Form[_SubForm]::setElementsBelongTo() shares the same problem.
  12. Project management software like Redmine, Trac or JIRA would allow a lead developer to create a workflow where tickets/issues/tasks are assigned to developers, or allow developers to tag an issue as being worked on by them, thus avoiding concurrent duplicate work.
  13. freelance84, you need to change the way you're doing things. First, set up a Git repository containing your project, i.e. the 'master' files that both developers have been editing at the same time thus far. From now on your developers will no longer edit those master files directly; instead they will clone (create a local copy of) the master repository and edit those local files. When they are done with an edit they will commit the change to the local repository, and to update the master repository they will push the edit to it. To make sure they have the most up-to-date version of the master repository, without grabbing the whole thing with clone again, they will pull from the master repository. I'm sure the Git documentation does a better job explaining. Disclaimer: I'm still a Subversion user, so I may not be totally accurate or sane.
  14. PHP Manual Getting Started PHP Language Reference PHP Features phpfreaks tutorials PHP Security Debugging in PHP
  15. If you want AJAX-style dynamic loading, and since you're using Dojo, you could use the href attribute of ContentPane and point it to an action/view. If you just want to insert sub-views, you can use Zend_View's partial() view helper.
  16. Lucene is designed around the idea that it's searching and indexing documents. Lucene indexes the fields in a document that you choose to base its index on. Say I have a website with recipes; my pages consist of fields with the name, date added, ingredients list, instructions, and another field with the location/URL. I have to feed Lucene that data in the form of a document (Zend_Search_Lucene_Document) for it to be added to the index. Given that you probably already have a website, create a simple crawler to iterate through your existing data and create Lucene documents with the fields you wish added to the index. You'll also want to create a class/script/plugin that triggers after data has been added or updated on your site, to keep the Lucene index current. There is no update in Lucene; you delete the document and re-add it to the index. It's largely up to you where you add to the index. Zend Presentation on Implementing Zend_Search_Lucene
  17. Account creation is pretty simple in any Unix environment. It's all the configuring of services to read per-user configs that will eat up your time, as most services, while they support it, tend not to have it enabled or compiled in by default. You'll also need to create some management scripts to handle restarts/rehashing of service configs, throttling the number of rehashes/restarts, and many more things I can think of; it's really just a huge exercise in Unix administration and scripting. Some open-source panels for reference: Webmin, ISPConfig, VHCS, GNUPanel. There is also ticketing and billing to flesh out as well. There are a couple of good FOSS ticketing solutions, but the only good billing solutions are commercial.
  18. UPS API has documentation online here
  19. Spamhaus' DNSBL is IP-based, which is a nightmare for shared-host mailing services. If you're on shared services, request an IP for yourself; it'll save you some hassle. A colleague of mine ended up waiting three months to get off Spamhaus' blacklist, which is a bit ridiculous, but hopefully you won't have that problem. Mail DNSBLs are supposed to be set up to block incoming mail; if your host/server is using one to block outgoing mail, it's being improperly used and many false positives are the result.
  20. Parental control software is what you want. Norton Family Online is free (and has some premium services too), and NetNanny is only $40/year and has been around for many years. Both of these products have remote monitoring of some kind, and OS X itself has some basic built-in parental controls. Having an open dialog with your child about what is and is not acceptable will go a long way, and having them understand that what they do on the computer is always public may change their perception. On the other hand, secretly using a keylogger and getting caught spying on what your child understands to be private use of the computer may not be what either of you wants.
  21. At this point you need to figure out whether that script was the only thing compromised. If you don't run your web server, you need to contact your hosting provider as well; use the phone, this is not an email-type situation. You'll need to know whether your account security was compromised. If your billing information was taken, you'll possibly need to contact your credit card company and/or your bank and so forth; some insurance companies have identity theft coverage, so talk to them about it. Your provider could have additional information and may want to take steps to tighten security, check for other intrusions, and possibly report the incident to the authorities. Make sure your provider tells you what they're doing about the situation; never accept the "we've handled it" line, and if they tell you some such thing it may be time to switch providers. If you have customer information stored on your web server, you need to figure out whether it was taken; if so, you may need to (by law in some places) notify your customers. Responsible security practice would be to take the site down until it's fixed, then notify your customers when you know the extent of the data breach. eval is one of those things I tend to disable with the Suhosin patch. base64_decode shows that the payload is encoded in base64, so it's not hex. You could switch the eval to echo to see what the payload actually is, but please do so in a secure environment; a non-networked virtual machine is handy for this kind of analysis. Until you know the extent of the intrusion, you need to go into tinfoil-hat mode. Start by running checks on your personal computer, and afterwards password changes need to happen as well, for anything related to that account: billing login, FTP, MySQL, panel, web services your site may use, and the contact email account.
  22. Unix-based OSs use the owner-group-other file permission system. Your example of 644 is a three-digit code in octal (0-7) representing the permissions on a file. The first digit, 6, is the owner's permission on the file; the second digit, 4, is the group's permission; and the third digit, 4, is everyone else's permission. Each digit is the sum of the permissions it grants: read has the value 4, write has the value 2, execute has the value 1, and no permission has the value 0. Again in your example (644), the owner has the permission value 6, from which we can derive read (value 4) plus write (value 2). That's the basic stuff; there is more, of course, and this is the best tutorial I could find for you.
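The octal arithmetic above can be expressed directly in PHP; the helper functions here are made up for the demonstration:

```php
<?php
// Decode one octal digit (0-7) into rwx flags using the values
// described above: read = 4, write = 2, execute = 1.
function digit_to_rwx(int $digit): string
{
    return (($digit & 4) ? 'r' : '-')
         . (($digit & 2) ? 'w' : '-')
         . (($digit & 1) ? 'x' : '-');
}

// Expand a three-digit mode like "644" into the familiar ls -l string.
function mode_to_string(string $octal): string
{
    [$owner, $group, $other] = str_split($octal);
    return digit_to_rwx((int)$owner)
         . digit_to_rwx((int)$group)
         . digit_to_rwx((int)$other);
}

echo mode_to_string('644'); // rw-r--r--
echo "\n";
echo mode_to_string('755'); // rwxr-xr-x
```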
  23. With filesystem performance you really need to break it down to its core: the filesystem operations, which vary wildly from format to format. ZFS, UFS, ext3, NTFS and FAT all have different core filesystem operations, but those are all kernel-land. Most of the time the operating system provides a common file I/O API for file and directory operations in userland, and then there is the file stream API most of us are used to, built atop the I/O API. So there are multiple layers you're dealing with in regard to performance. What I'm trying to get at is that truly optimizing filesystem performance takes more than an application-level solution; you need to understand what the operating system is doing when you open a file or browse the filesystem. If you're a Windows user, I'm giving you some homework. Download Sysinternals Process Monitor (MS bought them and hosts their utilities on their site now). Monitor only filesystem activity, and filter for the php process or the web server that may be loading PHP as a module (whatever is executing your PHP code). Write PHP scripts to: open and close a single file using an absolute path; open and close a single file using a relative path; get the filesize of a file; get a directory listing; and search a few directories recursively. You'll find there is a great deal more activity going on than you think, and now you can put filesystem performance into some context; uncached file reads vs. uncached DB SELECTs should be a little more apparent. You also mentioned that you wanted to reduce the load on the database. This to me is a bad sign; it's far too general a statement. Benchmark: take measurements of your queries. What's slow? What's fast? mysqlslap for direct MySQL benchmarking, ab for web server benchmarking; mtop is also useful. Profile: where, what, when, why, how. Dissect what is going on, and identify bottlenecks in particular.
MySQL's slow query log, EXPLAIN, SHOW STATUS, SHOW PROFILE, and Section 7 (Optimization) of the MySQL Manual are great sources of information. mysqlsla and MyProfi are tools worth a look as well. Proper indexes: know them, use them, update them. Read the manual, give it a week to sink in, and read it again; print off a copy and keep it near the porcelain throne, because indexes are fundamental to decent database schemas. Other recommended tidbits: the mysqltuner Perl script, and Percona Server and Percona Toolkit (Percona is a custom/patched build of MySQL, built with added "instrumentation" and other stuff); it's great to put on a dev or testing server to optimize there and get profiling data early. Commercial tools that could help you: Jet Profiler for MySQL, Webyog's MONyog/SQLyog, and MySQL Enterprise Monitor (had to add it to the list for a laugh). All too often, new (and not so new) web developers without DB knowledge of some kind blame slow performance on the DB when they haven't utilized the built-in mechanics of the DB itself. If a DBA ever comes onto the project at some point, he's going to frown at you a lot, roll his eyes, and possibly laugh. Education is the means to answer our own ignorance; let it be the fire of our mind. Such a long reply to a short question.. *sigh*
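As a small, self-contained illustration of why proper indexes matter, here is SQLite's EXPLAIN QUERY PLAN standing in for MySQL's EXPLAIN (so the example runs without a MySQL server; table and index names are invented). The plan switches from a table scan to an index search once an index exists:

```php
<?php
// Compare query plans before and after adding an index on the column
// used in the WHERE clause.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)');

$query = "EXPLAIN QUERY PLAN SELECT * FROM posts WHERE author = 'thehippy'";

// Without an index on author, SQLite must scan the whole table.
$planBefore = $pdo->query($query)->fetch(PDO::FETCH_ASSOC)['detail'];

$pdo->exec('CREATE INDEX idx_posts_author ON posts (author)');

// With the index, the planner does an index search instead.
$planAfter = $pdo->query($query)->fetch(PDO::FETCH_ASSOC)['detail'];

echo $planBefore . "\n"; // a full table scan of posts
echo $planAfter . "\n";  // a search using idx_posts_author
```

MySQL's EXPLAIN output looks different (type, possible_keys, rows columns), but the habit is the same: check the plan for every slow query before blaming the database.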
  24. Two basic introductory tutorials: the Zend Framework Quickstart and Rob Allen's Getting Started with Zend Framework tutorial. The first is the official documentation and the second is from the author of the Zend Framework in Action book. Frameworks give you pragmatic tools to help you, the developer, implement your design, and Zend Framework sticks to that philosophy more rigidly than most. So in Zend Framework there are no drop-in modules for an admin console or a CMS implementation; it's for you as the developer to design and implement that solution.
  25. I'm kind of guessing your intention is a cron service for Windows, but Windows already has a cron-like service called "Task Scheduler": there is a command-line interface accessed via the at command, and you'll find a "Task Scheduler" console in Control Panel > Administrative Tools which allows a bit more finesse. If you find yourself with lots of queued tasks, you might want to set up a message queue server like ActiveMQ to take in tasks and have your cron script process the queue every so often. If I'm totally off, you can use popen; here's an example.
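In case the linked example doesn't survive, a minimal popen() sketch looks like this (the command is just a stand-in for whatever background task you'd launch):

```php
<?php
// Run a command and read its output line by line through a pipe.
$proc = popen('echo hello', 'r');
if ($proc === false) {
    exit("failed to start process\n");
}

$output = '';
while (($line = fgets($proc)) !== false) {
    $output .= $line;
}
pclose($proc); // always close the pipe to reap the child process

echo trim($output); // hello
```

For fire-and-forget tasks you'd redirect the command's output and background it instead of reading the pipe, since popen() otherwise blocks while the child runs.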