Everything posted by vinny42

  1. There is no point in hashing the password (SHA is a hash, not encryption) before sending it to the webserver, for two reasons. One: the hacker can see the javascript, so he knows exactly how you create the hash. Two: yes, a hacker cannot see what the original password was, but he doesn't have to, because your webserver doesn't know the password either; it just needs the right hash, which the hacker can still sniff. The hash becomes the password. If you want to be more secure you should use SSL, because that encrypts the entire connection, making it impossible to sniff anything that is sent back and forth (and then it doesn't matter whether you hash the password or not).
  2. Functions enable code re-use, so in the strictest sense that is the criterion: no re-use, no function. But in modern programming you always have at least two uses: once in your application and once in your unit tests. Logging in can of course be re-used all over the place: on the main login form, but also in login popups that appear when you visit a bookmarked page inside the restricted area, in data feeds, basically in every situation where you need to access restricted data while you're not logged in.
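A minimal sketch of the login check as one reusable function (the function name and the credential store are made up for illustration; in real code the hashes would come from your user storage):

```php
<?php
// Hypothetical sketch: the login check as one reusable function that the
// login form, login popups and unit tests can all call.
function verifyLogin($username, $password, array $userStore)
{
    if (!isset($userStore[$username])) {
        return false;
    }
    // password_verify() checks against a hash created with password_hash()
    return password_verify($password, $userStore[$username]);
}

// Usage:
$store = array('alice' => password_hash('s3cret', PASSWORD_DEFAULT));
var_dump(verifyLogin('alice', 's3cret', $store)); // bool(true)
var_dump(verifyLogin('alice', 'wrong', $store));  // bool(false)
```

Because the function just returns true or false, it is trivial to call from a unit test.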
  3. Thirteen seconds of googling returns: \x[0-9A-Fa-f]{1,2} — "the sequence of characters matching the regular expression is a character in hexadecimal notation" http://ch.php.net/manual/en/language.types.string.php#language.types.string.syntax.double
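For example, in a double-quoted PHP string the \x escape produces the character with that hex code; single-quoted strings do not interpret it:

```php
<?php
echo "\x41\x42\x43", "\n"; // prints ABC
echo '\x41', "\n";         // single quotes: prints the literal text \x41
var_dump("\x41" === 'A');  // bool(true)
```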
  4. The design does look a bit weird; a user is not an improved database class. A userhandler or usermanager can use SQL to fetch data from a database and populate user objects with that data, but a user does not know where it came from or where it should be saved. Don't extend database classes, use composition: the userhandler class sends hardcoded queries to a database-access class that executes them and returns raw data, which the handler processes into user objects. Also note that there is no one-to-one relation between PHP objects and database tables. That idea forces you to create separate objects for the user's basic data, his access rights, his addresses, preferences etc. Before you know it, you'll have dozens of objects and dozens of queries to get the most simple pieces of information about a user. So just write queries that get what you need. As a final note: why do you put the table name in a property, are you planning to change the table name at runtime? :-) If you prefix the table name properly you will never have a reason to change it, ever.
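A rough sketch of that composition idea (all class and method names here are invented for illustration, not taken from the original code):

```php
<?php
// The user object holds data and knows nothing about SQL.
class User
{
    private $id;
    private $name;

    public function __construct($id, $name)
    {
        $this->id = $id;
        $this->name = $name;
    }

    public function getName()
    {
        return $this->name;
    }
}

// The handler *has* a database-access object (composition); it does not
// extend one. The db object only executes queries and returns raw rows.
class UserHandler
{
    private $db;

    public function __construct($db)
    {
        $this->db = $db;
    }

    public function findUser($id)
    {
        $row = $this->db->fetchRow('SELECT id, name FROM tbl_user WHERE id = ?', array($id));
        return $row ? new User($row['id'], $row['name']) : null;
    }
}

// Usage with a stub instead of a real database connection:
class StubDb
{
    public function fetchRow($sql, array $params)
    {
        return array('id' => $params[0], 'name' => 'alice');
    }
}

$handler = new UserHandler(new StubDb());
echo $handler->findUser(7)->getName(); // alice
```

The stub also shows the testing benefit: the handler can be exercised without any database at all.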
  5. That's because a URL usually cannot be longer than about 2000 characters. If you need to send more, you should use the POST method, so you can create an HTTP body that holds as much data as you need, hundreds of megabytes if you want.
  6. If you put everything else in functions, then why make an exception for the login? And are you sure that the login code will only ever be used on the login page? Think about testing too: you cannot easily automatically test PHP code that is embedded in a piece of HTML, while it's very easy to execute a function and see if it returns true or false.
  7. Perhaps I quoted the wrong sentence, I thought I was quoting a reply to this:
  8. In fact, even the superglobals are a "bad idea", but given PHP's loose structure of "start whichever script you like, wherever you like, however you like", there's not really any other way of doing it. Anyway, the problem with globals is that they just exist. You never know where they came from or what other pieces of code have done to them before you got to them. So if your routine needs to parse GET parameters, you might be tempted to take $_GET and start modifying it with a loop, leaving it empty at the end. The next function in your script may also want to do something with $_GET and find it empty. You'll never notice that your second piece of code has stopped working because of something the other code did, because there is nothing in your code that suggests that the processed version of $_GET is sent to the second piece of code. So, think of functions and methods as black boxes: all the data they need is given to them through the function parameters, and they should *never* assume that something exists in the environment around them. That way you have complete control over what data goes where, and it becomes much clearer what data is modified by which function.
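As a hypothetical sketch of that black-box idea: take the parameters as an argument and return a new array instead of touching $_GET inside the function (the function name is made up):

```php
<?php
// The function only sees what the caller hands it; $_GET is never
// read or modified in here, and the input array is left untouched.
function cleanParams(array $params)
{
    $clean = array();
    foreach ($params as $key => $value) {
        $clean[$key] = trim((string)$value);
    }
    return $clean;
}

// Usage: the caller passes $_GET in explicitly, e.g. cleanParams($_GET).
$input = array('page' => ' 2 ', 'q' => " php \n");
print_r(cleanParams($input)); // page => 2, q => php
print_r($input);              // the original array is unchanged
```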
  9. And did you understand what I posted? Because you are not asking questions about it, you just repeat what you've already said before and come to the same conclusion: that what you are doing is weird and bad and doesn't work.
  10. Do you actually read what I post?
  11. Noscript means no script, so you cannot redirect if JS is not enabled. But javascript is not needed here. PHP can see if cookies were set and PHP can set cookies. If PHP detects that no cookie was set, it serves a page with the extra content and sets a cookie. If the cookie was set, PHP leaves the extra content out and doesn't set a cookie. No javascript involved at all.
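A sketch of that PHP-only approach (the cookie name 'seen_intro' is made up for illustration):

```php
<?php
// First visit = the cookie has not been set yet.
function showIntro(array $cookies)
{
    return !isset($cookies['seen_intro']);
}

if (showIntro($_COOKIE)) {
    setcookie('seen_intro', '1', time() + 86400 * 30); // remember for 30 days
    echo '<div class="intro">Extra content for first-time visitors</div>';
}
```

Keeping the decision in a small function that takes the cookie array as a parameter also makes it easy to test without a browser.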
  12. Indeed, it's open to injection. What kind of help are you looking for? Because right now you are just saying "people say my code is not safe, please help".
  13. 1. So either your cookie check doesn't work or you're not setting the cookies properly. Have you var_dump()'ed the $_COOKIE variable? 2. No.
  14. Then try that first. It doesn't matter to the visitor and you'll probably have a working solution much sooner than with PHP. Just make it intelligent; don't add all the brokers into the HTML and hide all but the current one; use ajax to only load the current one. That way the webserver will also register a hit for every broker loaded and you can prove to your brokers that they all had an equal share of hits, etc...
  15. What kind of issues are you having? Your code seems odd, by the way: you have a huge switch with 30 cases that just return the value plus one, so what's the use of that switch?
  16. If you are planning to insert a few hundred records using separate URLs, then you'll force PHP to open and close its connection to MySQL for every record; that adds a delay of 20-30ms per record, which quickly adds up. If you have a dedicated server you can try using pconnect (do not *ever* use that on a shared server), which will keep the connection open when the PHP script ends, saving the time to open the connection again later. Again, do not use pconnect on a server that you do not control. Inserting records quickly works best if you can do it in batches. If you can send a POST request that holds the data for 1000 inserts at a time, you can use transactions to speed the inserts up. If all the records are inserted into the same table you can make it even faster by writing the data to a CSV file and using the LOAD DATA INFILE function of MySQL, which will copy the data from the CSV directly into the table.
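A sketch of the batching idea: build one multi-row INSERT instead of one query per record (table and column names are invented; bind or escape the values in real code):

```php
<?php
// Builds: INSERT INTO t (a, b) VALUES (?, ?), (?, ?), ...
function buildBatchInsert($table, array $columns, $rowCount)
{
    $placeholders = '(' . implode(', ', array_fill(0, count($columns), '?')) . ')';
    return sprintf(
        'INSERT INTO %s (%s) VALUES %s',
        $table,
        implode(', ', $columns),
        implode(', ', array_fill(0, $rowCount, $placeholders))
    );
}

echo buildBatchInsert('measurements', array('sensor_id', 'value'), 2);
// INSERT INTO measurements (sensor_id, value) VALUES (?, ?), (?, ?)

// With mysqli you would then wrap the execution in a transaction:
//   $mysqli->begin_transaction();
//   $stmt = $mysqli->prepare($sql);
//   ... bind the flattened values and execute ...
//   $mysqli->commit();
```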
  17. Thanks for digging that up, that should put all worries about performance issues through overhead to rest.
  18. Yeah, so how can you tell that the overhead is causing a performance issue?
  19. So you have hardly any overhead at all :-) Only if MySQL can re-use the empty space left behind by earlier DELETE operations, which I'm not sure it can. But I really wouldn't worry about this overhead thing; if you're not running out of disk space or experiencing an unexplained slowdown, then the overhead is simply not an issue.
  20. How do you suggest to measure the performance effect of table bloating? :-)
  21. Excessive overhead, yeah, it's a bit of a grey area. Small tables aren't bothered by overhead, and large tables can be too large to optimize anyway. I'd define "too much overhead" as hundreds of megabytes of wasted space in a table. That's purely a gut feeling, not based on any measurements. As long as you don't optimize just to get a few dozen megabytes back, especially if the table is 2GB. Of course overhead doesn't prevent new records from being inserted :-) That would be weird. I'll try to explain: databases store information, or "records", in a file on disk. Files are sequential things; each record is written directly after the previous record, like words in a text. When a record is removed, the space where the record used to be is marked as 'empty'. When a new record is created, the database needs to find an empty space in the file large enough to hold the new record. Most databases don't search for empty space inside the file; they simply append the new record to the end of the file, where there is always enough space. PostgreSQL and the like maintain a freespace map, which effectively remembers where all the empty spaces in the file are. When a new record has to be stored, the freespace map is used to find a gap large enough to hold it. If there is such a gap, it's re-used by the new record, thus reducing the amount of wasted space. So again, overhead has no effect at all except for wasting disk space. If you can live with the waste, you can live with the overhead.
  22. It's that "PDO is the new black" that I find worrying. That, along with the tendency to use prepared statements for everything "because it's safer". Examine the consequences of what you do before you tell someone else it's the right solution :-) In real life you'll want to write a wrapper around PDO anyway, so PDO will only be used in one class, which means you can just as easily stick with mysqli. But, back to the topic. Don't bother the database with the overhead of a prepare when all you are going to do is update a record. In fact, definitely don't prepare this, because the chance is significant that the database can't use an index to find the correct record and wastes time doing a sequential scan. The reason for using a prepared statement here would be to escape the id value, because escaping costs a roundtrip to the database, but preparing effectively replaces that call with an even slower roundtrip to prepare the query. I would not be at all surprised if preparing is much slower than escaping. Even more on topic: does that mean the current time of day, or the time mentioned in the form? For the current time you can use the SQL keyword NOW() or one of its equivalents. If you want the time from the form you'll need to agree on a format to use in the form, and a way to convert it to what the database understands. Fortunately MySQL has STR_TO_DATE for that.
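For illustration, the two variants side by side (table and column names are made up; the id is assumed to be an already-validated integer, and the form value must be escaped in real code):

```php
<?php
$id = 42; // assumed to be an already-validated integer

// Current time: let the database fill it in with NOW().
$sqlNow = sprintf('UPDATE events SET updated_at = NOW() WHERE id = %d', $id);

// Time from the form: agree on a format and let MySQL convert it.
$formTime = '31-12-2023 18:30'; // escape this with mysqli_real_escape_string() in real code
$sqlForm = sprintf(
    "UPDATE events SET updated_at = STR_TO_DATE('%s', '%%d-%%m-%%Y %%H:%%i') WHERE id = %d",
    $formTime,
    $id
);

echo $sqlNow, "\n", $sqlForm, "\n";
```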
  23. Moving to PDO requires a lot of rewrites and has no benefits, so I'd suggest moving to mysqli instead (and being careful about using prepared statements; they are not supposed to be the default choice).
  24. In addition to what is said in that link: overhead has very little impact on performance, so do not optimize pre-emptively. Why: running OPTIMIZE causes MySQL to completely rebuild the table. It does this by marking the old table as "no longer used" and copying all data to a new table. During this procedure you effectively have no working table, and if the operation is interrupted in any way other than what MySQL expects, you lose both copies of the table. What kind of interruptions can you get? Well, running out of disk space is a very popular one. MySQL doesn't (or didn't, I haven't tried this for a while) check for the required amount of disk space. So if you want to get more disk space by running OPTIMIZE, make sure you have enough space to rebuild the table (which you probably don't, otherwise you wouldn't want to clear disk space). I don't know about MySQL, but other databases have freespace mappers that keep track of this overhead and re-use it when the empty space is large enough to hold a new record.