Everything posted by Jacques1

  1. cURL is a very popular client for all kinds of protocols (including HTTP). It's well suited for making HTTP requests with PHP, and it's much more powerful than built-in functions like get_headers(). For example, cURL can send multiple requests in parallel so that your script doesn't waste time by waiting for the responses. This massively increases performance.
  2. google.com is a bad example, because they're doing all kinds of magic. The question is: Do you have any cURL problems with the real target page?
  3. Use cURL with a multi handle and make HEAD requests to only fetch the headers. get_headers() is slow because it makes only one request at a time and fetches the entire document (for whatever reason).
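Here's a minimal sketch of that approach; the URLs and the timeout are just placeholders:

```php
<?php

// URLs whose headers we want to check (placeholders).
$urls = [
    'https://example.com/',
    'https://example.org/',
    'https://example.net/',
];

$multiHandle = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $handle = curl_init($url);
    curl_setopt_array($handle, [
        CURLOPT_NOBODY         => true,  // HEAD request: fetch the headers only
        CURLOPT_HEADER         => true,  // include the headers in the result
        CURLOPT_RETURNTRANSFER => true,  // return the result instead of printing it
        CURLOPT_TIMEOUT        => 10,
    ]);
    curl_multi_add_handle($multiHandle, $handle);
    $handles[$url] = $handle;
}

// Run all requests in parallel.
do {
    $status = curl_multi_exec($multiHandle, $stillRunning);
    if ($stillRunning) {
        curl_multi_select($multiHandle);  // wait for activity instead of busy-looping
    }
} while ($stillRunning && $status === CURLM_OK);

foreach ($handles as $url => $handle) {
    echo $url . ': HTTP ' . curl_getinfo($handle, CURLINFO_HTTP_CODE) . "\n";
    echo curl_multi_getcontent($handle) . "\n";  // the raw response headers
    curl_multi_remove_handle($multiHandle, $handle);
    curl_close($handle);
}

curl_multi_close($multiHandle);
```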
  4. The blue underlined text snippets in my reply are actually links. Click on them and you'll get plenty of examples.
  5. So that he gets a nice SQL injection vulnerability? C'mon.
  6. Read requinix' reply. There is no such thing as $http_response_header for cURL. You need to manually extract the headers from the response.
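For example, with CURLOPT_HEADER enabled you get the headers and the body back as one string and can split them yourself (the URL is a placeholder):

```php
<?php

$handle = curl_init('https://example.com/');  // placeholder URL
curl_setopt_array($handle, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,  // prepend the response headers to the output
]);

$response = curl_exec($handle);
if ($response === false) {
    throw new RuntimeException(curl_error($handle));
}

// Split the headers from the body using the reported header size.
$headerSize = curl_getinfo($handle, CURLINFO_HEADER_SIZE);
$rawHeaders = substr($response, 0, $headerSize);
$body       = substr($response, $headerSize);

curl_close($handle);

// One header line per array element, roughly like $http_response_header.
$headerLines = explode("\r\n", trim($rawHeaders));
print_r($headerLines);
```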
  7. You can't just drop some PHP value into a query string, especially when that value comes from the user. This makes your application wide open to SQL injection attacks and all kinds of bugs. Use prepared statements to properly pass values to the database system. Note that you'll need the PDO or the MySQLi extension. If you're still using the old mysql_* functions, it's time to switch.
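A minimal sketch with PDO; the DSN, the credentials and the users table are of course placeholders:

```php
<?php

// Connect and switch PDO to exceptions so that errors aren't silently ignored.
$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8mb4', 'db_user', 'db_password', [
    PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,  // use real prepared statements
]);

// The user input never becomes part of the query string; it's sent separately.
$userName = isset($_GET['user_name']) ? $_GET['user_name'] : '';
$stmt = $pdo->prepare('SELECT user_id, user_name FROM users WHERE user_name = :user_name');
$stmt->execute(['user_name' => $userName]);

foreach ($stmt as $row) {
    echo htmlspecialchars($row['user_name'], ENT_QUOTES, 'UTF-8') . "<br>";
}
```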
  8. Strike “w3schools” from the list and replace it with the Mozilla Developer Network.
  9. w3schools is infamous for wrong information, bad practices and plain nonsense. In fact, there used to be an anti-w3schools site (aptly called W3Fools) which attempted to point out all their mistakes. I'd like to believe that w3schools has improved over the years. But when I look at their HTML tutorials, the markup isn't even valid. And the JavaScript tutorials still contain nonsense like document.write() and promote spaghetti code. I understand that w3schools is appealing for a beginner, because it looks fancy and has this nice “Try it yourself” feature. But if you're looking for reliable information from people who actually know what they're doing, w3schools is not the right source. Use the Mozilla Developer Network instead.
  10. And I think you're overestimating it. The concept is simply naïve, sorry. You have two options now: You can build some half-assed amateur solution with the help of PHP Freaks. Or you can adopt a professional solution from people who actually know what they're doing (like the Symfony team). Most PHP programmers go with the half-assed solution, which is why they stay amateur programmers for their entire life. But maybe you're one of the few people who takes their job seriously and wants to do it right. Your choice.
  11. A custom session handler is pretty much the worst starting point you can think of, because even experienced developers constantly get this wrong. And indeed this is a typical naïve implementation without any concurrency handling whatsoever. Using Ajax or running a slow script multiple times in quick succession is enough to break the whole thing and end up with strange errors. Proper session handlers are much more complex and require a deep understanding of how to handle concurrency at database level (locks, transactions etc.). For the time being, just stick to classical file-based sessions. Or if you absolutely must store the sessions in the database, use an existing implementation which actually works (there aren't many). For example, the Symfony framework gets it right. They're using advisory locks to make sure that concurrent requests won't overwrite each other.
  12. Prepared statements are indeed harder to debug, because there's no complete query at any point. The best you can get is a bunch of information about the parameters via PDOStatement::debugDumpParams(), but this doesn't even print the corresponding values.
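For example (connection details and the users table are placeholders):

```php
<?php

$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8mb4', 'db_user', 'db_password');

$stmt = $pdo->prepare('SELECT * FROM users WHERE user_id = :user_id');
$stmt->bindValue(':user_id', 42, PDO::PARAM_INT);
$stmt->execute();

// Prints the query template and parameter metadata, but not necessarily the bound values.
$stmt->debugDumpParams();
```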
  13. None of this is a valid reason for giving up a robust solution in favor of some homegrown “filtering” stuff. In fact, this makes absolutely no sense whatsoever. Where are the quotes? Why on earth would you replace non-alphanumerics with spaces? What is the mysqli_real_escape_string() supposed to do when you've already removed all of its target characters?
  14. The $HTTP_SESSION_VARS variable has been obsolete since PHP 4.1.0, which was released back in 2001. When you're running around with code older than 13 years, it's time for an update.
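The replacement is the $_SESSION superglobal, roughly:

```php
<?php

session_start();

// Old, long-gone style:
//   $HTTP_SESSION_VARS['user_id'] = 123;

// Current style:
$_SESSION['user_id'] = 123;
echo $_SESSION['user_id'];
```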
  15. What is your goal, Slyke? What are you trying to achieve with this redirect stuff? The whole approach looks very strange, and that's generally a sure sign that you're using the wrong tool.
  16. You should create your own threads instead of digging out some old topic. I found this purely by accident. PHPass is obsolete and seems to have been abandoned by its author. While it does support bcrypt, it uses the old “2a” prefix, which will lead to compatibility issues with modern implementations. And it has a rather questionable fallback to a custom MD5-based algorithm. For proper alternatives, see the links in #9.
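For reference, the built-in password_hash()/password_verify() functions (available since PHP 5.5) are the usual replacement; in this sketch, $storedHash stands for whatever hash you've saved in the database:

```php
<?php

// When the account is created: hash the password (bcrypt with the modern "2y" prefix).
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
// ... store $hash in the database ...

// At login: verify the submitted password against the stored hash.
if (password_verify($_POST['password'], $storedHash)) {
    // password is correct
} else {
    // password is wrong
}
```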
  17. And how did you solve the problem? Note that this is a forum which may be read by other people who have the same problem. It's extremely frustrating to read a discussion only to find out that the last reply is “Fixed it. Bye.”
  18. This function is much more important than kicken seems to think. Since sessions get an exclusive lock when they're opened, all other processes have to wait until the lock is released again. They cannot run simultaneously. If you're using Ajax or a similar technique, that's a big problem, because it essentially breaks the asynchronicity: All requests are processed one after another, synchronized by the locking mechanism. To make this problem less bad, the session should be closed as early as possible. So you start the session, do what you need to do and then close it again with session_write_close(). This releases the lock and lets other processes access the session.
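Roughly like this; the sleep() call just stands in for whatever slow work the script actually does:

```php
<?php

session_start();

// Read (and, if necessary, write) the session data as early as possible.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// Release the session lock so that parallel requests (e.g. Ajax calls) aren't blocked.
// Note: changes to $_SESSION after this point are no longer saved.
session_write_close();

sleep(5);  // placeholder for the long-running part of the script

echo json_encode(['user_id' => $userId]);
```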
  19. This is an SQL injection, and you can be pretty sure that the next attempt did in fact succeed and gave the attacker direct access to your data. So you must shut down your site immediately and handle this problem in a professional way:
      • If there's any user data in your database (password hashes, addresses, payment information etc.), consider it leaked. You must tell your users about that, because they may have a big problem now. For example, they might have used the same password for other websites.
      • All password hashes must be deleted. I really hope you've used a proper hash algorithm.
      • Check for injected data (new admin accounts, manipulated templates etc.). SQL injections can in fact compromise the entire server (via SELECT ... INTO OUTFILE, for example). Was the MySQL user account of your application able to write external files? If you've used the root account, then the answer is “yes”.
      • Check your entire application as well as the system. Any strange files? Manipulations? Backdoors?
      This is only the recovery procedure. After this, you need to make sure that the problem doesn't happen again:
      • Learn the basics of security. Seriously. You shouldn't even run a public website without a solid understanding of how to protect it.
      • Since you didn't know how to escape data up until now, you probably have SQL injection vulnerabilities everywhere in your code. The safest solution is to rewrite everything and use prepared statements instead (as already mentioned). Make sure you understand how they work. If in doubt, ask.
      • Secure the MySQL database itself. Keep permissions to the absolute minimum, check the connection settings etc.
  20. First of all, using a GET request to change data is wrong. The HTTP specification clearly states that GET is only for fetching a resource and must not have any side effects. To change data, you use POST. Fixing this does not solve the problem, but it already makes it less bad, because the user doesn't (accidentally) trigger an action merely by visiting a URL. The attack you describe is called cross-site request forgery (CSRF). To protect the user against CSRF, you generate a random token, store it in the user's session and include it in every critical form as a hidden field. Upon submission, you check whether the token from the hidden field is present and matches the token in the session. If it does, you accept the request; otherwise you reject it. This works because other users cannot read the token, so they are not able to “forge” a request on behalf of that user. See the link for a more detailed explanation.
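A minimal sketch of the token handling; it uses random_bytes(), which requires PHP 7 (older versions would need something like openssl_random_pseudo_bytes() instead):

```php
<?php

session_start();

// Generate the token once per session and store it.
if (!isset($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject the request if the token is missing or doesn't match the session token.
    if (!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
        http_response_code(403);
        exit('Invalid CSRF token.');
    }

    // ... the token is valid, perform the actual action here ...
}
?>
<form method="post" action="">
    <input type="hidden" name="csrf_token" value="<?= htmlspecialchars($_SESSION['csrf_token'], ENT_QUOTES, 'UTF-8') ?>">
    <button type="submit">Delete account</button>
</form>
```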
  21. It is not safe, and it's complete nonsense. First of all, please stop using the term “encryption” for a hash. Hashing is not encryption at all. In fact, when you see other people get those fundamental concepts wrong, that's a sure sign you do not want to follow their advice (besides the fact that the reply you've quoted is 4 years old). On the average server, sessions are pretty much the least safe place you can think of: The session files are sitting in some temporary folder for an indefinite amount of time. They're completely unprotected and may be read by anybody (at least any web application). Sessions may unwillingly be transferred from one user to another, be it due to session hijacking, session fixation or whatever. Password hashes must be protected, not spread across the system like friggin' Easter eggs. But what's even more important: The whole thing makes absolutely no sense whatsoever. Comparing the hash in the session with the hash in the database proves nothing. In fact, this check always succeeds, because if a session contains the data of a certain user, then of course the hash also belongs to that user (unless there's something very wrong with your application). The idea of storing the password hash in the session isn't new. I've seen many people suggest it. Unfortunately, I think none of them has actually thought it through.
  22. I'm not sure if you've understood my point. A proxy or WLAN hotspot doesn't need any low-level network tricks to intercept the traffic. It's a very simple scenario which anybody can understand: All traffic flows through one intermediary. And if you don't protect yourself, then this intermediary is free to sniff or manipulate the incoming or outgoing data. This simple example is enough to explain the necessity of HTTPS. You don't need to give people a 1000-page book about TCP/IP (although education is never wrong, of course). I understand that you like to play devil's advocate and make provocative statements to keep the discussion going. But when you look at reality, I think the situation is a bit different. So people are paranoid and vastly exaggerate the risk of network sniffing? It's rather the opposite: While there may be a vague fear of state-level surveillance, you rarely find users who actively protect themselves against concrete risks (public hotspots etc.). When talking about HTTPS, a common reaction is: “Only large banks need that!” In fact, some of your fellow “gurus” spend a lot of time trying to convince users of that. I still see people use FTP to access their precious servers. Good luck finding somebody who knows how to encrypt and decrypt e-mails with PGP/GPG. Where is this terrible paranoia you're fighting against? Do you really think our problem is too much security?
  23. Yes. You can actually see it by inspecting the HTTP headers: The expires attribute is always “01-Jan-1970 00:00:01 GMT” (Unix time 1).
  24. Setting the expiration time to the past is a compatibility feature for Internet Explorer. What people miss, however, is that PHP automatically sets the expiration time when you give it an empty value. So the time() stuff doesn't do anything and is arguably a waste of space.
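In other words, something like this is already enough to delete the cookie (name and path are placeholders):

```php
<?php

// The empty value alone makes PHP send the cookie with an expiration date in the past.
setcookie('remember_me', '', 0, '/');

// The explicit arithmetic is therefore redundant:
//   setcookie('remember_me', '', time() - 3600, '/');
```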
  25. No, you cannot. That's the whole point of HTTPS: You either get a secure connection, or you don't get any connection at all. Do you understand how HTTPS works under the hood? As soon as you enter “https://” into the URL bar, your browser starts the TLS handshake. At that point, the best an attacker can do is make the connection fail. But they cannot forge the request or the response. Again, that's the whole point of HTTPS. Maybe you're confusing this with “sslstrip”-like attacks where an attacker replaces all HTTPS links within an HTTP response. Yes, that's possible. But nobody ever claimed that plain HTTP provides security. It doesn't. If you want security, you must use HTTPS on the entire site.