Everything posted by Jacques1
-
That's the fault of the forum software. You can deactivate smilies in the “advanced reply” view. I had to do the same thing.
-
Yes, those symbols mean “not a valid character”. UTF-8 uses a specific byte pattern, and most ISO-encoded characters don't comply with that pattern, so you get no character at all, not just a wrong character. If you try it the other way round (UTF-8 misinterpreted as ISO), you'll see cryptic characters instead, because UTF-8 is formally valid ISO.
-
utf8_encode() is poorly named. It actually transcodes data from one encoding (ISO-8859-1) to another (UTF-8). So it only makes sense if your source data has the “wrong” encoding and can only be fixed at runtime. If the source data is already encoded with UTF-8, or if you can convert it to UTF-8 beforehand, the function is not necessary. Transcoding data at runtime is obviously inefficient, so it should be avoided whenever possible.
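To make the distinction concrete, here's a minimal sketch (the ISO-8859-1 byte string is just an illustration) of transcoding legacy data once, at the point where it enters the application, using mb_convert_encoding():

<?php

// "äöü" as it would arrive from a hypothetical ISO-8859-1 source
$legacy = "\xE4\xF6\xFC";

// transcode once, at the boundary where the legacy data enters the application
$utf8 = mb_convert_encoding($legacy, 'UTF-8', 'ISO-8859-1');

var_dump(mb_check_encoding($legacy, 'UTF-8'));   // bool(false)
var_dump(mb_check_encoding($utf8, 'UTF-8'));     // bool(true)
var_dump($utf8);                                 // string(6) "äöü"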
-
Recommended internal encoding for PHP
Jacques1 replied to NotionCommotion's topic in PHP Coding Help
All those encoding settings are irrelevant unless you actually use them. Do you? For example, the internal encoding of the mb extension is only used as a default value for a couple of mb_* functions. If you don't use those, or if you always declare the encoding explicitly, then why care about the default encoding?
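A small sketch of the difference (the string is just an example):

<?php

$text = 'Käsekuchen';

// relies on whatever the global default happens to be:
var_dump(mb_strlen($text));

// independent of any global setting, because the encoding is passed explicitly:
var_dump(mb_strlen($text, 'UTF-8'));          // int(10)
var_dump(mb_substr($text, 0, 4, 'UTF-8'));    // string(5) "Käse"

-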
Is a content-type header required when using a Twig template?
Jacques1 replied to NotionCommotion's topic in PHP Coding Help
If you declare the encoding at webserver level, that's just as explicit as emitting a Content-Type header in every single script. But the global declaration is clearly more convenient (and easier to change in case that's necessary). Where you set the encoding isn't really important. It's only important that you set it. -
If the random numbers are sufficiently strong, I don't see any major security problems. However, maintaining a long list of all current tokens and iterating through it on every single request isn't really a good approach. You should generate one token in the log-in procedure and then simply use this token for every form.

Generating a new token for every request is acceptable and secure, but it has no advantage over a single token and is unnecessarily complex. Should an attacker manage to get one token (through cross-site scripting or network sniffing), then it's game over either way. Invalidating the token on first use won't help. So you might as well use a single token for the entire session.

The expiration is also problematic. Let's say I write a long text, maybe leave the PC for a while and then try to submit the text, but the token has already expired. Will I lose the entire text? In your model, it's an “illegal” request. But of course there's no attack whatsoever, I just needed more time than you expected. You wouldn't have this problem if the token was valid for the lifetime of the session.
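For illustration, a minimal sketch of the single-token approach (the key name csrf_token is my own choice, and random_bytes() assumes PHP 7; older versions would need a different CSPRNG):

<?php

session_start();

// generate one token per session, not per request
if (!isset($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// embed the token as a hidden field named "csrf_token" in every form,
// properly escaped with htmlspecialchars()

// when processing a POST request:
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $submitted = isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '';
    if (!hash_equals($_SESSION['csrf_token'], $submitted)) {
        http_response_code(403);
        exit('Invalid CSRF token.');
    }
    // ... handle the actual request ...
}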
-
Is a content-type header required when using a Twig template?
Jacques1 replied to NotionCommotion's topic in PHP Coding Help
How do you want to generate the header with Twig? I'm not aware of any such feature. Personally, I define the encoding globally in the webserver configuration. Since I almost always use UTF-8, it wouldn't make sense to repeat the declaration per script. In the rare cases where I do need a different encoding, I can still override the defaults.
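A sketch of what I mean, assuming a plain Twig 1.x setup (the paths and the template name are placeholders); the header comes from PHP or the webserver, not from Twig:

<?php

require __DIR__.'/vendor/autoload.php';

$loader = new Twig_Loader_Filesystem(__DIR__.'/templates');
$twig = new Twig_Environment($loader);

// the Content-Type header is sent by PHP (or configured globally in the webserver),
// the template engine only produces the body
header('Content-Type: text/html; charset=UTF-8');

echo $twig->render('page.html', array('title' => 'Hello'));

-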
What is the ideal / proper way to deal with SESSIONs ?
Jacques1 replied to moose-en-a-gant's topic in PHP Coding Help
The session ID is transmitted via cookies. While it's theoretically possible to use the URL instead, this is rare and insecure. So if the sessions don't work properly, it's either a server issue or a problem with the cookies. -
Whirlpool? Wow, you must be the only person on this planet who actually uses that algorithm. May I ask why you picked it? Unfortunately, it's a very poor choice for password hashing. A standard PC can easily calculate millions or even billions of Whirlpool hashes per second, so this algorithm doesn't provide any serious protection against brute-force attacks. Even worse: Since the same input always leads to the same hash (there's no salt), Google probably knows the plaintext passwords of many hashes already. So, no, this doesn't work. You need an algorithm which was specifically designed for password hashing. A common choice today is bcrypt, and PHP actually has it built in.
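A minimal sketch with PHP's built-in API (PHP 5.5+); the password is obviously just an example:

<?php

$password = 'correct horse battery staple';   // example input

// bcrypt hash; the salt is generated automatically and stored within the hash
$hash = password_hash($password, PASSWORD_BCRYPT, array('cost' => 12));

// on login, compare the submitted password against the stored hash
var_dump(password_verify($password, $hash));          // bool(true)
var_dump(password_verify('wrong password', $hash));   // bool(false)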
-
Is mb_check_encoding really required for all inputs?
Jacques1 replied to NotionCommotion's topic in PHP Coding Help
As to the encoding declaration: You should always declare the encoding in the Content-Type header and use a meta element. This is explicitly recommended by the W3C. While the two declarations may sound redundant, they're not: The HTTP header is for the client which directly receives the server response. But then the document may be stored, in which case the HTTP headers are of course lost. Now the meta element takes over.
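A minimal sketch of both declarations together:

<?php

// declaration for the client that receives the HTTP response ...
header('Content-Type: text/html; charset=UTF-8');

?>
<!DOCTYPE html>
<html lang="en">
<head>
    <!-- ... and for anybody who opens the document after it has been saved -->
    <meta charset="utf-8">
    <title>Encoding declaration example</title>
</head>
<body>
    <p>Umlaut test: äöü</p>
</body>
</html>

-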
There are two possible approaches:

1. You keep the files within the document root, but you randomize the filenames (and of course turn off any directory listing). If the random numbers are sufficiently strong, it's impossible to find an image until your PHP script hands out the filename.
2. You keep the files outside of the document root and make them accessible through a PHP script which checks the permissions on each request.

Both approaches have advantages and disadvantages. The first one is by far the easiest and most efficient approach, but it's somewhat fragile, and you can't have fine-grained access management: Once the filename is known, it's known. You cannot revoke the permission. On the other hand, anybody who had the permission once might as well save the image, so that's not really a big difference. The second approach is more robust, because all requests will go through the PHP script. You can grant and, theoretically, revoke access at any time. But of course you'll have the overhead of running PHP on every request.

Both approaches can be optimized with the “sendfile” mechanism: Instead of literally reading the file with PHP and sending the whole content to the webserver which in turn sends it to the client, you just tell the webserver to serve the file.
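For illustration, a rough sketch of the second approach; the permission check, the ID format and the storage path are all placeholders, and the X-Sendfile variant assumes the corresponding webserver module (e.g. mod_xsendfile) is installed:

<?php

// image.php?id=<random hex ID>
session_start();

function user_may_access($userId, $imageId)
{
    // placeholder: look up the permission in the database
    return true;
}

$imageId = isset($_GET['id']) ? $_GET['id'] : '';
$userId  = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

if (!preg_match('/\A[0-9a-f]{32}\z/', $imageId) || !user_may_access($userId, $imageId)) {
    http_response_code(403);
    exit;
}

$path = '/var/www/private_images/'.$imageId.'.jpg';   // outside the document root

header('Content-Type: image/jpeg');

// plain PHP: read the file and send it to the client ...
readfile($path);

// ... or, with a sendfile module installed, let the webserver do the work:
// header('X-Sendfile: '.$path);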
-
Is mb_check_encoding really required for all inputs?
Jacques1 replied to NotionCommotion's topic in PHP Coding Help
Checking if the input is valid UTF-8 is neither necessary nor particularly useful. Personally, I've never done it. What are you trying to achieve with this? It does not increase security, because an invalid string is simply an invalid string. The worst that could happen is that the characters aren't displayed correctly. So what? It also doesn't increase usability, because the browser already takes care of the encoding. An error is very unlikely. It's really only possible if the client uses some homegrown bot which somehow doesn't understand encodings. -
What is the ideal / proper way to deal with SESSIONs ?
Jacques1 replied to moose-en-a-gant's topic in PHP Coding Help
Create a minimal example which uses plain PHP sessions with no fancy stuff whatsoever (no custom save path, no playing with the garbage collector). For example, create two scripts like this:

<?php session_start(); ?>

<h2>The cookies:</h2>
<?php var_dump($_COOKIE); ?>

<h2>The current session content:</h2>
<?php var_dump($_SESSION); ?>

<?php
$_SESSION['visited_page_1'] = true;   // in the other script, it's "visited_page_2", of course
?>

Now you can systematically debug the problem: Is the cookie set and transferred correctly? Does PHP open the right session file?
-
XSS Injections with htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
Jacques1 replied to Monkuar's topic in PHP Coding Help
First of all: Great reaction. I understand that you've probably invested a lot of time and effort into the filtering approach, so you'd have every right to stubbornly insist on it. But you don't. That's pretty rare these days.

Note that I didn't suggest HTML Purifier alone. It's actually a three-layer approach:

1. Use an established BBCode parser which has already proven itself in real applications. This greatly reduces the risk of “stupid” mistakes.
2. Since there may still be subtle bugs in the parser, it's a good idea to keep it in a kind of “sandbox”. That's what HTML Purifier is for: It makes sure that the parser is restricted to specific HTML tags like <b> or <a>.
3. In addition to the server-side protection, you also tell the browser that it shouldn't execute arbitrary inline scripts. This is what the Content-Security-Policy header does.

The combination of all three layers provides maximum security, and it's still pretty lightweight given the complex task. The only way to be even more secure is to not allow user comments in the first place.
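A sketch of how the layers fit together; the parser output in the first line is just a stand-in for whatever BBCode library you choose, the tag whitelist is an example, and the HTML Purifier autoloader is assumed to be on the include path:

<?php

// layer 1: an established BBCode parser has turned the user input into HTML
$html = '<b>parser output</b> <a href="http://example.com">a link</a>';   // placeholder

// layer 2: HTML Purifier restricts the result to a whitelist of harmless elements
require_once 'HTMLPurifier.auto.php';
$config = HTMLPurifier_Config::createDefault();
$config->set('HTML.Allowed', 'b,i,u,p,br,a[href]');
$purifier = new HTMLPurifier($config);
$safeHtml = $purifier->purify($html);

// layer 3: the browser is told not to execute inline scripts at all
header("Content-Security-Policy: default-src 'self'");

echo $safeHtml;

-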
XSS Injections with htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
Jacques1 replied to Monkuar's topic in PHP Coding Help
The point is that the approach is already wrong. It's a classical anti-pattern which has a long history of causing security issues. The bugtrackers and vulnerability databases are full of this never-ending break-fix cycle: A developer writes an XSS filter, thinking that they've finally figured out how to do it. Somebody else breaks the filter. The developer fixes the problem, hoping that now the filter works. Somebody else breaks the filter. ... and so on. Sure, you could say that the filter gets better each time. But I'd rather say that it was garbage from the beginning and should be thrown away. Because each time the filter breaks, it puts all users at risk. If you want to make this experience yourself, well, go ahead. But this has nothing to do with proper programming. At all. -
XSS Injections with htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
Jacques1 replied to Monkuar's topic in PHP Coding Help
Security doesn't work like this. If you assume that your code is secure until somebody proves the opposite, you're doing it wrong. The absence of evidence is not the evidence of absence. Even if none of the persons you ask is able to come up with a fancy attack (which may very well be the case), that doesn't mean anything. Maybe they're not smart enough, maybe they're not good at attacking applications, maybe they just don't care about those silly break-my-code challenges. Do you really want to base the entire security of your application on that? Then obviously your website isn't important at all. Otherwise you wouldn't gamble with it. Security is about thinking ahead. You don't wait until somebody breaks into your application so that you can fix this specific hole. You make sure that there are no holes in the first place. We can definitely help you with that. But if you're more interested in entertainment, this is probably the wrong forum.

// For the sake of completeness: The reason why the example attack vectors don't work for you is because your parser doesn't understand URLs. It blindly prepends “http://”, even if there's already a scheme. That's a bug.
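To illustrate that bug (not as a replacement for the layered approach above), here's a minimal sketch of what a URL check for a link tag would have to do; only http and https are allowed, and the function is a hypothetical helper:

<?php

function sanitize_link_url($url)
{
    $url = trim($url);

    if (preg_match('~\A(https?)://~i', $url)) {
        return $url;        // already has an allowed scheme
    }
    if (preg_match('~\A[a-z][a-z0-9+.-]*:~i', $url)) {
        return null;        // some other scheme (javascript:, data:, ...): reject
    }
    return 'http://'.$url;  // scheme-less input: now it's safe to prepend
}

var_dump(sanitize_link_url('example.com'));                // "http://example.com"
var_dump(sanitize_link_url('https://example.com/a?b=c'));  // unchanged
var_dump(sanitize_link_url('javascript:alert("XSS")'));    // NULL

-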
XSS Injections with htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
Jacques1 replied to Monkuar's topic in PHP Coding Help
If you need this to be secure, then don't write your own BBCode parser. You will end up with vulnerabilities. Even if we told you that everything is fine (which we don't), that wouldn't prove anything. We're a bunch of random programmers who've spent maybe 5 minutes on your code – that's laughable. The only way to be halfway sure is to use an established parser, run the result through HTML Purifier and add Content Security Policy on top.

The htmlspecialchars() function is fine for preventing classical injections, but your task is much more complex. You do want the user to generate HTML, you just want to limit their abilities. Simple escaping won't help you with that. For example, CSS contexts are inherently unsafe, especially in older browsers (e.g. dynamic expressions). Even seemingly trivial contexts like the href attribute can be used for attacks:

<?php

header('Content-Type: text/html; charset=utf-8');

$xss_1 = 'javascript:alert("XSS 1")';
$xss_2 = 'data:text/html;base64,PHNjcmlwdD5hbGVydCgnWFNTIDInKTwvc2NyaXB0Pg==';

?>
<!DOCTYPE HTML>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>XSS test</title>
</head>
<body>
    <p>
        The strings are properly escaped, yet still they can be used for XSS.
    </p>
    <a href="<?= htmlspecialchars($xss_1, ENT_QUOTES, 'UTF-8') ?>">Click for XSS 1</a><br>
    <a href="<?= htmlspecialchars($xss_2, ENT_QUOTES, 'UTF-8') ?>">Click for XSS 2</a>
</body>
</html>

Trying to fix those vulnerabilities is futile, because it will just be an arms race between you and the attackers: You fix something, they come up with something new. You need a fundamentally different approach: an established library and multiple layers of protection.
-
The first code was never acceptable in any PHP version. You just had the error reporting turned off, so PHP didn't tell you about the issues. Whether you want to actually fix the coding style or simply silence PHP again is up to you.
-
Multiple users reading and writing to same table - advice
Jacques1 replied to enveetee's topic in MySQL Help
The credentials should be stored outside of the document root. While it's generally OK to rely on .php files being executed rather than served, this kind of protection is fragile and may break due to a misconfiguration. If the webserver isn't properly set up (even if it's just for a few minutes), it may very well serve the plain source code with all the credentials in it. Keeping internal stuff out of the document root is also a matter of clean design. The docroot is the public API of your website. Why on earth should users be able to execute some connect.php script? That's clearly none of their business. They should only see the actual pages, everything else is a “404 Not Found”.
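A minimal sketch of the layout I mean, assuming the document root is a public/ subdirectory and a config/database.php script outside of it simply returns an array of credentials (all names are placeholders):

<?php

// public/index.php -- the document root is the "public" directory,
// so config/database.php can never be served or executed directly
$config = require __DIR__.'/../config/database.php';

$pdo = new PDO($config['dsn'], $config['user'], $config['password']);

-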
changing prepared statements from MYSQL to MYSQLI
Jacques1 replied to blmg2009's topic in PHP Coding Help
First of all: Why did you pick MySQLi instead of PDO? It's fairly complicated and cumbersome, and of course it's limited to the MySQL database system. PDO is a nicely designed interface for all mainstream systems. In fact, PDO may make the entire class unnecessary, because it has most of those features built in. Check out this tutorial.
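For comparison, a minimal PDO sketch (the DSN, the credentials and the query are placeholders):

<?php

$pdo = new PDO(
    'mysql:host=localhost;dbname=testdb;charset=utf8mb4',
    'db_user',
    'db_password',
    array(
        PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_EMULATE_PREPARES   => false,
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    )
);

$userEmail = 'alice@example.com';   // example input

// prepared statement with a named parameter; no manual escaping, no bind calls
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = :email');
$stmt->execute(array('email' => $userEmail));

foreach ($stmt as $row) {
    echo htmlspecialchars($row['name'], ENT_QUOTES, 'UTF-8'), "<br>\n";
}

-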
The middle ground would be to put the token into the URL but start a timer as soon as the user has visited the page. For example: After the password reset mail has been sent, the user has to click on the link within, say, half an hour. But once they've clicked on the link, they have to complete the reset within, say, 5 minutes. This way the token won't be rotting in some log. If you do this, make sure to put a big notice on top of the page. Otherwise users may not understand the logic.
-
All non-EV certificates are equally (in)secure. Since any CA in the trust store can issue certificates for any website, it doesn't help you one bit to pay extra money or go through special validation. An attacker can just pick the weakest CA of all and try to get a “fake” certificate from them. It will still be accepted by browsers. So the security of the entire standard certificate system is equal to the security of the weakest CA (there are a few exceptions, but this is the overall situation).

Whether you pay $500 or nothing at all, whether you go through a DNA test to prove your identity or just reply to an automated mail – it doesn't really matter. What matters is things like customer service and how the CA handles exceptional situations. If you need to get the certificate revoked, will they do it quickly and for free? Or do you have to pay an extra fee like with StartCom? And of course some of us generally prefer serious companies over, say, GoDaddy.

As I already said, there are a few exceptions. If you're dealing with very experienced users, you can benefit from a good CA: There are tools like Certificate Patrol which warn the user when the certificate changes. So if you always get your certificate from a particular CA, an attacker can't just use a different CA. Your users will notice. It's also possible to manually clean up the trust store and throw out the shady CAs. But the general public neither understands nor cares about the various CAs. It only understands the difference between EV (green bar) and non-EV (no green bar).
-
Once again: JSON-encoding has absolutely nothing to do with XSS protection. Nothing. Zero. It's not the job of a JSON encoder to fix your XSS vulnerabilities. Any kind of protection is just an implementation detail which may change at any time. Yes, PHP currently has a “magic quotes” feature in its JSON encoder (as I already said), because the core developers are well aware that PHP programmers don't really understand XSS. So you may sometimes get away with insecure code, just like you may sometimes get away with SQL injection vulnerabilities if magic_quotes_gpc is on. So should we just write insecure code and rely on “magic quotes” to fix it for us? Good lord, no. As a developer, it's your job to take care of the security. Why is this so hard to understand? Why are we having this discussion over and over again?
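One way to make the escaping explicit instead of hoping for implicit protection (the data is just an example):

<?php

$userInput = '</script><script>alert("XSS")</script>';   // example of hostile input

// the JSON_HEX_* flags escape < > & ' " explicitly, so the output cannot
// break out of the surrounding <script> context regardless of encoder defaults
$json = json_encode(
    array('comment' => $userInput),
    JSON_HEX_TAG | JSON_HEX_AMP | JSON_HEX_APOS | JSON_HEX_QUOT
);

?>
<script>
    var data = <?= $json ?>;
    console.log(data.comment);
</script>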
-
Note that StartCom uses extensive personal verification (passport, photo ID, driver's licence) and will store this data for at least 7 years. At the same time, you just get a standard certificate which isn't better than any other certificate. You also have to pay an extra fee of $25 if the certificate needs to be revoked. Either way, make sure you get a SHA-256 certificate. SHA-1 has (theoretical) weaknesses and is currently being phased out.
-
Multiple users reading and writing to same table - advice
Jacques1 replied to enveetee's topic in MySQL Help
You can implement a “locked” flag in MySQL, yes. What's important is that getting the current status and obtaining the lock must be done in a single atomic step. For example, you might do an UPDATE query to change the status to “locked” and then check the number of affected rows. If there are no affected rows, then the entry was already locked, and the user may not proceed. If there is 1 affected row, then the lock has been obtained successfully.

Do note, however, that this kind of locking is problematic: The entries will be locked even if the editor doesn't change any data. In a web application, it's fairly difficult to determine whether a user is still active. So an entry may be locked even after the user has already left the PC. Unlocking the entry may fail, in which case it will be locked permanently until you change the status by hand.

An alternative approach is to let anybody access the entry and postpone the check until the user has actually changed the data (optimistic locking). So user A can open the page even if user B has already opened it. But when user A tries to save the changes, the application checks if there were other changes in between. In that case the changes will be rejected, merged or whatever. Of course this also comes with problems: If the records are edited very often by different people, then the changes will often be rejected. Merging changes may be difficult.

The best approach might be a combination: You do lock the row, but you don't actually enforce the lock. You merely show a symbol and let the user decide if they want to change the data nonetheless. The data itself is protected through optimistic locking.
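A rough sketch of both techniques with placeholder table and column names; $pdo is assumed to be an existing PDO connection (not shown):

<?php

// assumed schema: articles(id, locked_by, locked_at, version, body, ...)
$currentUserId = 42;    // placeholder values
$articleId     = 7;
$loadedVersion = 3;
$newBody       = '...';

// pessimistic lock: status check and lock acquisition in one atomic UPDATE
$stmt = $pdo->prepare(
    'UPDATE articles
     SET locked_by = :user_id, locked_at = NOW()
     WHERE id = :id AND locked_by IS NULL'
);
$stmt->execute(array('user_id' => $currentUserId, 'id' => $articleId));

if ($stmt->rowCount() === 1) {
    // lock obtained
} else {
    // the entry was already locked by somebody else
}

// optimistic locking: the row carries a version number which must still match
// the version the user originally loaded
$stmt = $pdo->prepare(
    'UPDATE articles
     SET body = :body, version = version + 1
     WHERE id = :id AND version = :version'
);
$stmt->execute(array('body' => $newBody, 'id' => $articleId, 'version' => $loadedVersion));

if ($stmt->rowCount() === 0) {
    // somebody else changed the row in between: reject or merge the changes
}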