Everything posted by Jacques1

  1. Your code above is wide open to SQL injection attacks, so this needs to be fixed first. Stuffing variables into query strings should be avoided altogether, because it's way too easy to make a mistake (as you can see). Instead, use a modern database interface which allows you to safely pass parameters to the database system via prepared statements. I'm no MS-SQL expert, but SQLSRV looks like the right approach. See the sqlsrv_prepare() function for examples of how to use prepared statements (there's a rough sketch at the end of this post).

     As to bcrypt: I assume you've already set up your database table? A bcrypt hash is exactly 60 bytes long, so you'll want something like CHAR(60) to store the hash. The API is pretty straightforward:

     password_hash() creates a new hash from a plaintext password. This is what you use in the registration script so that you can store the hash in the database.

     password_verify() checks a plaintext password against an existing hash. This is what you use in the log-in script to see if the user has entered the correct password.

     One special feature of bcrypt is that it's an adaptive hash algorithm, which means you can adjust its strength according to your current hardware and security needs. This is what you should do first. The strength is controlled through a cost parameter:

     ```php
     $hash = password_hash('test', PASSWORD_BCRYPT, array('cost' => 10));
     ```

     Write a test script with the above line and increase or decrease the parameter until the hash calculation takes roughly one second. Then put this value into a configuration file so that you can change it later:

     ```php
     // hash parameters for bcrypt
     define('PASSWORD_HASH_COST', 10);   // just an example; you may have a different value
     define('PASSWORD_MAX_LENGTH', 56);  // bcrypt has a maximum input length of 56 bytes
     ```

     In the registration script, you simply call password_hash() with the above parameters:

     ```php
     <?php

     /**
      * Registration
      */

     // ... check input, connect to database etc.

     // check password length
     if (strlen($_POST['password']) <= PASSWORD_MAX_LENGTH) {
         $password_hash = password_hash($_POST['password'], PASSWORD_BCRYPT, array('cost' => PASSWORD_HASH_COST));
         // insert user data including $password_hash into the database
     } else {
         // password too long, display error
     }
     ```

     In the log-in script, you fetch the hash for the given username and then check it with password_verify(). If the password is correct, you'll want to update the hash in case the cost parameter has changed:

     ```php
     <?php

     /**
      * Log-in
      */

     // ... check input, connect to database etc.

     // load password hash from database, put it into $stored_password_hash

     // check password
     if (strlen($_POST['password']) <= PASSWORD_MAX_LENGTH && password_verify($_POST['password'], $stored_password_hash)) {
         // the password is correct; check if the hash needs to be updated
         if (password_needs_rehash($stored_password_hash, PASSWORD_BCRYPT, array('cost' => PASSWORD_HASH_COST))) {
             // update the hash in the database
         }
         // the user is now authenticated; store the user ID in the session etc.
     } else {
         // password incorrect, display error
     }
     ```
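     And here's the promised rough sketch of a parameterized query with SQLSRV. The server name, credentials, table and column names are placeholders; adjust them to your setup:

     ```php
     <?php

     // placeholder connection details; replace with your own
     $connection = sqlsrv_connect('localhost\SQLEXPRESS', array(
         'Database' => 'mydb',
         'UID'      => 'myuser',
         'PWD'      => 'mypassword',
     ));

     if ($connection === false) {
         // handle the error (log sqlsrv_errors(); don't echo internals to the user)
         exit;
     }

     // SQLSRV passes parameters by reference, so they have to be variables
     $username = $_POST['username'];

     $statement = sqlsrv_prepare(
         $connection,
         'SELECT user_id, user_password_hash FROM users WHERE user_name = ?',
         array(&$username)
     );

     if ($statement === false || !sqlsrv_execute($statement)) {
         // handle the error
         exit;
     }

     $row = sqlsrv_fetch_array($statement, SQLSRV_FETCH_ASSOC);
     ```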
  2. Letting the database handle the bcrypt hashing is a poor approach, because it's complex, inflexible and potentially insecure:

     Passing the plaintext password to the database system increases the risk of leaking it and forces you to put extra effort into database security. For example, database systems typically log all queries, so you may find the precious passwords piling up in some log file. If the database is on a remote server, the passwords may also be intercepted in transit in case the traffic isn't properly encrypted.

     Should you ever decide to give up MS-SQL in favor of a different database system (which isn't too unlikely), you have to go through all the trouble again and find a database-specific bcrypt implementation.

     Instead, do the hashing in the application. If you have at least PHP 5.5, there's a native password hashing extension which currently uses bcrypt as the default algorithm: http://php.net/manual/en/function.password-hash.php

     If you have at least PHP 5.3.7, there's an external library which emulates the above API: https://github.com/ircmaxell/password_compat
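     If you end up using the password_compat library, the usage is exactly the same as with the native functions. A minimal sketch (the include path is just an example; password.php comes from the library linked above):

     ```php
     <?php

     // path is just an example
     require_once __DIR__ . '/lib/password.php';

     // same API as the native functions in PHP 5.5+
     $hash = password_hash($_POST['password'], PASSWORD_BCRYPT, array('cost' => 10));

     if (password_verify($_POST['password'], $hash)) {
         // correct password
     }
     ```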
  3. Please read my reply as well. If you left this forum with a gaping file inclusion vulnerability, that would be a sad outcome for everybody.
  4. At the risk of getting my post deleted again: You can't just include an arbitrary file your user has asked for. What if they ask for your password file? You don't want to give that to them, do you? No, a group of radio buttons with predefined filenames does not help against this, because this only affects the GUI of your website. The user can still send arbitrary paths to your server, and if you blindly send back the corresponding file, you've unknowingly given them direct access to all of your files (at least the ones readable by the webserver). Not good! You need a whitelist of specific files that the user may see. Only include those and reject any other request. Don't just rely on users being nice.
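     A rough sketch of what such a whitelist could look like (the page names and paths are just examples):

     ```php
     <?php

     // whitelist of pages the user may request
     $pages = array(
         'home'    => 'pages/home.php',
         'about'   => 'pages/about.php',
         'contact' => 'pages/contact.php',
     );

     $requested = isset($_GET['page']) ? $_GET['page'] : 'home';

     if (isset($pages[$requested])) {
         include $pages[$requested];
     } else {
         // not on the whitelist: reject the request, don't touch the filesystem
         http_response_code(404);
     }
     ```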
  5. The code works just fine for me (using the test data you've provided). So what exactly is the problem? Have you inspected the variables with var_dump? What's the content of $file_lines and $line_elements? By the way, PHP can parse CSV files, so no need to mess with file() and explode().
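     A minimal sketch with fgetcsv() (the filename and delimiter are just examples; adjust them to your format):

     ```php
     <?php

     // parse a CSV file line by line with fgetcsv()
     $handle = fopen('data.csv', 'rb');
     if ($handle === false) {
         die('Cannot open the file.');
     }

     while (($line_elements = fgetcsv($handle, 0, ',')) !== false) {
         // $line_elements is an array with one entry per column
         var_dump($line_elements);
     }

     fclose($handle);
     ```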
  6. This is not what include is for. The include statement executes a PHP script. Since you want plain HTML, there's absolutely no reason for evaluating it as PHP code. In fact, this has serious consequences:

     Anything that looks like PHP tags will be executed, even if the HTML designer just meant to write down literal text. For example, the term “<?=” or even just “<?” immediately triggers the PHP parser.

     The included files may be used to purposely inject malicious code. I'm not saying that your designers would do that, but an attacker who has gained access to the templates is also able to execute arbitrary PHP code.

     HTML designers should write HTML markup, not PHP code. So either use readfile() to print the file content as plaintext. Or give your designers a proper template engine like Twig which allows them to use additional features in a controlled manner.
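     The readfile() variant is as simple as it gets; a minimal sketch (the path is just an example and should of course come from a whitelist of allowed templates):

     ```php
     <?php

     // output the designer's HTML file verbatim; the path is just an example
     // unlike include, anything that looks like a PHP tag inside the file is NOT executed
     readfile('templates/header.html');
     ```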
  7. You're both missing the point.

     First of all, I fear those are rash actions. I understand that you're worried about the HTTPS issue, but that doesn't mean you should accept any risk only to have a fancy HTTPS certificate. It doesn't make sense to remove all locks from one door only to have twice as many locks on another door, because now the system as a whole is actually weaker. Yes, HTTPS is important, but XSS protection is just as important, if not more important.

     “Why not just put the admin area behind a path like /admin?” Because then you lose all XSS protection. Any vulnerability in the public area immediately compromises the admin area as well. Since the public area is generally the weakest link of an application, this is a very bad idea. You don't have this problem with separate domains, because the Same Origin Policy isolates one domain from another.

     You misunderstood the point I made about cookies. This is not about your cookies. This is about cookies set by an attacker to do a session fixation attack, break your CSRF protection (if you use cookies for that) etc. The fact that a microsite can set cookies for your entire application is a serious problem, and there is no easy fix. This needs to be approached with an HMAC or session isolation, but that's a different discussion.

     HttpOnly doesn't do much and is a rather naïve approach (although I do recommend it for defense in depth). An attacker doesn't necessarily care about the actual content of a cookie. Often times they just want to use the cookie. For example, it doesn't matter that you don't know the session ID as long as you can use it to gain the victim's privileges.
  8. PHP scripts can actually be compiled to byte code (e. g. with ionCube). This destroys the original source code and only leaves the low-level functionalities.
  9. Well, simply protect the core application with your current wildcard certificate. This isn't perfect, but it's still much more secure than stuffing all microsites into a single domain or something like that.
  10. Protecting multiple subdomain levels requires a certificate with one SAN (Subject Alternative Name) per level:

      yourdomain.com
      *.yourdomain.com
      *.*.yourdomain.com
      etc.

      Technically, this is perfectly possible. But it's rare and will be expensive. It seems DigiCert offers this, but you should double-check that they indeed support multiple wildcards in a single SAN and not just multiple wildcard SANs. It might also be a good idea to do a test run with a self-issued certificate to see how browsers react to multiple wildcards.

      What exactly do you need the subdomains for?
  11. As to the second question: SSLCACertificateFile is used for client certificates. This is where you put your private CA in case you're using public key authentication (you probably don't). The StartSSL CA certificate wouldn't make sense in this context. What exactly does the trust error in your browser say?
  12. Besides this concrete case, one point I'm really trying to get across is this: When dealing with security, don't just do what you think is the bare minimum. Chances are you've overlooked some tiny little detail, and then you may be screwed. Don't assume that you know everything about HTML or SQL or whatever. Don't assume that things will always go as planned. Don't assume that a technique is secure until somebody has proven the opposite. Use robust security and established solutions (like those from the OWASP). Avoid risks instead of fighting with vulnerabilities.
  13. Since there have been some debates about how to safely pass PHP values to JavaScript, I hope I can clarify a few things.

      One suggestion that kept recurring was to simply run the value through json_encode() and then inject the result into a script element. The JSON-encoding is supposed to (magically?) prevent cross-site scripting vulnerabilities. And indeed it seemingly works, because naïve attacks like trying to inject a double quote will fail. Unfortunately, this approach doesn't work at all and is fundamentally wrong for several reasons:

      json_encode() was never intended to be a security function. It simply builds a JSON object from a value. And the JSON specification doesn't make any security promises either. So even if the function happens to prevent some attack, this is implementation-specific and may change at any time.

      JSON doesn't know anything about HTML entities. The encoder leaves entities like &quot; untouched, not realizing that this represents a double quote which is dangerous in a JavaScript context.

      The json_encode() function is not encoding-aware, which makes it extremely fragile and unsuitable for any security purposes. Some of you may know this problem from SQL-escaping: There used to be a function called mysql_escape_string() which was based on a fixed character encoding instead of the actual encoding of the database connection. This quickly turned out to be a very bad idea, because a mismatch could render the function useless (e. g. the infamous GBK vulnerability). So back in 2002(!), the function was abandoned in favor of mysql_real_escape_string(). Well, json_encode() is like the old mysql_escape_string() and suffers from the exact same issues.

      Any of those issues can be fatal and enable attackers to perform cross-site scripting, as demonstrated below.

      1) The entire “security” of json_encode() is based on side-effects. For example, the current implementation happens to escape forward slashes. But the JSON standard doesn't mandate this in any way, so this feature could be removed at any time (it can also be disabled at runtime). If it does get disabled, then your application is suddenly wide open to even the most trivial cross-site scripting attacks:

      ```php
      <?php

      header('Content-Type: text/html; charset=UTF-8');

      $input = '</script><script>alert(String.fromCharCode(88, 83, 83));</script><script>';

      ?>
      <!DOCTYPE HTML>
      <html>
          <head>
              <meta charset="utf-8">
              <title>XSS</title>
          </head>
          <body>
              <script>
                  var x = <?= json_encode($input, JSON_UNESCAPED_SLASHES) ?>;
              </script>
          </body>
      </html>
      ```

      2) In XHTML, a script element works like any other element, so HTML entities like &quot; are replaced with their actual characters (in this case a double quote). But JSON does not recognize HTML entities, so an attacker can use them to bypass json_encode() and inject arbitrary characters:

      ```php
      <?php

      header('Content-Type: application/xhtml+xml; charset=UTF-8');

      $input = "&quot;;alert('XSS');&quot;";

      ?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
              <title>XSS</title>
              <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
          </head>
          <body>
              <script type="text/javascript">
                  var x = <?= json_encode($input) ?>;
              </script>
          </body>
      </html>
      ```

      3) json_encode() blindly assumes that the input and the output should always be UTF-8. If you happen to use a different encoding, or if an attacker manages to trigger a specific encoding, you're again left with no protection at all:

      ```php
      <?php

      header('Content-Type: text/html; charset=UTF-7');

      $input = '+ACIAOw-alert(+ACI-XSS+ACI)+ADsAIg-';

      ?>
      <!DOCTYPE HTML>
      <html>
          <head>
              <meta charset="utf-7">
              <title>XSS</title>
          </head>
          <body>
              <script>
                  var x = <?= json_encode($input) ?>;
              </script>
          </body>
      </html>
      ```

      (This particular example only works in Internet Explorer.)

      I hope this makes it very clear that json_encode() is not a security feature in any way. Relying on it is conceptually wrong and simply a very bad idea.

      It's generally not recommended to inject code directly into a script element, because any mistake or bug will immediately lead to a cross-site scripting vulnerability. It's also very difficult to do it correctly, because there are special parsing rules and differences between the various flavors of HTML. If you try it, you're asking for trouble.

      So how should one pass PHP values to JavaScript? By far the most secure and robust approach is to simply use Ajax: Since Ajax cleanly separates the data from the application logic, the value can't just “leak” into a script context. This is essentially like a prepared statement (a minimal sketch follows at the end of this post).

      If you're into micro-optimization and cannot live with the fact that Ajax may need an extra request, there's an alternative approach by the OWASP: You can JSON-encode the data, HTML-escape the result, put the escaped content into a hidden div element and then parse it with JSON.parse():

      ```php
      <?php

      header('Content-Type: text/html; charset=UTF-8');

      $input = 'bar';

      ?>
      <!DOCTYPE HTML>
      <html>
          <head>
              <meta charset="utf-8">
              <title>XSS</title>
              <style>
                  .hidden {
                      display: none;
                  }
              </style>
          </head>
          <body>
              <div id="my-data" class="hidden">
                  <?php
                  $json_object = json_encode(array(
                      'foo' => $input,
                  ));
                  // HTML-escape the JSON object
                  echo htmlspecialchars($json_object, ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML5, 'UTF-8');
                  ?>
              </div>
              <script>
                  var data = JSON.parse(document.getElementById('my-data').innerHTML);
                  alert('The following value has been safely passed to JavaScript: ' + data.foo);
              </script>
          </body>
      </html>
      ```
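      And the promised sketch of the Ajax approach. The file names and the data are made up; the point is simply that the value travels as data in a separate JSON response and never ends up inside the script code itself. One script serves the value:

      ```php
      <?php

      // get-data.php -- hypothetical Ajax endpoint; the data is just an example

      header('Content-Type: application/json; charset=UTF-8');

      echo json_encode(array(
          'foo' => 'bar',
      ));
      ```

      And the page fetches it with XMLHttpRequest:

      ```php
      <?php

      // page.php -- hypothetical page that fetches the value via Ajax instead of embedding it

      header('Content-Type: text/html; charset=UTF-8');

      ?>
      <!DOCTYPE HTML>
      <html>
          <head>
              <meta charset="utf-8">
              <title>Ajax example</title>
          </head>
          <body>
              <script>
                  var request = new XMLHttpRequest();
                  request.open('GET', 'get-data.php');
                  request.onload = function () {
                      var data = JSON.parse(request.responseText);
                      alert('Safely received: ' + data.foo);
                  };
                  request.send();
              </script>
          </body>
      </html>
      ```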
  14. I understand what you mean. The problem is that adding security later rarely works out and is even more frustrating and time-consuming. So I think it's a good idea to learn the basics at a very early stage and do things correctly right from the beginning. When I learned C at university, we talked about security whenever we encountered a new feature, and I found this very helpful. It was still a lot of fun. In fact, it's nice to know that your code is decent and not just a quick hack that wouldn't survive in reality. The OP also said this is a task, so I assume it's not just for fun.
  15. Yes, some of those warnings may be hidden somewhere in the PHP manual. But the OWASP does a good job at getting straight to the point and also mentioning the lesser-known issues. As to PDO-specific tutorials, there's an excellent one at wiki.hashphp.org.
  16. Have you guys ever wondered why the rest of the world makes fun of PHP and thinks of it as a toy language for kiddies and clueless amateurs? This isn't even about “security”. It's programming 101: How to think ahead and avoid bugs. It's OK for a newbie to not understand those concepts, but you've been doing this long enough. requinix, you've actually seen those vulnerabilities right here at phpfreaks. You've personally observed them. But for some reason you just keep making the same mistakes.

      It's not enough to write code which works under lab conditions. It needs to work in real life, and real life includes bugs, misunderstandings, unexpected situations, carelessness and sometimes plain incompetence. If your application cannot handle this, then it's garbage. You may try to blame the mistake on somebody else, but it's actually you who hasn't thought ahead. Why is this so hard to understand?

      Applications need to be robust. It doesn't matter if a variable is currently “trusted” or “kinda-sorta trusted” or “untrusted”. You simply make sure that it cannot cause any damage. You choose an approach which works at all times, even if things go wrong. That's why we use prepared statements instead of relying on our good eye. That's why we use bcrypt instead of relying on the user to provide a strong password.

      The variable in the original code is not hard-coded. It's set somewhere in various parent scripts, and now you're hoping that it's indeed hard-coded and doesn't contain any unexpected value (Hi, Steam) and hasn't been overwritten and will be safe now and forever. You cannot be serious. If you consider this good programming, then you have a problem.

      Oh noez, an HTTP request! Yes, that's the biggest problem of a guy writing his first PHP script. Thank you for your smart comment. And it's also fine to waste dozens of requests for stupid ads, tracking garbage and GUI gimmicks. But you can't have an Ajax request to improve security. That's evil.

      Anyway, there's hope for you: You can also JSON-encode the value, HTML-escape it, store it in a hidden element and parse it with JavaScript (see the OWASP).
  17. You always have to make sure that the input is safe, even if you think it comes from you. The general approach is also rather poor, but more on that later.

      Contrary to requinix' assumption, hard-coded variables are not inherently safe. For example, PHP developers love to inject user-defined variables through mechanisms like Register Globals or extract(), and this can enable the user to overwrite your variable with a new value. That's obviously a problem. It's also common that hard-coded variables sooner or later turn into dynamic variables, because that's easier to manage. Are you sure that you or your successor will remember that there's now a security vulnerability they have to fix? I wouldn't bet on that. So don't do this.

      As I already said, the approach itself is bad: Your included script pulls some global variable from “nowhere” (hopefully the main script), and then you dump that variable straight into a JavaScript context. C'mon, this has nothing to do with proper programming. There's already a well-defined interface for safely passing values from a server-side script to JavaScript: Ajax. Use it, and the problem will go away.
  18. This is a resource. There's a parent resource (e. g. the collection of all blog posts), and there are specific subresources identified by IDs (e. g. the blog post #321). So the ID clearly belongs into the URL.

      On a side note: If you have any chance to implement “clean URLs” (like /blog-posts/321 instead of /some_script.php?controller=blog_posts&id=321), do that. It will greatly improve the readability for your users as well as the technical clarity for yourself. We probably wouldn't even have this discussion if there weren't all those parameters flying around.

      The approach is inflexible and unnecessarily complicated. Instead of writing methods specifically for Ajax, keep them generic. Use the Accept header if you need a specific response format (see the sketch below). Use response codes instead of custom status texts. If you need additional information in a certain format, again check the Accept header.

      Searching is just retrieving with an additional query. This could be unified.

      As you already said, logging out has nothing to do with the blog page, so it shouldn't be there at all. Put it into the user controller (or any other appropriate location).

      Purists would probably disagree with the whole idea of having different tasks, but I think this is OK. You could get rid of those special methods if you create separate resources for comments etc. and send standard requests to them, but this might be overkill.
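      A rough sketch of what “use the Accept header” could look like in plain PHP (the data is made up, and real content negotiation can be more elaborate):

      ```php
      <?php

      // hypothetical example data
      $blog_post = array(
          'id'    => 321,
          'title' => 'Hello world',
      );

      $accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';

      if (strpos($accept, 'application/json') !== false) {
          // the client asked for JSON (e. g. an Ajax call)
          header('Content-Type: application/json; charset=UTF-8');
          echo json_encode($blog_post);
      } else {
          // default: render the regular HTML view
          header('Content-Type: text/html; charset=UTF-8');
          // ... load the HTML template
      }
      ```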
  19. I'm sorry to say, rex_2012, but there's a lot more wrong with the code than just the mysql_* functions.

      First of all, there's no security whatsoever. It seems you haven't even thought about the possibility of users sending malicious input. The whole script is essentially one big vulnerability allowing anybody to upload malware to your server, steal sensitive data, manipulate your database and possibly take over the entire server. So before you do anything, you need to learn the basics of security and go through your entire code to fix the current vulnerabilities.

      The Internet is not Disneyland. There are a lot of people who do break into applications, be it for money or just for “fun”. I understand this is all new to you, but that doesn't mean you'll get away with security issues. In fact, easy targets are very popular for obvious reasons. Seriously, think about it. You can't just insert user input directly into your queries. You can't just let anybody upload arbitrary files.

      Besides that, the fact that you haven't gotten any errors for the mysql_* functions means that you either had the error reporting turned off all the time, or you've used some ancient PHP version. This is likely to bite you as well. As soon as you turn the error reporting on (which you should), your screen may be flooded with bugs that were hidden before.

      Again, I understand that you're new to PHP, and this reply is probably not what you wanted to hear. And maybe the whole task was a bit too much. But it is what it is.
  20. Because it wouldn't make sense, neither semantically nor practically. The fact that the URL specifies the target of the request is one of the most fundamental principles of HTTP. This is how the protocol works, and this is how everybody and every program expect it to work (including servers, browsers, proxies, caches etc.). If you instead invent your own variation of HTTP where the URL of some methods is always “/” and the actual routing information is hidden somewhere in the body, you'll make a huge mess. You'd also have to create a separate router specifically for POST requests. Why not simply use the same routing logic for all requests?

      Not sure what you mean. There's a special parameter for requests triggered through Ajax? Why? Where a request comes from shouldn't matter at all. Of course you may sometimes need a special representation of a resource (like JSON instead of HTML), but this should be specified through the Accept header. No need for an extra URL.

      My point is that a lot of your parameters and if statements and special cases will be unnecessary if you simply use standard HTTP. It's actually a very elegant protocol: A resource is located at a specific URL. You can do different things with a resource by using different methods (unfortunately, HTML is currently limited to GET and POST). You can also request different representations of the resource (HTML, JSON, XML, ...) through the Accept header.
  21. First: A request does not have multiple methods as suggested by Barand. There's no such thing in HTTP. The first line of a request specifies exactly one method (if the RFC is too abstract, you may also observe actual HTTP messages with your browser or a network sniffer).

      I do not recommend the use of $_REQUEST. In fact, it's awful and should be avoided like the plague, because it blurs the line between different aspects of the request, can lead to name collisions and has major security implications. For example, if URL parameters take precedence over cookies, an attacker can easily perform session fixation and break the CSRF protection of your application. There's a big difference between URL parameters, request body parameters and cookies, and throwing everything into one big array means asking for trouble. Like always: PHP makes it very easy to write bad code, but it makes you jump through hoops to write good code.

      No, send the request to the parent resource. In the previous example, that would be /blog-posts/. This is actually the original meaning of POST: It appends a new resource to a parent resource.

      No no no. The URL specifies the resource (like a blog post or a user). Think of it as a unique address pointing to an object. In fact, the URL is usually the same for GET requests and POST requests. You may fetch /blog-posts/321, but you may also update /blog-posts/321. Things like CSRF tokens are only relevant for updating the data, so they go into the request body. They have nothing to do with the resource itself.
  22. As NotionCommotion already said multiple times, this is not the question. He understands the difference between GET and POST and when to use which. The question was about the parameters.
  23. I strongly recommend that you avoid terms like “GET parameters” or “POST parameters”. They make no sense and only cause confusion. Yes, I know that PHP has a $_GET and a $_POST superglobal, but that's nonsensical as well.

      A request has a method, be it GET, POST, PUT, DELETE or whatever. Requests also have a body which may be empty (in the case of GET) or contain additional information (in the case of POST and PUT).

      To retrieve a resource, you send a GET request to the URL where the resource is located. For example:

      GET /blog-posts/321 HTTP/1.1

      The exact format of the URL is irrelevant. Instead of using “pretty URLs”, you might as well put the ID into a URL parameter: id=321. That's entirely up to you as long as the URL uniquely identifies the resource.

      To update a resource, you send a PUT (or POST) request to the URL where the resource is located:

      POST /blog-posts/321 HTTP/1.1

      What exactly should be changed is specified in the request body. The exact format of the body is, again, irrelevant. Could be JSON, could be XML, it doesn't matter. However, HTML forms typically use percent-encoded parameters, so the body may look like this:

      author=NotionCommotion&text=Hello%20world!

      So, yes, the ID goes into the URL, and the author and text go into the request body.

      Now, for some strange reason the PHP people have decided to copy the URL parameters into a superglobal called “_GET” and put data from the parsed request body into a superglobal called “_POST”. This is plain nonsense, because the location of the parameters has nothing to do with the request method. A POST request may very well use URL parameters, which leads to the absurd situation that you have to pull them from the $_GET superglobal. There is no GET anywhere! I guess there's some historical explanation for those names, but in practice, just accept it as a weird tradition. $_GET has nothing to do with GET requests, and $_POST has nothing to do with POST requests.
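      To make that last point concrete, a small sketch (the script name and field names are made up):

      ```php
      <?php

      /*
       * Hypothetical handler for: POST /update-post.php?id=321
       * with the body: author=NotionCommotion&text=Hello%20world!
       */

      // the URL parameter ends up in $_GET -- even though this is a POST request
      $post_id = (int) $_GET['id'];

      // the request body parameters end up in $_POST
      $author = $_POST['author'];
      $text   = $_POST['text'];
      ```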
  24. You shouldn't mix URL parameters and request body parameters at all. It indicates that you don't really have a concept for what to put where. But it's actually very easy: URL parameters contain the data necessary for retrieving a resource. This includes IDs, page numbers, search terms etc. On the other hand, request body parameters contain the data that should be changed or added like a new username, a new blog entry or whatever.

      The URL parameter task=display makes no sense, because displaying a page is the one and only purpose of GET. Even the body parameter task=save is rather nonsensical, because saving changes is, again, the whole point of POST. So maybe you can fix the whole confusion simply by removing the superfluous parameter and checking the request method instead.
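      A minimal sketch of “check the request method instead of a task parameter”:

      ```php
      <?php

      // no "task" parameter needed: the request method already tells us what to do
      if ($_SERVER['REQUEST_METHOD'] === 'POST') {
          // save the submitted changes
      } else {
          // GET: just display the page
      }
      ```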
  25. ... and the $stmt variable obviously had the wrong name. However, the real problem is that your error handling is broken. That's why you get those cryptic “non-objects” messages instead of a proper error description.

      First of all, do not print internal MySQL errors on the screen (like in line 31). They are meant for you, the developer, not the general public. In fact, disclosing your technical issues will massively irritate legitimate users while helping attackers. Your users will wonder WTF is wrong with the site, and your attackers will know exactly what's wrong. So never just echo an internal message, not even for testing.

      Secondly, you need to actually check for errors. It's definitely not a good idea to just keep going, because this causes weird consequential errors (as you can see) and may leave the application in a problematic state. The old school way of error handling is to literally check the return value of every single function call. This is valid and was in fact the only option back in the days of the old MySQL extension. Nowadays, we can just turn on error reporting in the MySQLi driver:

      ```php
      <?php

      // turn on error reporting
      $database_driver = new mysqli_driver();
      $database_driver->report_mode = MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT;

      $database = new mysqli(...);
      ```

      This will automatically throw an exception with all relevant information whenever a query fails. The exception will then be passed to the standard error handler which writes it to the log (in production) or prints it on the screen (during development).

      Try it yourself:

      ```php
      $database->query('SELECT idonotexist');
      ```

      This will actually tell you that the column doesn't exist.