
T-Bird

Members
  • Posts: 53
  • Joined
  • Last visited: Never

Profile Information

  • Gender: Not Telling

T-Bird's Achievements

Member (2/5)

Reputation: 0

  1. An HTML form can send as many pieces of information as you want. For each field in the form you want to get data out of, add a 'name' attribute. The PHP script that receives the data can then access that information through the $_GET (or $_POST, or $_REQUEST) superglobal array (a minimal sketch follows after this list). More information about forms: http://w3schools.com/html/html_forms.asp More information about $_GET: http://w3schools.com/php/php_get.asp I hope I understood your question correctly.
  2. I just realized that in such a scenario, whenever a new member registers, the PHP page would still need to use a statically defined user - but this time that user would have privileges to alter my MySQL user and grant tables. If anything this would be less secure. Guess I got in over my head on this one; sorry for wasting your time.
  3. Hello. Let me preface this by saying I'm not experienced in database architecture; I've only worked on a few small, simple, low-security (MySQL) databases for blogs/forums. The following is the result of some brainstorming I did. There is probably a better method, so I'm requesting feedback.

     I was recently trying to think of how I - in my limited experience - would go about making a database more secure than my typical method. Usually my database design consists of one user, 'Member', with read/write privileges on all tables for that database. I would then have tables that hold all of a particular type of data for the community, i.e. a users table or a posts table. In PHP I would have the username and password for 'Member' hard-coded into the pages (or into an included file), and I would access the database with them. As an example, I would have a users table and a posts table that look something like:

     Users: ID, Username, PasswordHash, DoB, FirstName, LastName, JoinDate, etc.
     Posts: ID, UserID, Text, Date, etc.

     Of course I would use PHP to limit the queries so that only information that belongs to a particular user is given to them (a sketch of this approach follows after this list). However, I got to thinking: if someone manages to find/crack the password for 'Member', they can read/destroy the data for ALL other users, since they have access to the whole Users table. Obviously there are methods of securing databases, but I haven't picked up any books on database architecture (if you know a good one, let me know), so this is what I got out of my personal brainstorming. Perhaps I'm spot on, perhaps I'm in left field (probably the latter).

     Is it acceptable to have a separate user and separate tables created for every new member of my website? For example, when a user registers as a member and picks the username 'JaneDoe' with the password 'Foo', I would create a user in my DBMS, 'JaneDoe', with the password 'Foo'. I would then create the tables 'JaneDoePersonalInfo', 'JaneDoeCreditCards', 'JaneDoeFriends', etc. Only user JaneDoe would have access to the tables prefaced by her name, and she could access only her tables. My PHP pages, rather than having a global 'Member' user coded into the page, would take her username/password input and use these to log into the database. My thinking was that now, worst-case scenario, an attacker can only access one client's information for every password they steal - rather than getting all user information with one password. But I know very little about databases and even less about making them efficient and stable.

     So my questions are:
     1) Does this method work, or is there some major oversight on my part?
     2) Do DBMSs (MySQL particularly) even allow you to have that many users/tables?
     3) Are there any security disadvantages this way?
     4) What are the performance (dis)advantages to this method in small, medium, or enterprise-sized applications - will this even operate at an acceptable rate?
     5) Is there a better (more accepted) way to do this?

     I wish I were a little more educated on the subject, but I figured the best way to learn is to ask. Please let me have any feedback (or good book recommendations). Thank you.
  4. Awesome. Worked great. I feel a little dense though.
  5. I have files that need to be downloaded only if the user has the correct password, so I'm using the standard readfile()-based download method as seen in example 1 of php.net's readfile manual:

     if (file_exists($file)) {
         header('Content-Description: File Transfer');
         header('Content-Type: application/octet-stream');
         header('Content-Disposition: attachment; filename='.basename($file));
         header('Content-Transfer-Encoding: binary');
         header('Expires: 0');
         header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
         header('Pragma: public');
         header('Content-Length: ' . filesize($file));
         ob_clean();
         flush();
         readfile($file);
         exit;
     }

     The files I'm trying to transfer are on average 80MB zip files stored on the same box that is running the scripts. I have two problems which I think are related... but may not be.

     a) The user submits the password and it takes approximately a minute for the save dialogue box to appear. PHP logs a timeout error, but the file downloads fine.
     b) With files less than 60MB I can download several files at a time, no sweat. With files over 60MB in size I can only download one file at a time. Anyone who tries to download the file while a transfer is in progress gets an empty placeholder file.

     I tried switching to a chunked transfer method. I got a function out of the comments on the php.net site that seemed promising and tried it:

     function readfile_chunked($filename, $retbytes = true) {
         $chunksize = 1*(1024*1024); // how many bytes per chunk
         $buffer = '';
         $cnt = 0;
         // $handle = fopen($filename, 'rb');
         $handle = fopen($filename, 'rb');
         if ($handle === false) {
             return false;
         }
         while (!feof($handle)) {
             $buffer = fread($handle, $chunksize);
             echo $buffer;
             ob_flush();
             flush();
             if ($retbytes) {
                 $cnt += strlen($buffer);
             }
         }
         $status = fclose($handle);
         if ($retbytes && $status) {
             return $cnt; // return num. bytes delivered like readfile() does.
         }
         return $status;
     }

     This solved problem (a), as data immediately started to transfer to the browser. I didn't get to test whether it solved problem (b), because after approximately 25-30 seconds PHP again (understandably) logged the timeout error and the transfer died.

     Any clue why PHP is refusing to allow multiple downloads with the first snippet? Is there a way I can adjust PHP's timeout for only ONE page, so that I can use the chunked version without having all my pages use an ugly one-hour timeout (see the sketch after this list)? Any other suggestions?
  6. I have a similar topic floating in the PHP board - for good reason. When I started off I was sure my problem was an error in my PHP configuration; as I tested further I became convinced it is an error in my server configuration. That said, sorry if anyone is offended by my re-asking.

     I have a file sitting on my local file server. Due to unfortunate circumstances, this is a WinXP Home box running WAMPServer2. I have opened my router/firewall. I have taken Apache off the usual port 80 and put it on port 880. Direct links to the files work fine - meaning if the end user types "MYIP:880/file.pdf" they get the file. However, when I use my remote web host (run by Network Solutions) to access the file via PHP's readfile() command, I get a connection timeout error. Similar functions like file_get_contents, fopen, and cURL all fail in the same way. My web host does indeed have allow_url_fopen turned on in its php.ini (it can use this same readfile() process for files on other machines).

     It seems to me the problem must be an error in the configuration of Apache on either my web host or, in all likelihood, my file server. I'd be willing to look into faulty access configurations in Windows as well. To save space, I'm not going to reprint the diagram/explanation of my full testing here. If you feel it would help, it can be found here.
  7. More information that may or may not help. fopen and file_get_contents time out as well. cURL gives me error #7. I can't believe with everything I've tried we still haven't fixed it...
  8. If it at all helps anyone to grasp what's going on here, I drew a diagram.

     The red path is the one I will ultimately be using, and it is not working. The guest's node connects to the passfile.php page on my web host. passfile.php holds the previously mentioned code, which attempts to connect to my file server via a direct IP address:port - 70.166.xxx.xxx:880. It is supposed to use readfile() to make that connection and return the file contents from file.zip. However, it gives me a connection timeout error.

     In blue was a test I ran to check the configuration status of my file server. When the server locally runs an identical passfile.php - including using the full external IP instead of an alias like localhost - the file is returned properly.

     In green is a third test, where the guest uses the file server's passfile.php to successfully retrieve file.zip.

     In purple was a fourth test, whereby the guest connects directly to file.zip and successfully retrieves it.

     In cyan is a fifth test, where the guest connects to passfile.php on the web host, which uses code identical to test one except the IP is changed to another server I rent from Bluehost.

     This is why I'm so baffled. Test five shows it cannot be an error with the PHP configuration on my web host - for example, allow_url_fopen must be on, since it lets me retrieve the file from my other server. It would seem to me that tests 2-4 indicate I have no leeching protection, as both an arbitrary outside user and internal scripts can freely access the file. I cannot fathom what the actual problem must be. Any/all advice for further tests or fixes is gratefully accepted.
  9. Yeah. That was one of the first places I checked. When I first posted I was sure it was a PHP error; however, as I look at it, I think it could possibly be an Apache configuration problem as well. I'm lost on this one. A readfile() download doesn't use a separate port I'd have to forward, does it? Doesn't it go through whatever port I'm sending the rest of the page through?
  10. Alright, I'm at a loss as to what my problem is. I have a server box running Apache, PHP, and MySQL. This box contains a PHP file called passfile.php with the following code:

      <?php
      include('inc/function.inc.php');
      $file = verify_file($_GET['file']);
      if ($file != false) {
          header('Content-type: application/octet-stream');
          header('Content-Disposition: attachment; filename="'.basename($file).'"');
          readfile('http://MyFileHostExternalIP:880/bids/plans'.$file);
      } else {
          plans_error("<h3>Error: Requested File Not Found</h3>");
      }
      ?>

      This works great. Now when I put this same PHP file on my web host and try to stream said file through my webpage to the user (so that I can better secure my files), I get the following error. Any ideas what I'm doing wrong? Why can readfile() access the file when launched from my local box, but not when launched from my web host? Is it an incorrect configuration? (A small diagnostic sketch follows after this list.)
  11. GAH!!!! I knew it was something stupid. Sorry all. I'm running the server on WAMP under windows XP Home (not an ideal situation, but it's temporary) and while I opened up my router, I forgot to unblock the port on my software firewall! Thx for the help.
  12. Which sticky, the one about PHP not being able to find the function? Or did I look at the wrong sticky? At any rate, I'm convinced that it must be a configuration error, as I'm using WAMP and have very little idea of how to configure MySQL. I originally had this problem on another computer running WAMP at the office, and decided to see if I get the same problem on my home machine - I do. So far as I can tell they are running under the same - seemingly faulty - configuration. One difference is that the office computer was returning a MySQL error 111.

      Since I think it may be a configuration error, if it helps, here are all the uncommented lines of my my.ini file:

      [client]
      port = 3306
      socket = /tmp/mysql.sock

      [wampmysqld]
      port = 3306
      socket = /tmp/mysql.sock
      skip-locking
      key_buffer = 16M
      max_allowed_packet = 1M
      table_cache = 64
      sort_buffer_size = 512K
      net_buffer_length = 8K
      read_buffer_size = 256K
      read_rnd_buffer_size = 512K
      myisam_sort_buffer_size = 8M
      basedir=c:/wamp/bin/mysql/mysql5.1.33
      log-error=c:/wamp/logs/mysql.log
      datadir=c:/wamp/bin/mysql/mysql5.1.33/data
      skip-federated
      log-bin=mysql-bin
      binlog_format=mixed
      server-id = 1

      [mysqldump]
      quick
      max_allowed_packet = 16M

      [mysql]
      no-auto-rehash

      [isamchk]
      key_buffer = 20M
      sort_buffer_size = 20M
      read_buffer = 2M
      write_buffer = 2M

      [myisamchk]
      key_buffer = 20M
      sort_buffer_size = 20M
      read_buffer = 2M
      write_buffer = 2M

      [mysqlhotcopy]
      interactive-timeout

      [mysqld]
      port=3306

      I'm using MySQL version 5.1.
  13. I have a site, hosted on my Bluehost account, which I would like to have reference the database on my local server. Please note the database is not on the remote site (the one hosted by Bluehost) that the end user connects to. I am logging in with a user account that accepts any host (host is set to %). It wasn't working, so I made a streamlined file that contains only the necessary code for testing, and this also does not work:

      <?php
      // Echo the error so a failed connection actually reports why it failed.
      $con = mysql_connect('MyExternalIP', 'UserName', 'Password');
      if (!$con) {
          echo mysql_error();
      }
      ?>

      My router's ports are opened correctly, as proven by canyouseeme.org. I'm currently under the default WAMP configuration, except I changed Apache to port 880 - I doubt that makes a difference here. Is there a setting I have to have turned on locally in order to allow external connections? Is there a setting I have to have my web host turn on in order to allow retrieval from external databases?
  14. Thanks for all the help everyone. I think I've got a pretty good grip on this now!
  15. When I initially wrote it I didn't use the ^ and $ characters to anchor it to the string's beginning and end, so I felt I had to search for any characters that did not match, and then disqualify the string if it had any of those characters (thus the backwardsness). I actually tacked the ^ and $ on just before posting it here, although now I feel like slapping myself in the forehead. That's good to know about the dash.

      Also, in this case the minimum/maximum length didn't really apply, because they were FTP users pre-created on the server. I was just sanitizing any inappropriate characters out of the user's attempted login data before storing/using it.

      Just for practice, let's say I wanted it to include those same characters, begin with a letter, and be 5-15 characters long. Would the following work?

      /^\w+[-\w@]{4,14}$/

      What if I wanted to expand the above to include at least one number. Would this work?

      /^\w+([-a-zA-z@]*\d+[-\w@]){4,14}*$/

      (A hedged sketch of one way to write these rules follows after this list.)

      Also, for general storage into a MySQL database, is it advisable to use a prebuilt filter, or a custom "whitelist" regex?
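
A minimal sketch of the idea in post 1, assuming a single hypothetical page, hello.php, that both renders the form and reads the submitted fields; the field names and file name are made up for illustration, not taken from the original thread.

    <?php
    // hello.php - each input's 'name' attribute becomes a key in $_GET.
    if (isset($_GET['first_name'], $_GET['age'])) {
        // htmlspecialchars() keeps raw user input from being echoed back as HTML.
        echo 'Hello, ' . htmlspecialchars($_GET['first_name']) . '. ';
        echo 'You said you are ' . (int) $_GET['age'] . ' years old.';
    }
    ?>
    <form action="hello.php" method="get">
        <input type="text" name="first_name">
        <input type="text" name="age">
        <input type="submit" value="Send">
    </form>

Switching method="get" to method="post" makes the same values show up in $_POST instead; $_REQUEST sees both.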
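
For post 3, a minimal sketch of the single-application-user approach described there, where PHP - not MySQL - restricts each query to the logged-in member's own rows. The table and column names follow the Users/Posts layout in the post; the connection details and the hard-coded user ID are placeholders.

    <?php
    // One shared application account ('Member'); the credentials are placeholders.
    $db = new mysqli('localhost', 'Member', 'member_password', 'community');
    if ($db->connect_error) {
        die('Connection failed: ' . $db->connect_error);
    }

    // In a real page this would come from the session after the member logs in.
    $currentUserId = 42;

    // The prepared statement only returns rows whose UserID matches the
    // logged-in member, even though the 'Member' account could read them all.
    $stmt = $db->prepare('SELECT ID, `Text`, `Date` FROM Posts WHERE UserID = ?');
    $stmt->bind_param('i', $currentUserId);
    $stmt->execute();
    $stmt->bind_result($id, $text, $date);
    while ($stmt->fetch()) {
        echo htmlspecialchars($text) . "\n";
    }
    $stmt->close();

The trade-off raised in the post still applies: the shared account can see every row, so the application code and the secrecy of its credentials remain the security boundary.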
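
For the timeout question in post 5, a sketch assuming the readfile_chunked() function quoted there has been included from some file; set_time_limit(0) lifts the limit for the current request only, so every other page keeps its normal max_execution_time. The file names and path are placeholders.

    <?php
    // download.php - hypothetical wrapper around the chunked reader from post 5.
    require 'readfile_chunked.inc.php';   // wherever the function lives (placeholder)

    set_time_limit(0);                    // no time limit, but only for this script

    $file = '/path/to/archive.zip';       // placeholder path

    if (file_exists($file)) {
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($file) . '"');
        header('Content-Length: ' . filesize($file));
        readfile_chunked($file);          // streams ~1MB at a time, flushing as it goes
        exit;
    }

If set_time_limit() is disabled on the host, a php_value max_execution_time directive in an .htaccess file covering just the download script is another common route when PHP runs as an Apache module.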
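
For post 10, a small diagnostic sketch, assuming the same kind of URL used in that post; it fetches the remote file with an explicit timeout and prints whatever error PHP records, which helps tell a blocked port apart from a merely slow transfer. The URL and timeout value are placeholders, and because file_get_contents() pulls the whole file into memory, this is only a connectivity check, not a replacement for the streaming code.

    <?php
    // Placeholder URL pointing at the file server on port 880.
    $url = 'http://MyFileHostExternalIP:880/bids/plans/file.zip';

    // Give up after 15 seconds instead of waiting for the default socket timeout.
    $context = stream_context_create(array('http' => array('timeout' => 15)));

    $data = @file_get_contents($url, false, $context);
    if ($data === false) {
        $err = error_get_last();
        echo 'Fetch failed: ' . ($err ? $err['message'] : 'unknown error');
    } else {
        echo 'Fetched ' . strlen($data) . ' bytes';
    }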
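
For the practice patterns in post 15, a hedged sketch of one way to express the stated rules (letters, digits, underscore, dash, and @; must begin with a letter; 5-15 characters; at least one digit) using a lookahead. The rules themselves are only the ones invented in that post, and the sample usernames are made up.

    <?php
    // First char must be a letter, the remaining 4-14 chars come from [-\w@]
    // (5-15 total), and the (?=.*\d) lookahead demands at least one digit.
    $pattern = '/^(?=.*\d)[a-zA-Z][-\w@]{4,14}$/';

    $tests = array('abc12', 'abcde', '1abcd', 'jane_doe-99@');
    foreach ($tests as $name) {
        echo $name . ': ' . (preg_match($pattern, $name) ? 'ok' : 'rejected') . "\n";
    }

One detail worth noting: the character class is written [-\w@] rather than [-a-zA-z@], because the accidental A-z range also matches the punctuation characters that sit between Z and a in ASCII.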