vinny42


Posts posted by vinny42

  1.  


    I can't think of any other instances where I will have to use the login code again, so I was just wondering if that was the basis for knowing whether or not you should make a function for it.

     

    Functions enable code re-use, so in the strictest sense that is the criterion: no re-use, no function. But in modern programming you always have at least two uses: once in your application and once in your unit tests.

    Logging in can of course be re-used all over the place: on the main login form, but also in login popups that appear when you visit a bookmarked page inside the restricted area, in data feeds, basically in every situation where you need to access restricted data while you're not logged in.
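
    As a rough illustration, a minimal sketch of such a reusable function could look like this (the function name, parameters and the users table are assumptions for the example, not your actual code; session handling stays in the calling page so the function is easy to test):

    // Verify a username/password pair against the users table.
    // Returns true or false, so the same function works on the login form,
    // in popups, in data feeds and in unit tests.
    function login(PDO $db, $username, $password)
    {
        $stmt = $db->prepare('SELECT password_hash FROM users WHERE username = ?');
        $stmt->execute(array($username));
        $hash = $stmt->fetchColumn();

        return $hash !== false && password_verify($password, $hash);
    }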

  2. The design does look a bit weird; a user is not an improved database class. A user handler or user manager can use SQL to fetch data from a database and populate user objects with that data, but a user object does not know where it came from or where it should be saved. Don't extend database classes, use composition: the user handler class sends hardcoded queries to a database-access class that executes them and returns raw data, which the handler then processes into user objects.

     

    Also note that there is no one-to-one relation between PHP objects and database tables. That idea forces you to create separate objects for the user's basic data, his access rights, his addresses, preferences, etc. Before you know it, you'll have dozens of objects and dozens of queries to get the simplest pieces of information about a user.

    So just write queries that get what you need.

     

    As a final note: why do you put the table name in a property? Are you planning to change the table name at runtime? :-) If you prefix the table name properly you will never have a reason to change it.
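
    Coming back to the composition point, a rough sketch of how that could look (class, method and column names are made up for the example):

    // The user object is plain data; it knows nothing about SQL or storage.
    class User
    {
        public $id;
        public $name;
        public $email;

        public function __construct($id, $name, $email)
        {
            $this->id = $id;
            $this->name = $name;
            $this->email = $email;
        }
    }

    // The handler *uses* a database connection (composition) instead of extending it.
    class UserHandler
    {
        private $db;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        public function findById($id)
        {
            $stmt = $this->db->prepare('SELECT id, name, email FROM users WHERE id = ?');
            $stmt->execute(array($id));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);

            return $row ? new User($row['id'], $row['name'], $row['email']) : null;
        }
    }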

  3.  


    I'm noticing that my server does not process the request if the URI is longer than 2000 characters.

     

    That's because a URL usually cannot be longer than about 2000 characters; most browsers and servers enforce a limit somewhere around that length. If you need to send more, you should use the POST method, so you can put as much data as you need in the HTTP body, hundreds of megabytes if you want.
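
    For example, if you're building the request from PHP, a rough sketch with cURL could look like this (the endpoint URL and payload are just placeholders):

    // Send a large payload in the request body with POST instead of in the URL.
    $veryLongString = str_repeat('x', 500000);            // stand-in for your real data
    $payload = http_build_query(array('data' => $veryLongString));

    $ch = curl_init('https://example.com/import.php');    // hypothetical endpoint
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);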

  4.  


    In your guys experience, would you create a login function? Or just code it right at the top of the login page?

     

    If you put everything else in functions, then why make an exception for the login? And are you sure that the login code will only ever be used on the login page?

     

    But also think about testing: you cannot easily run automated tests against PHP code that is embedded in a piece of HTML, while it's very easy to execute a function and check whether it returns true or false.
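
    For example, with a login() function like the one sketched under point 1, a test stays very short (this sketch assumes PHPUnit and uses an in-memory SQLite database as a stand-in for the real one):

    use PHPUnit\Framework\TestCase;

    class LoginTest extends TestCase
    {
        private $db;

        protected function setUp(): void
        {
            // Build a throwaway database with one known user.
            $this->db = new PDO('sqlite::memory:');
            $this->db->exec('CREATE TABLE users (username TEXT, password_hash TEXT)');
            $this->db->prepare('INSERT INTO users VALUES (?, ?)')
                     ->execute(array('alice', password_hash('secret', PASSWORD_DEFAULT)));
        }

        public function testCorrectPasswordIsAccepted()
        {
            $this->assertTrue(login($this->db, 'alice', 'secret'));
        }

        public function testWrongPasswordIsRejected()
        {
            $this->assertFalse(login($this->db, 'alice', 'not-her-password'));
        }
    }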

  5. Perhaps I quoted the wrong sentence, I thought I was quoting a reply to this:

     

     


    You should be submitting the form with the POST method. With GET, everything entered in the form will be visible in the browser's URL when the form has been submitted. This is a very insecure method of sending/receiving sensitive information.

  6. More secure, yes.


     

     

    Let's get that out of the way right now; POST is in no way more secure than GET.

    The fact that POST parameters don't show up in the URL means very little; anybody who can press F12 in Chrome can edit all of the parameters directly in the page. POST does not protect your data in any way whatsoever.

     

    That said, it is usually better to use POST for forms, because there is a limit to how long a URL can be and with GET you can reach that limit quite soon. POST can handle much more data and has fewer issues with special characters.

  7. In fact, even the superglobals are a "bad idea", but given PHP's loose structure of "start whichever script you like, wherever you like, however you like", there's not really any other way of doing it.

     

    Anyway, the problem with globals is that they just exist. You never know where they came from or what other pieces of code have done to them before you got to them.

    So if your routine needs to parse GET parameters, you might be tempted to take $_GET and start modifying it with a loop, leaving it empty at the end. The next function in your script may also want to do something with $_GET and find it empty. You may never notice that the second piece of code has stopped working because of something the first piece did, because nothing in your code suggests that a processed version of $_GET was passed to the second piece of code.

     

    So, think of functions and methods as black boxes. All the data they need is given to them through the function parameters; they should *never* assume that something exists in the environment around them. That way you have complete control over what data goes where, and it becomes much clearer what data is modified by which function.
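
    For example (the function name and the filter_ prefix are made up for the illustration), pass the array in and return a result instead of touching the superglobal everywhere:

    // Black-box version: everything it needs comes in through the parameter,
    // and the caller decides what to do with the result.
    function extractFilters(array $params)
    {
        $filters = array();
        foreach ($params as $key => $value) {
            if (strpos($key, 'filter_') === 0) {
                $filters[substr($key, 7)] = trim($value);
            }
        }
        return $filters;
    }

    // The only place that touches the superglobal:
    $filters = extractFilters($_GET);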

  8.  


    Yes I read your response but quoted mine.

     

    And did you understand what I posted? Because you are not asking questions about it, you just repeat what you've already said before, coming to the same conclusion that what you are doing is weird and bad and doesn't work.

  9.  


    JS disabled so <noscript> tag redirected to example.com/fallbackSystem

     

     

    Noscript means no script, so you cannot use a script to redirect if JS is not enabled.

     

    But JavaScript is not needed here. PHP can see whether a cookie was set and PHP can set cookies. If PHP detects that no cookie was set, it serves the page with the extra content and sets a cookie. If the cookie was set, PHP leaves the extra content out and doesn't set a cookie.

     

    No JavaScript involved at all.
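
    A rough sketch of that idea (the cookie name, lifetime and included template are just examples):

    // Must run before any HTML output, because setcookie() sends a header.
    if (!isset($_COOKIE['seen_intro'])) {
        // First visit: remember it for 30 days and show the extra content.
        setcookie('seen_intro', '1', time() + 30 * 24 * 3600, '/');
        $showExtraContent = true;
    } else {
        $showExtraContent = false;
    }

    if ($showExtraContent) {
        include 'intro_banner.php'; // hypothetical template with the extra content
    }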

  10.  


     I thought I could do this with javascript

     

    Then try that first. It doesn't matter to the visitor and you'll probably have a working solution much sooner than with PHP.

     

    Just make it intelligent; don't add all the brokers into the HTML and hide all but the current one, use ajax to load only the current one. That way the webserver will also register a hit for every broker loaded and you can prove to your brokers that they all had an equal share of hits, etc...
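
    The server side of that ajax call can stay very small; a hypothetical broker.php (database credentials, table and column names are all assumptions) could look roughly like this:

    // broker.php - returns the HTML snippet for one broker, so every rotation
    // shows up in the webserver's access log.
    $id = isset($_GET['id']) ? (int)$_GET['id'] : 0;

    $db   = new PDO('mysql:host=localhost;dbname=site', 'user', 'password');
    $stmt = $db->prepare('SELECT name, url FROM brokers WHERE id = ?');
    $stmt->execute(array($id));
    $broker = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($broker) {
        printf('<a href="%s">%s</a>',
               htmlspecialchars($broker['url']),
               htmlspecialchars($broker['name']));
    } else {
        http_response_code(404);
    }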

  11. If you are planning to insert a few hundred records using separate URLs then you'll force PHP to open and close its connection to MySQL for every record; that adds a delay of 20-30ms per record, which quickly adds up. If you have a dedicated server you can try using pconnect (do not *ever* use that on a shared server), which will keep the connection open when the PHP script ends, saving the time to open the connection again later. Again, do not use pconnect on a server that you do not control.

     

    Inserting records quickly works best if you can do it in batches. If you can send a POST request that holds the data for 1000 inserts at a time, you can use transactions to speed the inserts up.

    If all the records are inserted into the same table you can make it even faster by writing the data to a CSV file and using MySQL's LOAD DATA INFILE statement, which will copy the data from the CSV directly into the table.
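
    A sketch of the batched approach with PDO (connection details, table and column names are made up; the POST structure is just an example):

    // One connection, one transaction, many inserts per request.
    $db = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $db->prepare('INSERT INTO records (name, value) VALUES (?, ?)');

    $db->beginTransaction();
    foreach ($_POST['records'] as $record) {   // e.g. 1000 rows per POST request
        $stmt->execute(array($record['name'], $record['value']));
    }
    $db->commit();

    For the LOAD DATA INFILE route you would write those rows to a CSV file first and run that statement once.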

  12.  


    I have needed to delete more data today and the overhead now stands at around 13,000B.

     

    So you have hardly any overhead at all :-)

     


    One other question...upon a new record being inserted into the table, the overhead should reduce right..?

     

    Only if MySQL can re-use the empty space left behind by earlier DELETE operations, which I'm not sure it can.

     

    But I really wouldn't worry about this overhead thing; if you're not running out of disk space or experiencing an unexplained slowdown, then the overhead is simply not an issue.

  13. Excessive overhead, yeah it's a bit of a grey area. Small tables aren't bothered by overhead, and large tables can be too large to optimize anyway.

    I'd define "too much overhead" as hundreds of megabytes of wasted space in a table. That's purely a gut feeling, not based on any measurements.

    Just don't optimize to get a few dozen megabytes back, especially if the table is 2GB.

     

     


    Are you saying that because of there being an overhead, new data cannot be inserted into the table? Sorry, I'm new to SQL databases and still learning as I go along.

     

    Of course overhead doesn't prevent new records from being inserted :-) That would be weird.

     

    I'll try to explain:

    Databases store information, or "records", in a file on disk. Files are sequential things: each record is written directly after the previous record, like words in a text.

    When a record is removed, the space where the record used to be is marked as 'empty'. When a new record is created, the database needs to find an empty space in the file large enough to hold the new record. Most databases don't search for empty space inside the file; they simply append the new record to the end of the file, where there is always enough space.

    PostgreSQL and the like maintain a free space map that effectively remembers where all the empty spaces in the file are. When a new record has to be stored, the free space map is used to find a gap large enough to hold the new record. If there is such a gap, it is re-used by the new record, thus reducing the amount of wasted space.

     

    So again, overhead has no effect at all except for wasting disk space. If you can live with the waste, you can live with the overhead.

  14.  


    When I figured out that the mysql functions are going to be deprecated, I didn't even hesitate to look up for PDO.

     

     

    It's that "PDO is the new black" attitude that I find worrying. That, along with the tendency to use prepared statements for everything "because it's safer".

    Examine the consequences of what you do before you tell someone else it's the right solution :-)

     

    In real life you'll want to write a wrapper around PDO anyway, so PDO will only be used in one class, which means you can just as easily stick with mysqli.

     

    But, back to the topic.

     

    Don't bother the database with the overhead of a prepare when all you are going to do is update a record. In fact, definitely don't prepare this, because the chance is significant that the database can't use an index to find the correct record and will waste time doing a sequential scan.

    The reason for using a prepared statement here would be to avoid escaping the id value, because escaping costs a round trip to the database; but preparing effectively replaces that call with an even slower round trip to prepare the query.

    I would not be at all surprised if preparing is much slower than escaping.

     

     

    Even more on topic; 

     


    get the current date and time 

     

    Does that mean the current time of day, or the time mentioned in the form?

    For the current time you can use the SQL keyword NOW() or one of its equivalents. If you want the time from the form, you'll need to agree on a format to use in the form and a way to convert it to what the database understands. Fortunately MySQL has STR_TO_DATE() for that.
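
    For example (this assumes $db is a PDO connection, $id holds the record's id, and the orders table, updated_at column and date format are all made up for the illustration):

    // Current time of day: let MySQL fill it in with NOW().
    $db->query('UPDATE orders SET updated_at = NOW() WHERE id = ' . (int)$id);

    // Time coming from the form: agree on a format and let MySQL convert it.
    $when = $db->quote($_POST['when']);   // escape the user-supplied value
    $db->query("UPDATE orders SET updated_at = STR_TO_DATE($when, '%d-%m-%Y %H:%i')
                WHERE id = " . (int)$id);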

  15. In addition to what is said in that link:

     

    Overhead has very little impact on performance, so do not optimize pre-emptively.

    Why: running OPTIMIZE causes MySQL to completely rebuild the table. It does this by marking the old table as "no longer used" and copying all data to the new table. During this procedure you effectively have no working table.

    If that operation is interrupted in any way other than what MySQL expects, you lose both copies of the table. What kind of interruptions can you get? Well, running out of disk space is a very popular one. MySQL doesn't (or didn't, I haven't tried this for a while) check for the required amount of disk space. So if you want to get more disk space by running OPTIMIZE, make sure you have enough space to rebuild the table (which you probably don't, otherwise you wouldn't want to clear disk space  :) )

     

    I don't know about MySQL, but other databases have free space maps that keep track of this overhead and re-use it when a gap is large enough to hold a new record.
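
    If you want to see how much reclaimable space a table actually reports before deciding, the Data_free column of SHOW TABLE STATUS has that number. A quick sketch (assuming $db is a PDO connection; the table name is just an example):

    // Check the wasted space before deciding whether OPTIMIZE is worth a full rebuild.
    $status = $db->query("SHOW TABLE STATUS LIKE 'my_table'")->fetch(PDO::FETCH_ASSOC);
    printf("Wasted space: %.1f MB\n", $status['Data_free'] / 1024 / 1024);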
