
I want my form to POST to the same page, but without the 'document expired' problem that appears when sessions are in use somewhere in the script (when I press the back button in the browser). I know this can be avoided by posting to a different PHP page and then using a header() redirect, but ideally I want to keep it on the same page, as I use some of the variables produced within the processing section.

 

Please excuse the crude examples.

 

This works:

session_start();

/* HEADER */
if (isset($_POST['test'])) {

    // do some validation
    $validated = true;

    if ($validated) {
        // process the form
        $_SESSION['notification'] = 'form submitted successfully';
        header('Location: samepage.php');
        exit;
    } else {
        $_SESSION['notification'] = 'there was an error';
        header('Location: samepage.php');
        exit;
    }
}

/* BODY */
if (!isset($_SESSION['notification'])) {
    // show the form... posts 'test' to the current page
} else {
    echo $_SESSION['notification'];
    unset($_SESSION['notification']);
}


And this method shows the 'document expired' message when the page is refreshed...

session_start();

/* HEADER */
if (isset($_POST['test'])) {

    // do some validation
    $validated = true;

    if ($validated) {
        // process the form
        $success = true;
    } else {
        $success = false;
    }
}

/* BODY */
if (!isset($success)) {
    // show the form... posts 'test' to the current page
} else if ($success) {
    echo 'form submitted successfully';
} else {
    echo 'there was an error';
}

I've read that I can edit the php.ini file to allow caching, which should solve the issue for example 2.
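If I've understood what I read correctly, the setting involved is session.cache_limiter (I'm assuming that's the right directive), so something along these lines, either in php.ini or per script before session_start():

; php.ini (sketch) - relax the default 'nocache' limiter so the browser may
; re-render the cached page on Back instead of showing 'document expired'
session.cache_limiter = private

<?php
// or per script, before the session is started
session_cache_limiter('private');
session_start();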

 

Is example 1 the correct way to be doing this? It's been pointed out that I shouldn't be using 'exit' so much, so I'm trying to explore the correct methods.


When you submit to the same page the form is on, once you have successfully processed the form data, use a header() redirect back to that same page to clear the POST data.

 

You would need to use an exit; after a header() redirect, unless you have structured the logic in your code so that the remainder of the code on the page isn't executed after the header() statement. So, short answer: you might as well ALWAYS use an exit; and simplify the logic on the page.

 

Whoever said you are using too many exits may have been referring to combining common/repeated parts of your code (i.e. don't repeat yourself - DRY). In your example code, you are repeating the header()/exit pair. You would move the header()/exit to the common execution point following the conditional logic and leave only the session assignment statements inside the conditional logic, as in the sketch below.
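Something like this (just a sketch based on your first example, keeping the same page and field names):

session_start();

/* HEADER */
if (isset($_POST['test'])) {

    // do some validation
    $validated = true;

    // only the session message differs between the two branches
    if ($validated) {
        // process the form
        $_SESSION['notification'] = 'form submitted successfully';
    } else {
        $_SESSION['notification'] = 'there was an error';
    }

    // one common redirect/exit point after the conditional logic
    header('Location: samepage.php');
    exit;
}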


I am not sure why you want to allow the back button on a form. There are a couple of possible reasons that I can think of: 1) to allow the user to start again, or 2) to allow the user to correct a mistake. However, in case (1) you can provide a button to clear/reset the form:

<input type="reset">

and in the event of an error (2) you can redisplay the form with values prefilled based on the submitted data:

<?php
$myfield = $_POST['myfield'];
// validate data
if ($myfield != $myexpectedvalue) {
    $errors['myfield'] = 'I expected something else';
}
// don't forget to sanitize data even though I don't in this example
if (!isset($errors)) {
    // write to db or whatever here
    // display success message and optionally go on to next step
} else {
    // redisplay form
?>
....
<?php if (isset($errors['myfield'])) { echo $errors['myfield']; } ?><input name="myfield" value="<?php if (isset($myfield)) echo $myfield; ?>">
<input type="reset">
....
<?php } ?>

My objection to the redirects is that you pay a high price in performance for every user, because each header() redirect requires an extra round trip between server and browser, just to save someone from seeing a dialogue box in the rare instance that they hit the back button.


There isn't any particular reason why I'd want the back button used on a form. It's just that when a user updates their details with a form, for example, and then decides to use the back button for whatever reason, I don't like the 'document expired' screen coming up. Or is this what should really happen anyway? Am I making an issue out of nothing?

 

With validation, yes, I wouldn't redirect, so I suppose it's always going to happen if a user clicks back afterwards then?

 

Sorry, I think I'm confusing myself with this for the sake of it :keep_quiet:

 

edit: thanks for the example davidannis, I'll have a play around with that this evening.


Doing a redirect after a successful POST request is absolutely correct. Don't let anybody talk you out of it.

 

This is the Post/Redirect/Get (PRG) pattern, and we all use it. Not only does it prevent strange error messages; it also prevents people from accidentally submitting the same data again, which can be a serious problem depending on the context.

 

I have no idea what davidannis is talking about. A redirect makes you “pay a high price in performance”? What century are you living in?


 

I have no idea what davidannis is talking about. A redirect makes you “pay a high price in performance”? What century are you living in?

 

 

For small download file sizes, such as a search results page, RTT is the major contributing factor to latency on "fast" (broadband) connections. Therefore, an important strategy for speeding up web page performance is to minimize the number of round trips that need to be made. Since the majority of those round trips consist of HTTP requests and responses, it's especially important to minimize the number of requests that the client needs to make and to parallelize them as much as possible.

From Google's developer best practices, https://developers.google.com/speed/docs/best-practices/rtt?hl=fr-FR, which was updated less than two years ago.

It also prevents people from accidentally submitting the same data again, which can be a serious problem depending on the context.

 

The message that the OP was trying to suppress when the back button was hit is designed to prevent people from accidentally submitting the same data again. If resubmission of the same data is a serious problem, there are better ways to avoid it than using extra redirects, such as a nonce (see the sketch below).
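For illustration only, a rough sketch of what I mean by a nonce; the 'form_token' field name is made up here, not something from the OP's code:

<?php
// rough sketch of a one-time token (nonce) to ignore duplicate submissions
session_start();

if (isset($_POST['test'])) {
    // accept the submission only if the token matches the one we issued
    if (isset($_POST['form_token'], $_SESSION['form_token'])
            && $_POST['form_token'] === $_SESSION['form_token']) {

        // invalidate the token so resubmitting the same data does nothing
        unset($_SESSION['form_token']);

        // ... validate and process the form here ...
        echo 'form submitted successfully';
    } else {
        echo 'duplicate or expired submission ignored';
    }
} else {
    // issue a fresh token and embed it in the form
    $_SESSION['form_token'] = md5(uniqid(mt_rand(), true));
    echo '<form method="post" action="">'
       . '<input type="hidden" name="form_token" value="' . $_SESSION['form_token'] . '">'
       . '<input type="text" name="test">'
       . '<input type="submit" value="Submit">'
       . '</form>';
}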

Sorry, but this is just blind nano-optimization voodoo bullshit.

 

Do you know paddy_fields's server setup? Do you have usage statistics? Benchmarks? Profiles? No? Then on what basis do you tell him to avoid redirects?

 

As far as I'm concerned, his website may very well run on the cheapest shared host with the slowest possible Apache/CGI configuration and 10 users per day. And you're worried about redirects?

 

You first write proper code, then check the performance, then analyze the problem (if there's any) and finally choose the right technique to solve this particular problem (if there's any). Running around applying random “optimizations” in the hopes that they'll do something doesn't get you anywhere. In fact, this is downright harmful, because optimizations usually come at a price.

 

Let's summarize:

 

Removing redirects will

  • lead to irritating error messages in case the user tries to reload the page or go back
  • lead to duplicate submissions if the user clicks the wrong button – unless you make the script twice as complex by implementing nonces

What you might get is a theoretical nano-optimization by saving a single roundtrip. Is it even measurable? Perceptible? Do the users even care in this particular case? We have no idea, it's all just speculation.

 

In other words, you've replaced a minor virtual problem with a major real problem. Congratulations. ::)

Jacques,

 

I'm not going to argue with you. Perhaps I (and Google's best practices and many other sources I've read) are wrong. Perhaps paddy_fields's users are all on his 100Gbps local area network, a single hop from the server. I will, however, point out to you that comments like:

 

What century are you living in?

and

 

this is just blind nano-optimization voodoo bullshit.

serve only to personalize a disagreement and that it might be more productive not to do that.

The Google recommendations are not “wrong”. They are micro-optimization techniques which can help in certain setups in certain situations. For example, if you've already maxed out your hardware and need the last bit of performance to satisfy your thousands of impatient users, it may indeed make sense to fine-tune the traffic characteristics.

 

As I've already tried to explain, proper optimization requires a concrete goal, analysis and judgement. If you've gone through that and found out that getting rid of redirects will lead to a major performance boost for your particular site, by all means, do it. I'm all for it. But what doesn't help is telling everybody that redirects are evil and that people should throw away their good code for the sake of some alleged performance benefit.

 

I mean, this is like tuning a car: If you need the last bit of performance for your Porsche, go ahead. But that doesn't mean your neighbours are supposed to fill nitrous oxide into the family van. It's not gonna help them drive the kids to school, and it's actually a rather bad idea.

I mean, this is like tuning a car: If you need the last bit of performance for your Porsche, go ahead. But that doesn't mean your neighbours are supposed to fill nitrous oxide into the family van. It's not gonna help them drive the kids to school, and it's actually a rather bad idea.

Absolutely, nitrous belongs in beer. Sorry, couldn't resist.

Jacques,

 

I know that I said I wouldn't continue to argue, but one final post. Eliminating HTTP round trips is not "micro-optimization" to squeeze out "the last bit of performance". Look again at the quote I posted earlier (emphasis added):

 

For small download file sizes, such as a search results page, RTT is the major contributing factor to latency on "fast" (broadband) connections. Therefore, an important strategy for speeding up web page performance is to minimize the number of round trips that need to be made. Since the majority of those round trips consist of HTTP requests and responses, it's especially important to minimize the number of requests that the client needs to make and to parallelize them as much as possible.

This is so important for a simple reason: latency, once you are off the local machine and onto the network, is 6 or 7 orders of magnitude higher than it is on the server (see http://www.slideshare.net/guest22d4179/latency-trumps-all).

 

You state that

 

You first write proper code, then check the performance,

Well, writing proper code includes writing code that is optimized for the real world, a world in which operations that require data to leave your datacenter have a much higher latency than operations that do not. Furthermore, in most web applications you can't just "then check the performance", because you don't know in advance where all your users will be or the route that packets will take to get there.

 

Finally, you are not "saving a single roundtrip". Here's another credible source that explains how it works (emphasis added):

 

Two of the biggest factors in page load delays are distance, measured as round-trip time (RTT), and the number of round-trips between the client and servers. A round-trip is a network packet sent by a client to a server, which then sends a response packet. Downloading a single Web page can require as few as 2 or 3 round-trips if the page is sparse, or many tens of round-trips for a rich appearance. Opening a TCP connection is one round-trip. Sending a Get and receiving the first 3KB is one round-trip. Sending Acknowledgements for more data is more round-trips.
For users who are located near the Web site data center, say less than several hundred miles away such as a user in Los Angeles connecting to a server in Silicon Valley, RTT is only 20ms (.020 seconds) and not a significant factor in page load time. There could be 50 round-trips back and forth between the client and server in these circumstances and the page-load time would still only have 1 second of network delay before any server time was added (50 RTs × 20ms = 1 sec).
But the story is much different for a user in Europe or Asia connecting to the same Silicon Valley server. Japan and England are currently 120ms in round-trip time from California at minimum. When you consider a server disk seek time for each request is usually less than 10ms, versus round-trips of hundreds of milliseconds repeated many times over, you'll see that distance and the number of round-trips are easily the most significant contributing factor in long page-load times.

So, trying to minimize the "most significant contributing factor in long page load times" is, IMHO, "writing proper code", not "micro-optimization" that you deal with only after you deploy the application and only if your users have problems that you become aware of. Google, Microsoft, and Yahoo seem to agree with me.
