
nunja

New Members
  • Posts: 5
  • Joined:
  • Last visited: Never

Everything posted by nunja

  1. Hello, we have not come up with any solution so far. This is weird; we want to achieve two things here: estimate the user's bandwidth in Kbit/s and, at the same time (same thread / script), limit that bandwidth. There might be some strange behavior here; technically, I don't see why it doesn't work. (A rough sketch of that measure-and-throttle idea, with made-up parameters, appears after this list of posts.)
  2. Hello! An example of the results I get from the output timer in the first script:
     0.152241
     0.0001514744
     0.41506307856
     0.2500698878
     0.1329874
     0.0189658
     ...
     If I divide the total output size of all my chunks by the total time it took (summing all of the above), according to this formula: (total_sent*8)/(tt_time)/1024, I obtain a good estimate of the average Kbit/s. I tested it successfully against a Net Limiter: if I limit the HTTP port 80 to 64 Kbit/s, my results here are more or less accurate. Now, with the usleep, my microtime results are affected in a strange way (I do not want to record the time taken by usleep, just the time the client takes to get a particular flushed chunk). I get results like this:
     0.0001514744
     0.0005681002
     0.000054168
     0.00080167
     0.00057199
     ...
     That's too low! The resulting Kbit/s is then totally wrong. (A worked instance of the formula, with hypothetical numbers, is given after this list.) I was initially inspired by this script (8th post, by Bokeh): http://www.webdeveloper.com/forum/showthread.php?t=77891 Thanks!
  3. Basically, that is a simple abstraction of what I want to achieve for real, but the problem is still there. Everything here is dummy data, but it is a good representation of what the script should do. It's not overwriting; it's an addition (+=), and the total of both variables at the end yields an average, because I am simulating a sort of bandwidth shaping. Consider that each chunk of data is enough, on the client side, to trigger at least one second of processing. The rest would be queued in the client's buffer anyway (without the usleep), because the client needs roughly one second to process each chunk (for some reason). No, it's not; it sums all the results. The size is the same for each chunk. You're right on that part, but I have no choice, and since PHP is a server-side app, it should be able to calculate this with more or less precision. Sorry if I was not so clear. Nunja
  4. Thanks a lot for helping out! usleep() takes microseconds (not milliseconds, which would indeed lead to roughly 16 minutes per iteration). This script only delays one second per iteration. What I do not get is why the microtime() measurements are affected by usleep. I only want to know the echo+flush delay, i.e. what it takes for the server to deliver the chunk to the client in one iteration. Is there something I am missing, at the protocol level or the PHP level? Many thanks.
  5. Hi all, I am posting a strange behavior that can be reproduced (at least on apache2+php5). I don't know if I am doing something wrong, but let me explain what I am trying to achieve. I need to send chunks of binary data (let's say 30) and analyze the average Kbit/s at the end: I sum each chunk's output time and each chunk's size, and perform my Kbit/s calculation at the end.

     <?php
     // build my binary chunk
     $var = '';
     $o = 9000;
     while ($o--) {
         $var .= "testtest";
     }

     // get the size, prepare the memory.
     $size = strlen($var);
     $tt_sent = 0;
     $tt_time = 0;

     // I send my chunk 30 times
     for ($i = 0; $i < 30; $i++) {
         // start time
         $t = microtime(true);
         echo $var."\n";
         ob_flush();
         flush();
         $e = microtime(true); // end time
         // the difference should represent what it takes the server to
         // transmit the chunk to the client, right?
         // add this chunk's measurement to the totals
         $tt_time += round($e - $t, 4);
         $tt_sent += $size;
     }

     // total result
     echo "\n total: ".(($tt_sent*8)/($tt_time)/1024)."\n";
     ?>

     In the example above, it works so far (on localhost it oscillates between 7000 and 10000 Kbit/s across different tests). Now, let's say I want to shape the transmission, because I know the client will have enough data in one chunk to keep it busy for a second. I decide to use usleep(1000000) to mark a pause between chunk transmissions.

     <?php
     // build my binary chunk
     $var = '';
     $o = 9000;
     while ($o--) {
         $var .= "testtest";
     }

     // get the size, prepare the memory.
     $size = strlen($var);
     $tt_sent = 0;
     $tt_time = 0;

     // I send my chunk 30 times
     for ($i = 0; $i < 30; $i++) {
         // start time
         $t = microtime(true);
         echo $var."\n";
         ob_flush();
         flush();
         $e = microtime(true); // end time
         // the difference should represent what it takes the server to
         // transmit the chunk to the client, right?
         // add this chunk's measurement to the totals
         $tt_time += round($e - $t, 4);
         $tt_sent += $size;
         usleep(1000000);
     }

     // total result
     echo "\n total: ".(($tt_sent*8)/($tt_time)/1024)."\n";
     ?>

     Am I doing something wrong? Is the output buffering not synchronous? Thanks a lot, Nunja
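For reference, a worked instance of the Kbit/s formula from post 2, using the chunk from the scripts above and an assumed (hypothetical) total send time: the chunk is "testtest" repeated 9000 times, i.e. 72000 bytes, and it is sent 30 times, so total_sent = 30 × 72000 = 2160000 bytes. If the summed per-chunk times came to, say, 2.0 seconds, the formula gives (2160000 × 8) / 2.0 / 1024 ≈ 8437 Kbit/s, which falls inside the 7000-10000 Kbit/s range reported for the localhost test.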
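And here is a minimal sketch of the measure-and-throttle idea from post 1, assuming the goal is to pace output to roughly one chunk per second while summing only the time spent in echo/flush. It reuses the chunk and formula from the thread; the variable names $send_time, $elapsed and $remaining are made up for illustration, and the sketch does not resolve the timing anomaly discussed in the thread (the per-chunk times can still come out implausibly small once a pause is added).

<?php
// Sketch only: send the same 72000-byte chunk 30 times, pause so the
// overall rate is capped at roughly one chunk per second, and estimate
// the throughput from the time spent in echo/flush alone.
$var = str_repeat("testtest", 9000);
$size = strlen($var);

$tt_sent = 0;
$send_time = 0.0; // time spent sending, sleeps excluded

for ($i = 0; $i < 30; $i++) {
    $t = microtime(true);
    echo $var."\n";
    ob_flush();
    flush();
    $elapsed = microtime(true) - $t;

    $send_time += $elapsed;
    $tt_sent   += $size;

    // Throttle: sleep for whatever is left of a 1-second slot.
    $remaining = 1.0 - $elapsed;
    if ($remaining > 0) {
        usleep((int) round($remaining * 1000000));
    }
}

// Same formula as in the thread, fed only with the measured send time.
echo "\n approx: ".(($tt_sent * 8) / $send_time / 1024)." Kbit/s\n";
?>

The only design choice here is to keep the sleep outside the timed window, which is what the thread's second script already does; whether the timed echo/flush window yields a meaningful per-chunk estimate is exactly the open question of this thread.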
