Alex_ · Members · Posts: 37

Everything posted by Alex_

  1. You could just send a GET request from your PHP script to fetch that page's entire HTML, and then use a DOM parser to extract the part you need. See for example http://simplehtmldom.sourceforge.net/
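
     A rough sketch of that approach, assuming the simple_html_dom.php file from the link above is included; the URL and selector are placeholders:

         <?php
         // Fetch the remote page and pull out the elements you care about.
         include 'simple_html_dom.php';

         $html = file_get_html('http://www.example.com/target-page');   // placeholder URL

         foreach ($html->find('div.price') as $element) {               // placeholder selector
             echo $element->plaintext . "\n";
         }
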
  2. I don't understand what you're trying to achieve. There's an autocomplete on the input field, I get that. But you're not even adding the input fields to the <form> that you want to submit, so the data from those fields will never reach whatever script the form points to. I'm also not sure where you're trying to submit, since there's no submit button.
  3. It doesn't look like you're adding the input fields to the form. If you are, print the output of $_GET in your PHP script. The data won't be in $_POST, since you're using GET as the form method.
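
     A quick way to check (a sketch; drop it at the top of whatever script receives the form):

         <?php
         var_dump($_GET);    // data from a method="get" form ends up here
         var_dump($_POST);   // this should be empty for a GET form
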
  4. Not entirely sure what you meant, but can't you just swap this line:

         input = $("<input>").appendTo(div)

     with something like this?

         input = $("<input name='whatevername'>").appendTo(div)
  5. Looking at this piece of your code:

         $update = "UPDATE password FROM $tbl_name WHERE pass='$newp'";
         if($update) {
             echo " ".$row['username']." password has been reset! ";
         } else {
             die("Error");
         }

     In short: as it stands, you will always end up at the echo $row['username'] part. That may seem right, but it really isn't. Here's a small list of what's wrong in that piece of code:

     - Your UPDATE query is wrong. Specify the table first, then the column (see below). It's also missing a correct WHERE clause.
     - You never execute the UPDATE query at all.
     - You're asking the script whether the string $update has a value, which it does, but since it's a non-empty string it's always considered truthy, so you get the success message even though the password is never updated.

     Here's how your UPDATE query should look:

         $updateQuery = 'UPDATE tableName SET pass = newPassword WHERE uniqueIdentifier = value';
         $update = $db->execute($updateQuery);
         if($update) {
             ...
         } else {
             ...
         }

     Replace uniqueIdentifier with something like the username/userid column, and value with the actual username/userid. I see no unique identifiers in your code there, but I'm guessing it could be in the session. So try that.

     Edit: no clue where this fontcolor came from, but oh well!
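
     As a concrete illustration of that pattern using PDO prepared statements (a sketch only: it assumes a PDO connection in $db, that the username lives in the session, and placeholder table/column names):

         // Parameterised UPDATE so values are never pasted directly into the SQL string.
         $stmt = $db->prepare('UPDATE members SET pass = :pass WHERE username = :username');
         $ok = $stmt->execute(array(
             ':pass'     => $newp,                    // ideally a hashed password, not plain text
             ':username' => $_SESSION['username'],    // assuming the identifier is stored in the session
         ));

         if ($ok) {
             echo $_SESSION['username'] . " password has been reset!";
         } else {
             die("Error");
         }
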
  6. Not sure I understood exactly what you meant, but I'm assuming a tag like this:

         <a href="#">Link</a>

     You do not want it to run the code? If so, you could just wrap the current code inside the click event handler with an if-statement (use getAttribute('href') here, since this.href gives you the fully resolved URL rather than the raw attribute):

         if(this.getAttribute('href') !== '#') {
             ... //Your code here
         }

     Or, to also match cases like href="#/link/here":

         if(this.getAttribute('href').indexOf('#') === -1) {
             ... //Your code here
         }

     If I misunderstood, just let me know.
  7. That's the idea behind a switch statement, and there should be no issues with the code you just wrote. However, because I'm picky, I want to point out that you are currently referencing your switch statement's cases with strings, i.e. '65', '68'. Because browsers can differ in how they treat keyCode values, I'd like to make this suggestion:

         var keyCode = parseInt(e.keyCode, 10);

         switch(keyCode) {
             case 65:
                 ...
                 break;
             case 68:
                 ...
                 break;
         }

     This way you can be sure it will work across browsers. Some browsers treat keyCode as an integer, which makes sense, while others hand it to you as a string. In short, parsing the string to an integer is the safer option here.
  8. The logical answer is that you'll need a way to determine when a user is active, or in streaming terms: "went live". You mentioned a MySQL table where you currently keep the streams that are live right now. I'd say throw in a new column there that says when the stream started (let's call the new column "UpdateTime", for example). Doing so, you will be able to do something similar to this:

     Frontend

         var startTime;             //Gets filled in startTimer()
         var streamsSelector = '';  //Selector for where you want the updated response

         function handleUpdatedStreamsResponse(response) {
             //Do whatever you want to do with the response here.
             //Preferably you should keep to using JSON responses from the backend, but let's assume plain html for now.
             $(streamsSelector).append(response);
             //The reason it's not completely overwriting the current html is because you'll only be returning the "updated"/"new" streams from the backend.
             //This can of course be changed should you wish to handle it differently.

             //Also update the startTime
             startTime = moment().format('YYYY-MM-DD HH:mm:ss');
         }

         function getUpdatedStreams() {
             $.get('streamUpdates.php?startTime=' + startTime, handleUpdatedStreamsResponse);
         }

         function startTimer() {
             startTime = moment().format('YYYY-MM-DD HH:mm:ss'); //Small tip: always use momentjs. Lovely library!
             setInterval(getUpdatedStreams, 10000);              //Fetch updated streams every 10 seconds?
         }

         $(document).ready(startTimer);

     Backend

         <?php
         $db = new MyDbClass(...);

         $startTime = $_GET['startTime'];
         //Escape/bind $startTime properly in real code.
         $rows = $db->query("SELECT * FROM mytable WHERE UpdateTime >= '$startTime'");

         $response = '';
         foreach($rows as $row) {
             $response .= buildHtmlBlock($row); //Some function that builds the html block for the new record
         }

         echo $response;

     This is very minimalistic and there's plenty of room for improvement, but I wrote it as simply as I could so that you could get the idea behind it.
  9. You should use callbacks on those three jQuery functions. Right now they're all executed at the same time, which may cause the effect you mentioned.

         var selector = 'div#stream';

         $(selector).fadeOut('slow', function() {
             $(selector).load(location.href + ' #stream', function() {
                 $(selector).fadeIn('slow');
             });
         });

     Obviously that coding style isn't good practice, but it keeps things as simple as possible. Generally you would want to keep your callbacks as external functions and just reference them as the callback. In any case, this means it will:

     - Fade out
     - Load when the fade-out is complete
     - Fade in when the load is complete
  10. Assuming you're using jQuery, just do something simple to get you started:

         function dataFetched(response) {
             //Parse the response (whether it's plain text or JSON)
             //Add it to the view
         }

         function getData() {
             $.get('getScores.php', dataFetched);
         }

         function startTimer() {
             setInterval(getData, 10000); //Every 10 secs
         }

         $(document).ready(startTimer);

     This assumes you followed Psycho's advice on moving the code out to a separate file. There's obviously a lot to improve in the example above, it's just to get you started.
  11. Javascript is primarily used for exactly that purpose: making a website dynamic/'alive' rather than a completely static 'dead' one. You could fairly easily implement a form of timer that swaps the image every X seconds for a random image from any given list, with just a few lines of code. You don't have to use jQuery, but it does make the code easier to both read and write.
  12. What do you mean by "keeps repeating itself"? Is the fwrite function called more than once? I'm also not sure why you have the fwrite call inside an if-statement prior to closing the file. The file pointer should always be closed, regardless of whether the write succeeded or not.
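
     Roughly what I mean (a sketch with a placeholder file name and data variable):

         <?php
         $fp = fopen('output.txt', 'a');   // placeholder file
         if ($fp === false) {
             die('Could not open file');
         }

         $written = fwrite($fp, $data);    // write once
         fclose($fp);                      // always close, whether the write worked or not

         if ($written === false) {
             echo 'Write failed';
         }
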
  13. With PHP alone you probably won't be able to pull it off, if I got your question right. You want to add an image to the rendered HTML template using only HTML and PHP, right? If so, I don't see how that would work. PHP isn't an event-driven language, and it's blocking: it will render the template in your if-else snippet above, consider itself done, and won't execute any more PHP code afterwards. Maybe you can find luck in CSS, but I doubt that as well. I agree with the first replier: go with Javascript. It's easy and fast.
  14. Sounds more like a redirect to me. And if that's what it really is, I'm not entirely sure why you're trying to use the browser's history for it. Assuming your form submits to some PHP script that updates the aforementioned data displayed on the previous-previous page (-2), you can simply redirect back to that page once the script has finished updating the data. Or am I assuming wrong?
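
     A minimal sketch of that, assuming a hypothetical script handling the submit and a placeholder path for the page that shows the data:

         <?php
         // ... perform the update here ...

         // Then send the user back to the page that displays the data.
         header('Location: /data-overview.php');   // placeholder path
         exit;
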
  15. Yeah, IP and a unique session ID is what I was trying, i.e. preventing further requests until the first one is finished. But attempting to limit it in any way ended up not working, because then I'd be stopping the "most recent" request, which in turn means the user gets no response at all.
  16. No that was just my example of what is technically happening when a user is spamming dat Refresh button. In essence it's just like looping over the function X times.
  17. That could work, not entirely ideal but it could work. I'll give it a go.
  18. Yeah, this I've tried, but it doesn't seem to work. I set a session variable isRequesting = true and cross-reference it on each request, but since the last request in the "spam loop" is the one that ends up being the "renderer", the page ends up blank. The data was fetched on the first instance of the spam loop, where isRequesting = true is set, but it's lost by the end of the loop, since each new request is a new instance of the script.
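
     A rough sketch of that kind of session lock (placeholder code, not the actual application; doGenerate() stands in for the expensive API calls):

         <?php
         session_start();

         if (!empty($_SESSION['isRequesting'])) {
             // A generate run is already in progress for this session, reject the extra request.
             http_response_code(429);
             exit('Request already in progress, please wait.');
         }

         $_SESSION['isRequesting'] = true;
         session_write_close();        // release the session lock while the slow work runs

         $output = doGenerate();       // the expensive part

         session_start();
         $_SESSION['isRequesting'] = false;
         session_write_close();

         echo $output;
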
  19. Hey. Got a slight issue. It originally occurred in the NodeJS version of the application, but it also exists in the PHP version. (Yes, I have two versions, I know it's stupid.)

     The issue at hand is that I've got a script that calls an API for data around 5 times each time the script is executed. So imagine http://Mywebsite/mypath/generate

         public function doGenerate() {
             $restRequestHelper = new RestRequestHelper(...);

             $data1 = $restRequestHelper->fetch(...);
             ...
             ...
             $data5 = $restRequestHelper->fetch(...);
         }

     As you can imagine it hits the API's CPU a bit, since there are a lot of concurrent requests involving database operations. I know the API will eventually need optimizing as well, but first I want to solve the client side of things, namely the mass-refresh issue.

     If a client has entered /generate/ and starts refreshing the page over and over again (or a client just keeps sending GET requests), the script keeps going for however many times they've refreshed, and eventually the API will crash. You can think of it as a DOS attack, sort of.

     In the PHP version it's not nearly as harsh, but still harsh. It will keep doing it X number of times and eventually show the response on the last request in the loop. In the NodeJS version it's way worse: it's much faster, resulting in more concurrent requests on the API, making it crash sooner. Same reason here, the spam.

     Anyone got a good way of blocking this kind of client behavior? PHP or NodeJS solutions will help. Think of the problem like this:

         $requestCount = 0;
         while($requestCount < 20) {
             $data1 = $restRequestHelper->fetch(...);
             $data2 = $restRequestHelper->fetch(...);
             $data3 = $restRequestHelper->fetch(...);
             $data4 = $restRequestHelper->fetch(...);
             $data5 = $restRequestHelper->fetch(...);

             $requestCount++;
         }
  20. Hey. So lately I've been hooking up large requests to Resque jobs in my application, so that users aren't kept waiting for a long time before they can continue using the application. At the same time, I have a frontend library and event listener alerting the users when a specific job has actually been completed by the workers.

     I have realized, however, that in some cases this doesn't pan out. I love the idea of Resque jobs, but my gut feeling is that I somehow lose control of the request I'm processing. The reason I'm posting this is that there's one part of my application that processes an insane amount of data, in a loop(!). Obviously this can take a long time, and that's to be expected. But when an error occurs along the way, I have no good way of logging it or alerting the user about it.

     Picture a scenario where a worker is doing a job that takes 2 minutes and is 80% done, at which point it fails for whatever reason. How would you recommend handling something like this? Obviously I get internal logs from the jobs the workers perform, but more specifically, how would you catch the errors right from the script? Currently this is my setup:

         try {
             ... //code that takes > 1min
         } catch(AppException $error) {
             ... //Internal code error, catch
         } catch(SeqDbException $dbError) {
             ... //DbError or Query
         } catch(SystemException $exception) {
             ... //More severe exceptions
         }

     Either I'm missing a catch, or I'm missing something else. The errors rarely occur, so it's not easy for me to debug or re-create them. It's very dependent on the data being processed.
  21. Of course. You can connect to however many servers you want from a single script. The things to keep in mind are to configure the firewalls so that server2 accepts connections from server1, for instance, and also to make sure the MySQL user is allowed to connect from server1 to server2. After that, you should be fine.
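
     A minimal sketch of what that looks like in the script itself, assuming mysqli and placeholder hosts/credentials:

         <?php
         // Two independent connections from the same script.
         $db1 = mysqli_connect('server1.example.com', 'user1', 'pass1', 'database1');
         $db2 = mysqli_connect('server2.example.com', 'user2', 'pass2', 'database2');

         if (!$db1 || !$db2) {
             die('Could not connect to one of the servers');
         }

         // Read from server1...
         $result = mysqli_query($db1, 'SELECT id, username FROM members');

         // ...and you're free to query server2 in the same script.
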
  22. The way I see it you have two options:

     1. Send a request with the data you want inserted to server2
     2. After getting the data from server1 (your result), set up a second database connection, to the database on server2

     Personally I would opt for the second option, if it's available to you. Then you could simply loop over your results. Pardon me if the code example looks a bit iffy, I genuinely don't use mysqli, but I'm sure you'll get the idea:

         $databaseServer2 = mysqli_connect($host, $user, $pass, $db);

         $stmt = $databaseServer2->prepare('INSERT INTO your_table (name) VALUES (?)');
         $stmt->bind_param('s', $name);

         while($row = mysqli_fetch_assoc($result)) {
             $name = $row['username'];
             $stmt->execute();
         }

     As long as I understood what you wanted to do correctly, this should work.
  23. Yeah, that's something to try, I suppose. I've been reading up a bit on the different loop options and I already had a general idea of them, and I'm aware foreach copies the array before the loop runs, which may be something to think over, considering the array can be quite large. My thoughts have been drifting towards PHP's extract function, considering it's used in a local scope (it won't overwrite anything outside the bounds of the function), with data obtained only from the database, meaning the variables will be safe. The only doubt I have is whether extract will be faster than a loop or not.

     @mogo: Yeah, I know it's difficult without the real data; the #1 post is just an example of how the multidimensional array "can" look, although there are a lot more fields in the real deal. I never display the data to the user, it's used to calculate values for another table (each user = new row), and there can be info for up to 31 days per user. Adding in all the code here probably won't be too helpful, there's a lot of stuff that I can't really explain in a post like this. In short, it's calculating paycheck data that is later generated into PDFs.

     @Barand: Unfortunately they are not. The second union DB is on a separate server, because it's very sensitive.

     Any criticism or thoughts about using PHP's extract for this? Downsides, upsides, etc. I tested the speed of the function earlier today, and the peak time to complete was 9 seconds. That's insane, in my eyes.
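
     For context, roughly what I mean by using extract inside a function scope (a hypothetical illustration; the function name and field names are made up):

         <?php
         // Sketch only: $row comes straight from the database, so the keys are known and safe.
         function calculatePayRow(array $row)
         {
             extract($row);   // e.g. creates local $username, $hours, $rate

             // The keys are now plain local variables, gone again when the function returns.
             return $hours * $rate;
         }
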
  24. Depends on the second server, really. Does it have an API, or some script that can insert the data into that second server's database? You could just convert the data selected from server1 into an array and send that as the body of a POST (or PUT) request to server2, provided server2 has a script to handle it.
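
     Roughly like this, assuming server2 exposes some endpoint (the URL below is a placeholder) that accepts a JSON body and does the insert on its side:

         <?php
         // Sketch only: $rowsFromServer1 is the result set from server1, converted to an array.
         $payload = json_encode($rowsFromServer1);

         $ch = curl_init('https://server2.example.com/import.php');   // placeholder endpoint
         curl_setopt($ch, CURLOPT_POST, true);
         curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
         curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

         $response = curl_exec($ch);
         curl_close($ch);
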
  25. I disagree with having no validation logic on the form side. Helping the user on the fly with the credentials/data that need to go into the fields is a big plus in most applications. However, you won't be able to do that with PHP; you'd need, for instance, Javascript (for on-the-fly validation, such as checking an email address as it's entered, which is what I assume you wanted based on your post). In addition, as Psycho already mentioned, you should also validate in the script that actually processes the data, for obvious reasons.
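
     The server-side part of that can be as small as this (a sketch; the field name is a placeholder):

         <?php
         $email = isset($_POST['email']) ? $_POST['email'] : '';

         if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
             die('Please enter a valid email address.');
         }

         // Safe to continue processing here.
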