
seany123

Members
  • Posts

    1,053
  • Joined

  • Last visited

Profile Information

  • Gender
    Not Telling

seany123's Achievements

Advanced Member

Advanced Member (4/5)

1

Reputation

  1. Hello,

     What would be the easiest way to compare rows from different databases? I have two tables: one is live and the other holds newly inserted data which needs to be compared against it. Firstly the unique_id must match; if it doesn't, comparing the rest of the columns is irrelevant. The structure is the same for both tables, something like this:

     id | unique_id | val1 | val2 | val3

     So I thought I could do something like this:

     <?php
     $q = mysqli_query($conn, "SELECT * FROM live_table LIMIT 10");
     while ($r = mysqli_fetch_array($q)) {
         $uniqueId = $r['unique_id'];
         $val1 = $r['val1'];
         $val2 = $r['val2'];
         $check_query = mysqli_query($conn, "SELECT * FROM unverified_table WHERE unique_id='$uniqueId' AND val1='$val1' AND val2='$val2'");
         // from here I would check the num rows and get the result etc...
     }

     However, I have around 20+ columns which would need to be checked to see if they match. I would also preferably want to be able to print out which columns matched and which didn't, depending on what I need.

     The other idea would be to firstly check for a row with the same unique_id, return it back as an array and then loop through each individual element to check whether they match, e.g.:

     <?php
     $q = mysqli_query($conn, "SELECT * FROM live_table LIMIT 10");
     while ($r = mysqli_fetch_array($q)) {
         $uniqueId = $r['unique_id'];
         $check_query = mysqli_query($conn, "SELECT * FROM unverified_table WHERE unique_id='$uniqueId'");
         while ($r2 = mysqli_fetch_array($check_query)) {
             if ($r[1] != $r2[1]) {
                 // etc...
             }
         }
     }

     Is there a better way to do this?

     Thanks
     sean
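     One way to avoid writing 20+ comparisons by hand is to fetch both rows as associative arrays and loop over the column names. This is a minimal sketch reusing the table and connection names from the post above; skipping the id column and the escaping call are my own assumptions, not part of the original code:

     <?php
     // Compare every shared column generically instead of listing 20+ conditions.
     // Assumes $conn is an open mysqli connection and both tables share identical column names.
     $q = mysqli_query($conn, "SELECT * FROM live_table LIMIT 10");
     while ($live = mysqli_fetch_assoc($q)) {
         $uniqueId = mysqli_real_escape_string($conn, $live['unique_id']);
         $check = mysqli_query($conn, "SELECT * FROM unverified_table WHERE unique_id='$uniqueId'");

         if ($unverified = mysqli_fetch_assoc($check)) {
             $mismatches = array();
             foreach ($live as $column => $value) {
                 if ($column === 'id') {
                     continue; // assumption: the auto-increment ids differ, so skip them
                 }
                 if ($unverified[$column] != $value) {
                     $mismatches[] = $column;
                 }
             }
             if (empty($mismatches)) {
                 echo "unique_id $uniqueId: all columns match\n";
             } else {
                 echo "unique_id $uniqueId: differs in " . implode(', ', $mismatches) . "\n";
             }
         } else {
             echo "unique_id $uniqueId: no matching row in unverified_table\n";
         }
     }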
  2. To be completely honest, I haven't run into this problem before. Usually I'm able to use the function to display the website, then I use a simple scraping function which scrapes parts out of the source code. I thought cURL downloaded the source code after the includes had been processed, so I didn't realise that include paths made a difference. I will try running the script locally as you suggested. Thanks, sean
  3. This is exactly the code I'm running:

     <?php
     //error_reporting(E_ALL);
     //ini_set('max_execution_time', 0);

     $url = "https://groceries.asda.com/";
     $main_page = Acurl($url);
     echo $main_page;

     function Acurl($url) {
         //$cookie_file = "cookie.txt";

         // Assigning cURL options to an array
         $options = Array(
             CURLOPT_RETURNTRANSFER => TRUE, // Return the webpage data instead of printing it
             CURLOPT_FOLLOWLOCATION => TRUE, // Follow 'Location:' HTTP headers
             CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer when following 'Location:' headers
             CURLOPT_CONNECTTIMEOUT => 120, // Time (in seconds) before the connection attempt times out
             CURLOPT_TIMEOUT => 120, // Maximum amount of time cURL may run
             CURLOPT_MAXREDIRS => 10, // Maximum number of redirections to follow
             CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // The user agent string
             CURLOPT_URL => $url, // The URL passed into the function
             // This is for the cookie (note: $cookie_file is commented out above, so these receive an undefined variable)
             CURLOPT_COOKIESESSION => TRUE,
             CURLOPT_COOKIEFILE => $cookie_file,
             CURLOPT_COOKIEJAR => $cookie_file,
         );

         $ch = curl_init(); // Initialising cURL
         curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
         $data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
         curl_close($ch); // Closing cURL
         return $data; // Returning the data from the function
     }

     It's strange that even with your test it wasn't showing the entire webpage, just parts of the nav bar. Might there be something the website is doing to block a cURL connection? OK, I will give that a try and see what it returns.
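     A quick way to test the "is the site blocking cURL" idea is to send a more browser-like request and compare the size of what comes back. This is only an illustrative sketch, not part of the original code: the header values are assumptions, and if the missing parts of the page are built by JavaScript after load, no set of headers will make plain cURL return them.

     <?php
     // Rule out header-based blocking by sending a more browser-like request.
     $url = "https://groceries.asda.com/";

     $ch = curl_init($url);
     curl_setopt_array($ch, array(
         CURLOPT_RETURNTRANSFER => true,
         CURLOPT_FOLLOWLOCATION => true,
         CURLOPT_ENCODING       => "",   // advertise and transparently decode gzip/deflate
         CURLOPT_USERAGENT      => "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0 Safari/537.36",
         CURLOPT_HTTPHEADER     => array(
             "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
             "Accept-Language: en-GB,en;q=0.9",
         ),
     ));
     $data = curl_exec($ch);
     curl_close($ch);

     echo strlen($data) . " bytes returned\n"; // compare against what a browser's "view source" shows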
  4. Thanks for the response. That's strange, I'm running the exact same code and it's redirecting. Maybe this is something with my server's settings causing the redirect.
  5. Thanks for your response. I have just now installed HttpRequester on Firefox and tried it; it appears to display the website without any further redirects.
  6. Hello,

     I'm trying to use cURL to connect to a website, but when I try to return the URL I get redirected, and it ends up redirecting me back to "myowndomain.com"/back-soon. Is there a way to see why the site is redirecting when using cURL? It doesn't redirect when I connect to the website through a normal web browser.

     Code:

     <?php
     $url = "https://groceries.asda.com/";
     $main_page = curlFunction($url);
     echo $main_page;

     function curlFunction($url) {
         $cookie_file = "cookie.txt";

         // Assigning cURL options to an array
         $options = Array(
             CURLOPT_RETURNTRANSFER => TRUE, // Return the webpage data instead of printing it
             CURLOPT_FOLLOWLOCATION => TRUE, // Follow 'Location:' HTTP headers
             CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer when following 'Location:' headers
             CURLOPT_CONNECTTIMEOUT => 120, // Time (in seconds) before the connection attempt times out
             CURLOPT_TIMEOUT => 120, // Maximum amount of time cURL may run
             CURLOPT_MAXREDIRS => 10, // Maximum number of redirections to follow
             CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // The user agent string
             CURLOPT_URL => $url, // The URL passed into the function
             // This is for the cookie.
             CURLOPT_COOKIESESSION => TRUE,
             CURLOPT_COOKIEFILE => $cookie_file,
             CURLOPT_COOKIEJAR => $cookie_file,
         );

         $ch = curl_init(); // Initialising cURL
         curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
         $data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
         curl_close($ch); // Closing cURL
         return $data; // Returning the data from the function
     }

     Any help would be great.
     sean
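     One way to see why the redirect happens is to stop cURL following it and look at the first response's status code and Location header. A minimal diagnostic sketch, assuming the same $url as in the post above; it is not a fix, only a way to inspect what the server sends:

     <?php
     // Make the redirect visible instead of following it.
     $url = "https://groceries.asda.com/";

     $ch = curl_init($url);
     curl_setopt_array($ch, array(
         CURLOPT_RETURNTRANSFER => true,
         CURLOPT_FOLLOWLOCATION => false, // do not follow, so the first redirect response is returned
         CURLOPT_HEADER         => true,  // keep the response headers in the output
     ));
     $response = curl_exec($ch);
     $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE); // 301/302/307 would confirm a server-side redirect
     curl_close($ch);

     echo "HTTP status: $status\n";
     // Pull the Location header out of the raw response, if one was sent.
     if (preg_match('/^Location:\s*(.+)$/mi', $response, $m)) {
         echo "Redirected to: " . trim($m[1]) . "\n";
     }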
  7. Thanks for your answer. I explained it the way I did above because I thought that was the easiest way to explain it, but basically the comma-delimited keywords are a set of keywords which could be 50+ long, so creating a column for each keyword wouldn't really be feasible either. Thanks for your reply; I just used that as an example.
  8. Hello,

     I'm not sure if the title makes sense or not, but I'll try to explain it as best I can. I have two tables. The first table, which I'll call "haystack", contains a varchar field called keywords holding keywords separated with a comma delimiter, e.g. "keyword1, keyword5, keyword2, keyword10". The second table, which I'll call "needle", has for example 10 rows with a varchar called keyword; each row contains only one keyword, e.g. "keyword5".

     The aim is to run a search on the haystack table that returns only the rows where a keyword from the needle table matches. I could do this in PHP by exploding the haystack keywords field and then looping through to compare against the needle keywords, but that would require selecting every row and would take a long time, so I was hoping this could be done in SQL?

     Thanks
     sean
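     If both tables live in the same MySQL database, one option is to push the comparison into the query with FIND_IN_SET() and a join. A rough sketch using the table and column names from the post above, with $conn assumed to be an open mysqli connection; note that FIND_IN_SET cannot use an index, so for large tables a normalised keyword link table would scale better:

     <?php
     // Return haystack rows that contain at least one needle keyword.
     // REPLACE() strips the space after each comma so FIND_IN_SET sees a clean "a,b,c" list.
     $sql = "SELECT DISTINCT h.*
             FROM haystack h
             JOIN needle n
               ON FIND_IN_SET(n.keyword, REPLACE(h.keywords, ', ', ',')) > 0";

     $result = mysqli_query($conn, $sql);
     while ($row = mysqli_fetch_assoc($result)) {
         print_r($row); // each matching haystack row
     }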
  9. Hi,

     I know this could probably be solved using either PHP or SQL, so I wasn't sure which board to post this on, but I've always been under the impression that it's best to solve these issues through SQL if possible, as that's the most efficient way.

     I have a products table which contains product_name varchar(255), and I'm allowing users to search for products with the current PHP code:

     product_name LIKE '%$searchTerms%'  // searchTerms would contain e.g. "red apples"

     It won't return results when users type the plural of a word (e.g. "apples" instead of "apple"), or if they type "apple red" instead of "red apple". I'm wondering about the best way to search products. I thought I could explode the searchTerms variable and run the LIKE query for every word:

     product_name LIKE '%$searchTerm[0]%' OR product_name LIKE '%$searchTerm[1]%'

     but then I'm worried about relevance; if one of the search terms is the word "and", it might return completely irrelevant results. Also, how would I go about dealing with the plurals?

     Any suggestions on the best way to do this?

     Thanks
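     One common alternative to chained LIKEs is MySQL full-text search, which handles word order and gives a relevance score. This is only a sketch under assumptions: it requires a FULLTEXT index on product_name (InnoDB supports this from MySQL 5.6, earlier versions need MyISAM), the trailing * in BOOLEAN MODE is a crude way to catch plurals like "apples", and relevance ranking behaves best on InnoDB:

     <?php
     // Assumed prerequisite: ALTER TABLE products ADD FULLTEXT(product_name);
     $searchTerms = "red apples";

     // Append * to each word so "apple" also matches "apples" in BOOLEAN MODE.
     $words = preg_split('/\s+/', trim($searchTerms));
     $booleanQuery = implode(' ', array_map(function ($w) {
         return $w . '*';
     }, $words));
     $escaped = mysqli_real_escape_string($conn, $booleanQuery);

     $sql = "SELECT *,
                    MATCH(product_name) AGAINST ('$escaped' IN BOOLEAN MODE) AS relevance
             FROM products
             WHERE MATCH(product_name) AGAINST ('$escaped' IN BOOLEAN MODE)
             ORDER BY relevance DESC";

     $result = mysqli_query($conn, $sql);
     while ($row = mysqli_fetch_assoc($result)) {
         echo $row['product_name'] . "\n"; // best matches first, word order no longer matters
     }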
  10. I'm currently doing website scraping for the first time. Although I'm having some success, I'm finding that when a piece of code is dynamic I have to write a lot of additional lines, whereas if I could do something like <div id="{wildcard for any value}"> it would reduce the lines required to collect the data. Here is an example of the HTML I'm scraping:

      <div id="productWrapper">
          <div id="hih_2126_348" class="descriptionDetails" data-product-id="22133545">
              <div class="desc" id="hih_3_266_401">
                  <h1 id="hih_3_266_4441">
                      <span data-title="true" id="hih_1_1466_99">ProductNameHere</span>
                  </h1>

      So every page I try to scrape potentially has a different id value for the div/h1/span tags; the id value can change in characters/length/symbols etc. Is there a way to basically scrape between, for example, <span data-title="true" id="{wildcard to allow for any value/text here}"> and </span>?

      The PHP functions I'm currently using to scrape are below. Any help would be awesome, thanks.

      // Defining the basic cURL function
      function curl($url) {
          // Assigning cURL options to an array
          $options = Array(
              CURLOPT_RETURNTRANSFER => TRUE, // Return the webpage data instead of printing it
              CURLOPT_FOLLOWLOCATION => TRUE, // Follow 'Location:' HTTP headers
              CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer when following 'Location:' headers
              CURLOPT_CONNECTTIMEOUT => 120, // Time (in seconds) before the connection attempt times out
              CURLOPT_TIMEOUT => 120, // Maximum amount of time cURL may run
              CURLOPT_MAXREDIRS => 10, // Maximum number of redirections to follow
              CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // The user agent string
              CURLOPT_URL => $url, // The URL passed into the function
          );

          $ch = curl_init(); // Initialising cURL
          curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
          $data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
          curl_close($ch); // Closing cURL
          return $data; // Returning the data from the function
      }

      // Defining the basic scraping function
      function scrape_between($data, $start, $end) {
          $data = stristr($data, $start); // Stripping all data from before $start
          $data = substr($data, strlen($start)); // Stripping $start
          $stop = stripos($data, $end); // Getting the position of the $end of the data to scrape
          $data = substr($data, 0, $stop); // Stripping all data from after and including the $end of the data to scrape
          return $data; // Returning the scraped data from the function
      }
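      One way to get the wildcard behaviour is to switch from fixed string markers to a regular expression, where the variable id is matched by a character class. A minimal sketch: the helper name, pattern, and product URL are made up for illustration and assume the markup shown above; for heavier scraping, DOMDocument with an XPath query such as //span[@data-title="true"] would be more robust than regexes over HTML.

      <?php
      // A regex variant of scrape_between() where part of the start marker is a wildcard.
      function scrape_between_regex($data, $pattern) {
          if (preg_match($pattern, $data, $matches)) {
              return $matches[1]; // the first capturing group holds the scraped text
          }
          return false;
      }

      // [^"]* matches any id value; (.*?) captures everything up to the closing </span>.
      $pattern = '/<span data-title="true" id="[^"]*">(.*?)<\/span>/si';

      $html = curl("https://groceries.asda.com/some-product-page"); // hypothetical URL, reusing curl() above
      $productName = scrape_between_regex($html, $pattern);
      echo $productName; // e.g. ProductNameHere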
  11. If I have an enum like this:

      enum manufacturers { Mazda, Ford, Fiat, Honda };

      manufacturers car = manufacturers.Mazda;

      is it possible to then print(car) to return its current enum value?
  12. Thanks for your continued help. I agree that it's a very unlikely thing to happen, but it also depends on the user themselves returning to the success page, so it could go wrong for something as simple as the user losing their internet connection before being redirected, which makes me think this probably happens more often than first thought. I have now done a little research, and apparently IPN requires an HTTP/1.1 200 OK status response whenever PayPal POSTs the IPN data to the listener; if it doesn't get one, it will try again. Anyway, I shall continue to look into this, so thanks for your help.
  13. How can the API do this all alone, though? If, for example, my server is down as PayPal tries redirecting to the return URL, then the script will never receive the notification to update the DB. I thought it was only via the IPN that you could receive notifications after the payment; the same thing would apply for payment_status changes and refund notifications, which surely would all be done via IPN.
  14. I have another question related to this API/IPN. The way I have it currently, before the person is directed to PayPal to make the payment there is an insertion into my DB table which holds all of the current information about the order. Then, once the person completes the payment and returns to the success URL via the API, it updates the table row with the transaction number etc., and when the IPN is received it updates the table row based on the transaction ID. The issue I'm thinking could easily occur is: what happens if they go to PayPal and make the payment but are unable to return to my success URL? I'm assuming there would need to be failsafes in the IPN itself, but without the transaction ID, how would the listener script know which row to update?
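      One common way to handle this is to pass your own order id to PayPal in the "custom" field when the buyer is sent off to pay; PayPal echoes it back in the IPN, so the listener can find the row without needing the transaction id or the return visit. A sketch only, not the full listener: the IPN verification POST back to PayPal is omitted, and the orders table and column names are hypothetical.

      <?php
      // Tie the IPN back to the local order without relying on the return URL.
      // Step 1 (at checkout): store the order locally and include its id in the PayPal redirect,
      //   e.g. ...&custom=<local order id>...
      // Step 2 (in the IPN listener, after the IPN has been verified with PayPal):
      $orderId       = isset($_POST['custom']) ? (int) $_POST['custom'] : 0;        // our id, echoed back by PayPal
      $transactionId = isset($_POST['txn_id']) ? mysqli_real_escape_string($conn, $_POST['txn_id']) : '';
      $status        = isset($_POST['payment_status']) ? mysqli_real_escape_string($conn, $_POST['payment_status']) : '';

      mysqli_query($conn, "UPDATE orders
                           SET txn_id = '$transactionId', payment_status = '$status'
                           WHERE id = $orderId");

      // Reply with a 200 response so PayPal stops re-sending this IPN.
      http_response_code(200);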
  15. Thanks for the code; that all looks a lot easier to understand now.