Everything posted by QuickOldCar
-
The fastest and best way is using cURL. If your host is too slow, get a faster one. If the API is too slow, use something else.
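A minimal sketch of a cURL GET request in PHP, assuming a made-up https://api.example.com endpoint that returns JSON:

<?php
// Hypothetical endpoint, for illustration only
$url = "https://api.example.com/items?limit=10";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // give up after 10 seconds

$response = curl_exec($ch);
if ($response === false) {
    die("cURL error: " . curl_error($ch));
}
curl_close($ch);

$data = json_decode($response, true); // assuming the API returns JSON
print_r($data);
?>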
-
I've not personally used hostforlife.eu. There are good and bad experiences with every host; it's hard to keep everyone happy. I'm not one to use shared hosting, as that seems to be where most issues arise, so I only own or rent dedicated servers. Since MS Server 2012 they come with a graphical interface, Server Manager, and Server Core, so seeing it is not an issue, and there is also Remote Desktop. All of that is just for directly managing the database, though; you will still need to either replicate or write ASP scripts with cURL to make some magic happen between your remote and local server. Although it's possible for you to start doing backups and restores from one machine to the other until you figure out the rest of your goals. From first-hand experience... it's not enjoyable at all to transfer and deal with large data.
-
Welcome aboard. Classic ASP is dead, but ASP.NET is widely used. Be sure to visit the PHP manual. There is a pile of frameworks out there, too many actually. Some popular ones you may want to consider: jQuery, Bootstrap, AngularJS, Symfony, Laravel, Nette. OctoberCMS is worth a mention too, a CMS based on the Laravel framework.
-
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d40b56ee-90d4-45e1-9b82-4aaea90eb02e.mspx?mfr=true You can copy the .MD0 and .SD0 files from the C:\Windows\system32\inetsrv\MetaBack folder and perform the restore, or use the IIS Easy Migration Tool: http://www.iis.net/downloads/community/2013/04/iis-easy-migration-tool-%28iemt%29
-
OVH is a good host. Replication is possible, although I feel it would limit what you can do if that's all you used. http://www.codeproject.com/Articles/715550/SQL-Server-Replication-Step-by-Step

Another route would be to create an API. Host everything on the remote server, and your local servers can send any data directly. You would be able to schedule backups to any location desired, or even replication.
Create an API to get, post, put, delete, or update any data. You can use a single password, or set public and/or private keys for access. You could have your local server access the API with scripts you create. It all depends on where, who, and how much data you are sending, and how often.
If you are unfamiliar with APIs, I wrote this post a while ago. I personally prefer REST APIs. http://forums.phpfreaks.com/topic/291985-how-to-make-useful-native-apps/?p=1494550

curl for scripts, MySQL Workbench for developing or direct access, phpMyAdmin for some database management. Take one step at a time and work it all out.
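To picture the API route, here is a rough sketch of the local side posting data to the remote server; the endpoint URL, key, header name, and payload fields are all made up for illustration:

<?php
// Hypothetical remote endpoint and API key, for illustration only
$endpoint = "https://remote.example.com/api/orders";
$api_key  = "your-secret-key";

$payload = json_encode(array("order_id" => 123, "total" => 49.99));

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "Content-Type: application/json",
    "X-Api-Key: " . $api_key // the remote script checks this before touching the database
));

$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>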
-
You are correct about it being old code. Your connection or query most likely failed.
Start off with at least mysqli_* or PDO.
Don't suppress your connection error with @.
Don't use extract; either check $_SERVER['REQUEST_METHOD'] or check that a specific value in the method's array is set and not empty.
Don't add slashes; use PDO prepared statements or mysqli_real_escape_string.
Echo your query and see if it's what you expect, and double check the names.
If you want multiple results, use a while loop with mysqli_fetch_assoc (example below).
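A small sketch of what that looks like with a mysqli prepared statement and a while loop; the credentials, table, and column names are placeholders, and get_result assumes the mysqlnd driver is available:

<?php
$mysqli = new mysqli("localhost", "db_user", "db_pass", "db_name");
if ($mysqli->connect_error) {
    die("Connection failed: " . $mysqli->connect_error);
}

// Check the input is set before using it
$city = isset($_POST['city']) ? $_POST['city'] : '';

// Placeholder table and column names, just for illustration
$stmt = $mysqli->prepare("SELECT id, name FROM users WHERE city = ?");
$stmt->bind_param("s", $city);
$stmt->execute();
$result = $stmt->get_result(); // requires the mysqlnd driver

while ($row = $result->fetch_assoc()) {
    echo $row['id'] . " " . $row['name'] . "<br />";
}

$stmt->close();
$mysqli->close();
?>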
-
Is there another reason to define $name in the super function? I can see doing something like this:

public static function super()
{
    $name = $_SERVER['REQUEST_METHOD'];
    switch ($name) {
        case 'POST':
            $array = $_POST;
            break;
        case 'GET':
            $array = $_GET;
            break;
        case 'REQUEST':
            $array = $_REQUEST;
            break;
        case 'COOKIE':
            $array = $_COOKIE;
            break;
        case 'SESSION':
            $array = $_SESSION;
            break;
        case 'PUT':
            $array = $_PUT; // note: PHP has no built-in $_PUT superglobal; you would have to populate this yourself from php://input
            break;
    }
    return $array;
}

Since you are using $array for everything, you can eliminate the $get and $post defines. Just a reminder that the default is always an empty GET array. The value from $_SERVER['REQUEST_METHOD'] is uppercased.
-
Besides the mixed mysql and mysqli functions..... you are using both a lowercase and an uppercase version of the state variable:

$state = mysqli_real_escape_string($fon, $_GET['State']);
$insert = "INSERT INTO `XXXXXXXXXX` (`ID`, `FirstName`, `LastName`, `City`, `State`) VALUES (NULL,'$FirstName','$LastName','$City','$State')";

BTW, you don't need to pass ID in the insert if you are using auto_increment for the IDs.
-
You could first detect what method is actually being used with $_SERVER['REQUEST_METHOD']. It's even possible to use $_REQUEST instead.
-
OctoberCMS is pretty good and is based on the Laravel framework.
-
If these sites are your own and you have the ability to make your own CMS... to me that's the right path to take, rather than relying on others' work or whether it will even exist one day. It sounds more like you need to make your CMS reusable. There are people out there who specialize in a certain CMS and have spent lots of time learning it. You should learn how to make plugins and widgets and integrate custom pages within any theme (a bare-bones plugin skeleton is sketched below). Another good way is to make your own theme with its own functions.
https://codex.wordpress.org/Developer_Documentation
https://codex.wordpress.org/Writing_a_Plugin
https://codex.wordpress.org/Widgets_API
https://codex.wordpress.org/Theme_Development
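As a starting point, a bare-bones plugin is just a PHP file dropped into wp-content/plugins with a header comment and a hook. The plugin and function names below are made up for illustration:

<?php
/*
Plugin Name: My Reusable Bits
Description: Example skeleton of a reusable plugin (illustration only).
Version: 0.1
*/

// Register a [hello_box] shortcode that any theme or page can use
function my_reusable_bits_hello_box($atts) {
    $atts = shortcode_atts(array('title' => 'Hello'), $atts);
    return '<div class="hello-box">' . esc_html($atts['title']) . '</div>';
}
add_shortcode('hello_box', 'my_reusable_bits_hello_box');
?>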
-
Glad nobody helped you.
-
The project you describe will be pretty advanced in the end and not as simple as one would think. Make an image scraper/crawler. You can make a site-specific one, or try to make something more general to handle additional websites.

I personally wouldn't download and store images, but instead just save each image's URL location. You would be able to examine these images initially and also at later times if desired. It's entirely possible to save these images into a folder and mark their location/filename in your database. If you need to examine images in more detail they need to be downloaded, or in some cases just partially downloaded, but you don't need to keep that image: set an expire time on the image for deletion and refer back to the original site's image for display purposes.

Discover every page a site has (all sites are different with their URL patterns and pagination, and the same page could have different dynamic content than the last time you visited it). Discover all images per page, and save each image's href, title, and alt into a database, using a unique constraint to try and eliminate duplicates.

You can try the random approach, but it takes longer and does not guarantee you will even get every page. If the goal is to just amass piles of images without caring which site had them, just start visiting a random URL from domain or URL lists and grab whatever you find. It would be possible to scrape every href found on a page, save them to a session, list, or database, then select a random one from that the next round.

It's possible on some sites to follow their pagination by doing a +1 on the page numbers; you can stop your scraper when it no longer finds content, or set a max limit manually. For this approach you would most likely want to only store href URLs from that domain, or possibly even from certain sections of the site. If your goal is to make something like an image search across many sites, I would make a system that just stores image locations scraped from any page visited; then you can automate that script in many possible ways, use your imagination.

Not every site has a sitemap, and even if they do it's not usually every single URL that exists on the site, usually just the latest data such as a feed, or merely the links to their sections. So it will be up to you, depending on how the website is designed, to determine the best way to find their content.

As for the scraping aspect:
curl (to me the best method to connect, and it can also follow redirects)
file_get_contents (fast and easy; you can create a stream context, but you are still limited in what you can do and it will fail a lot)
preg_match or preg_match_all
simplehtmldom
DOM
SimpleXML

You will also have to fix relative URLs, and determine and convert/replace character, language, and document encoding.

I have my own search engine, website index, and piles of tools and automation for scraping. They're not items I'm willing to just hand out, but if you have any particular questions about something, feel free to post. I'm trying to give you some information to research.

One method that can work for this particular site is to gather the photo-related links and read their Open Graph data, then save that information. As an example: http://photo.net/photodb/photo?photo_id=17635199

<meta property="og:title" content="SMP_0016a copy: Photo by Photographer Jiri Subrt">
<meta property="og:type" content="article">
<meta property="og:url" content="http://photo.net/photodb/photo?photo_id=17635199"/>
<meta property="og:image" content="http://gallery.photo.net/photo/17635199-lg.jpg">

Some code to find the Open Graph data and place it into an array:
<?php
$html = file_get_contents('http://photo.net/photodb/photo?photo_id=17635199');

// Match every og:* meta property and capture its content value
preg_match_all('~<\s*meta\s+property=["\'](og:[^"\']+)["\']\s+content=["\']([^"\']*)~is', $html, $matches);

$og_array = array();
foreach ($matches[1] as $k => $match) {
    $match = str_replace(":", "_", $match);
    //echo "(".$match.") ".$matches[2][$k]."<br />";
    $og_array[trim($match)] = trim($matches[2][$k]);
    $$match = trim($matches[2][$k]);
}

echo "<pre>";
print_r($og_array);
echo "</pre>";
?>

Results:

Array
(
    [og_title] => SMP_0016a copy: Photo by Photographer Jiri Subrt
    [og_type] => article
    [og_url] => http://photo.net/photodb/photo?photo_id=17635199
    [og_image] => http://gallery.photo.net/photo/17635199-lg.jpg
)
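Going back to the +1 pagination idea mentioned above, here is a rough sketch of that loop. The URL pattern, page limit, and image regex are only placeholders; every site paginates and marks up images differently:

<?php
// Made-up URL pattern for illustration; adjust to the site's real pagination
$base_url  = "http://example.com/gallery?page=";
$max_pages = 50; // manual safety limit

for ($page = 1; $page <= $max_pages; $page++) {
    $html = file_get_contents($base_url . $page);
    if ($html === false || trim($html) == "") {
        break; // stop when a page no longer returns content
    }
    // Grab the image urls found on this page
    preg_match_all('~<img[^>]+src=["\']([^"\']+)~i', $html, $matches);
    foreach ($matches[1] as $src) {
        // insert $src into your database here, with a unique constraint on the url column
        echo $src . "<br />";
    }
    sleep(1); // be polite to the remote server
}
?>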
-
WordPress force HTTPS and change all HTTP links to HTTPS?
QuickOldCar replied to kavoir.com's topic in Applications
Log in to your WordPress dashboard and navigate to Settings > General. Ensure that the WordPress Address (URL) and Site Address (URL) use https. If not, add an s after http to make it https and save it.
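If you also want to force the redirect at the PHP level, a minimal sketch would be something like this; it has to run before any output is sent, and it assumes the usual $_SERVER['HTTPS'] behavior of your web server:

<?php
// Redirect any plain http request to the https version of the same url
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
?>
-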
Search image using image matching techniques.
QuickOldCar replied to usamarehan's topic in Applications
The terms you are looking for are image recognition, pattern recognition, facial recognition, or, if you wanted to discover text in the image, optical character recognition (OCR). These are intensive processes and not suitable for everyday websites. You can, however, try to develop your own mapping of pixel locations and colors using something like GD or ImageMagick. There would be even more to it to find a similar shape within an image; you have to consider scales, boundaries, and minor differences. It may help to convert the image to gd2 before the processing. You can probably get some decent results by matching text in the image names or their titles.
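To give an idea of the pixel-mapping approach with GD, here is a rough "average hash" style sketch (shrink to 8x8, grayscale, mark each pixel as brighter or darker than the average). It is only an illustration of the idea; real image matching takes far more than this:

<?php
// Rough average-hash sketch using GD
function average_hash($file) {
    $src = imagecreatefromstring(file_get_contents($file));
    $img = imagecreatetruecolor(8, 8);
    imagecopyresampled($img, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));

    $gray = array();
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $r = ($rgb >> 16) & 0xFF;
            $g = ($rgb >> 8) & 0xFF;
            $b = $rgb & 0xFF;
            $gray[] = (int)(($r + $g + $b) / 3);
        }
    }
    $avg  = array_sum($gray) / count($gray);
    $bits = "";
    foreach ($gray as $v) {
        $bits .= ($v > $avg) ? "1" : "0";
    }
    return $bits; // similar images produce hashes with few differing bits
}

// Count differing bits between two hashes (lower = more similar)
function hash_distance($a, $b) {
    $dist = 0;
    for ($i = 0; $i < strlen($a); $i++) {
        if ($a[$i] != $b[$i]) $dist++;
    }
    return $dist;
}
?>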
-
If host #1 fails, serve from host #2 (emergency only)
QuickOldCar replied to sKunKbad's topic in Miscellaneous
If site A goes down, people are still connecting to site A, and it can never go to site B unless you manually change DNS and wait it out. I would say DNS failover or round robin; it will need a monitoring service that can determine which DNS to send traffic to, and a lower TTL. The right way would be to use a datacenter and have the servers on the same LAN, then use HTTP load balancing to handle server failures. http://www.haproxy.org/ https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-to-set-up-http-load-balancing-on-an-ubuntu-vps If you ever decide on using a datacenter, check out Mesosphere.
-
There are plugins made for more advanced or category searches. This one should work for you. https://wordpress.org/plugins/search-everything/
-
Although these are pretty old tutorials, this is the link. http://www.phpfreaks.com/tutorials
-
The php manual is a very good place to start. http://php.net/manual/en/tutorial.php http://php.net/manual/en/langref.php
-
That is a vague question. There are many variables that could make them slower; by default they are fast.
A big helper would be to cache the results or pages somehow (a simple sketch follows below).
Optimize queries and create indexes for the database.
Only fetch data you are using.
Don't load lots of images or large ones; cache smaller thumbnail versions of them when you can.
Use gzip or any other compression for JS and CSS.
You can test your pages out here and try to find out what is making them slow. http://tools.pingdom.com/fpt/
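As an example of the "cache the results or pages" point, a very simple file-based cache sketch; the path and lifetime are arbitrary, and it assumes the cache directory exists and is writable:

<?php
// Serve a saved copy if it's fresh, otherwise rebuild the page and store it
$cache_file = "./cache/homepage.html"; // arbitrary path for illustration
$cache_life = 300;                     // seconds

if (file_exists($cache_file) && (time() - filemtime($cache_file)) < $cache_life) {
    readfile($cache_file);
    exit;
}

ob_start();
// ... build the page here (queries, templates, etc.) ...
echo "expensive page output";
$html = ob_get_clean();

file_put_contents($cache_file, $html);
echo $html;
?>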
-
Here is an article I wrote explaining what an API is and some ways to access the data. http://forums.phpfreaks.com/topic/291985-how-to-make-useful-native-apps/?p=1494550 It all depends on each particular API service and also what you want to do with the results. For something live, such as displaying the latest results in a sidebar, you can retrieve the JSON or XML response and output it for display in the sidebar. You could build your own API service, or even your own website using all their data, if you store the results in a db. They also have feeds, which are similar minus the summaries. There are many scripts, widgets, and example codes for how to display feeds on a website. http://www.tvrage.com/rss.php
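For the feed route, something along these lines would pull an RSS feed and print the latest items in a sidebar; the feed URL here is just a placeholder, and it assumes a standard RSS 2.0 channel/item structure:

<?php
// Placeholder feed url for illustration
$feed_url = "http://example.com/rss.php";

$xml = simplexml_load_file($feed_url);
if ($xml === false) {
    die("Could not load feed");
}

echo "<ul>";
foreach ($xml->channel->item as $item) {
    echo "<li><a href='" . htmlspecialchars((string)$item->link) . "'>"
       . htmlspecialchars((string)$item->title) . "</a></li>";
}
echo "</ul>";
?>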
-
The reason you see your domain is because the relative URLs you are scraping need to be fixed. From the target link:

#carousel-example-generic
#carousel-example-generic

one directory up:
../contact/
../our-services/
../all-other-services/

two directories up:
/../../our-services/
/../../our-services/
/../../our-services/

I did a cheap fix for you; when it comes down to handling targeted directories you'll have to pop / positions accordingly.

<?php
include_once('simple_html_dom.php');

$target_url    = "http://www.expresslawnmowing.com.au/";
$target_domain = parse_url(trim($target_url), PHP_URL_HOST);
$target_domain = str_replace("www.", "", $target_domain);

$hrefs = array();
$html  = new simple_html_dom();
$html->load_file($target_url);

foreach ($html->find('a') as $link) {
    $url = trim($link->href);
    // No scheme means a relative url, so rebuild it against the target domain
    if (!preg_match("~:\/\/~", $url)) {
        if (substr($url, 0, 1) == "/") {
            $url = ltrim($url, "/");
        }
        $url = str_replace(array("../", "./"), "", $url);
        $url = "http://" . $target_domain . "/" . $url;
    }
    $hrefs[] = $url;
}

if (!empty($hrefs)) {
    $hrefs = array_unique($hrefs);
    foreach ($hrefs as $href) {
        $mobile_check = 'https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?key=AIzaSyDkEX-f1JNLQLC164SZaobALqFv4PHV-kA&screenshot=true&snapshots=true&locale=en_US&url=' . $href . '%2F&strategy=mobile&filter_third_party_resources=false&callback=_callbacks_._Ce2bYp0wchLY';
        echo $mobile_check . "<br /><br /><br />";
        echo $href . "<br /><br /><br />";
    }
}
?>
-
You will have to show some of your existing code for someone to start helping you. It's not going to be a simple "here is code that works."
-
Actually move_uploaded_file is looking for the full path; I added that into the code and am using a directory named upload.

if ($_SERVER['REQUEST_METHOD'] == "POST") {
    $dir      = "./upload/";
    $file     = $dir . basename($_FILES['userfile']['name']);
    $ext      = pathinfo($file, PATHINFO_EXTENSION);
    $filename = $_FILES['userfile']['name'];
    $path     = $dir . $filename;
    $allowed  = array('jpg', 'png', 'gif', 'bmp');

    if ($_FILES['userfile']['size'] > 10000) {
        die("File is too large!");
    }
    if (!in_array($ext, $allowed)) {
        die("Invalid Image File. Possible hack attempt!");
    }
    if (move_uploaded_file($_FILES['userfile']['tmp_name'], $path)) {
        echo "File: " . $_FILES['userfile']['name'] . " has been uploaded to " . $path . "! ";
    } else {
        die("Error: " . $_FILES['userfile']['error'] . " ");
    }
}
-
Could be file permissions. Maybe make a directory named upload and give it 755 permissions for www-data.

$dir = "./upload/";
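If the directory doesn't exist yet, a quick sketch to create it from PHP with those permissions (subject to the process umask):

<?php
$dir = "./upload/";
// Create the upload directory with 755 permissions if it's missing
if (!is_dir($dir)) {
    mkdir($dir, 0755, true);
}
?>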