themistral

Members
  • Posts

    301
  • Joined

  • Last visited

Contact Methods

  • Website URL
    http://www.globalbedandbreakfasts.net

Profile Information

  • Gender
    Not Telling

themistral's Achievements

Regular Member

Regular Member (3/5)

0

Reputation

  1. Hi guys, I'm working on an old version of osCommerce with a mod that rewrites URLs to SEO-friendly ones. This is not a site I built, so I'm unfamiliar with the mod. .htaccess is set up with the following rewrite rule:

     RewriteRule ^(.*).html$ root.php?$1.html&%{QUERY_STRING}

     This works absolutely fine, except that when we come in from AdWords we lose the gclid, which means that conversion data is being lost. Is there a way to check for the gclid parameter and pass it through as a query string? I was thinking something like:

     RewriteCond %{QUERY_STRING} (^|&)gclid($|&)
     RewriteRule ^(.*).html?{QUERY_STRING}$ root.php?$1.html&%{QUERY_STRING} [QSA]

     I've tried this but it's not working. Ultimately, what I want to achieve is page_uri.html?gclid=status. Is this even possible, and if so, can someone point me in the right direction? Thanks
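A minimal sketch of one possible fix, assuming root.php reads its extra parameters from $_GET: instead of embedding %{QUERY_STRING} in the substitution by hand, let mod_rewrite merge the original query string (gclid included) via the QSA flag. Note the original pattern also leaves the dot before html unescaped.

```apache
# Hypothetical sketch - QSA appends the incoming query string
# (including gclid) to the rewritten URL automatically.
RewriteRule ^(.*)\.html$ root.php?$1.html [QSA,L]
```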
  2. Hi guys, I am using PHP to create an XML data feed for Google Merchant Centre. However, the database contents are stored as ISO-8859-1, but the GMC feed needs to be in UTF-8 (as far as I can tell). It works fine if I leave the description out, but when I add in the description, the feed fails. I am presuming this is because of the difference in character encoding. I've tried utf8_encode and iconv, neither of which worked. There are characters such as bullets in the description, so could this be the problem? Can someone please point me in the right direction to solve this? Thanks!
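One likely culprit, offered as an assumption rather than a diagnosis: bullets are 0x95 in Windows-1252 but a control character in real ISO-8859-1, so utf8_encode() (which assumes ISO-8859-1 input) turns them into U+0095, a C1 control character that XML consumers commonly reject - which would explain the feed failing only when descriptions are included. Decoding as Windows-1252 keeps the bullet; in PHP that would be mb_convert_encoding($description, 'UTF-8', 'Windows-1252'). A Python sketch of the same conversion:

```python
# 0x95 is a bullet in Windows-1252 but a control character in
# ISO-8859-1 - a likely reason the GMC feed rejects descriptions.
raw = b"Features:\x95 free Wi-Fi"   # bytes as stored in the database
text = raw.decode("windows-1252")   # decode as cp1252, not latin-1
utf8 = text.encode("utf-8")         # the bullet survives as U+2022
```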
  3. Hi guys, I'm not very good with jQuery so need a bit of help. I've got it all working OK, except that every time I try to click into a form field using the mouse, it focuses back on the first field. You can see what I mean here: http://bit.ly/QPIA6s Any help would be greatly appreciated!
  4. Yes - that's what was being used before, which is why I tested $_SERVER['REQUEST_URI']
  5. Hi guys, I'm having a strange problem. I'm working on somebody else's website, and they track every page that is visited for their own records, using their own script. Fine. The script currently uses $_SERVER['REDIRECT_URL'] and records everything fine. Each page is dynamically generated, so the pages do not physically exist on the server.

     The problem comes with a custom 404 page. Currently, the .htaccess is set to redirect back to the homepage if the page is not found. I have set up a custom 404 page and changed the .htaccess file to reflect that, and even when the website is showing a valid page, the tracker records it as the custom 404 page. I tried changing the tracking script to use $_SERVER['REQUEST_URI'], but the same is still happening.

     Can anyone shed any light on why a page might be showing correctly to the user, while the webserver thinks the request is generating a 404 error? Thanks
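A hedged guess at the mechanism: if the custom 404 page was wired up with ErrorDocument and the dynamic pages don't physically exist, then every request is being served through the 404 handler, so the tracker sees the error redirect's variables even though the visitor gets a correct-looking page. Routing non-existent paths through a front controller with mod_rewrite instead keeps the response status at 200 (index.php is an assumed script name here):

```apache
# Hypothetical sketch - serve missing paths through the front
# controller with mod_rewrite rather than ErrorDocument, so valid
# dynamic pages are delivered with a 200 status.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]
```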
  6. Hi guys, I am working on a website where the URL structure has changed, and the old URLs are still showing up in Google. I would therefore like to create an .htaccess rule to redirect them. The new URLs are created via PHP and the whole lot is parsed into a query string, so I can't just take the parts I need and create a new rule:

     index.php?url=dir/the-new-url&id=1

     The old URL structure was /item1-item2-item3-item4-item5.html and the new URL structure is /item4/item1-item2-item3-item5.html. Ideally I would like to check if the GET[id] variable is equal to 1, then rewrite with a rule like:

     RequestMethod GET
     RewriteRule ^(.*)/(.*)-(.*)-(.*)-(.*).html$ $2-$3-$4-$1-$5.html

     Obviously, all this is going to do is check if there are GET variables, and I've no idea if my rule would even work. How do I check the value of GET[id]? Am I going along the right path, or is regex going to be needed...? If someone could point me in the right direction, that would be great!
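For what it's worth, .htaccess rules run before PHP, so $_GET['id'] can't be read directly; mod_rewrite only sees the raw query string via %{QUERY_STRING}. A hypothetical sketch of the redirect, assuming the individual items never contain hyphens (the RewriteCond applies only if id=1 actually arrives as a query parameter on the old URLs - drop it otherwise):

```apache
# Hypothetical sketch:
#   /item1-item2-item3-item4-item5.html
#     -> /item4/item1-item2-item3-item5.html
RewriteCond %{QUERY_STRING} (^|&)id=1(&|$)
RewriteRule ^([^-/]+)-([^-]+)-([^-]+)-([^-]+)-([^-.]+)\.html$ /$4/$1-$2-$3-$5.html [R=301,L]
```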
  7. Thanks JustLikeIcarus, I had missed out a THEN, which is why it wasn't working. That was exactly what I wanted - thank you sooo much
  8. Thanks JustLikeIcarus, this looks promising! It works brilliantly for one CASE statement. However, I need to use six of these statements in order to combine various other columns. Is this something I can do? I've tried using a comma after the END but it produces an error.
  9. Hi guys, I am trying to create a query and could do with some direction.

     SELECT id FROM table

     Nice and easy, right? Well, here comes the hard bit! There are numerous columns within the table; they are essentially boolean values. I need to export data from these columns, but the mapping has changed from how it used to be. Before, one column was mapped to one export column. Now the mapping requires that a true value in any of 4 columns produces a true value in one export column. E.g. a query result of

     id  col1  col2  col3  col4
     1         true

     used to produce an export file of

     id  col1  col2  col3  col4
     1         true

     Now it needs to produce an export file of

     id  new_col
     1   true

     I have tried

     SELECT id,
       (SELECT col1 FROM table
        UNION SELECT col2 FROM table
        UNION SELECT col3 FROM table
        UNION SELECT col4 FROM table) AS new_col
     FROM table

     but I get an error. I hope I've explained what I need to achieve clearly enough. Any direction anyone can offer would be greatly appreciated - even if it's to tell me that what I'm trying to do is not possible in one query. Thanks
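A side note on why the attempt fails: the UNION subquery returns one row per table row, and a subquery in the select list may only return a single value. The usual approach is one CASE (or plain OR) expression per export column; several of them simply sit side by side in the select list, separated by commas after each END. A sketch with invented table and column names, run against SQLite so it is self-contained (the SQL itself is portable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER, col1 INTEGER, col2 INTEGER,"
            " col3 INTEGER, col4 INTEGER, col5 INTEGER)")
con.execute("INSERT INTO items VALUES (1, 0, 1, 0, 0, 0), (2, 0, 0, 0, 0, 1)")

# One CASE expression per export column; the comma goes AFTER the END.
rows = con.execute("""
    SELECT id,
           CASE WHEN col1 OR col2 OR col3 OR col4
                THEN 'true' ELSE '' END AS new_col,
           CASE WHEN col5 THEN 'true' ELSE '' END AS other_col
    FROM items
""").fetchall()
# rows -> [(1, 'true', ''), (2, '', 'true')]
```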
  10. Cheers .josh That's great to know as it explains a few things! They implemented this as they were getting hammered by bots - it's an ecom site so I would guess a shopping bot was the problem. Yep I know bad bots will ignore robots.txt, and some pages of the site are included in Google's index, so I would guess that at best, the robots.txt file is confusing bots at the moment.
  11. Hi guys, I am evaluating a site currently, and the robots.txt file contains the following:

      User-agent: Googlebot
      Allow: /

      User-agent: Slurp
      Crawl-delay: 120
      Allow: /

      User-agent: Msnbot
      Allow: /

      User-agent: ia_archiver
      Allow: /

      User-agent: *
      Disallow: /

      I use a tool to check various things, and it flagged that robots.txt was disallowing bots. I would like to check whether the order makes any difference - does that final rule override the bot-specific ones? Thanks
  12. Hi guys, I need some help with a redirect. I currently have the following to add a trailing slash to the end of my URLs:

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !^(.*)/$
      RewriteCond %{REQUEST_URI} !^(.*).(css|CSS|js|JS|png|PNG|jpg|JPG|jpeg|JPEG|gif|GIF|zip|ZIP|xml|XML|pdf|PDF|html|HTML|php|PHP|txt|TXT)$
      RewriteRule ^(.*)$ /$1/ [QSA,R=301,L]

      This works fine. However, when the Google UTM code is appended to the homepage, the URL breaks, so I need to remove the trailing slash there. I've tried a number of things but nothing is working. Can anyone advise how I exclude the homepage in the above code? I was thinking

      RewriteCond %{REQUEST_URI} !^$

      but that's not working.
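One possibility, purely as a guess: the UTM-tagged homepage arrives as /?utm_source=..., and re-issuing it through the redirect mangles the parameters. An extra condition above the existing ones that leaves UTM-tagged requests alone might help (an assumption - widen it to `^$` if all query strings need the same treatment):

```apache
# Hypothetical sketch - skip the trailing-slash redirect whenever the
# query string carries Google's utm_* parameters.
RewriteCond %{QUERY_STRING} !(^|&)utm_
```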
  13. Hi guys, this is driving me mad, and I'm wondering if it will be resolved when I set the site live. I have a form with a hidden field containing the callback URL. The form is submitted to Worldpay. I am creating a new site in WordPress, although the callback file is a normal PHP file (i.e. not WordPress). However, a successful payment results in staying at the standard Worldpay success page instead of redirecting to the callback. This is being hosted on a temporary domain. I've tried the following combinations:

      Form on temp website to callback on current site - works OK.
      Form on temp website to callback on temp site - callback doesn't work.
      Form on current website to callback on current site - works OK.
      Form on current website to callback on temp site - callback doesn't work.

      If I take the form and put it on the domain I will be using, the callback seems to operate as I would expect. The other thing is that currently the site is on a dedicated server, and the temporary site is on a shared server - and will be staying there. Does this sound like something that will be resolved when the new site is hosted on the proper domain? Thanks!
  14. Hi guys, I'm running the following code:

      $result = shell_exec("su user; scp /path/to/file/text.txt user@domain:/path/to/file/text.txt");

      Now, the file gets transferred correctly. However, I need to put in place a check of whether the transfer was successful. $result seems to return the same whether the transfer succeeded or not. I want to send an email stating whether the file transfer was successful, so what do I need to do to check this? Thanks!
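Two things worth noting, hedged as observations: shell_exec() returns only the command's stdout (scp writes errors to stderr and signals failure via its exit status, so $result looks the same either way), and `su user; scp ...` actually runs scp after the su shell has exited, not as that user - `su user -c '...'` would be needed for that. To branch on success, check the exit status; in PHP that is exec($cmd, $output, $status). A Python sketch of the pattern, with `false` standing in for the real scp command so the snippet runs anywhere:

```python
import subprocess

# "false" stands in for the real command, e.g.
# ["scp", "/path/to/file/text.txt", "user@domain:/path/to/file/text.txt"]
result = subprocess.run(["false"])

# Branch on the exit status, not on captured output.
ok = result.returncode == 0
```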
  15. Thanks for the help guys. dragon_sa I'm going to have to do that I think - I wanted to avoid it as this script is going across multiple sites.
