GURAQTINVU

Members
  • Posts: 11
  • Joined
  • Last visited: Never
  • Gender: Not Telling
  • Reputation: 0

  1. Hi Jodha. As a newbie myself, I couldn't help noticing similarities to a problem I have grappled with, and overcome except for a bug in DW CS3. It looks to me as though you have more or less got it, but for expediency, and as I haven't much time, I'll just list how I have overcome most difficulties. I have, more or less, a PHP template page (representing all pages) that I 'fill' with 'stuff' extracted from an XML file. A typical menu link is <a href="?page=index">. If JavaScript is off, this query string gets sent to the original page (be it index.php or contact.php, etc.). I return a value (the page-specific XML) from an include statement and save it in a variable; all subsequent requests for the relevant XML, called from the point in the page where I need it, echo it as a string, and Bob's your auntie. It is much more convoluted than this, because I also have the whole thing duplicated for when JavaScript is on - sigh! Hope it helps. PS You don't know how to get PHP to read an attribute from a script tag within the main document (such as index.php itself), do you? Cheers
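A minimal, runnable sketch of the ?page= query-string pattern described above (the file names and the whitelist are my own invention, not from the post):

```php
<?php
// Pick a page id from the query string, falling back to the index page.
$page = isset($_GET['page']) ? $_GET['page'] : 'index';

// Whitelist the allowed pages rather than trusting the raw query string.
$pages = array('index' => 'index.xml', 'contact' => 'contact.xml');
if (!isset($pages[$page])) {
    $page = 'index';
}

$xmlFile = $pages[$page];
// In the real site this would be simplexml_load_file($xmlFile); here we
// just build a small string so the sketch runs without any files on disk.
$pageXml = '<page id="' . htmlspecialchars($page) . '"/>';

// Held in one variable, so every later echo reuses the same loaded XML.
echo $pageXml;
```

The whitelist step matters because the query string value ends up choosing a server-side file.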
  2. Firstly, for me, Adobe's website is awful. I have spent hours (now and in the past) trying to get what I want, which is light on the following: is there a known bug in Dreamweaver CS3 (no, not MX or V8, Adobe!) that ruins the automatic relative-path adaptation (relating to Templates stored in the Templates folder) for files stored in 'normal' folders elsewhere within the site? The Adobe site claims there is an issue where the path correction can't be turned OFF for files kept within the Templates folder (it offers a workaround), but doesn't report an issue for files kept elsewhere, for which it can't be turned ON. JavaScript doesn't have the same problem (i.e. the paths are adjusted correctly while all this is going on), and I have wondered whether there is a way to get PHP to copy the path from the 'automatically adjusted' script path for each page into the ... If anyone knows how I might be able to do this I would be grateful. N.B. I have tested using absolute paths too, but there seems to be a problem there as well. As yet I have steered clear of querying the server for its path, because I have had such a headache tracking bad code in PHP that I hardly dare embark on something about which I know nothing (I have had great fun understanding scope, persistence, etc.). Many thanks
  3. I have been here before, but at a lower level, and am now paying for my workaround. I have a single XML file that supplies all the necessary page specifics for the site, including image details and text:

```xml
<?xml version="1.0" encoding="utf-8"?>
<thepages>
  <pageindex>
    <headdiv>WELCOME</headdiv>
    <bodydiv>
      <p>First paragraph.</p><p>Second<b><em>small</em></b>paragraph.</p>
    </bodydiv>
    <images>
      <image1>
        <title>Image One</title>
        <img id="Placeholder" src="images/picone.png" alt="Do not point!" width="170" height="235" />
      </image1>
    </images>
  </pageindex>
  <pagetwo>
    <headdiv>Another Page</headdiv>
    .
    .
</thepages>
```

To populate the required page, I have written a function that detects the existence, and then the value, of a query string. If JavaScript is off, the PHP returns JUST the relevant page details as an array of SimpleXML objects, and the various divs in the recipient page are echoed with the necessary values (all coded and working fine). If JavaScript is on, I want the same to happen, but I haven't managed to return JUST the relevant page details as XML that JavaScript can then parse after assigning the data from the responseXML property. The only time JavaScript gets data in responseXML is when the PHP uses this code:

```php
ob_clean();
header('Content-type: text/xml');
flush();
readfile($filePathName);
```

The route I thought should work extracted the string and tags (which together represented the desired page) from the XML file using:

```php
$simpleXMLArray = $xml->xpath($entireXMLPathRequired);
while (list(, $node) = each($simpleXMLArray)) {
    $mySimpleXMLString .= trim($node->asXML());
}
echo '<?xml version="1.0" encoding="utf-8"?><outer>' . $mySimpleXMLString . '</outer>';
```

I have tried everything, including saving a new file and then using readfile(), but I just can't get it to play ball. If anyone has an insight into the problem I would be grateful - many thanks.
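For what it's worth, a runnable sketch of the approach the post is attempting (the XML here is a cut-down stand-in for the real file): build the fragment with xpath() and asXML(), then send it with a text/xml header, since responseXML stays empty unless the response is declared as XML.

```php
<?php
// Stand-in for the site's XML file, inlined so the sketch runs anywhere.
$xml = simplexml_load_string(
    '<thepages><pageindex><headdiv>WELCOME</headdiv>'
    . '<bodydiv><p>First paragraph.</p></bodydiv></pageindex></thepages>'
);

// Collect just the requested page's nodes, verbatim, via asXML().
$body = '';
foreach ($xml->xpath('/thepages/pageindex/*') as $node) {
    $body .= trim($node->asXML());
}

// A fragment needs a single root element to be a well-formed document.
$document = '<?xml version="1.0" encoding="utf-8"?>'
    . '<outer>' . $body . '</outer>';

// Without this header (a no-op on the CLI), responseXML comes back null.
header('Content-Type: text/xml');
echo $document;
```

The two usual failure points are a missing text/xml header and multiple top-level elements in the echoed fragment; wrapping in one root element fixes the latter.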
  4. I have thrashed out a solution: retrieving, parsing (using xml_parser_create()), and returning the 'tagged' string (print_r()), using an include(file) statement in the XHTML; and I am working on attaching the same parsing file to JavaScript via window.onload = PrepareLinks; which all works thus far. There is a problem, however, sending the parsed string to JavaScript - it complains about not finding the file. Does anyone have an idea how to take a tagged string and return it the way printf($dataString) does, but so that it looks like a file to JavaScript as Ajax? I am obviously trying to avoid maintaining two versions of the same parsing function. Thanks for looking
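A sketch of the "one parser, two routes" idea above (the function name and the ajax flag are my own assumptions): the same script can be include()d into the page when JavaScript is off, and fetched over HTTP by XMLHttpRequest when it is on - to Ajax, any URL whose response body is the string already "looks like a file".

```php
<?php
// Stand-in for the xml_parser_create() work described in the post.
function buildTaggedString($pageId) {
    return '<p>content for ' . htmlspecialchars($pageId) . '</p>';
}

$output = buildTaggedString('testFile');

if (isset($_GET['ajax'])) {
    // Ajax route: the echoed string is the entire HTTP response body.
    header('Content-Type: text/xml');
}
// include() route: this echo lands inline in the surrounding XHTML.
echo $output;
```

Either way there is only one parsing function to maintain; the routes differ only in the header.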
  5. Thanks, you are right about xpath, but I seem to struggle at every turn: I am now left with an array of SimpleXMLElement objects (which exactly defines the XML I want, apart from the fact that only <em> gets an associative index - em) and cannot find a way to turn that array into a string of tags and text. :-\
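A possible way to do that array-to-string step (the XML here is a made-up stand-in): each SimpleXMLElement returned by xpath() can render itself back to markup with asXML(), and joining the results gives the string of tags and text.

```php
<?php
// Small stand-in document with a nested tag, mirroring the situation above.
$xml = simplexml_load_string(
    '<root><p>one</p><p>two <em>nested</em></p></root>'
);

$parts = array();
foreach ($xml->xpath('/root/p') as $node) {
    // asXML() emits the node's own tag plus everything inside it.
    $parts[] = $node->asXML();
}
$fragment = implode('', $parts);

echo $fragment;  // <p>one</p><p>two <em>nested</em></p>
```

Unlike casting each element to (string), which drops child tags, asXML() keeps the nested <em> intact.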
  6. OK, so I have my XML file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<details>
  <nameone>
    <paras>1</paras>
    <paras><em>2</em></paras>
    <paras>3</paras>
  </nameone>
  <nametwo>
    <paras>1</paras>
    <paras>2</paras>
    <paras><em>3</em></paras>
    <website>http://www.us@here.co.uk</website>
    <email>special me@here.co.uk</email>
  </nametwo>
</details>
```

I can do what I want with, for example:

```php
$xml = simplexml_load_file($file);
$myVar = "";
foreach ($xml->nameone->paras as $para) {
    $myVar .= "<p>" . $para . "</p>";
}
echo $myVar;
```

but this doesn't allow for nested 'ad lib' tags (please see my cunning nested <em>). What I need is a way of requesting all nodes and values beneath a node, <nameone> for instance, so that everything beneath is just included verbatim. Does anyone know an easy and neat way? I'd be grateful.
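One neat way, sketched under the assumption that SimpleXML is still in play: iterate the node's children() and let asXML() emit each child verbatim, nested <em> included, with no per-tag handling.

```php
<?php
// Inlined copy of the <nameone> part of the file above, so this runs as-is.
$xml = simplexml_load_string(
    '<details><nameone>'
    . '<paras>1</paras><paras><em>2</em></paras><paras>3</paras>'
    . '</nameone></details>'
);

$inner = '';
foreach ($xml->nameone->children() as $child) {
    // Each <paras> comes back with its own tag and any nested markup.
    $inner .= $child->asXML();
}

echo $inner;  // <paras>1</paras><paras><em>2</em></paras><paras>3</paras>
```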
  7. N.B. Actually, I also originally tested in Firefox, to no avail; but for me that isn't testimony to much, other than that I had bugs of one sort or another then too, and that those may or may not exist in the latest, inevitably tweaked, attempt to get it working.
  8. Don't mock, but I am by default testing in IE7 & 8 (of all things), with their new Firebug equivalent - it's in colour, as opposed to Firebug.
  9. Thanks for both suggestions, firstly, but I have to report back no success. The responseXML always comes back basically empty, although the string is in responseText - but I think this is not new. By the by, the 'type' of the responseXML that is returned is IXMLDOMDocument2, which means very little to me. I am trying to avoid parsing the response as plain text, as the XML is supposed to have done all the hard work, in essence.
  10. No, it's a simple XML file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<details>
  <name>
    <paras>1</paras>
    <paras>2</paras>
    <paras>3</paras>
  </name>
</details>
```
  11. OK, I've tried solo, but as it's not my day job, I'm now asking (N.B. this really is a PHP question, I think, honestly). I have an Ajax test project below that firstly attaches a function to 'onclick' events - successfully (in principle). When a link is clicked, the onclick function uses XMLHttpRequest (via client JavaScript) to try to open a PHP file (on the server), which through a 'case' construct chooses the correct XML file, which it SHOULD pass back, and which should then (via client JavaScript) be parsed as required. OK, it doesn't. The problem seems to be in the use of the intermediate PHP file, because it all works if the XML file is hard-wired into the onclick function. I'm sure the main problem is in the headers that are sent or received (or not), but this is not my forte, and I haven't been able to find anyone using this same route, which aims to degrade gracefully if JavaScript is off on the client (I have successfully implemented the PHP-parsing solution for when it is off).

```javascript
function PrepareLinks() {
    if (!document.getElementById || !document.getElementsByTagName) { return; }
    if (!document.getElementById("mylist")) { return; }
    var list = document.getElementById("mylist");
    var links = list.getElementsByTagName("a");
    for (var i = 0; i < links.length; i++) {
        links[i].onclick = function() {
            // get the value of query string 'page'
            var queryPageID = this.getAttribute("href").split("?")[1];
            var url = "siteCompletePageDetails.php?" + queryPageID;
            if (grabFile(url) == true) { return false; } else { return true; }
        };
    }
}

function grabFile(file) {
    var request = getHTTPObject();
    if (request) {
        request.onreadystatechange = function() {
            parseResponse(request);
        };
        request.open("GET", file, true);
        request.send(null);
        return true;
    } else {
        return false;
    }
}

function parseResponse(request) {
    if (request.readyState == 4) {
        if (request.status == 200 || request.status == 304) {
            var data = request.responseXML;
            // ...
        }
    }
}
```

```php
//siteCompletePageDetails.php
<?php
if (isset($_GET["page"])) {
    // Only true when a link is clicked and JavaScript is ON, the DOM is
    // fully supported in the browser, and the div tags are available.
    switch ($_GET["page"]) {
        case "testFile":
            include "testFile.xml";
            break;
    }
}
?>
```

Please help if you can - thanks. :-\
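A hedged guess at the fix, sketched below (the whitelist and helper name are mine, not from the thread): include runs the XML file through PHP and serves it with the default text/html type, so responseXML comes back empty; declaring text/xml and streaming the file verbatim is the usual remedy.

```php
<?php
// Map the ?page= value onto a whitelisted file, or null if unknown.
function resolveXmlFile($page, array $pages) {
    return isset($pages[$page]) ? $pages[$page] : null;
}

$pages = array('testFile' => 'testFile.xml');  // whitelist of serveable files
$file = resolveXmlFile('testFile', $pages);

if ($file !== null) {
    // Without this header, browsers leave responseXML null.
    header('Content-Type: text/xml');
    // readfile($file);  // streams the raw bytes; include() would not
}
echo $file;
```

readfile() is preferable to include() here precisely because it sends the file untouched, with no PHP processing in between.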