
Darkstar


Everything posted by Darkstar

  1. I have looked at phpThumb and it does, in fact, look awesome. Well, folks, I'm very glad you all helped out so quickly, but as it turns out it was just plain stupidity. There was a rogue block of code (I forgot the else around it because it's only two lines long) and it was outputting the full image after the resized image. The browser would show me the resized image but still load the full image in the background. My function is very similar to Garethp's, except that in an attempt to save computing power I put an if block around the actual resizing portion of the function: if the desired size is greater than or equal to the current image size, simply display the image instead of bothering with GD. The function was only meant for downsizing images, so I didn't have to worry about making them bigger. Like I said, I forgot the else, so it was ALWAYS outputting the full-sized image but only showing the resized one.
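For anyone who hits the same symptom, here's a minimal sketch of the downsize-only guard with the else in place (function and variable names are my own, not the actual code):

```php
<?php
// Hypothetical sketch of the downsize-only resizer described above.

// Pure helper: compute target dimensions, never upscaling.
function targetSize($w, $h, $maxW, $maxH) {
    if ($maxW >= $w && $maxH >= $h) {
        return [$w, $h]; // already small enough, GD not needed
    }
    $ratio = min($maxW / $w, $maxH / $h);
    return [(int) round($w * $ratio), (int) round($h * $ratio)];
}

function outputResized($path, $maxW, $maxH) {
    list($w, $h) = getimagesize($path);
    list($newW, $newH) = targetSize($w, $h, $maxW, $maxH);
    header('Content-Type: image/jpeg');
    if ($newW == $w && $newH == $h) {
        readfile($path); // pass the original straight through
    } else {             // this else was the missing piece
        $src = imagecreatefromjpeg($path);
        $dst = imagecreatetruecolor($newW, $newH);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
        imagejpeg($dst);
        imagedestroy($src);
        imagedestroy($dst);
    }
}
```

Without the else, both branches run: the resized image is sent and then the original is appended after it, which is exactly the "shows small, loads big" behavior.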
  2. I'm currently playing with the idea of resizing images on the fly for certain applications. My original thinking was that if I pass a smaller image to the browser (or Flash or anything else), it should load faster because it will theoretically have a smaller filesize as well. I have a "full quality" 2000x2000px image and that's what I use as my base for resizing. Somehow this 1.56MB file ends up at ~1.8MB when sized down to 1800x1800, and at a mere 300x300 it's still 1.6MB. Has anybody else noticed this? I tried setting the quality on imagejpeg(), and that gave me only a minimal change in size even when set to about 50 (even though it looked horrible).
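One thing worth checking is what imagejpeg() is actually producing at each quality setting. Here's a sketch for measuring the encoded size without writing to disk; the image is generated in memory as a stand-in, so the absolute numbers won't match yours:

```php
<?php
// Sketch: measure the byte size imagejpeg() produces at a given
// quality, using output buffering instead of a temp file.
function jpegBytes($img, $quality) {
    ob_start();
    imagejpeg($img, null, $quality);
    return strlen(ob_get_clean());
}

// Stand-in for the resized image (a gradient so there's something to compress).
$img = imagecreatetruecolor(300, 300);
for ($x = 0; $x < 300; $x++) {
    $c = imagecolorallocate($img, $x % 256, ($x * 2) % 256, 0);
    imageline($img, $x, 0, $x, 299, $c);
}

foreach ([90, 75, 50] as $q) {
    printf("quality %d: %d bytes\n", $q, jpegBytes($img, $q));
}
imagedestroy($img);
```

If a 300x300 output is still ~1.6MB, it suggests the resized copy isn't what's reaching the browser at all, which would line up with the rogue-code explanation in the later post.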
  3. Basically, what I want to avoid is what you hit on: an obtrusive watermark that takes away from the experience of viewing the images. What I also don't want to do is open up Photoshop for every image and manually watermark it, because that'll take a considerable amount of time. So let me run this past you: I was originally going to use a script to add all new photos to the database and then prompt for the categories each one should show up in. I figure I can make the watermark part of this process and have it prompt me for where (in coordinates) to put it so it looks unobtrusive.
  4. That wouldn't solve it, unfortunately. The number would be appended, but if you copy the URL and type it into the address bar in IE, it'll still be cached, because the randomizer won't change between the img src and the time the image is loaded by itself. That would work if I needed it displayed differently on two pages, but not if it's being called by itself.
  5. Although true, most people wouldn't go through the trouble; you're right, though, I should watermark them regardless. I just wanted it to be at least a little friendly if you were looking at the pictures on the site. I know that I hate going to sites and seeing watermarks on images that are simply being displayed in the page. Maybe the solution I'm looking for is smaller thumbnails that link to a watermarked, larger image.
  6. The while sets $row, so you don't need the above, commented-out line. As for it still showing blank: try print_r($row) in the while instead of the echo to see what it really stored, and work from there.
  7. Let me start at the beginning. Somebody wanted me to code their website, and it's a photography site, so I wanted to disallow image downloads for obvious reasons. I was going to use a few tricks to accomplish this. I would overlay a transparent image over the actual image so that if somebody right-clicks and hits Save As, they don't get the image. If they're smart enough to view source, they'd see the following: <img src="nodownload.php?image=image.jpg">. nodownload.php checks whether it's being called from a page on a certain domain; if it is, it uses GD to grab the image and display it. If the page is being hotlinked, or there's no HTTP_REFERER, it watermarks the picture with GD and then outputs it. This works fine in Firefox because Firefox isn't caching the output of the PHP page, so if they type the URL of the image directly, it runs through the script again. IE, on the other hand, is caching the output and displaying the picture it displayed when called from the <img> tag, which defeats the purpose of having the code at all. I need a way to force all browsers to run the script (not the containing page) instead of caching it. Thoughts? If not, is there a better way of doing this other than mod_rewrite? If it comes down to it, I guess I could simply use mod_rewrite to redirect to an error page.
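For the IE caching part specifically, the script can tell the browser not to cache its output at all by sending no-cache headers before the image data. Here's a sketch of what I mean; the domain check, watermark text, and function names are placeholders, not the actual nodownload.php:

```php
<?php
// Hypothetical sketch of nodownload.php with anti-caching headers.
function noCacheHeaders() {
    return [
        'Cache-Control: no-store, no-cache, must-revalidate, max-age=0',
        'Pragma: no-cache',                        // HTTP/1.0 fallback
        'Expires: Sat, 01 Jan 2000 00:00:00 GMT',  // a date in the past
    ];
}

function serveImage($path, $onSite) {
    foreach (noCacheHeaders() as $h) {
        header($h);
    }
    header('Content-Type: image/jpeg');

    $img = imagecreatefromjpeg($path); // validate $path before this in real code!
    if (!$onSite) {
        // Hotlinked or typed directly: stamp a watermark before output.
        $white = imagecolorallocate($img, 255, 255, 255);
        imagestring($img, 5, 10, 10, 'example.com', $white);
    }
    imagejpeg($img);
    imagedestroy($img);
}

$onSite = isset($_SERVER['HTTP_REFERER'])
    && strpos($_SERVER['HTTP_REFERER'], 'example.com') !== false;
// serveImage($_GET['image'], $onSite);
```

With no-store/no-cache in place, IE should re-run the script on a direct request instead of replaying the copy it fetched via the <img> tag.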
  8. The only other thing I can think of is that because this particular page is called by AJAX, if the user navigates away before it finishes loading/running, it may stop the script, but I'm not sure if that's even possible. Would the script continue running or would it stop? There's more to the actual script: that part fetches it, then it's parsed as XML and put into a variable. The variable is then saved as XML locally as a cache, and the variable is then used as output. If the script is stopping because a user navigated away, is there a way to make it run all the way through?
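If a client disconnect really is killing the script mid-run, PHP has a switch for exactly that. A sketch; the fetch-and-cache part is just a placeholder for what's described above:

```php
<?php
// Keep executing even if the AJAX caller navigates away mid-request.
ignore_user_abort(true); // don't abort when the client disconnects
set_time_limit(60);      // but still cap total runtime

// Placeholder for the real work: fetch, parse as XML, cache locally.
// $xml = fetchRemoteXml();              // hypothetical helper
// file_put_contents('cache.xml', $xml); // now always reached

// connection_aborted() reports whether the client is still connected.
echo connection_aborted() ? "client gone\n" : "client connected\n";
```

By default PHP may stop a script when it next tries to write output to a disconnected client; ignore_user_abort(true) lets the cache write finish regardless.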
  9. [code]$timeout = 5; // set to zero for no timeout

// retrieve file
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.xxxxxx.com/xxxxx.aspx');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
$file = curl_exec($ch);[/code] This is what I'm using right now.
  10. I basically have a script that fetches info from another site using cURL. To avoid multiple instances of the fetch running at once, I have it create a file as soon as it begins to fetch, and I set a cURL timeout for the fetch. The problem I'm having is that if it times out, it doesn't remove the file; it just times out and doesn't continue. I looked up possibilities and found set_time_limit(0), but I don't want it to be able to run forever. I just want it to write the file, attempt the fetch, and if it hasn't completed after x seconds, remove the file. Any thoughts?
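A sketch of the lock-file pattern with the cleanup step included. The key point is that curl_exec() returns false on a timeout rather than stopping the script, so the unlink afterwards always runs (the filename and URL here are placeholders):

```php
<?php
// Hypothetical sketch: take the lock, attempt the fetch with a timeout,
// and remove the lock whether or not the fetch succeeded.
$lock = 'fetch.lock';   // placeholder filename
$timeout = 5;

if (file_exists($lock)) {
    exit; // another fetch is already in progress
}
touch($lock);

$ch = curl_init('http://www.example.com/page.aspx'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
$file = curl_exec($ch); // false on timeout or error; script continues
curl_close($ch);

unlink($lock);          // reached in the success AND timeout cases

if ($file !== false) {
    // ... parse and cache $file ...
}
```

If the lock file still isn't being removed, the script is dying before the unlink for some other reason (a fatal error, or the disconnect issue from the other thread), not because of the cURL timeout itself.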
  11. Yea, I actually took care of it. I also realized that if you use the same query string, it caches that as well, so I can keep the same query string until it's time to download a new picture from the original server and have it cache the image until the new one is downloaded. Thanks for the help.
  12. Would any of you happen to know how to force images to update? If the image that's fetched via the script changes, my script will download the new one and save it with the same name as the old one. Is there a way to force the browser to refresh its image cache?
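One common trick for this (a sketch, not tested against this setup): append the file's modification time as a query string, so the URL only changes when the image actually changes, and browsers re-fetch exactly then:

```php
<?php
// Hypothetical cache-buster: the ?v= value changes only when the file's
// mtime changes, so the browser keeps its cache until the image updates.
function bustedSrc($path) {
    return $path . '?v=' . filemtime($path);
}

// Usage in a page (photo.jpg is a placeholder name):
// echo '<img src="' . bustedSrc('photo.jpg') . '">';
```

This is the same idea as the "keep the same query string until a new picture is downloaded" approach in the later post, just automated off the file itself.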
  13. Heh, well, if it's not a flaw then it's simply a pain in the ass.
  14. As it turns out, IE has another flaw when it comes to AJAX: it refuses to fetch the actual content of the fetched page and instead uses the last cached copy of it. I searched Google for a minute or two and came up with the following: [code]setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");[/code] I just inserted it into: [code]xmlHttp.open("GET", urlFetch, true);
xmlHttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");
xmlHttp.send(null);[/code] and voila! No more cache.
  15. Good call. IE seems to process the JS differently. I hate IE. It didn't treat the page title as part of the responseText, and on top of that it jumps the gun, trying to replace HTML items before they've finished loading. I had to make the page title another div and add a delay for the first loading of the index. At least it seems to work in both browsers now.
  16. Thanks for the tip on Firebug! It really helped: it listed everything in your obj variable, and I realized I could use it to return specific parts of the HTML, such as childNodes. I gave it an extra parameter, and seeing as I know exactly how many fields I'm calling and what order they're in, I can use a simple call of the modified htmlDom_html2dom(). Here's my modification: [code]function htmlDom_html2dom(html, idNumber){
  var obj = null;
  if(html.length > 0){
    obj = document.createElement('div');
    obj.innerHTML = html;
  }
  return obj.childNodes[idNumber].innerHTML;
}[/code]
  17. Ah, that's a great bit of info. I'll try that, thanks. If you can think of anything to fix my script as well, I'd be grateful. I'll keep messing with it in the meantime.
  18. I have a PHP script that fetches information from a website, parses it, and saves it to a file. If the page is accessed within the next 90 seconds, it will use the information cached in the text file to generate the page; otherwise it'll fetch the site again, parse, and save. Originally I was using a simple meta refresh to reload the page and run the script again. Reloading the page slowed it down a bit and would also stop refreshing if it encountered an error, such as the destination page being unreachable. I figured that by using AJAX I could not only load a portion of the page, making it faster, but also keep it from stopping when it hits an error: since the JS timer will still be running, it can recover just by calling the AJAX function again. The script returns the information in 4 div tags: the first is the title, the second is part of the body, the third is the last time the cache was updated, and the last is the new timeout value for the timer. If I don't parse the HTML returned by the AJAX call, then I'll have to call either 4 different pages, or the same page 4 times with different GET variables, just to get all the information I want. I hope that was clear enough.
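For reference, the server side described above boils down to a timestamp check on the cache file. A minimal sketch under my own assumptions about names; the fetch is stubbed out:

```php
<?php
// Hypothetical sketch of the 90-second file cache described above.
define('CACHE_FILE', 'page.cache');
define('CACHE_TTL', 90); // seconds

function fetchAndParse() {
    // Placeholder for the real cURL fetch + parse; returns the 4 divs.
    return "<div>title</div><div>body</div><div>updated</div><div>timeout</div>";
}

function cachedPage() {
    if (file_exists(CACHE_FILE) && time() - filemtime(CACHE_FILE) < CACHE_TTL) {
        return file_get_contents(CACHE_FILE); // fresh enough: reuse it
    }
    $html = fetchAndParse();                  // stale or missing: refetch
    file_put_contents(CACHE_FILE, $html);
    return $html;
}

echo cachedPage();
```

Because the staleness check keys off filemtime(), the refetch only happens on the first request after the 90-second window expires; every AJAX poll in between is just a file read.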
  19. I got an alert that says [object HTMLDivElement].
  20. Let me start with the fact that I know responseText is not a DOM object. I got the following from a thread made earlier today; I was hoping it would solve all my problems by changing responseText into a DOM object, but it didn't work, or I didn't use it right. [code]function htmlDom_html2dom(html){
  var obj = null;
  if(html.length > 0){
    obj = document.createElement('div');
    obj.innerHTML = html;
  }
  return obj.firstChild;
}[/code] I used it in the following: [code]xmlHttp.onreadystatechange = function(){
  if(xmlHttp.readyState == 4){
    wootCache = htmlDom_html2dom(xmlHttp.responseText);
    document.getElementById('woottable').innerHTML = wootCache.getElementById('table');
  }
}[/code] and my browser returns [quote]Error: wootCache.getElementById is not a function Source File: ajax.js Line: 49[/quote] All I want is to not have to call a PHP page 3 times in 3 different ways to get 3 different bits of information, when I should be able to separate the page into 3 sections with HTML tags and parse it. Any thoughts? I've hit a wall, considering I never picked up JS too well; I prefer server-side scripting. Please help!