nogginj

Members
  • Posts

    24
  • Joined

  • Last visited

    Never

Profile Information

  • Gender: Not Telling

nogginj's Achievements

Newbie (1/5)

Reputation: 0

  1. I've done this using subqueries, say ... SELECT u.user AS user, (SELECT COUNT(*) FROM hits h WHERE h.user = u.user) AS hits FROM users u ORDER BY hits
  2. Thank you for the handy pointer, my queries work perfectly now. Now a bit of discussion, is this somehow an 'expensive' call to make? I mean is it going to severely slow down performance if the 'not in' list is huge?
  3. NOT IN sounds like something I should read more about, thanks. -j
  4. I have a videos table and a users table. I also have a table called videoViews which just links users to videos they have watched. I want to choose a random entry from the videos table that the user has not viewed. Currently, I do a join vid->videoViews and get multiple entries, with the video entry repeated for each time a user viewed it. I cannot do a 'WHERE vidViews.user != user', because there are multiple entries, so it might still return a particular video just joined to someone else's view. So how can I exclude ALL instances of a video if ONE instance of the video matches my criteria? Am I using the wrong kind of join, or is there some function in MySQL I am not familiar with?
  5. SOLVED: Seems like this was the issue! Thank you very much. I was also able to restructure yet again to reduce the number of joins. Thanks, y'all.
  6. Apologies for the many replies... am I missing an edit button? I only see 'Modify' immediately after I post. Anyway, LEFT JOIN is MUCH faster. Why is that? Is my query still too complex? It takes about 5 seconds to execute, which feels too long to me.
  7. So how many JOINs is too many? Suggestions above have led me to write one gnarly SQL statement covering a bunch of tables and aliasing them to get the results I need. This does provide nice results on small sets, but seems to be breaking down on any larger set. I've got:

     SELECT g.groupID, gs.statusID AS status, g.groupTitle, g.groupCreation, g.featured, g.protocol, g.etc, g.etCetera, g.more,
            p1.photographer AS principal, p1.title AS title1, p1.score AS score1, c1.cat AS cat1, p1.origin AS origin1, p1.pID AS pID1, p1.thumb AS thumb1, p1.embed AS embed1, p1.tags AS tags1, p1.link AS link1, p1.timestamp AS timestamp1,
            p2.photographer AS secondary, p2.title AS title2, p2.score AS score2, c2.cat AS cat2, p2.origin AS origin2, p2.pID AS pID2, p2.thumb AS thumb2, p2.embed AS embed2, p2.tags AS tags2, p2.link AS link2, p2.timestamp AS timestamp2
     FROM groups g
     JOIN group_status gs ON g.groupID = gs.groupID
     JOIN photos p1 ON g.photoID1 = p1.photoID
     JOIN cats_photos cp1 ON p1.photoID = cp1.photoID
     JOIN categories c1 ON cp1.catID = c1.catID
     JOIN photos p2 ON g.photoID2 = p2.photoID
     JOIN cats_photos cp2 ON p2.photoID = cp2.photoID
     JOIN categories c2 ON cp2.catID = c2.catID
     WHERE etc etc

     Is that too many JOINs? If so, what's the best way to pull a lot of data about a lot of items? In this case, I am thinking one call per group, even if there are 100 of them, would be faster. I understand I probably shouldn't put the particular photo IDs in the group table, but at this juncture I know there will only ever be two photos per group, one called 'principal' and the other 'secondary', so I can live with that. Thanks
  8. Just a note, phpMyAdmin really doesn't seem to like long SQL calls like that...
  9. I can appreciate this, and so I am just devoting my time now to finding the 'perfect' SQL call. Again, appreciated. I figured it was OK, but I think I will instead create a groups_photos table that relates the two tables. This, together with a photos_categories table and a groups_status table, makes for one HELL of a JOIN (more like 8 JOINs) and a whole lot of aliases, but if it's the smarter way, then I will continue down this road.
  10. More details: every photo also belongs to a 'group'. For simplicity, let's say each group can have one or two items. I have entries for photos, as described above, and entries for groups, each containing a unique group ID, a title for the group, and the two photoIDs associated with that group. I have since discovered I can do a JOIN with aliases on the same table to pull all the information into one row:

      SELECT * FROM groups g
      JOIN photos p1 ON g.photoID1 = p1.photoID
      JOIN photos p2 ON g.photoID2 = p2.photoID

      The problem is that these rows now contain two columns with the same name. I guess that is the heart of the problem: how to pull information about multiple entries from one table, MULTIPLE TIMES per page, in as few SQL queries as possible. If I run the code above, I cannot differentiate between the fields for each photo; i.e. there will be multiple columns named 'timestamp'. I could use aliases and be more specific about the data I retrieve (by changing the SELECT * to something specific), but this seems too inflexible and breaks down if there are a lot of fields to select for each photo. So instead I was going to just get a list of the groups and then run a single query per group. Would love to hear ideas on how to get around this.
  11. Howdy. I'm using MySQL client version 5.0.67. Is it OK to hit a database with lots of similar calls per page, on the order of 100s? I have a table of photographs, all indexed by a unique ID, with the information associated with each one, i.e. size, exposure, timestamp, etc. A sample table would be created with:

      CREATE TABLE `photos` (
        `photoID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `exposure` TINYINT NOT NULL,
        `timestamp` TIMESTAMP NOT NULL
      ) ENGINE = MYISAM;

      I am trying to display a list of the photos and their associated information. I have written a function to pull the data associated with a photo given an ID. The SQL query it runs is:

      SELECT * FROM photos WHERE photos.photoID = '001'

      If each page displays the information for 100 photos, is it reasonable to run this function for each photo repeatedly, assuming I open the DB connection only once per page view? I realise I could write a function that gets multiple entries from the DB, but in actuality my problem is a lot more complicated than this, and I really just want to know if it's OK to run this function something like 100 times per page. Thank you, would appreciate a little light shed on what is / isn't OK. -J
  12. That is a great idea about the gmailer. I solved the problem... it was a line in php.ini that needed to set my reply-to or from address: sendmail_path = /usr/sbin/sendmail -t -i -faddress@domain.com That line had to be set with the -f plus the address. Thanks to those that helped.
  13. Also: in $percent = mph/$speeds*100; that mph should probably be $mph?
  14. Thanks. I have contacted our providers to see what they can help with. I have also tried other mail options like phpmailer to no avail, so I think it is a server issue.
  15. Howdy. I have a mail script that sends out basic messages, nothing fancy, no attachments, just HTML messages. The script works fine to services like Gmail, but the emails are not making it at all to addresses like .edu or other non-free email providers. Is there a spam filter thing I am missing? I assumed that the Gmail spam filter was a good test, and I can get through that no problem, but I cannot get an email to my .edu address. Headers are as follows (EMAIL_FROM_ADDR is just a constant):

      $headers = 'From: '.EMAIL_FROM_ADDR."\r\n" .
                 "MIME-Version: 1.0\n" .
                 "Content-type: text/html; charset=iso-8859-1". "\r\n" .
                 'X-Mailer: PHP/' . phpversion();

      Thank yall. -j
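Post 1's per-user count via a subquery can be sketched with SQLite (the users/hits schema here is an assumption based on the post). The key detail is that the subquery must be correlated with the outer row via h.user = u.user; an uncorrelated SELECT COUNT(...) FROM hits gives every user the same grand total:

```python
import sqlite3

# In-memory database with a hypothetical users/hits schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user TEXT PRIMARY KEY);
    CREATE TABLE hits (user TEXT);
    INSERT INTO users VALUES ('alice'), ('bob');
    INSERT INTO hits VALUES ('alice'), ('alice'), ('bob');
""")

# Correlated subquery: counted rows are restricted to the outer user.
rows = conn.execute("""
    SELECT u.user,
           (SELECT COUNT(*) FROM hits h WHERE h.user = u.user) AS hits
    FROM users u
    ORDER BY hits DESC
""").fetchall()
print(rows)  # [('alice', 2), ('bob', 1)]
```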
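For the "videos the user has not viewed" question in post 4, the standard pattern is an anti-join: LEFT JOIN only that user's views, then keep the rows where the join found nothing. A minimal SQLite sketch with assumed table and column names (MySQL would use ORDER BY RAND() instead of RANDOM()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE videos (videoID INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE videoViews (user TEXT, videoID INTEGER);
    INSERT INTO videos VALUES (1, 'cats'), (2, 'dogs'), (3, 'birds');
    INSERT INTO videoViews VALUES ('jay', 1), ('jay', 3), ('kim', 2);
""")

# LEFT JOIN only this user's views; WHERE vv.videoID IS NULL keeps
# exactly the videos this user has never viewed, no matter how many
# other users have viewed them.
rows = conn.execute("""
    SELECT v.videoID, v.title
    FROM videos v
    LEFT JOIN videoViews vv
           ON vv.videoID = v.videoID AND vv.user = ?
    WHERE vv.videoID IS NULL
    ORDER BY RANDOM()
    LIMIT 1
""", ("jay",)).fetchall()
print(rows)  # [(2, 'dogs')] is the only video jay has not viewed
```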
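Post 10's duplicate-column problem is usually solved by aliasing each joined copy's columns (p1.title AS title1, and so on), which is what the big query in post 7 does. A small SQLite sketch with an assumed two-photo group schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photos (photoID INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE groups (groupID INTEGER PRIMARY KEY,
                         photoID1 INTEGER, photoID2 INTEGER);
    INSERT INTO photos VALUES (10, 'sunrise'), (11, 'sunset');
    INSERT INTO groups VALUES (1, 10, 11);
""")

# Aliasing each copy's columns gives every field a unique name, so the
# row is unambiguous even though both copies come from `photos`.
row = conn.execute("""
    SELECT g.groupID,
           p1.title AS title1,
           p2.title AS title2
    FROM groups g
    JOIN photos p1 ON g.photoID1 = p1.photoID
    JOIN photos p2 ON g.photoID2 = p2.photoID
""").fetchone()
print(row)  # (1, 'sunrise', 'sunset')
```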
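For post 11's 100-queries-per-page question, the usual alternative is one query with WHERE photoID IN (...) covering every photo on the page, so the database is hit once instead of 100 times. A sketch in SQLite (schema trimmed from the CREATE TABLE in the post; `ts` stands in for the timestamp column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photos (photoID INTEGER PRIMARY KEY,
                         exposure INTEGER, ts TEXT);
    INSERT INTO photos VALUES (1, 125, '2009-01-01'),
                              (2, 250, '2009-01-02'),
                              (3, 500, '2009-01-03');
""")

wanted = [1, 3]  # the 100 IDs for this page would go here

# One round trip fetches every photo on the page, instead of one
# SELECT per photoID.
placeholders = ",".join("?" * len(wanted))
rows = conn.execute(
    f"SELECT photoID, exposure FROM photos "
    f"WHERE photoID IN ({placeholders})",
    wanted,
).fetchall()
print(sorted(rows))  # [(1, 125), (3, 500)]
```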
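One thing worth checking in post 15's headers: they mix "\n" and "\r\n" line endings, and stricter receiving servers can treat inconsistently terminated headers as malformed or spam-like. In the PHP script the fix is to end every header line with "\r\n"; as an illustration of the principle, letting a mail library assemble the headers guarantees they are well-formed (Python here, and the addresses are hypothetical):

```python
from email.message import EmailMessage

# Build a message whose headers are guaranteed to be consistently
# formatted, instead of concatenating strings with mixed line endings.
msg = EmailMessage()
msg["From"] = "sender@example.com"   # hypothetical address
msg["To"] = "someone@example.edu"    # hypothetical address
msg["Subject"] = "Test"
msg.set_content("<p>Hello</p>", subtype="html")

headers = dict(msg.items())
print(headers["Content-Type"])  # a text/html content type with charset
```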