
bagnallc

Members
  • Posts

    40
  • Joined

  • Last visited

    Never

Profile Information

  • Gender
    Not Telling


  1. I know of the WITH ROLLUP feature, but I want to know whether it can be applied to all columns to produce totals for every possibility. For example:

     select col1, col2, count(*) as Sum from tb group by col1, col2 with rollup

     returns:

     col1.Value1, col2.Value1, Sum
     col1.Value1, col2.Value2, Sum
     col1.Value1, NULL, Sum
     col1.Value2, col2.Value1, Sum
     col1.Value2, col2.Value2, Sum
     col1.Value2, NULL, Sum
     NULL, NULL, Sum

     I want these additional rows to also be included as part of the query:

     NULL, col2.Value1, Sum
     NULL, col2.Value2, Sum

     Is it possible to return all this data using MySQL?
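WITH ROLLUP only adds subtotals along the declared grouping order, so it never produces rows where col1 is NULL but col2 is still grouped; the usual workaround is to UNION ALL an extra grouped query for the missing combination. A minimal sketch, using Python's sqlite3 as a stand-in for MySQL (table name and data are hypothetical; SQLite lacks ROLLUP, so every grouping level is spelled out, whereas in MySQL the first three SELECTs collapse into one `... GROUP BY col1, col2 WITH ROLLUP`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO tb VALUES (?, ?)",
                 [("a", "x"), ("a", "y"), ("b", "x")])

# The first three SELECTs reproduce what WITH ROLLUP gives you:
# (col1, col2) detail, per-col1 subtotals, and the grand total.
# The fourth SELECT adds the per-col2 subtotals the rollup skips.
rows = conn.execute("""
    SELECT col1, col2, COUNT(*) AS cnt FROM tb GROUP BY col1, col2
    UNION ALL
    SELECT col1, NULL, COUNT(*) FROM tb GROUP BY col1
    UNION ALL
    SELECT NULL, NULL, COUNT(*) FROM tb
    UNION ALL
    SELECT NULL, col2, COUNT(*) FROM tb GROUP BY col2
""").fetchall()
for r in rows:
    print(r)
```

The cost is one extra scan of the table per added grouping, which is usually acceptable for reporting queries.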
  2. Hi fenway, there are many kinds of thing I'm trying to query, but one example is to select a count of all members who have made the same selection on two consecutive occasions.
  3. What I mean is that several different people were gathering data, so when the information was loaded there were files from, say, 2003 ready before 2001. Anyway, do you feel the tables are worth rebuilding, and if I did, how would I write a self join on the timestamp? Appreciate your help.
  4. Yes, unfortunately this has been the only way I have found. I didn't think a timestamp would work, as the data was originally retrieved in batches and therefore not entered in the correct order. Perhaps I am wrong?
  5. By creating the additional table as described in the first post.
  6. Because there is no way of joining it to itself and ensuring accuracy that I can see. How could I tell MySQL which row of data is the previous record for any other row without the new table? The same principle applies to a lot of queries I now have to build, for example:

     - select members with the same selection on 4 consecutive occasions
     - select the most common characteristics of members with selection 1 on two of the last 3 occasions

     What adds to the confusion is that the same member could, and should, appear twice if, for example, he had made the same selection on 4 consecutive occasions more than once.

     There is another table called masterrecord which records general data, and I used this to create the prevrecord table:

     masterrecordid mediumint unsigned not null primary key
     masterdate date not null
     daybatch tinyint unsigned not null
     entrylevel tinyint unsigned not null
     owner mediumint unsigned not null

     This has about 150k rows. Again it is indexed and performs very well on its own, and also when combined with records. The only other option I can see is to do self joins across the two tables, using a combination of the masterdate fields to get the last occurrences, but I tried that previously and it took forever. Appreciate your time and help.
  7. Hi, I have a table which holds records of occurrences. It is very simple:

     records
     recordid varchar(20) not null primary key
     memberid mediumint unsigned not null
     selection tinyint unsigned not null
     characteristics tinyint unsigned not null

     There are about 1.5 million records and indexes on each of the columns. Querying this table is very fast, but where I am having a little trouble is when I want to compare the table to itself. An example of what I am trying to achieve is to list a memberid and whether he made the same selection twice in a row. The way I have done this is to create a new table:

     prevdata
     recordid varchar(20) not null primary key
     prevrecordid varchar(20)

     prevrecordid has a unique index on it and does contain NULL values (as sometimes there is no previous information). I then run the following query:

     select count(*)
     from records
     join prevdata on records.recordid = prevdata.recordid
     join records as records2 on prevdata.prevrecordid = records2.recordid
     where records.selection = records2.selection

     The query does run, but it takes well over a minute, which is no good for the environment it will be used in. Am I making a design mistake here?
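One alternative to hand-maintaining a prevdata table is a LAG() window function, which pairs each row with the member's previous row directly, assuming the rows carry some orderable column (recordid is used here as that assumption). This needs MySQL 8.0+ or SQLite 3.25+; the sketch below uses Python's sqlite3 with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE records (
    recordid  TEXT PRIMARY KEY,
    memberid  INTEGER NOT NULL,
    selection INTEGER NOT NULL
)""")
conn.executemany("INSERT INTO records VALUES (?, ?, ?)", [
    ("r1", 1, 5),  # member 1: selections 5, 5, 7 -> one consecutive repeat
    ("r2", 1, 5),
    ("r3", 1, 7),
    ("r4", 2, 3),  # member 2: selections 3, 4 -> no repeat
    ("r5", 2, 4),
])

# LAG() fetches the same member's previous selection (ordered by recordid),
# replacing the separate prevdata lookup table with a single pass.
count = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT selection,
               LAG(selection) OVER (PARTITION BY memberid
                                    ORDER BY recordid) AS prev_sel
        FROM records
    ) AS t
    WHERE selection = prev_sel
""").fetchone()[0]
print(count)  # 1
```

Whether this beats the prevdata design at 1.5 million rows depends on the indexes; the window version at least avoids the two extra joins.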
  8. Thanks. Is there any way of recording a piece of code, such as all of a while statement except the closing }, and then calling it whenever I need it rather than having to type it all the time?
  9. I am trying to write a page that relies on the same code being used several times but in differing contexts; it's basically several hundred different while statements which need to cross-reference each other. Can you please help me answer the following:

     1. Can I open a while statement within a function and then close it outside of the function?
     2. Can I use mysql_query to insert data into a database from within a function?
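On question 1: no language lets a control structure open inside a function and close outside it; the loop must be wrapped whole inside the function, and the part that varies passed in as a parameter. (On question 2: yes, database calls work fine inside functions.) A minimal sketch of the wrap-the-whole-loop pattern, shown in Python with hypothetical names rather than the poster's PHP:

```python
# The entire while loop lives inside one function; the varying loop
# body is supplied as a callable, so the same loop serves many contexts.
def process_rows(rows, handler):
    """Run the shared loop over every row, applying handler to each."""
    results = []
    i = 0
    while i < len(rows):
        results.append(handler(rows[i]))
        i += 1
    return results

# The same loop reused in two different contexts:
doubled = process_rows([1, 2, 3], lambda r: r * 2)
labels  = process_rows(["a", "b"], lambda r: r.upper())
print(doubled, labels)  # [2, 4, 6] ['A', 'B']
```

In PHP the equivalent would be an ordinary function (or an include file) containing the complete while statement, called wherever it is needed.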
  10. Hi all, I am starting a website which uses PHP to process database queries against MySQL. It works great and I'm very pleased with it. One thing I am concerned about, though, is what happens when usage on the site increases. At the moment I have it set up so that every time someone runs a query, my password etc. opens the database and then the query is processed. I don't really know if this is viable if 30 people all tried to search at the same time. Should I be setting something else up? Many thanks for your help. Chris
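For scale: 30 concurrent users is well within MySQL's defaults (max_connections defaults to 151 in modern versions). The usual improvement over reconnecting per query is to open one connection per request and reuse it for every query in that request; persistent connections or a pool take the same idea further. A minimal sketch of the reuse pattern in Python with sqlite3 (names hypothetical):

```python
import sqlite3

# Open the connection once, up front, instead of re-sending credentials
# and reconnecting for every individual query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (page TEXT)")

def record_hit(page):
    # Reuses the shared connection rather than opening a new one.
    conn.execute("INSERT INTO hits VALUES (?)", (page,))

# Several queries in one request, all over the same connection:
for p in ["home", "search", "home"]:
    record_hit(p)

total = conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
print(total)  # 3
```

In PHP the analogue is calling the connect function once at the top of the script and passing the handle around, rather than connecting inside every query helper.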
  11. Yes, with PHP. I feel there must be a way of saying "if the time to load the query exceeds a limit, then advise the user". I'm not too worried about a "your query is loading" screen, but I have seen them on travel sites etc.
  12. Hi, is there a way of doing the following?

      a) Display an error message if a MySQL query takes too long to run.
      b) Show a display screen while MySQL calculates a query.

      Many thanks, Chris
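For (a), the cleanest approach is to have the database abort the statement itself after a deadline and catch the resulting error in application code (MySQL 5.7+ offers a per-SELECT `MAX_EXECUTION_TIME` optimizer hint for this). A sketch of the same shape using Python's sqlite3, whose progress handler can interrupt a running statement (threshold and messages are hypothetical):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
deadline = [0.0]

def check_deadline():
    # Returning nonzero from the progress handler aborts the statement.
    return 1 if time.monotonic() > deadline[0] else 0

# Invoke the check every 1000 SQLite VM instructions.
conn.set_progress_handler(check_deadline, 1000)

def run_with_timeout(sql, seconds):
    deadline[0] = time.monotonic() + seconds
    try:
        return conn.execute(sql).fetchall(), None
    except sqlite3.OperationalError:
        return None, "Sorry, that query took too long - please narrow your search."

# A fast query succeeds...
rows, err = run_with_timeout("SELECT 1", 1.0)

# ...while a deliberately expensive one (triple cross join) is cut off.
slow_sql = """
    WITH RECURSIVE n(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM n LIMIT 3000)
    SELECT COUNT(*) FROM n a, n b, n c
"""
_, slow_err = run_with_timeout(slow_sql, 0.05)
print(rows, err, slow_err)
```

For (b), a loading screen is purely a front-end concern: render a "searching..." page (or spinner) immediately, run the query in the background, and swap in the results when they arrive.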
  13. I'm trying to figure out the best way to run a certain type of query and would appreciate advice. I have a main table with about a million records in it. The columns are reference, id, prev_id, 2ndprev_id. The reference column is a unique individual, and the other three columns are ids containing numeric values which relate to a second table with further information. An id will appear up to 50 times in a column. So, for example, three rows which have the same id could be:

      anghi 6245 4921 3215
      hibng 6245 1520 859
      kiljh 6245 4895 4441

      I have a primary key (unique) and indexes on:

      (id, prev_id, 2ndprev_id)
      (prev_id, 2ndprev_id)
      (2ndprev_id, id)

      The second table has twenty columns and about 100,000 records, and it is extremely fast when running queries on it. The table consists of a unique id and then various details about that id. For this example I am using one column, which is named type. An example of a possible query, and where I am having trouble optimizing, is, in English:

      COUNT RECORDS WHERE ID TYPE WAS 1, PREV_ID TYPE WAS 2 AND 2NDPREV_ID TYPE WAS 3

      I have thought of five ways of doing this and none are fast; none are using the multiple indexes I set up on the main table - they are all only picking up on the first part. In this example only 78 rows return, and it is taking a long, long time to give that answer.
OPTION 1

select count(*) from maintable
where id in (select id from secondtable where type=1)
and prev_id in (select id from secondtable where type=2)
and 2ndprev_id in (select id from secondtable where type=3)

OPTION 2

select count(*) from maintable
join secondtable as a on maintable.id=a.id
join secondtable as b on maintable.prev_id=b.id
join secondtable as c on maintable.2ndprev_id=c.id
where a.type=1 and b.type=2 and c.type=3

OPTION 3

select count(*) from maintable
join (select id from secondtable where type=1) as a on maintable.id=a.id
join (select id from secondtable where type=2) as b on maintable.prev_id=b.id
join (select id from secondtable where type=3) as c on maintable.2ndprev_id=c.id

OPTION 4

Do three separate queries and then build IN lists for the where clause. This works best in theory, but when there are 20,000 ids matching a criterion it causes the select query to collapse. The query would look like:

select count(*) from maintable
where id in ( huge number-range entered )
and prev_id in ( another number range )
and 2ndprev_id in ( yet another large range )

OPTION 5

The only other option I can think of is to have further columns in the main table, but going down this route I would have a main table with 63 columns (20 columns with id details, 20 with prev_id details and 20 with 2ndprev_id details). Surely this goes against every data normalization rule going?

I appreciate any feedback, advice or criticism. Many thanks, Chris
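Of the options above, Option 2 (one join per id column) is normally the right shape; the piece that makes it fast is an index on secondtable leading with the filter column, e.g. (type, id), so each join arm resolves "which ids have this type" from the index alone. A small demonstration of that form using Python's sqlite3 as a stand-in for MySQL (data and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE maintable (
        reference    TEXT PRIMARY KEY,
        id           INTEGER,
        prev_id      INTEGER,
        "2ndprev_id" INTEGER
    );
    CREATE TABLE secondtable (id INTEGER PRIMARY KEY, type INTEGER);
    -- Leading on type lets each join arm find its matching ids cheaply.
    CREATE INDEX idx_type_id ON secondtable (type, id);
""")
conn.executemany("INSERT INTO secondtable VALUES (?, ?)",
                 [(6245, 1), (4921, 2), (3215, 3), (1520, 2), (859, 3)])
conn.executemany("INSERT INTO maintable VALUES (?, ?, ?, ?)", [
    ("anghi", 6245, 4921, 3215),  # types 1, 2, 3 -> matches
    ("hibng", 6245, 1520, 859),   # types 1, 2, 3 -> matches
    ("kiljh", 6245, 3215, 859),   # types 1, 3, 3 -> prev_id wrong type
])

# Option 2 from the post: one join per id column, filtered by type.
count = conn.execute("""
    SELECT COUNT(*)
    FROM maintable m
    JOIN secondtable a ON m.id = a.id           AND a.type = 1
    JOIN secondtable b ON m.prev_id = b.id      AND b.type = 2
    JOIN secondtable c ON m."2ndprev_id" = c.id AND c.type = 3
""").fetchone()[0]
print(count)  # 2
```

A middle ground short of the 63-column Option 5 is to denormalize just the three type values into maintable and index (type, prev_type, 2ndprev_type); that trades three narrow redundant columns for an index that answers this count directly.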
  14. Cheers. I actually just got the other one working by using array_push($array, 1);, which adds a 1 on the end, so it does return the correct value. It's all a learning process for me. Thanks again.
