Everything posted by n1concepts

  1. Hi, What is the best MySQL function to find the 'end of week' DATE, similar to using the LAST_DAY() function to get the end-of-month date? SELECT LAST_DAY('2013-10-31') I'm asking as I'm not sure there is a built-in function for this purpose, or if I have to combine several date functions to get the result. Any insight appreciated - thx! Example: for any date, it should always yield the 'Y-m-d' of the following Saturday of that week. In this case, that would be '2013-10-19' for today, '2013-10-15'. Thx! For reference: SELECT LAST_DAY('2009-05-18');
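A sketch of one way to get this, relying on DAYOFWEEK()'s convention of 1 = Sunday through 7 = Saturday: add the days remaining until Saturday to the date itself.

-- End-of-week (Saturday) for the current date; 7 - DAYOFWEEK() is the
-- number of days left until Saturday.
SELECT DATE_ADD(CURDATE(), INTERVAL (7 - DAYOFWEEK(CURDATE())) DAY) AS end_of_week;

-- For an arbitrary date, e.g. '2013-10-15' (a Tuesday):
SELECT DATE_ADD('2013-10-15', INTERVAL (7 - DAYOFWEEK('2013-10-15')) DAY);  -- 2013-10-19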
  2. FYI: I think I figured it out - I will wrap that db field in '<noescape></noescape>' tags - hopefully that fixes the problem. But I welcome any suggestions, as I may be wrong - haven't tested yet...
  3. Hi, I have a URL where the vendor variable is actually $ltid, and based on its placement in the URL - see the example below - the '&lt' sequence is being converted to the 'less than' (<) sign. Is there a way I can keep this entity from being escaped so the correct variable is parsed in the link? Example: http://www.domain.com/system/cAPI.php?paid=114&ltid=121 Note: the '&lt' is the problem - I need to render that part 'as is'. Results: it parses to http://www.domain.com/system/cAPI.php?paid=114&<id=121 - the '&<' is the problem; I need the 'lt' to remain literal rather than being decoded to '<'. Any suggestions appreciated, and no, already asked - the vendor can't change $ltid on their end. Thx
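One generic fix worth trying, assuming the decoding happens when the URL is emitted into HTML (the browser reads the raw '&' in the href as the start of the &lt; entity) - encode the ampersands on output:

<?php
// A sketch: htmlspecialchars() turns '&' into '&amp;' so the browser does
// not read '&ltid' as the &lt; entity. The URL below is from the post.
$url = 'http://www.domain.com/system/cAPI.php?paid=114&ltid=121';
echo '<a href="' . htmlspecialchars($url, ENT_QUOTES) . '">affiliate link</a>';
// The HTML source then shows ...paid=114&amp;ltid=121, which the browser
// decodes back to &ltid=121 when the link is followed.
?>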
  4. I won't even ask how you came up with this expression so fast.... I'll just say, 'Thanks!' - much appreciated!
  5. BTW: I've broken down and am reviewing http://www.regular-expressions.info/tutorial.html again to figure it out, but any suggestions are surely appreciated....
  6. Hi, I've always struggled w/regex and need some guidance on how best to parse an XML file that will contain a series of (four) numbers, each between 1 and 99. Note: some of the entries will contain spaces while others will not - see the examples below: <body> <p>There may be several lines of text prior to the row of numbers; the set we need to capture is presented in later sentences/paragraphs</p> <p>Here is the 1st example of numbers we need to capture: 1 - 41 - 16 - 99</p> <p>In this 2nd example, the file may send the numbers: 1-41-16-99</p> <p>There could be a lot more text to follow after that set of numbers, but there will only be one set of the numbers in the file (never multiple).</p> </body> So I need to figure out an expression/function that will exclude all the other content and just grab the set (to later echo in a script or save to a MySQL db) as such: 1-41-16-99
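A minimal sketch of one approach, assuming the file always contains exactly one such run of four hyphen-separated numbers (the file name is a placeholder); note \d{1,2} technically allows 0-99, so add a range check if strict 1-99 validation matters:

<?php
// Match four 1-2 digit numbers separated by hyphens with optional spaces,
// then normalize to the hyphen-only form.
$xml = file_get_contents('feed.xml'); // placeholder file name
if (preg_match('/\b(\d{1,2})\s*-\s*(\d{1,2})\s*-\s*(\d{1,2})\s*-\s*(\d{1,2})\b/', $xml, $m)) {
    $numbers = $m[1] . '-' . $m[2] . '-' . $m[3] . '-' . $m[4];
    echo $numbers; // e.g. 1-41-16-99
}
?>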
  7. You could resolve the need for repeat MySQL queries by implementing memcache. Note: configure memcache to be checked first, with a function to test whether the data has passed a set timeframe - if the data is older than the set criteria, then you pull from the database. I have this running on a setup where I was having to query the db seven to twelve times per instance; now it only queries on the initial INSERT and/or an UPDATE. The rest of the time, the PHP script pulls data straight from memcache and MySQL is never touched - I even wrote a function to only open (and close) the db connection if/when a db query is required. As a result: where I was averaging over 70 open connections - handling around 25,000 HTTP requests to the PHP script - max connections is only 1 now, so this method works for me. Here's an example to review to come up w/your own solution. Example: http://net.tutsplus.com/tutorials/php/faster-php-mysql-websites-in-minutes Good luck!
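A minimal sketch of the cache-aside pattern described above, using the Memcached extension; the key name, TTL, and fetch_from_db() helper here are illustrative, not from the original setup:

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key  = 'stats:affiliate:42';      // placeholder cache key
$data = $mc->get($key);

if ($data === false) {             // cache miss: fall back to MySQL once
    $data = fetch_from_db(42);     // hypothetical helper that queries the db
    $mc->set($key, $data, 3600);   // cache for one hour
}
// Subsequent requests are served from memcache; MySQL is never touched.
?>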
  8. It would take too long to explain everything, and I feel the responses are filled with ego - not sure why. I just wanted to hear from someone that had experience w/Redis..... Disregard providing help. I will figure it out... Thx
  9. Hey mac_gyver, Yeah, I'm aware of all of that, but I (still) don't think you see what I'm getting at... There is no reason to perform repeat queries for the same info, so the objective is to cache that info, which is what I've read and confirmed from multiple authorities. Managing this in MySQL is just not the most efficient way to do it.... The application logic is what I'm looking to do w/Redis, as that data never changes, and serving it via Redis as opposed to querying MySQL just makes sense - not just from my conclusion, but from all the info I've gathered (including examples from Digg, Facebook, etc...). My question - here - was to see if anyone had actually worked w/it in this manner - that's all. As always: I appreciate your input (thx!)
  10. Hi, I have a situation where I need to develop code that can manage a constant and high volume of affiliate clicks and conversion stats (24/7). Currently, this setup is operating with PHP (frontend) and MySQL (data storage). Results: quickly seeing lag time due to the constant queries, updates, and/or inserts (back and forth to the db). Note: all of the queries are looking for the same unchanged info - actually, the info will NEVER change, so it's perfect for caching. ---- My question: Is this better suited for Redis - which can handle both READ/WRITE operations - or Memcache, which I understand as read-oriented caching? I know Redis is the answer, but I want to hear from someone that works w/Redis to know the following: is Redis reliable enough to not have to worry about lost data? I read that it's not 100%, but Redis clusters could be implemented to shard the data (is this a good idea)? Any insight and/or examples of affiliate or click script links to review appreciated. Thx!
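A minimal sketch of what the hot path could look like with the phpredis extension (key names and the MySQL helper are illustrative): clicks are counted with an atomic INCR, and the never-changing metadata is served cache-aside, so MySQL is only touched on a miss or for periodic persistence.

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Count a click atomically in Redis; no MySQL round trip on the hot path.
$redis->incr('clicks:affiliate:114');          // placeholder key

// Serve the unchanging affiliate info from Redis, falling back to MySQL
// only on a cache miss.
$meta = $redis->get('affiliate:114:meta');
if ($meta === false) {
    $meta = load_meta_from_mysql(114);         // hypothetical helper
    $redis->set('affiliate:114:meta', $meta);
}
?>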
  11. Found a few examples - this will do what I need: http://papermashup.com/using-memcache-with-php http://dev.mysql.com/doc/refman/5.1/en/ha-memcached-interfaces-php.html
  12. Q: Anybody got info showing how to set up caching (PHP scripts) to reduce having to perform MySQL queries on each HTTP request, given that a lot of the data is unchanged? I need to cache this info, which will eliminate about 80% of 'repeat' queries to a db that's under constant load. Any info to help define this process appreciated - thx!
  13. Never mind - I got it (Dah!) - thx

mysql> SELECT users_id,id,subscriptions_id FROM user_subscriptions
    -> where created IN (SELECT MAX(created) max_date FROM user_subscriptions GROUP BY users_id)
    -> ORDER BY users_id DESC;
+----------+-----+------------------+
| users_id | id  | subscriptions_id |
+----------+-----+------------------+
|      191 | 137 |                5 |
|      190 | 136 |                5 |
|      187 | 127 |                5 |
|      186 | 126 |                5 |
|      185 | 125 |                5 |
|      184 | 121 |                5 |
|      183 | 118 |                5 |
|      181 | 117 |                5 |
|      180 | 116 |                5 |
|      178 | 115 |                5 |
|      177 | 111 |                5 |
|      171 | 109 |                5 |
|      170 | 108 |                5 |
|      169 | 135 |                2 |
+----------+-----+------------------+
14 rows in set
  14. See table info - listing all records for the 'users_id' column:
------------------
mysql> SELECT id,users_id,subscriptions_id,created FROM user_subscriptions;
+-----+----------+------------------+---------------------+
| id  | users_id | subscriptions_id | created             |
+-----+----------+------------------+---------------------+
| 108 |      170 |                5 | 2013-06-04 14:29:59 |
| 109 |      171 |                5 | 2013-06-04 17:23:16 |
| 110 |      177 |                5 | 2013-06-13 17:24:01 |
| 111 |      177 |                5 | 2013-06-13 19:37:41 |
| 112 |      178 |                5 | 2013-06-13 19:46:04 |
| 113 |      178 |                1 | 2013-06-13 19:47:02 |
| 114 |      178 |                5 | 2013-06-13 20:32:01 |
| 115 |      178 |                5 | 2013-06-13 20:50:36 |
| 116 |      180 |                5 | 2013-06-14 22:45:00 |
| 117 |      181 |                5 | 2013-06-17 16:42:24 |
| 118 |      183 |                5 | 2013-06-17 16:51:25 |
| 120 |      184 |                5 | 2013-06-21 16:52:36 |
| 121 |      184 |                5 | 2013-06-21 16:57:39 |
| 122 |      169 |                5 | 2013-06-21 21:12:27 |
| 123 |      169 |                5 | 2013-06-24 14:02:52 |
| 124 |      169 |                5 | 2013-06-26 15:32:23 |
| 125 |      185 |                5 | 2013-06-27 14:28:10 |
| 126 |      186 |                5 | 2013-06-29 05:34:39 |
| 127 |      187 |                5 | 2013-07-01 22:15:42 |
| 128 |      169 |                5 | 2013-07-02 19:28:58 |
| 129 |      169 |                5 | 2013-07-08 13:27:39 |
| 130 |      169 |                5 | 2013-07-08 13:49:35 |
| 131 |      169 |                5 | 2013-07-08 13:51:08 |
| 132 |      169 |                5 | 2013-07-08 14:37:25 |
| 133 |      169 |                5 | 2013-07-08 20:02:56 |
| 134 |      169 |                1 | 2013-07-08 20:13:05 |
| 135 |      169 |                2 | 2013-07-08 20:25:38 |
| 136 |      190 |                5 | 2013-07-12 13:48:49 |
| 137 |      191 |                5 | 2013-07-12 14:49:06 |
+-----+----------+------------------+---------------------+
29 rows in set

Note: I'm trying to figure out the correct query that will list (only) the most recent entry per user - a total of 14 entries (you can see there are 14 individual 'users_id' values, shown below, to confirm the count that should be returned by the query):

mysql> select DISTINCT(users_id) from user_subscriptions;
+----------+
| users_id |
+----------+
|      170 |
|      171 |
|      177 |
|      178 |
|      180 |
|      181 |
|      183 |
|      184 |
|      169 |
|      185 |
|      186 |
|      187 |
|      190 |
|      191 |
+----------+
14 rows in set

Objective: Define a MySQL SELECT statement that will filter and display only the most recent record entry for each 'users_id', including subscriptions_id. I have been looking at using MAX() to filter and GROUPing on users_id, but I'm having issues w/that - the reason I'm asking for help (LOL!)

FYI: The statement below gets me the date info I need, but that's where I'm concerned - what if there are multiple entries? (Although there shouldn't be if there's a constraint on that column - I have to check.) And what if two dates are the same? This is my question, and the reason I'm thinking I need to filter/group on both the 'created' column and 'users_id'.

mysql> SELECT MAX(created) max_date FROM user_subscriptions GROUP BY users_id;
+---------------------+
| max_date            |
+---------------------+
| 2013-07-08 20:25:38 |
| 2013-06-04 14:29:59 |
| 2013-06-04 17:23:16 |
| 2013-06-13 19:37:41 |
| 2013-06-13 20:50:36 |
| 2013-06-14 22:45:00 |
| 2013-06-17 16:42:24 |
| 2013-06-17 16:51:25 |
| 2013-06-21 16:57:39 |
| 2013-06-27 14:28:10 |
| 2013-06-29 05:34:39 |
| 2013-07-01 22:15:42 |
| 2013-07-12 13:48:49 |
| 2013-07-12 14:49:06 |
+---------------------+
14 rows in set

I would appreciate some input to help me finish building this query to produce the unique rows based on the most recent entry (per 'users_id'). Thx!
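A sketch of the usual way to correlate the per-user maximum back to the full row (this assumes, per the concern above, that (users_id, created) pairs are unique; if two rows for the same user share a timestamp, both would be returned):

-- Join each row to its user's latest `created` value, so the filter is
-- correlated per users_id rather than matching any user's max date.
SELECT us.users_id, us.id, us.subscriptions_id, us.created
FROM user_subscriptions AS us
JOIN (
    SELECT users_id, MAX(created) AS max_created
    FROM user_subscriptions
    GROUP BY users_id
) AS latest
  ON  latest.users_id    = us.users_id
  AND latest.max_created = us.created
ORDER BY us.users_id DESC;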
  15. Thx for your reply, but I think you misunderstood my question. I appreciate the response, however - I got it worked out.... thx!
  16. FYI: I found a few articles on the subject: http://www.php.net/manual/en/features.persistent-connections.php http://stackoverflow.com/questions/9736188/mysql-persistent-connection-vs-connection-pooling To that, I don't have a problem w/max connections, as I'm closing them, and the average - even with thousands of hits to the db - is only 78, which is good considering the high volume of traffic. Still, I welcome comments if anyone has good insight.
  17. Can someone provide some stats and/or details regarding the impact of opening & closing 'multiple' MySQL connections - in particular within the same script that requires multiple queries and/or inserts or updates to satisfy its objective? I'm curious to see a benchmark comparing executing Open/Close per query as opposed to one 'persistent' connection to complete the entire query/insert operation, then closing the connection per request. Note: the thought behind opening & closing the connection for each was to allow an influx of requests (constantly hitting the db) ample time to process each respective query or insert/update. I know this is flawed thinking, but before making the necessary edits, I wanted to research the stats - if any are available - and review actual facts on why 'multiple connections' is a bad way to handle processing the data. BTW: the script is handling several thousand hits per hour (and each of those requests requires five to seven queries (searches) and three to four inserts, so imagine the load placed on the db - opening and closing connections along w/executing each process). Any info appreciated - thx!
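In the absence of published numbers, a rough micro-benchmark sketch like the following can put a figure on the connection overhead for a given server (host and credentials are placeholders; this is not a rigorous test):

<?php
$n = 1000;

// Variant A: open/query/close for every statement.
$t0 = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $db = mysqli_connect('127.0.0.1', 'user', 'pass', 'db');
    mysqli_query($db, 'SELECT 1');
    mysqli_close($db);
}
$perQuery = microtime(true) - $t0;

// Variant B: one connection reused for all statements.
$t1 = microtime(true);
$db = mysqli_connect('127.0.0.1', 'user', 'pass', 'db');
for ($i = 0; $i < $n; $i++) {
    mysqli_query($db, 'SELECT 1');
}
mysqli_close($db);
$oneConn = microtime(true) - $t1;

printf("open/close per query: %.3fs; single connection: %.3fs\n", $perQuery, $oneConn);
?>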
  18. I got it - thx for the input! WHERE YEAR(cdate) = YEAR(NOW()) AND MONTH(cdate) = MONTH(NOW()) AND (DAYOFWEEK(cdate) BETWEEN 1 AND 7)
  19. Yes, that column is indexed, and I see your point about using BETWEEN or IN to take advantage of the index, as opposed to calculating per row, which will result in a full scan. That leads to the next question: I need this WHERE clause to be dynamic, to always pull Sunday thru Saturday for the current week (so based on the current date). So I still need something like: WHERE MONTH(cdate) = CURDATE() AND (DAYOFWEEK(cdate) BETWEEN 1 AND 7) I know this is wrong, as it's not producing any results, but I'm using it to make the point. How do I go about: 1. grabbing the current date to extract the present year (as there are multiple years and I only need the current month and year); 2. from the current month and year, filtering on just the week "starting with Sunday" and "ending with Saturday"? That's the issue I'm still trying to figure out... thx! I do appreciate your insight, as this one is killing me...
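One sargable way to express that (a sketch: it computes the current week's Sunday once from CURDATE(), so an index on cdate can be range-scanned; DAYOFWEEK() returns 1 for Sunday):

-- Rows from this week's Sunday (inclusive) to next Sunday (exclusive).
SELECT rid, cdate
FROM records
WHERE cdate >= DATE_SUB(CURDATE(), INTERVAL (DAYOFWEEK(CURDATE()) - 1) DAY)
  AND cdate <  DATE_SUB(CURDATE(), INTERVAL (DAYOFWEEK(CURDATE()) - 1) DAY)
             + INTERVAL 7 DAY;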
  20. Can someone look at this query and tell me what I have wrong? It doesn't error, but it yields zero results when I know there are thousands of rows for the current year (2013) and month (July). mysql> select rid,cdate from records WHERE YEAR(cdate) = CURDATE() AND MONTH(cdate) = MONTH(CURDATE()) AND DAYOFWEEK(cdate) <= 7; Empty set mysql> Note: I need to match on the current month (of the present year) and pull all records for the present week (Sunday thru Saturday). So for this moment (7/8/13), that would be anything matching 7/7/13 thru 7/13/13 (once that time elapses, of course). Q: What am I doing wrong w/the above query?
  21. Gotcha - thanks for that insight (I was looking into persistent connections), but I was concerned whether holding any particular connection open - being there are countless queries and INSERTs/UPDATEs hitting the same script (database) nonstop - would create a problem. I'll beta test this to confirm, but it sounds like that will reduce the load, easily, by 25% if not more... Thanks for the reply - helps a lot (appreciate it!)
  22. Hi, I know what you are saying and asking - I'm dealing with similar issues myself. From a quick glance at your SQL statements and the EXPLAIN results, one thing that sticks out to me is not having any defined indexes. I would suggest - based on your actual queries - establishing indexes based on your WHERE clauses (look at the ON clauses too, which I assume are all primary/foreign keys). Reason: without those constraints/keys, the SQL query performs a full table scan - as shown in the EXPLAIN results (indexes NULL). Also, I see temp tables and filesorts being created b/c of the ORDER BY clause, etc... There might not be any way around that, and it's OK as long as indexes are set. Gotcha: be careful how much you rely on indexes, b/c if there are a lot of INSERTs, UPDATEs, or DELETEs, then all of those processes will take a hit - although your searches will benefit greatly. You basically have to figure out the priorities to decide which outweighs the other - this thinking assumes all READ/WRITE operations run on one db server; there's a whole other discussion if you're slaving or replicating db operations. Hope this, at least, gives you some insight on how to optimize your query. Start w/looking at your indexes.
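As a generic illustration (the table and column names here are made up, not from the thread): add an index matching the WHERE/ON columns, then re-run EXPLAIN and check that the key column is no longer NULL.

-- Composite index covering the filter column and the ORDER BY column.
ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created);

EXPLAIN
SELECT o.id, o.total
FROM orders AS o
WHERE o.customer_id = 42
ORDER BY o.created DESC;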
  23. Hi, Need some input on the pros and cons of having 'multiple' MySQL connections - opening and closing one per query, as required to process the entire <php script> operation - as opposed to opening 'one' connection, running that same script, THEN closing that connection before queuing the next one for processing.

Setup: I have a PHP script that reads and captures inbound (appended) variables from URLs and performs several checks (queries and inserts) to ensure the data sent in the link is valid and, if so, decide where to forward the requestor based on those variables read from the (inbound) URL. Note: there are seven individual MySQL queries along with four INSERTs that take place for each process. To add, there are between 1,500 and 8,500 HTTP hits to that script (sending in this information) per hour, and that count will gradually grow (expect times ten within the next year). --- Current performance - opening and closing the db connection for every individual query and/or insert - is working just fine and processing the information w/little to no latency (averaging four seconds total, which includes the actual redirect to external sites; the DB portion takes one to two seconds on average). Here's the current db connection (an example, of course):

function execute($query) {
    $db_handle = mysqli_connect("1.2.3.4", "db_user1", "pass123", "db_name");
    $res = mysqli_query($db_handle, $query);
    mysqli_close($db_handle);
    return $res;
}

Note: when the execute() function is called in the actual scripts, it immediately closes that db connection right after running that specific query or insert. Thus, multiple connections happen per process, as opposed to just opening the db (once) at the beginning of the script, then forcing a close at the very end once the entire set of operations within the script completes. The (original) thinking behind this logic was to try to keep 'queue' time on incoming requests to a minimum by processing bits of each request, in hopes of avoiding a backlog (waiting to be processed).

<?php
function execute($db_handle, $query) {
    // Connection now established once elsewhere and passed in:
    // $db_handle = mysqli_connect("1.2.3.4", "db_user123", "pass123", "db_name");
    $res = mysqli_query($db_handle, $query);
    // mysqli_close($db_handle); // closed once at end of script instead
    return $res;
}
?>

In writing this out to explain it, I just answered my own question (LOL!!!): this method still creates a queue, b/c processing portions of any one request - being there are several queries and inserts required to complete that entire request - still creates the buffering I want to avoid. Anyway, I'm thinking I need to move the db connection string to a static 'include' file to establish that connection once, comment out mysqli_close() in the above function (passing the open handle in, as shown), and execute that close at the very end of the actual scripts themselves. Objective: establish just ONE db connection for each record being processed, as opposed to seven or eight connections for the same queued entry. I see this as the logical approach, but I'd appreciate insight to get others' thoughts, being that the setup sees a (somewhat) high volume of traffic in short bursts. FYI: batch processing the entries - to take load off the db by saving entries to a file for later import - is not possible, b/c each record logged must be unique against all 'previous' records entered; every NEW record is checked against the existing records, so 'batch' processing is not an option - every record (on the initial hit to the script) must be read, processed, and logged to the db, all in real-time.

Again, everything works - currently - based on the current 'db connection' function shown above, but I'm looking to optimize this (reduce lag time, if it's better to establish just one db connection as opposed to multiple to process each individual request/record). I know it's extra load on the db, but I'm curious if this is best due to the high volume of requests coming in 24/7. Again, the question is: 'Is it best to establish just one db connection and hold it open until the entire script completes?' or 'Stay w/the current "Open/Close" method due to high volume?' What I'm looking for is input from those that have (actually) dealt w/this type of scenario - I know the 'book' answer but need real experience on issues encountered and why one choice was made over the other - thx! Let me know your thoughts.
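A minimal end-to-end sketch of that single-connection flow (credentials are the same placeholders used above):

<?php
// Open once at the top of the script (e.g. from a shared include file).
$db = mysqli_connect("1.2.3.4", "db_user1", "pass123", "db_name");

// Reuse the one handle for all seven queries and four inserts.
$res = mysqli_query($db, "SELECT 1");   // ...each query/insert in turn...

// Close once at the very end, after the whole request is processed.
mysqli_close($db);
?>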
  24. I need to pull the total count for a table, but only for rows timestamped within the current hour. For example: if it's 18:59 when the query is executed, then only records with a time logged within 18:00 to 18:59 should be counted, based on: select count(id) from leads; Note: I need some suggestions on a WHERE clause to filter on only that current hour - and the current date itself. For example: if cdate has records all from 2013-06-28 thru 2013-07-02, and the current time of the query is 18:59 (which is 6:59 pm), then only records with times between 18:00 and 18:59 for 2013-07-02 should show. Note: the date & time are in datetime format: 2013-07-02 18:47:24. What would be the WHERE clause to filter on just those rows between 18:00 and 18:59 for the current date - having that hour change (at the top of every hour, so only the CURRENT hour's set of records is always returned by the query)?
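A sketch of one way to express that rolling window: anchor both bounds to the start of the current hour so the filter rolls over automatically at the top of each hour, and keep cdate bare on the left so an index on it can still be used.

-- Count rows from the top of the current hour (inclusive) to the top of
-- the next hour (exclusive).
SELECT COUNT(id)
FROM leads
WHERE cdate >= DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00')
  AND cdate <  DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00') + INTERVAL 1 HOUR;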