kjtocool

  1. You're not wrong (for typical connections): garbage collection will clean it up. There are, however, some performance benefits to closing it explicitly; you regain the memory sooner, among other minor things. The main reason I'd advocate closing it manually is simply that it's good practice. In this instance, the call to close the connection is actually redundant. It doesn't NEED to be done; it was just infuriating me that I didn't know HOW to do it.
  2. Thank you!!! I don't think I would have found that on my own.
  3. The only way I've found to do this is to unset the connection variable after I close it, like so:

         $this->connection->close();
         unset($this->connection);

     That allows me to check isset() and correctly know whether the object needs closing. I would prefer not to do that. I want to do something like:

         $this->connection->close();

         if (/* some test to see if it's open */) {
             $this->connection->close();
         } else {
             // Already closed, don't need to do anything
         }
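     A minimal sketch of the close-and-unset pattern above, wrapped in a hypothetical `Db` class (the class name, constructor, and connection details are assumptions for illustration, not code from this thread):

```php
<?php
// Sketch only: assumes a wrapper class that owns a mysqli connection.
class Db
{
    private $connection;

    public function __construct($host, $user, $pass, $dbname)
    {
        $this->connection = new mysqli($host, $user, $pass, $dbname);
    }

    public function close()
    {
        if (isset($this->connection)) {
            $this->connection->close();
            unset($this->connection); // makes isset() a reliable "is open" test
        }
    }

    public function __destruct()
    {
        $this->close(); // safe even if close() was already called manually
    }
}
```

     Because unset() removes the property entirely, the destructor's isset() check never touches an already-closed mysqli object, which avoids the "Couldn't fetch mysqli" error.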
  4. That thread is the same question, but the user in that thread decided to simply close the connection manually. It was never answered HOW you can test whether a connection is open. I am attempting to always close the connection in my destructor, but if it was already closed, I'm running into an error. I understand how PHP cleanup works, but I still want to close it manually in the destructor if possible, as there are some benefits. To do this, I need to know how to test it, and I can't figure it out. I even tried:

         $this->connection->ping();

     But that too throws the same error:

         mysqli::ping() [mysqli.ping]: Couldn't fetch mysqli
  5. I have a class I use to handle my various DB objects. I want to close any connections, if needed, in my destructor. I want to do this even though I already know the connection has been closed previously. What I am looking for is a way to test the $connection variable I have to see if it's in need of closing. I can't figure out how to do this:

         if (isset($this->connection)) {
             echo "is set";
         }

         if ($this->connection) {
             echo "here";
         }

     The above outputs both "is set" and "here", because both tests say the already-closed connection is still there. Can anyone offer any help?
  6. I do, but not as recent as I'd like; I'd lose about 100,000 records worth of data. I don't think it's that bad, however. Even though the table is locked, I think I still have read access and should be able to make a full dump of the DB if need be. The issue seems to be an insert statement that hung, locked the table, and left the keys all messed up. I've decided to try repairing the table via the command line. Unfortunately, the command

         myisamchk --recover --quick phpbb_posts.MYI

     can't be used, because it fills the /tmp directory to 100% capacity and then hangs as it waits for more space. As far as I know, the only way to grant that directory more space would be to reformat, ugh. I'm instead trying to repair the table with

         myisamchk --safe-recover --quick phpbb_posts.MYI

     which uses memory rather than the tmp directory. Unfortunately it's also incredibly slow, so I'm waiting as it runs; hopefully, if it completes, my issue will be fixed.
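     For what it's worth, myisamchk also accepts a --tmpdir option, which may sidestep the full /tmp partition without reformatting. The paths below are assumptions (adjust to your datadir), and the server should be stopped, or the table flushed and locked, before touching the .MYI files directly:

```shell
# Assumed paths: point myisamchk's sort files at any partition
# with enough free space instead of /tmp.
myisamchk --recover --quick \
  --tmpdir=/home/worldofk/tmp \
  /var/lib/mysql/worldofk_phpbb3/phpbb_posts.MYI
```

     The --recover path sorts via temporary files, so redirecting tmpdir keeps the faster repair mode usable even when /tmp is small.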
  7. I have a large phpBB forum; the phpbb_posts table crashed late last night. When I run a check on the table, I get:

         worldofk_phpbb3.phpbb_posts  check  warning  Table is marked as crashed and last repair failed
         worldofk_phpbb3.phpbb_posts  check  warning  1 client is using or hasn't closed the table prope...
         worldofk_phpbb3.phpbb_posts  check  warning  Size of indexfile is: 1847600128  Should be: 1...
         worldofk_phpbb3.phpbb_posts  check  warning  Size of datafile is: 858198988  Should be: 84...
         worldofk_phpbb3.phpbb_posts  check  error    Found 1469781 keys of 1084641
         worldofk_phpbb3.phpbb_posts  check  error    Corrupt

     Any attempt to repair the table causes Apache (httpd) to crash. Executing UNLOCK TABLES succeeds, but does not clear the lock on phpbb_posts. Restarting the MySQL service does not clear the lock. Restarting the server did not clear the lock. I'm out of ideas; any help is greatly appreciated.
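     Since the table still seems readable, one cautious sequence (the database and table names are taken from the post; the account and file names are assumptions) is to back up first and then attempt a server-side repair:

```shell
# 1. Dump whatever is still readable before attempting any repair.
mysqldump -u root -p worldofk_phpbb3 phpbb_posts > phpbb_posts_backup.sql

# 2. Try a server-side repair, equivalent to REPAIR TABLE from SQL.
mysqlcheck -u root -p --repair worldofk_phpbb3 phpbb_posts
```

     Having the dump means that even if the repair makes things worse, the table can be dropped and re-imported from the backup.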
  8. At this point, no, and only because I know there is an issue with my current indexes. I figure it's more important to get the current indexes working before I start trying to optimize them further. As is, since all the keys are disabled, even if I had added that index, it would run into the same issue: a cardinality of None, so it would never be used. Assuming enabling the keys does fix my index/cardinality issue, I may then consider adding the index you suggested to test out the possible gain.
  9. I think I found my issue! I ran:

         SHOW INDEXES FROM phpbb_posts;

     which showed me that my keys were all disabled. I then ran:

         ALTER TABLE phpbb_posts ENABLE KEYS;

     That query is currently running, and my guess is it will take quite a while, as it has to rebuild all the indexes. But assuming it works, that SHOULD solve my issue. I wish there were more visibility into the keys being turned off!
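     The full sequence from this post, plus a hedged way to confirm the rebuild worked afterwards (the ANALYZE and EXPLAIN statements are my own illustration, not from the thread):

```sql
-- Disabled MyISAM keys show 'disabled' in the Comment column.
SHOW INDEX FROM phpbb_posts;

-- Rebuild the disabled indexes (slow on ~1.5 million rows).
ALTER TABLE phpbb_posts ENABLE KEYS;

-- Refresh statistics, then check that cardinalities are no longer
-- None and that EXPLAIN now picks an index instead of a full scan.
ANALYZE TABLE phpbb_posts;
EXPLAIN SELECT post_id FROM phpbb_posts WHERE topic_id = 51544;
```
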
  10. Yep, that's my problem then. The cardinalities are all wrong. My selects aren't able to discount rows based on forum_id (which would limit searches from 1.5 million rows to no more than 420,000 on the largest forum). If I can figure out why the cardinalities of those additional indexes aren't correct, or how to rebuild them, I'll solve the issue. Any ideas? An ANALYZE should fix this issue, but it's not in my case.
  11. I tried an OPTIMIZE; it didn't affect the cardinality, nor did it change the results of the EXPLAIN. From what I know of cardinalities, they surely shouldn't be None for every index other than the primary key, and I'm sure that's one of the reasons it's forced to search all rows. I tried optimizing, repairing, and analyzing the table. Everything came back looking normal, but no change was seen in the cardinalities. I'm interested in the idea of table stats, but through a quick Google search I wasn't able to find anything I understood that could help. If anyone knows more about table stats or analytical stats, and any relevant command that would shed some light on what is going on here, I would really appreciate it.
  12. I have a 1.5 million row table; it happens to be the posts table for a large forum. Each query that hits this table takes a long time (3 to 10 seconds). The table has various keys:

         Keyname        Type      Cardinality  Field
         PRIMARY        PRIMARY   1430198      post_id
         forum_id       INDEX     None         forum_id
         topic_id       INDEX     None         topic_id
         poster_ip      INDEX     None         poster_ip
         poster_id      INDEX     None         poster_id
         post_approved  INDEX     None         post_approved
         tid_post_time  INDEX     None         topic_id, post_time
         post_subject   FULLTEXT  None         post_subject
         post_text      FULLTEXT  None         post_text
         post_content   FULLTEXT  None         post_subject, post_text

      Doing an EXPLAIN on a few of the slow queries showed me that possible_keys, key, key_len, and ref are all NULL. I am guessing, but am fairly certain, that the lack of a key on a 1.5 million row table is what's causing the issue. I can't imagine searching all 1.5 million posts each time is a very efficient way to go about things. One of the slow queries:

         SELECT post_id, topic_id, forum_id
         FROM phpbb_posts
         WHERE topic_id = 51544

      returns three rows and takes an average of about three and a half seconds. EXPLAIN on the query above returns:

         id  select_type  table        type  possible_keys  key   key_len  ref   rows     Extra
         1   SIMPLE       phpbb_posts  ALL   NULL           NULL  NULL     NULL  1430559  Using where

      The things that jump out at me are that A) the rows searched are about 1.5 million, and B) there are no indexes used, so it has to search ALL 1.5 million rows. As this forum continues to grow, my understanding is it's going to get slower and slower. Unless I'm mistaken, the fact that it must search ALL post records is the reason for the slow response. I am hoping that A) someone can confirm this, and B) someone can tell me if there is anything I can do to speed up queries hitting the phpbb_posts table.
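      If the existing topic_id index were healthy, the slow query above is exactly what it is built for. Going a step further, a hypothetical covering index (the index name and column order are my own suggestion, not from this thread) could answer it without touching the table rows at all:

```sql
-- Hypothetical covering index: WHERE topic_id = ? can then return
-- post_id and forum_id directly from the index entries.
ALTER TABLE phpbb_posts
  ADD INDEX topic_forum_post (topic_id, forum_id, post_id);
```
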
  13. I just ran the following:

         SHOW VARIABLES LIKE 'max_allowed_packet';

      which returned:

         max_allowed_packet  33554432

      I then tried an insert directly from phpMyAdmin, and it now works! Unfortunately, the error still persists on the forum, and now I'm even more confused as to how to fix it. I really appreciate anyone who helps out; I'm losing my mind over here.
  14. Heh, I went ahead and figured out how to add the following to my .my.cnf file:

         [mysqld]
         max_allowed_packet=32M

      It didn't seem to do anything. Any other ideas?
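      One likely reason this had no effect: a .my.cnf in your own home directory is read by your client programs, not by a mysqld running under another account (typical on shared hosting). If you have a sufficiently privileged account, the variable can be set at runtime instead; the value here mirrors the 32M above:

```sql
-- Requires the SUPER privilege; affects new connections only.
SET GLOBAL max_allowed_packet = 33554432;  -- 32M

-- Verify the running value:
SHOW VARIABLES LIKE 'max_allowed_packet';
```
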
  15. Well, I got SSH access, now it's just a matter of finding the commands to accomplish what I want. Any suggestions?