
Everything posted by fry2010
-
Ok I got another one. I wish to log unique hits per page, and then some content on the site is going to change after x many hits. Would it be wise to use InnoDB to store the unique hits per page? Here is the table layout idea:

CREATE TABLE uniqueHits (
    pageId MEDIUMINT(8) UNSIGNED NOT NULL REFERENCES pageTable(pageId),
    uniqueUserIp CHAR(15) NOT NULL,
    PRIMARY KEY (pageId, uniqueUserIp)
) ENGINE=InnoDB;

CREATE TABLE pageTable (
    pageId MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
    changeAtHits SMALLINT(5) UNSIGNED NOT NULL DEFAULT 100,
    displayedContentVersion TINYINT(1) UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (pageId)
);

Each page load would have the following queries:

// Check whether this IP has already been counted for this page
$sql1 = "SELECT COUNT(*) FROM uniqueHits WHERE pageId = $pageId AND uniqueUserIp = '$uniqueUserIp' LIMIT 1";

// If no entry is found, insert a new entry
if ($rowCount == 0) {
    // Insert this new unique hit for the page
    $sql2 = "INSERT INTO uniqueHits (pageId, uniqueUserIp) VALUES ($pageId, '$uniqueUserIp')";

    // Now check to see if the number of unique hits has reached the change-content limit:
    $sql3 = "SELECT a.changeAtHits, COUNT(b.pageId) FROM pageTable AS a LEFT JOIN uniqueHits AS b ON a.pageId = b.pageId WHERE a.pageId = $pageId LIMIT 1";

    // These are example values of what is returned from the previous query
    $changeAtHits = 100;
    $uniqueHitsCount = 100;

    // Change the content displayed since it reached the target of 100 unique hits
    if ($uniqueHitsCount >= $changeAtHits) {
        $sql4 = "UPDATE pageTable SET displayedContentVersion = 2 WHERE pageId = $pageId";
    }
}

As you can see, for every page there are four queries being made just to log and update the unique hits, and this doesn't include the other queries used on the page. It also means there could be a sh*t load of records to go through to count the number of unique hits per page. I'm open to better ideas on going about this / the database schema.
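To be clear about how I picture those four statements running together on each page load, here is a rough sketch using PDO. The $pdo connection, using REMOTE_ADDR for the IP, and the GROUP BY are my own assumptions for the example, not working code from the site:

<?php
// Rough sketch of the per-page-load flow described above.
// $pdo and $pageId are assumed to exist already; everything else is illustrative.
$uniqueUserIp = $_SERVER['REMOTE_ADDR'];

// 1) Has this IP already been counted for this page?
$stmt = $pdo->prepare('SELECT COUNT(*) FROM uniqueHits WHERE pageId = ? AND uniqueUserIp = ?');
$stmt->execute([$pageId, $uniqueUserIp]);

if ((int) $stmt->fetchColumn() === 0) {
    // 2) Record the new unique hit.
    $insert = $pdo->prepare('INSERT INTO uniqueHits (pageId, uniqueUserIp) VALUES (?, ?)');
    $insert->execute([$pageId, $uniqueUserIp]);

    // 3) Compare the running total against the page's threshold.
    $check = $pdo->prepare(
        'SELECT a.changeAtHits, COUNT(b.pageId) AS hits
           FROM pageTable AS a
           LEFT JOIN uniqueHits AS b ON a.pageId = b.pageId
          WHERE a.pageId = ?
          GROUP BY a.pageId, a.changeAtHits'
    );
    $check->execute([$pageId]);
    $row = $check->fetch(PDO::FETCH_ASSOC);

    // 4) Swap the displayed content once the threshold is reached.
    if ($row && $row['hits'] >= $row['changeAtHits']) {
        $update = $pdo->prepare('UPDATE pageTable SET displayedContentVersion = 2 WHERE pageId = ?');
        $update->execute([$pageId]);
    }
}
?>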
-
Yeah that's great, that is how I have been designing my databases so far. I should probably do some benchmarking to figure out what's best for my particular set-up. Cheers guys, lots to think about...
-
It's not so much that, gizmola, I'm just looking at the options and best practices available. The reason I asked is because in some other threads and sources I have read that you should not, where possible, create lots of tables, but rather group them together. So, performance-wise, is it actually better to use one table for member data, even if some of that data will not likely be used by a high percentage, or to split that data into separate tables? Sorry for the questions, I'm just trying to nail it so I know for future also.
-
Ok.... so if there is a group of data that you will never need to separate when performing a request for multiple users, then it's a good idea to use serialization. However, if you plan to use a piece of that data separately, then don't. Have I got that right?
-
to requinix: Interesting, so I suppose by using a metadata method I could also store other pieces of data in this manner. Still trying to get my head around that method, but it looks pretty neat. Also, I keep forgetting about serialization. That seems like a good idea to use when there are many values for a similar property... But how come you don't suggest serializing age, sex, gender etc. as well? This leaves me with another question: does having a single row in a table offer better performance over several rows? I suppose what I'm asking is, does serialization really offer a performance improvement over just using separate rows of tinyint(1)? I suppose that may come down more to the types of query being performed... So if a whole bunch of data is selected at once, then serialize it. If not, then place that data in a separate row. Thanks for the great info too. to gizmola: Being brutally honest, I expect at best a few thousand members, so I guess I'll just stick with one table. Thanks.
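Just to check I have the serialization part straight, this is roughly what I picture doing (the field names here are made up for the example):

<?php
// Rough sketch: a group of profile fields that are always read together,
// packed into one column. Field names are made up for illustration.
$profileExtras = array(
    'newsletter' => true,
    'privacy'    => 'friends-only',
    'blog'       => 'http://example.com/blog',
);

// Store serialize($profileExtras) in a single TEXT column on the member row...
$packed = serialize($profileExtras);

// ...and unpack it again whenever the whole group is needed at once.
$unpacked = unserialize($packed);
echo $unpacked['privacy']; // friends-only
?>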
-
I have a couple of questions and thoughts, I wondered if anyone could give some advice and recommendations:

Question 1) In creating a member site I want lots of details about each member. However, I imagine that many of these options may not be used. So in terms of performance, is it better to have one table that holds ALL member data, or to split it into a couple of tables? The data I am thinking of is: facebook link, twitter link, website url, age, sex, gender, occupation, blog, privacy, newsletter subscription etc., then the usual member details like username, password etc. Should I combine this data or create separate tables? Sorry if this is too general.

Question 2) My understanding of MyISAM tables is that the table gets locked when it is being used, and this can lead to bottlenecks. Would it be a good idea, performance-wise for later on, to have two tables of the same data and then bind a table id to each user? Kind of how a multiple hard drive system might work on your computer. I would only intend to use this method for tables I believe will be used heavily. e.g.

create table `user` (
    userId int unsigned not null AUTO_INCREMENT,
    tableData tinyint(1) unsigned not null DEFAULT 1,
    etc....
);

create table `userProfile_table1` (
    twitter char(255) not null DEFAULT '',
    facebook etc....
);

create table `userProfile_table2` (
    twitter char(255) not null DEFAULT '',
    facebook etc....
);

At first I know what you will say: having two tables containing the same layout and data is a bad design. But both tables won't actually store the same data. They will store data for separate users. e.g. let's say I had 2000 members sign up. 1000 of those members will get their data listed in `userProfile_table1`. The other 1000 have their data listed in `userProfile_table2`. So now instead of all 2000 trying to access one table, it will be 1000 trying to access one table.

A problem I see, however, is that it will increase the load on the `user` table, since it now needs to find out which table the user has their data saved in. If this is not such a ridiculous idea and I haven't embarrassed myself, another thought would be: what if I worked out the optimum number of users at which it becomes worthwhile to split into a new table, then created a new table at that point? So eventually, if there were hundreds of thousands of members, there could be several tables.
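For question 2, this is roughly the kind of lookup I mean. It is purely hypothetical; $pdo, $userId and a userId column on the profile tables are assumed for the example:

<?php
// Hypothetical sketch of routing a member to one of the two profile tables.
// The `user` row records which table holds that member's profile data.
$stmt = $pdo->prepare('SELECT tableData FROM `user` WHERE userId = ?');
$stmt->execute([$userId]);
$tableNo = (int) $stmt->fetchColumn();              // 1 or 2

// This extra lookup on `user` is the added load I mentioned above.
$table   = 'userProfile_table' . $tableNo;          // userProfile_table1 or _table2
$profile = $pdo->prepare("SELECT twitter, facebook FROM `$table` WHERE userId = ?");
$profile->execute([$userId]);
$row = $profile->fetch(PDO::FETCH_ASSOC);
?>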
-
No, that's excellent thehippy. Plenty of great info there, cheers. Basically, by reducing the load on the database I specifically meant reducing the number of queries. I know that when you construct a database effectively and make efficient queries there is a lot of performance to be had. Since I am not a professional database architect, however, I thought perhaps I could use directories to store files, so that they could then be used in a caching process, rather than performing the same requests over and over which give the same results. In effect what I'm getting at is: perform a MySQL query once -> store the data in a cache file -> call that cache file whenever it's needed. It then led me to think: since I would be creating a lot of cache files, would that in fact lead to a worse performance gain than just sticking with the database in the first place? I know that many popular frameworks and platforms, such as WordPress, OpenCart, Drupal etc., use cached files. I suppose I should really just take a good look at how they have done it... Awesome answer too, I wish I had the time to be able to learn everything you just posted, as well as loads of other stuff to do with programming. I find I always try to do things as quickly as possible. It doesn't seem to get me very far though lol. Maybe if I make it someday I'll have the time to truly appreciate this answer.
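The kind of thing I have in mind is roughly this (the cache path, lifetime and query here are made-up values, just to show the idea):

<?php
// Rough sketch of "query once -> cache file -> reuse".
// $pdo is assumed; the path, TTL and query are made up for illustration.
function cachedQuery(PDO $pdo, $sql, $cacheFile, $ttl = 300)
{
    // Serve from the cache file while it is still fresh.
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return unserialize(file_get_contents($cacheFile));
    }

    // Otherwise hit the database once and write the result out for next time.
    $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    file_put_contents($cacheFile, serialize($rows), LOCK_EX);
    return $rows;
}

// e.g. $popular = cachedQuery($pdo, 'SELECT * FROM pageTable LIMIT 10', '/tmp/popular.cache');
?>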
-
ok, that makes life easier. I can always rely on you thorpe for an answer, thanks!
-
Basically what I'm looking to do is create a directory for each new member of my site, which will contain files relevant to them. Of course this could be done using the MySQL database instead, but I want to reduce the load on the database, so I am considering putting certain information (for example profile view unique clicks) into their own files. Good idea or is it pointless?
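Roughly what I am picturing (the base path, directory layout and file name are made up for the example):

<?php
// Rough sketch of a per-member directory holding small per-member files.
// $memberId is assumed; the path and file names are made up.
$baseDir   = '/var/www/memberdata';
$memberDir = $baseDir . '/' . (int) $memberId;   // one directory per member

if (!is_dir($memberDir)) {
    mkdir($memberDir, 0755, true);
}

// e.g. bump a "profile view" counter kept in a plain text file
$counterFile = $memberDir . '/profile_views.txt';
$views = is_file($counterFile) ? (int) file_get_contents($counterFile) : 0;
file_put_contents($counterFile, $views + 1, LOCK_EX);
?>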
-
I have always wondered about performance issues with having, say, hundreds or even thousands of files in a single directory and how it affects performance, specifically when using fopen(), fwrite() functions etc... Say I have a directory called '/temp/' and in that directory I have 10 files. I have another directory called '/temp2/' and in that directory I have 10000 files. Is there any significant difference in trying to open and/or edit a file in either situation?
-
ah cool. Thanks guys.
-
Thanks Will. Yeah I did try that out, but I felt it suited better to be below for some reason. Guess it's just personal preference. Makes more sense above though... Hi fredrikrob. Yes I did have a problem with my site, but that was a few days ago, so it should be working now. Can someone else tell me if they have the same problem please?
-
Thanks cags, yes I was SSH'd in as root, but I have found the problem now. I had mod_ssl set to on in two config files, which seemed to cause a conflict. I should have updated this thread, but forgot.
-
Hi, I have a VPS solution using the Parallels Plesk control panel. All services are running, except the httpd service will not start and it gives no errors. It simply states: failed. I have checked the httpd error files and get this:

error_log.txt =
[Sun Oct 02 04:10:57 2011] [notice] Digest: generating secret for digest authentication ...
[Sun Oct 02 04:10:57 2011] [notice] Digest: done
[Sun Oct 02 04:10:58 2011] [notice] mod_python: Creating 4 session mutexes based on 10 max processes and 0 max threads.
[Sun Oct 02 15:42:03 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)

error_log.1.txt =
[Sun Oct 02 04:10:57 2011] [notice] SIGHUP received. Attempting to restart
WARNING: MaxClients of 256 exceeds ServerLimit value of 10 servers, lowering MaxClients to 10. To increase, please see the ServerLimit directive.

I have then looked into the conf.d/swtune.conf file, since that seems to be where the MaxClients problem is occurring, and changed the values to this:

<IfModule prefork.c>
    StartServers 1
    MinSpareServers 1
    MaxSpareServers 5
    ServerLimit 255
    MaxClients 255
    MaxRequestsPerChild 4000
</IfModule>

There is also a worker.c section but I have not changed that. It looks like:

<IfModule worker.c>
    StartServers 1
    MaxClients 10
    MinSpareThreads 1
    MaxSpareThreads 4
    ThreadsPerChild 25
    MaxRequestsPerChild 0
</IfModule>

Btw, I only made this change AFTER the problem occurred.
-
Looks pretty packed out. Looks like something is happening with it, and everything seems to work ok, however I only tested for a few minutes. The navigation is pretty slick too. Is this really your site as well? It seems to be linked to begginertuts.com and that looks exactly the same. It almost seems like it's a copy, although maybe you created both sites? Either way I personally can't fault it, other than the fact I can't understand Danish lol.
-
I wasn't going to post this here, but it has turned out to look better than anything I have done before. I am just very bad at graphic design and layouts, so I just wanted some people to give pointers on how I could really finish it off. There is not a great deal of content at the moment. Also, the search and mail form do not work yet. I would appreciate any other ideas or ways to improve it. Check it out here: https://www.youramazingproducts.com Thanks.
-
You could also do something like: if the page request has come from that URL you don't want used, then you could set NOFOLLOW, NOINDEX in the robots meta tag. Something like:

<?php
$robots = 'FOLLOW,INDEX';
// $_SERVER['SERVER_NAME'] holds just the host name, without the http:// scheme
if ($_SERVER['SERVER_NAME'] == 'www.adifferentsite.example.com') {
    $robots = 'NOFOLLOW,NOINDEX';
}
?>
<meta name="robots" content="<?php echo $robots; ?>" />
-
Definitely use the canonical, because if someone posts a link to a page on your site and it gets rewritten by that other website, then Google will definitely have duplicate content for those pages. It's good for things like RSS feed links, Twitter, Facebook etc. But there would be no harm in using a 301 as well, not that I can see.
-
I have looked around everywhere but could not find decent answers to this. I want to count unique hits to certain pages. If I were to record these hits in MySQL on every page load, would this cause a strain on the server due to the numerous connections and table updates? Would it be better to store this kind of data in a text file?
-
Use Google Analytics, all your problems have been solved already with that. However, if you need to manipulate that data, then you could look at the Google Analytics API.
-
I know of this issue too. Make sure that you do the following: place

<link rel="canonical" href="<?php echo 'http://www.YOURWEBSITE.com' . $_SERVER['REQUEST_URI']; ?>" />

in the head section. This will tell search engines to rank based on the URL provided. Of course, do not link to the URL you do not wish to be indexed. As long as you do this, there should be no reason search engines will even find that URL.
-
How to match two different conditions but give same rewrite rule
fry2010 replied to fry2010's topic in Apache HTTP Server
excellent thanks cags and giz, finally got it now, and a better solution too. -
Oh sorry, ignore that. I didn't look at your rule properly. I am not much good with rewrite either, just thought I spotted something similar to my issue.
-
How to match two different conditions but give same rewrite rule
fry2010 replied to fry2010's topic in Apache HTTP Server
oh ok. It's just because I don't fully understand mod_rewrite. I will probably get stuck with it so might be back lol. -
It is because you are looking for the same URL each time, so the first one will execute and the second will not. Does it give an internal server error as well? I have a similar issue, look in the thread below this one or above.