Everything posted by gizmola

  1. Let me straighten you out, Don. I have posted over three thousand helpful posts in this forum over a period of years. I also wrote some tutorials here that were read over 30k times. On top of that, even though I am a pro LAMP developer, on top of everything else, I have donated several hundred dollars to the forum to help keep it running. You, on the other hand, are new, basically clueless, have contributed nothing to the forum other than a handful of questions, AND CLEARLY HAVEN'T READ THE RULES OF THE FORUM. Notice my emphasis. See, when you come to a community, and that community has rules, it's considered common courtesy to spend a little time reading the rules of the community, which you obviously didn't bother to do. Despite this fact, I took your gigantic blob of unformatted code, formatted it so I could see where the loops were, and figured out your problem. I told you what that problem was. Instead of acting upon the information you were given, your reply was what it was. So apparently you don't require the help of experts like me -- you know better than we do what your problems are. If that's the case, by all means, please leave the forum POST HASTE and go somewhere else. If you'd like, I can disable your account for you, to help you on your way.
  2. Ok, so the query I gave you does work:

     SELECT underlord_date, COUNT(*) AS Total
     FROM underlord
     WHERE underlord_date > CURDATE()
       AND userid != $userid
     GROUP BY underlord_date
     HAVING Total < 15
     ORDER BY underlord_date

     The problem is -- and this is not a problem with the query, but rather a basic SQL fundamental -- that if you execute a query and there are no results, you get an empty set. So when you start out and your table is empty, no matter what query is executed, you will get an empty result set. You will also get an empty result set in this case if you have no rows inserted for a particular day, and this then becomes a chicken-and-egg issue. So I considered a few different approaches. This is the simplest one I could come up with. Break the thinking into 2 different queries:

     1. Establish dates that you CAN NOT schedule this person for (it's EITHER full, OR the user is already scheduled on that day):

     SELECT underlord_date FROM underlord WHERE userid = $userid
     UNION
     SELECT underlord_date FROM underlord
     WHERE underlord_date > CURDATE()
     GROUP BY underlord_date
     HAVING COUNT(*) > 15
     ORDER BY underlord_date

     2. Get the next schedule date, and using #1, look for the first date you can find that isn't blocked by the results of query #1 above:

     SELECT DATE_ADD(CURDATE(), INTERVAL +1 DAY) AS nextday;

     So the 2nd query will get you the first date that COULD be valid. Using PHP you can cycle through dates using something like this:

     echo date('Y-m-d', strtotime("+1 day"));

     I'll leave it to you to research how to convert back and forth between PHP dates and mysql dates. The basic pseudocode, however, would be:

     1. Get the first available date, either from mysql or PHP.
     2. Query for dates that are unavailable for this USER (see UNION query above).
     3. Starting with the PHP date, check the list of unavailable dates, and return the first available date -- INSERT NEW ROW. Repeat with subsequent days.

     One thing this strategy also protects against: if for some reason a person got scheduled for some future date, they will not be scheduled twice simply because you were looping through a range of dates that didn't take into account the potential for gaps. I don't know if this was likely, but it was always a possibility with the original algorithm.
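     A minimal sketch of that pseudocode in PHP, assuming the old mysql_* extension (which the rest of this thread uses), an already-open connection, and the underlord table described above:

     // Build a lookup of blocked dates from the UNION query above.
     $blocked = array();
     $sql = "SELECT underlord_date FROM underlord WHERE userid = $userid
             UNION
             SELECT underlord_date FROM underlord
             WHERE underlord_date > CURDATE()
             GROUP BY underlord_date
             HAVING COUNT(*) > 15";
     $result = mysql_query($sql);
     while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
         $blocked[$row[0]] = true;  // keyed by 'YYYY-MM-DD' for fast lookup
     }

     // Walk forward one day at a time until we hit a date that isn't blocked.
     $offset = 1;
     do {
         $candidate = date('Y-m-d', strtotime("+$offset day"));
         $offset++;
     } while (isset($blocked[$candidate]));

     mysql_query("INSERT INTO underlord (userid, underlord_date)
                  VALUES ($userid, '$candidate')");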
  3. Just going back to the original post -- again, you seem to be questioning things as if your version of PHP is doing something odd, when in fact, based on your code, things are working exactly as designed. I already advised you previously not to change the $_POST. There's no point in attempting to sanitize the entire $_POST with mysql_real_escape_string(), as that function is specifically designed to be used only when you are sure you are going to insert a STRING into mysql. If your form has other data types, there's no reason to be escaping them in advance. Regardless, best as I can tell, your complaint is that mysql_real_escape_string() doesn't take a whole array for your convenience. It simply doesn't. My feedback to you, respectfully, is that your original post:

     - Didn't have any clear questions in it

     You'll find you get much better answers to questions you might have if you ask clear questions. For example: barely a question here, and worst of all, you didn't provide any example output for us to see. We aren't mind readers.
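     For reference, a short sketch of escaping values individually at query-build time, which is where mysql_real_escape_string() belongs (the column and field names here are hypothetical):

     // Escape the string value; cast the numeric one instead of escaping it.
     $name = mysql_real_escape_string($_POST['name']);
     $age  = (int) $_POST['age'];
     $sql  = "INSERT INTO users (name, age) VALUES ('$name', $age)";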
  4. Of course the intranet people will have an internal IP range from NAT. The script can determine this using $_SERVER['REMOTE_ADDR']. So the membership code simply needs a stub out there that lets people in. Of course this assumes that there's no other code setting session variables, which there probably is. So even though the login code could be short-circuited, whatever determines that a person is in "logged in" state probably still needs to be set up. Since we've got no code to work with here, there's not much more to be said on this thread unless the OP returns.
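     Just to illustrate the idea, a sketch of such a stub, assuming a 192.168.x.x internal range and a hypothetical $_SESSION['logged_in'] flag (both would need to match the OP's actual setup):

     // Auto-login requests that originate from the internal NAT range.
     $ip = $_SERVER['REMOTE_ADDR'];
     if (strpos($ip, '192.168.') === 0) {  // assumed internal range
         session_start();
         $_SESSION['logged_in'] = true;    // whatever flag the real login code sets
     }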
  5. Looks good. Seems like you got the gist of things --- Kudos for reading up on the examples. Looks like you're on your way. To check for the userid, just add in WHERE underlord_date > CURDATE() AND userid != $userid. So obviously the $userid is a variable your script needs to supply.
  6. I would agree that your code looks right. Are you 100% sure you have the right public key and private key in the scripts, and those keys match the domain you signed up for at recaptcha? I tried out your page, and verified that it is rejecting at the recaptcha.
  7. I haven't forgotten about you, but I'm really busy with work right now. When I get a free moment, I'll give you some queries. You can establish the initial date with a GROUP BY on underlord_date, a WHERE on underlord_date > CURDATE(), a HAVING on COUNT(*), and an ORDER BY underlord_date. That's not the exact sql but basically what you need. The first row in your result set, which you can get via LIMIT, will be the date you should begin with your scheduling. From there, my opinion is that you should query each day as your scheduling run is going, so you can ensure that there is still an available slot AND that the person isn't already scheduled for that day. Since you have the initial day established, doing this in a loop isn't difficult. The way you were doing it was bound, in my opinion, to have a problem, due to the assumptions inherent in your initial approach.
  8. Well, I guess it's magic --- OR you could look carefully at your code, consider what I said, realize that you have the insert query INSIDE the foreach block, and that is the reason you do the query for every image that is uploaded. Move it outside the block and you will find it is only done once. OR not.
  9. You can think of wget as a command line browser that will get one page. It has numerous options to deal with the idiosyncrasies of website security, cookies, user agents, etc., but basically you run it as wget target_url.

     [david@penny ~]$ wget --help
     GNU Wget 1.10.2 (Red Hat modified), a non-interactive network retriever.
     Usage: wget [OPTION]... [url]...

     Mandatory arguments to long options are mandatory for short options too.

     Startup:
       -V,  --version           display the version of Wget and exit.
       -h,  --help              print this help.
       -b,  --background        go to background after startup.
       -e,  --execute=COMMAND   execute a `.wgetrc'-style command.

     Logging and input file:
       -o,  --output-file=FILE    log messages to FILE.
       -a,  --append-output=FILE  append messages to FILE.
       -d,  --debug               print lots of debugging information.
       -q,  --quiet               quiet (no output).
       -v,  --verbose             be verbose (this is the default).
       -nv, --no-verbose          turn off verboseness, without being quiet.
       -i,  --input-file=FILE     download URLs found in FILE.
       -F,  --force-html          treat input file as HTML.
       -B,  --base=URL            prepends URL to relative links in -F -i file.

     Download:
       -t,  --tries=NUMBER            set number of retries to NUMBER (0 unlimits).
            --retry-connrefused       retry even if connection is refused.
       -O,  --output-document=FILE    write documents to FILE.
       -nc, --no-clobber              skip downloads that would download to existing files.
       -c,  --continue                resume getting a partially-downloaded file.
            --progress=TYPE           select progress gauge type.
       -N,  --timestamping            don't re-retrieve files unless newer than local.
       -S,  --server-response         print server response.
            --spider                  don't download anything.
       -T,  --timeout=SECONDS         set all timeout values to SECONDS.
            --dns-timeout=SECS        set the DNS lookup timeout to SECS.
            --connect-timeout=SECS    set the connect timeout to SECS.
            --read-timeout=SECS       set the read timeout to SECS.
       -w,  --wait=SECONDS            wait SECONDS between retrievals.
            --waitretry=SECONDS       wait 1..SECONDS between retries of a retrieval.
            --random-wait             wait from 0...2*WAIT secs between retrievals.
       -Y,  --proxy                   explicitly turn on proxy.
            --no-proxy                explicitly turn off proxy.
       -Q,  --quota=NUMBER            set retrieval quota to NUMBER.
            --bind-address=ADDRESS    bind to ADDRESS (hostname or IP) on local host.
            --limit-rate=RATE         limit download rate to RATE.
            --no-dns-cache            disable caching DNS lookups.
            --restrict-file-names=OS  restrict chars in file names to ones OS allows.
            --ignore-case             ignore case when matching files/directories.
       -4,  --inet4-only              connect only to IPv4 addresses.
       -6,  --inet6-only              connect only to IPv6 addresses.
            --prefer-family=FAMILY    connect first to addresses of specified family,
                                      one of IPv6, IPv4, or none.
            --user=USER               set both ftp and http user to USER.
            --password=PASS           set both ftp and http password to PASS.

     Directories:
       -nd, --no-directories           don't create directories.
       -x,  --force-directories        force creation of directories.
       -nH, --no-host-directories      don't create host directories.
            --protocol-directories     use protocol name in directories.
       -P,  --directory-prefix=PREFIX  save files to PREFIX/...
            --cut-dirs=NUMBER          ignore NUMBER remote directory components.

     HTTP options:
            --http-user=USER        set http user to USER.
            --http-password=PASS    set http password to PASS.
            --no-cache              disallow server-cached data.
       -E,  --html-extension        save HTML documents with `.html' extension.
            --ignore-length         ignore `Content-Length' header field.
            --header=STRING         insert STRING among the headers.
            --proxy-user=USER       set USER as proxy username.
            --proxy-password=PASS   set PASS as proxy password.
            --referer=URL           include `Referer: URL' header in HTTP request.
            --save-headers          save the HTTP headers to file.
       -U,  --user-agent=AGENT      identify as AGENT instead of Wget/VERSION.
            --no-http-keep-alive    disable HTTP keep-alive (persistent connections).
            --no-cookies            don't use cookies.
            --load-cookies=FILE     load cookies from FILE before session.
            --save-cookies=FILE     save cookies to FILE after session.
            --keep-session-cookies  load and save session (non-permanent) cookies.
            --post-data=STRING      use the POST method; send STRING as the data.
            --post-file=FILE        use the POST method; send contents of FILE.
            --no-content-disposition  don't honor Content-Disposition header.

     HTTPS (SSL/TLS) options:
            --secure-protocol=PR      choose secure protocol, one of auto, SSLv2,
                                      SSLv3, and TLSv1.
            --no-check-certificate    don't validate the server's certificate.
            --certificate=FILE        client certificate file.
            --certificate-type=TYPE   client certificate type, PEM or DER.
            --private-key=FILE        private key file.
            --private-key-type=TYPE   private key type, PEM or DER.
            --ca-certificate=FILE     file with the bundle of CA's.
            --ca-directory=DIR        directory where hash list of CA's is stored.
            --random-file=FILE        file with random data for seeding the SSL PRNG.
            --egd-file=FILE           file naming the EGD socket with random data.

     FTP options:
            --ftp-user=USER         set ftp user to USER.
            --ftp-password=PASS     set ftp password to PASS.
            --no-remove-listing     don't remove `.listing' files.
            --no-glob               turn off FTP file name globbing.
            --no-passive-ftp        disable the "passive" transfer mode.
            --retr-symlinks         when recursing, get linked-to files (not dir).
            --preserve-permissions  preserve remote file permissions.

     Recursive download:
       -r,  --recursive          specify recursive download.
       -l,  --level=NUMBER       maximum recursion depth (inf or 0 for infinite).
            --delete-after       delete files locally after downloading them.
       -k,  --convert-links      make links in downloaded HTML point to local files.
       -K,  --backup-converted   before converting file X, back up as X.orig.
       -m,  --mirror             shortcut for -N -r -l inf --no-remove-listing.
       -p,  --page-requisites    get all images, etc. needed to display HTML page.
            --strict-comments    turn on strict (SGML) handling of HTML comments.

     Recursive accept/reject:
       -A,  --accept=LIST               comma-separated list of accepted extensions.
       -R,  --reject=LIST               comma-separated list of rejected extensions.
       -D,  --domains=LIST              comma-separated list of accepted domains.
            --exclude-domains=LIST      comma-separated list of rejected domains.
            --follow-ftp                follow FTP links from HTML documents.
            --follow-tags=LIST          comma-separated list of followed HTML tags.
            --ignore-tags=LIST          comma-separated list of ignored HTML tags.
       -H,  --span-hosts                go to foreign hosts when recursive.
       -L,  --relative                  follow relative links only.
       -I,  --include-directories=LIST  list of allowed directories.
       -X,  --exclude-directories=LIST  list of excluded directories.
       -np, --no-parent                 don't ascend to the parent directory.
  10. Why are you trying to mess with future times? Simply set the last_activity to be NOW(). Then all you need to do is decide what the threshold of time is for you to consider the users to be Online / Idle / Offline. Your $difference should be $time - $usertime. This will be a value in seconds. It's up to you to decide how many seconds in the past activity indicates that they are Online vs. Idle vs. Offline. Just to raise the question -- so if I don't post, I'm not "online"? Seems to me that online/offline is usually about being logged in to a system.
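      A minimal sketch of that threshold check -- the 300- and 900-second cutoffs here are assumed values you'd tune to taste:

      $difference = $time - $usertime;  // seconds since last activity
      if ($difference < 300) {          // active within the last 5 minutes
          $status = 'Online';
      } elseif ($difference < 900) {    // active within the last 15 minutes
          $status = 'Idle';
      } else {
          $status = 'Offline';
      }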
  11. Basically yes, that code is about right, although Mchl makes a really good point about the key to the table. If the table has a primary key like an auto_increment id, then you're best off using that to identify the row you want to update.
  12. You should read the manual for that library then. I'm sure it has examples. At the very least I'd expect something like: $DataSet = new pchart; pchart should be whatever the name of the pchart class is, and the class definition also needs to be require_once()'d into the script prior to instantiating the object.
  13. Why are you messing with the $_POST? That's a superglobal. You should consider it read-only data. What is your question?
  14. Well, $DataSet is an object of some class that we don't know. Has it been instantiated using new? The error is telling you that at that point $DataSet isn't an object.
  15. Ok, so first off, in regards to doing it procedurally using 2 queries and some PHP code --- yes, absolutely. There is nothing automatic about that, of course, but it's certainly possible. If you do go that way, then as an aside, make sure you initialize your $newsize variable to zero before you use it. Currently your thinking isn't quite right. You should not get more than one row back, so there's no reason to fetch in a loop when there should only be one row that matches a mailID. Regardless of that, you need an inner foreach loop on the $row variable. Also you should specify MYSQL_NUM or you'll get double the columns in the array, because you get both a numerically indexed array and an associative array with the key being the column name. You only need one or the other here, so I suggest you just specify the numeric one. At minimum your code should be something like this:

      $newsize = 0;
      while ($row = mysql_fetch_array($messresult, MYSQL_NUM)) {
          foreach ($row as $val) {
              $newsize += strlen($val);
          }
      }
  16. Oh yes, hehe, the headers need that trailing semicolon. Sometimes you miss the forest for the trees.
  17. Take a look at symfony. Symfony was designed to be a clone of Ruby on Rails, and in fact has CRUD generation. You may have to do some mapping of the tables if you are using mysql with MyISAM tables, because MyISAM doesn't support constraints, and there is no way for the object/relational mapping engine to infer your relationships. With that said, symfony has the capability to do exactly what you want --- generate a whole usable system based on existing tables. http://www.symfony-project.org/ Watch their demo screencast to see this all in action.
  18. INSERT is for new rows. You should not have a WHERE clause. It looks like what you want to do is an UPDATE. The syntax is totally different: UPDATE site_clients SET client_account_number = 1234 WHERE .... etc.
  19. Yes, we got what you want to do. Is this your code? How is it that you can't understand what you've written? Here's probably where you have an issue -- up at the top you do this foreach loop on each file:

      foreach ($_FILES['image']['name'] as $number => $file) {

      It's difficult to tell because the indentation of the code blocks is all over the place and basically a mess, but your INSERT query is probably inside this foreach loop, so you are doing the insert for each picture. Figure out what loops and blocks are appropriate.
  20. So when you say you did the file size comparison, you used some tool to check the header that is being sent, and that value matches the size? Make sure there's no empty space at the bottom of the script, before the end tag, or better yet, remove the end tag (?>) entirely from the script, and try that. Also, since you're using apache 2, how about a test where you omit the Content-Length header entirely?
  21. Start with a database structure. These tables would support your application. Once you understand these, you should be able to write some PHP code that utilizes them to create a set of scripts that would support a page like your example, both in emitting the survey and in recording the responses/answers when the form is filled out (see the sketch after the table definitions).

      CREATE TABLE question (
          question_id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
          description VARCHAR(60),
          PRIMARY KEY (question_id)
      );

      CREATE TABLE choice (
          question_id INTEGER UNSIGNED NOT NULL,
          choice_id TINYINT NOT NULL,
          description VARCHAR(255),
          PRIMARY KEY (question_id, choice_id)
      );

      CREATE TABLE survey (
          survey_id VARCHAR(40) NOT NULL,
          taken TIMESTAMP,
          PRIMARY KEY (survey_id)
      );

      CREATE TABLE surveyAnswer (
          survey_id VARCHAR(40) NOT NULL,
          question_id INTEGER UNSIGNED NOT NULL,
          choice_id TINYINT NOT NULL,
          PRIMARY KEY (survey_id, question_id, choice_id)
      );
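      To make the recording side concrete, here's a rough sketch of saving a submitted survey into those tables. It assumes the form posts its answers as answer[question_id] = choice_id and that a mysql connection is already open; the field names are hypothetical:

      // Hypothetical form handler: one surveyAnswer row per answered question.
      $survey_id = md5(uniqid(mt_rand(), true));  // any unique token up to 40 chars works
      mysql_query("INSERT INTO survey (survey_id) VALUES ('$survey_id')");
      foreach ($_POST['answer'] as $question_id => $choice_id) {
          $question_id = (int) $question_id;  // cast to guard against bad input
          $choice_id   = (int) $choice_id;
          mysql_query("INSERT INTO surveyAnswer (survey_id, question_id, choice_id)
                       VALUES ('$survey_id', $question_id, $choice_id)");
      }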
  22. Yes, LENGTH() will give you the size in bytes. The rest of what I wrote you still holds true. If you want the message size to include the multiple columns, simply get LENGTH() for all of them and add them up in the trigger.
  23. That code most certainly won't work, and you're really off track. for loops are designed for simple iteration, and don't have any intelligence about dates. I think you're going about this the wrong way -- make your mysql database do the heavy lifting. Figure out the queries you need, which can be tested without php code. This is why we need the structure. Often you can do in a single query what it might take you blocks of code and multiple nested (and inefficient) queries to do when you are trying to approach things too procedurally.
  24. Sure, go with json if that works better for you -- it's certainly better from a bandwidth point of view, in my experience. When you json_decode() you'll get a big array, so it's just as efficient in getting to those couple of keys you need, so long as the structure doesn't change. Depending on the code, the xml version might be a bit more resilient in the case that twitter changes the structure significantly.

      Can you simply do a DESCRIBE on the table and paste that in here, so we can clearly see, with datatypes, what your structure is? Also, it seems you have this summary table, but what of the detail table(s) that store the data for each person and their followers? If I understand you correctly, what you want is an Account table with 1-many FollowerUpdate rows per Account. This is your transaction table. Based on your sampling period, you only want to insert a new FollowerUpdate row if the status id value has changed. Obviously the query to determine this, with an index, is trivial and will come back in milliseconds; however, you need to do this query for every follower. This is why I suggested the use of memcache, because you can add in a simple caching layer where you attempt to read from memcache first. The memcache key simply needs to be something like Follower ID . Status ID. You can store the date in there as well if you want. If the status hasn't changed, you will not need to query the mysql db, because you'll get the memcache cache hit. If it has changed, the key won't be found, so you know that you need to do an insert of a new FollowerUpdate row (see the sketch below).

      The FollowerUpdate table needs to have a timestamp column in there so you can do your summarization process later. Your summarization is then a simple matter of applying a date range to a GROUP BY query for whatever time period you need to summarize. You can then write those totals into your summary table. I'm not a big fan of denormalizing in the way you apparently have done, but I'll leave that issue be, as it's less important than the structures of your transaction data.
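      A rough sketch of that caching layer, assuming the pecl Memcache extension and hypothetical table/column names (followerUpdate, follower_id, status_id):

      // Only touch mysql when a follower's status id has actually changed.
      $memcache = new Memcache();
      $memcache->connect('localhost', 11211);

      $key = $follower_id . '.' . $status_id;  // Follower ID . Status ID
      if ($memcache->get($key) === false) {
          // Cache miss: this status is new or changed, so record it.
          mysql_query("INSERT INTO followerUpdate (follower_id, status_id, updated)
                       VALUES ($follower_id, $status_id, NOW())");
          $memcache->set($key, 1);
      }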