
n1concepts

Members
  • Content Count

    189
  • Joined

  • Last visited

Community Reputation

0 Neutral

About n1concepts

  • Rank
    Advanced Member

Profile Information

  • Gender
    Male
  1. Hi Kicken, my apologies - I can see how that explanation is confusing, so let me provide the answer (it came to me right after I posted for help...) Answer: UNION - I just joined the two queries with a UNION clause to get the results I needed (it worked perfectly after some adjustments to each query). Yes, you are correct - I was after the 'member_id', which is the primary key in the table and is also the 'receiver_id' second key. Thx & problem solved! A hedged sketch of that UNION is below.
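A minimal sketch of the UNION approach, built from the two queries shown in the next post, with the astatus = 2 filter added to each branch; the exact adjustments I ended up with may differ slightly:

    -- Branch 1: members not yet assigned a receiver_id
    SELECT m.member_id
    FROM members AS m
    WHERE m.astatus = 2
      AND m.receiver_id IS NULL
    UNION
    -- Branch 2: receivers with fewer than 4 donors assigned
    SELECT m.receiver_id AS member_id
    FROM members AS m
    WHERE m.astatus = 2
      AND m.receiver_id IS NOT NULL
    GROUP BY m.receiver_id
    HAVING COUNT(m.receiver_id) < 4;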
  2. Hi, I need some help defining the correct SQL query to accomplish the following (VERSION: 10.0.38-MariaDB): select the 'member_id' based on the COUNT function using the 'receiver_id' column in the 'members' table. Conditions are: WHERE astatus = 2 for 'member_id' (this condition filters first, before the next condition is applied), and COUNT(receiver_id) is LESS than 4 or IS NULL. The goal is to list the member_id values that meet those two conditions. Now, I have two queries defined which give me the results I want, but I'm trying to combine those two into one query AND add the WHERE condition for astatus = 2. I need help with this part, as I can't see the logic to combine the two queries with an OR statement and also (for both) add the condition for astatus = 2. Any help surely appreciated - thx! Screenshot of the table excerpt below along with the queries:

    # Get count of donors assigned to a receiver_id
    SELECT m.receiver_id, COUNT(m.receiver_id) AS donor_count
    FROM members AS m
    WHERE (m.receiver_id IS NOT NULL)
    GROUP BY m.receiver_id
    HAVING COUNT(m.receiver_id) < 4
    LIMIT 1;

    # Find a member that's not yet assigned a receiver_id
    SELECT m.member_id
    FROM members AS m
    WHERE (m.receiver_id IS NULL)
    LIMIT 4;
  3. Yes, there was a problem - I stated the objectives, but I think those issues weren't understood (no worries & thanks for replying anyway). I went ahead & applied the changes that I thought would correct the issues and produce the expected results (and indeed that fixed them). I just removed the L flag on the 1st RewriteRule to allow the remaining rules to be parsed for a match, then I added that 1st rule again at the very end of the script, with the L flag, to stop processing should no match be found by the previous rules. Result? Working 100% now - thx again (my post was just to validate what I thought would fix the issue). PROBLEM SOLVED - thx! BTW: I like your post / reply... A sketch of that ordering is below.
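A hedged sketch of the rule ordering described above, assembled from the .htaccess contents in the next post; the actual final file may differ slightly:

    RewriteEngine on

    # HTTPS rule first, without L, so the .html rules below still get a chance to match
    RewriteCond %{HTTPS} off
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/dev/$1 [R=301]

    # Map old HTML file names to their new PHP equivalents
    RewriteRule ^contact-us.html$ contact-us.php [L]
    RewriteRule ^register.html$ register.php [L]
    RewriteRule ^welcome.html$ welcome.php [L]
    RewriteRule ^log-in.html$ log-in.php [L]
    RewriteRule ^log-out.html$ log-out.php [L]
    RewriteRule ^my-account.html$ my-account.php [L]

    # Same HTTPS rule repeated at the end, with L, to stop processing when nothing above matched
    RewriteCond %{HTTPS} off
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/dev/$1 [R=301,L]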
  4. Hi, I'm starting a rebuild of an existing website & need to 'force' HTTPS for a subfolder entitled 'dev', along with redirecting specific 'HTML' files to their (new) 'PHP' extensions. FYI: only requests to that subfolder are the focus of this particular .htaccess file - i.e. path/dev/.htaccess. I have an .htaccess file defined in the 'dev' subfolder with the content below, and it is forcing HTTP to HTTPS:

    RewriteEngine on
    RewriteCond %{HTTPS} off
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/dev/$1 [R=301,L]

    # The remaining rules below map old (HTML) file names to their (new) PHP file names found in the 'dev' folder
    RewriteRule ^contact-us.html$ contact-us.php [L]
    RewriteRule ^register.html$ register.php [L]
    RewriteRule ^welcome.html$ welcome.php [L]
    RewriteRule ^log-in.html$ log-in.php [L]
    RewriteRule ^log-out.html$ log-out.php [L]
    RewriteRule ^my-account.html$ my-account.php [L]

Note: I know there is one immediate error (the L flag stopping the remaining rewrites from firing, right?). However, before I make any more edits, I thought it best to ask here for someone's insight on how to accomplish the following. 1st, I need to force HTTP to HTTPS for all content going to that subfolder - i.e. http://domain/dev or http://domain/contact-us.html, etc. - and that appears to be working based on the 1st RewriteRule. However, I think that rule is the problem, so I'm asking: Q1: should I remove the L flag, and will that allow the remaining rules to be parsed for a match? Q2: if the answer to the 1st question is no, then please advise how best to modify the .htaccess file, specifically for the 'dev' subfolder where that .htaccess file resides (there are others in / and other subfolders which should remain 'as is'). Appreciate the input in advance. Craig
  5. FYI: I left out one important feature of what I'm thinking may be a more suitable db design - partitioning. Note: I'm thinking this addresses lag time / latency if a particular table grows large - i.e. to millions or even billions of records. Answer: by partitioning the table(s) by week, month, etc., we reduce search latency, limiting the amount of data that must be searched / sorted to serve the query. Again, the question is which is better based on the suggestions I've posted: 1. stay with the (individual) db's per client, or 2. compile all the client data into one MASTER db, then implement partitioning and ensure proper indexing on the tables to segment searches and keep load times to a minimum. Thanks again for your comments / suggestions! A partitioning sketch follows below.
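A hedged T-SQL sketch of the date-based partitioning idea above; the table name, columns, and boundary dates are made up for illustration:

    -- Monthly partition function and scheme (boundary dates are examples)
    CREATE PARTITION FUNCTION pfByMonth (date)
    AS RANGE RIGHT FOR VALUES ('2020-01-01', '2020-02-01', '2020-03-01');

    CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

    -- Hypothetical table partitioned on its date column, so queries scoped
    -- to a week or month only touch the relevant partitions
    CREATE TABLE client_activity (
        activity_id   bigint NOT NULL,
        client_id     int    NOT NULL,
        activity_date date   NOT NULL,
        CONSTRAINT pk_client_activity PRIMARY KEY (activity_id, activity_date)
    ) ON psByMonth (activity_date);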
  6. Hi, I have some questions based on an MS-SQL project I'm now a part of, to see if the current db structure is the most efficient - basically asking about db provisioning 101 / theory, as I'm getting push-back on recommended edits to improve the current db design based on 'expected' future growth.

Here's the current db structure for the application: there is one MS-SQL database server, running one instance, on which the application (currently) creates a new database for each (new) client that signs up for the service - all on that one db server. The current devs stated that they designed the db process in this manner so that each client's data would be completely separate from the other clients' data, making db administrative efforts less hectic & more efficient.

My view: I think there is a better way to address the db / data requirements - see below. I would like feedback on which approach is more efficient, short & long-term, based on resource requirements & load / latency.

Issue / Concern (given a scenario of, say, 1k or even 10k clients at some point, accumulating a total of around 30G of data over time): Q: Won't having that number of (individual) db's - all literally exact replicas of one another - actually create more (unnecessary) latency / load on the db server / resources than just having ONE 'master' database housing ALL the clients' data? (Of course, indexing is crucial to keep that db optimized to handle the total load for all client data; in my view, with proper indexing it's equivalent to having multiple db's - just a more efficient design.)

OR (an even better approach, based on the expected overall growth of data): distribute the pertinent data sets, based on their role / service, individually across separate database servers, so that load / latency issues are minimized for each db server itself. For example: all email marketing (results) data would be housed on the db1 server, whereas all client-specific data would be housed on the db2 server. Reason: the bulk email marketing application demands a tremendous amount of resources & disk space, so isolating those services / operations removes contention between the two db's (yes / no?). To hook the separate sets of data together when required (client or email data), the SQL queries would just JOIN / UNION data sets from the (separate) specified MS-SQL db servers to produce the results (JOIN / UNION is already the method the current BI queries use; I'm just suggesting the data be segmented onto different db servers based on their role / service to the application).

Note: what I want to get others' views on is this: Q: does it make sense to have multiple copies of the same (exact) db schema for client data vs. one 'master' db schema which can house ALL the data, so there is (1st) less db administrative work & (2nd) a reduced load on the one server itself? Note: I know storage pools can be implemented to expand disk space for the MS-SQL service, but that's another topic regarding 'limited' disk space issues (per server) as the data grows collectively.

I would appreciate feedback on those questions to get others' opinions based on their experience working with 'global' / distributed cloud-based applications & MS-SQL. I have some ideas (a schema sketch follows after this list):

1. Compile all the (current) individual client dbs into the one 'global' clients db, then implement a data warehouse (DW) to pull all data that's over seven days old into the DW, where I'll then be able to manage said data using a variety of 'external' resources - a MySQL Storage Area Network (SAN) solution & even Excel spreadsheets <for very old data, older than five years, to free disk space on the DW, etc...>

2. Implement a Redis cache / cluster to offload (default) queries from both the 'source' db servers & the DW, so the majority of the queries - being the most requested data sets - will (now) be in-memory for each client. Note: the Redis cache will NOT load a client's data sets until a member of that group has 'successfully' authenticated into the system.

3. (Still speaking of Redis) - data sets that have been sitting idle for a specified period of time (a short number of days) would be flushed from cache to free up memory. Note: the cluster will be defined so that there is ample redundancy, aside from the persistence of the DW, which will have 50G of disk space along with an external SAN solution housing another 50G - acting as source / destination for the Excel files along with the oldest sets of data within MySQL. Using data mgt services, the MS-SQL (DW) will pull any 'requested' data back into the DW from the SAN (destination) to serve the application dynamically.

Thus, this solution would make the current db setup more efficient & robust, able to expand both horizontally & vertically based on biz / data demands. OK, those are my thoughts on making the current db design more efficient, with the objective of thinking short & long-term based on the expected growth of data. Let me know your thoughts - suggestions / comments all welcome. Thx in advance for responses!
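A hedged sketch of the single 'master' database layout suggested above: one shared schema with a client_id column, indexed so each client's rows can be isolated efficiently. Table and column names are made up for illustration:

    -- One shared table instead of a per-client database copy
    CREATE TABLE campaign_results (
        result_id  bigint IDENTITY(1,1) PRIMARY KEY,
        client_id  int      NOT NULL,
        sent_at    datetime NOT NULL,
        status     tinyint  NOT NULL
    );

    -- Leading client_id column keeps each client's rows together in the index
    CREATE INDEX ix_campaign_results_client
        ON campaign_results (client_id, sent_at);

    -- Every query then filters on client_id instead of switching databases:
    -- SELECT * FROM campaign_results WHERE client_id = @client_id AND sent_at >= @since;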
  7. Hi & thanks for the response. Maybe my initial comments were a bit unclear, but you gave me the insight I needed to make this work. Yes, I've concluded I need MTA software to manage the MTA (pools of IPs), and this setup is not for a forum but for a 'bulk email server' which currently sends 1/2 million or more transactional emails monthly. I've figured it out - thanks for your response. brgds, Thounda
  8. Hi, can someone advise or provide an example PHP script where (based on a pool of five dedicated IP addresses I have assigned to a virtual private server) I can - via a MySQL table - assign one of those five IP addresses to each 'outgoing' email being sent using the mail function? I've searched the Net and found nothing that really explains how to do this, and no examples. I would appreciate any insight to help me figure out this solution. I know it's possible because apps like Interspire & PowerMTA do it, and you can assign the block of IPs via their app's GUI - just saying, I know it's possible. I found some info regarding Postfix and Exim but nothing in detail - the reason I'm asking for help here... Objective: I just want to rotate the IP addresses that are bound to that server as each 'outbound' email is sent from said server. thx, Craig
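A hedged PHP sketch of the MySQL-driven rotation part only; as the reply above concluded, actually binding the outbound connection to a given IP has to happen at the MTA level (Postfix/Exim/PowerMTA). The sender_ips table, its columns, and the header name are all made up for illustration:

    <?php
    $db = new mysqli('localhost', 'user', 'pass', 'mailer');

    // Pick the least-recently-used IP and mark it used, inside one
    // transaction so concurrent sends don't grab the same row.
    $db->query('START TRANSACTION');
    $row = $db->query(
        "SELECT ip_id, ip_address FROM sender_ips
         ORDER BY last_used_at ASC LIMIT 1 FOR UPDATE"
    )->fetch_assoc();

    $stmt = $db->prepare("UPDATE sender_ips SET last_used_at = NOW() WHERE ip_id = ?");
    $stmt->bind_param('i', $row['ip_id']);
    $stmt->execute();
    $db->commit();

    // Tag the message so the MTA (e.g. a Postfix transport/sender map)
    // can route it out through the chosen address.
    $to      = 'recipient@example.com';
    $subject = 'Test';
    $body    = 'Hello';
    mail($to, $subject, $body, "X-Outbound-IP: {$row['ip_address']}\r\n");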
  9. Thanks - I will implement it tomorrow morning and update the ticket with the result (appreciate your help!)
  10. Hi, I'm working on implementing the Payflow API on a PHP-based website and having issues with the payments processing at PayPal. Note: the PayPal API is using the Payflow class - that class can be found via this link: https://github.com/rcastera/Paypal-PayFlow-API-Wrapper-Class/blob/master/Class.PayFlow.php PayPal is now asking me to produce the actual API call - from the cURL, prior to that cURL process firing - so I need to somehow echo out that prepared URL statement from the 'public' function, which I can then save to a file or in a MySQL database table. I need some guidance to get that URL echoed or assigned to a local var from the OOP class named 'PayFlow' and the method named 'processTransaction' - see an excerpt of that class below (lines 609 thru 641 from the provided link):

    public function processTransaction() {
        // Uses the CURL library for php to establish a connection,
        // submit the post, and record the response.
        if(function_exists('curl_init') && extension_loaded('curl')) {
            $request = curl_init($this->getEnvironment()); // Initiate curl object
            curl_setopt($request, CURLOPT_HTTPHEADER, $this->getHeaders($this->NVP));
            curl_setopt($request, CURLOPT_HEADER, 1); // Set to 0 to eliminate header info from response
            curl_setopt($request, CURLOPT_RETURNTRANSFER, 1); // Returns response data instead of TRUE(1)
            curl_setopt($request, CURLOPT_TIMEOUT, 45); // times out after 45 secs
            curl_setopt($request, CURLOPT_FORBID_REUSE, TRUE); // forces closure of connection when done
            curl_setopt($request, CURLOPT_SSL_VERIFYPEER, FALSE); // Uncomment this line if you get no gateway response.
            curl_setopt($request, CURLOPT_POST, 1); // data sent as POST
            curl_setopt($request, CURLOPT_POSTFIELDS, $this->getNVP()); // Use HTTP POST to send the data
            $postResponse = curl_exec($request); // Execute curl post and store results in $post_response
            // Additional options may be required depending upon your server configuration
            // you can find documentation on curl options at http://www.php.net/curl_setopt
            curl_close($request); // close curl object

            // Get the response.
            $this->response = $postResponse;
            $this->response = $this->parseResults($this->response);

            if(isset($this->response['RESULT']) && $this->response['RESULT'] == 0) {
                return TRUE;
            }
            else {
                return FALSE;
            }
        }
        else {
            return FALSE;
        }
    }

Again, what I need is the ability to (somehow) capture $request and echo that out (return that value) outside the function, so I can assign it to a local variable and then save it to a MySQL database or echo it at a later date. Any help accomplishing this task - which is prior to line 628 - is appreciated (thx). brgds, Craig
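A hedged sketch of one way to capture that call, assuming getEnvironment() and getNVP() are reachable from a subclass (processTransaction() calls them itself; if they turn out to be private, the same two capture lines can instead go directly inside processTransaction(), before curl_exec()). The LoggingPayFlow name, the property, and the getter are additions for illustration, not part of the wrapper:

    class LoggingPayFlow extends PayFlow
    {
        private $lastRequest = '';

        public function processTransaction()
        {
            // Snapshot the endpoint plus the NVP POST body - together
            // these are the 'actual API call' PayPal is asking to see.
            $this->lastRequest = $this->getEnvironment() . ' | ' . $this->getNVP();
            return parent::processTransaction();
        }

        public function getLastRequest()
        {
            return $this->lastRequest;
        }
    }

    // Usage: run the transaction, then persist the captured request
    // to a file or a MySQL table.
    $payFlow = new LoggingPayFlow();
    // ... set the transaction fields as usual ...
    $payFlow->processTransaction();
    $rawCall = $payFlow->getLastRequest();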
  11. Hi, I need some insight on how to fix some broken code caused by the host upgrading PHP from 5.3 to 5.4 - see the error below:

    Warning: mysqli_real_escape_string() expects exactly 2 parameters, 1 given in...

Yes, I'm aware of the two parameters that are required, which would be as follows, with $db being the connection link:

    mysqli_real_escape_string($db, $value);

Right now, the $db argument is NOT in the mysqli_real_escape_string() call - read below to know why (that's what I need help to fix). My problem is that this function is itself called within another function which (on PHP 5.3) used the old MySQL extension, but PHP 5.4 deprecated those functions, and mysqli requires the 2 parameters as stated. See the piece of code below - the issue involves the 'quote_smart' function, which executes mysqli_real_escape_string() inside itself. Here's the function as currently defined:

    function quote_smart($value) {
        // Stripslashes if magic quotes is on
        if(get_magic_quotes_gpc()) {
            $value = stripslashes($value);
        }
        // Quote if not a number or a numeric string
        if (!is_numeric($value)) {
            $value = mysqli_real_escape_string($value);
        }
        return $value;
    }

and it's being called as such:

    ${$key} = quote_smart($value);

Thus, my problem - I'm not sure how to pass the mysqli link (argument) into the function correctly, or whether I should just make the $db var 'global' within the quote_smart function itself, now that PHP is 5.4. FYI: yes, the objective is to rewrite all this code with PDO and prepared statements, but I need to get this up quickly with a temp fix due to the sudden issues from the host upgrade. Would really appreciate some guidance on this one - thx!
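A minimal sketch of the quick fix, assuming $db holds the mysqli connection: pass the link in explicitly, and change every call site from quote_smart($value) to quote_smart($db, $value). (get_magic_quotes_gpc() always returns false from PHP 5.4 on, so that branch becomes a no-op and can be dropped.)

    function quote_smart($db, $value) {
        // Quote if not a number or a numeric string
        if (!is_numeric($value)) {
            $value = mysqli_real_escape_string($db, $value);
        }
        return $value;
    }

    // Call site:
    ${$key} = quote_smart($db, $value);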
  12. Yeah, I see it - and that will work (thank you both)!
  13. Hi, thanks to both of you for your quick responses - appreciated. 1st response: that's what I'm trying to do (hopefully I did it right) in setting conditional logic on process() - see it? That should determine whether the pop-up shows the policy or not when the submit button isn't clicked. Which leads to the 2nd response: thanks (I will try the pop-up, but that presentation / structure wasn't well liked by mgt, and you're right - most browsers/clients block pop-ups, so the redirect won't show anyway). To that end, I'm just adhering to mgt - they want the redirect. So again, how can I adjust the code I have to update 'var jscheck' when 'checkFire()' is triggered by the submit button? That's my issue - that (global) variable remains 'fire' regardless.
  14. Hi, I have the 'onbeforeunload' event set on the 'body' tag to trigger a JS function ONLY if a condition is met - two vars equaling one another. Note: that all works just fine - no issues with the default setup. However, this function should NOT trigger if the 'submit' button is clicked (thereby suppressing the 'process()' function set on the 'body' tag). Reason: a click on submit means staff completed the transaction and the data was actually saved to the database. The 'cancel.html' page is required to inform them that their action canceled the process and no record was created as a result.

Here's what I have thus far - I'd appreciate some insight to get this final piece working (the 'process' function should NOT fire when the 'submit' button is clicked, which triggers the 'checkFire' function). That's my problem - I'm not able to pass the local var out of the 'checkFire' function to update the global var which is then read by the 'process' function. (Brain dead!)

Here's the code for review. This snippet sets the default var values and the 'checkFire' function that will be called from the 'submit' button:

    <script type="text/javascript">
    var viewer = "fire";
    var jscheck = viewer;

    // if function called, then update var jscheck
    function checkFire() {
        jscheck = "nofire";
        alert("jscheck value is now: " + jscheck);
        return jscheck;
    }
    </script>

Here's the part that fires the 'cancel.html' page if a staff member clicks away from the page without submitting the form data. This is where we need to display the cancellation policy advising them that they did not complete the process:

    <script type="text/javascript">
    // Process to advise member that leaving the page without clicking
    // "Submit Record" canceled the entire process / update
    if (viewer == jscheck) {
        function process() {
            window.open('http://www.domain/canceled.html', '_blank');
            self.blur();
        }
    }
    </script>

and finally, the body tag showing the 'process()' function and the 'submit' button with the onclick event calling 'checkFire()':

    <body onbeforeunload="process()">
    <input name="sumbit" value="Submit Record" onclick="checkFire();" />

Again, if the 'submit' button is clicked, then 'process()' should NOT trigger at that point. ISSUE: this is what's happening - the process() function fires even when the form is submitted (it should ONLY fire if staff leave the page prior to submitting the form, which is a requirement to close out the work order). I'm not sure how to dynamically 'remove' the onbeforeunload event from the 'body' tag, or how to pass the 'checkFire()' value of 'jscheck' out of that function to the global jscheck var so the conditional logic in 'process()' fails - resulting in no action. Any help with this is appreciated - thx!
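A hedged sketch of one way to make this behave: the if (viewer == jscheck) wrapper above runs only once at page load, so the check needs to live inside process() itself, testing the flag at unload time. The URL and element names follow the snippet above:

    <script type="text/javascript">
    var jscheck = "fire";

    function checkFire() {
        // submit clicked: flip the flag so the unload handler stays quiet
        jscheck = "nofire";
    }

    function process() {
        if (jscheck === "fire") {
            // only reached when staff leave without submitting
            window.open('http://www.domain/canceled.html', '_blank');
            self.blur();
        }
    }
    </script>

    <body onbeforeunload="process()">
    <input name="sumbit" value="Submit Record" onclick="checkFire();" />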
  15. Never mind - I think I got it figured out... I need to clean up the results, but I've now got the value from var_dump into a local var as shown below:

    ob_start();
    var_dump($api_response1['RESULT']);
    $var1 = ob_get_contents();
    ob_end_clean();
    echo $var1;

Thanks again for the help!
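A hedged alternative to the output-buffering approach above: var_export() with its second argument set to true returns the value as a string directly, so no buffering or cleanup is needed:

    // Same capture without the output buffer
    $var1 = var_export($api_response1['RESULT'], true);
    echo $var1;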