n1concepts's Achievements

  1. I agree with gizmola's comments, and I'll add one note: if you or your client doesn't want the hassle of working through all the configuration that WordPress itself and WooCommerce require, then Shopify is a good option (pretty easy to set up and manage). That's the trade-off between the two: you'll pay Shopify for that ease of use, versus getting WordPress and WooCommerce set up at little to no cost.
  2. Hi, I need to modify some code I found on GitHub so that it redirects unless the user is already logged in as the script's configured 'wp user' or as an admin. GitHub link: https://gist.github.com/cliffordp/e8d1d9f732328ba360ad

```php
<?php
function auto_login() {
    // @TODO: change these 2 items
    $loginpageid   = '1234';       // Page ID of your login page
    $loginusername = 'wpusername'; // username of the WordPress account to impersonate

    // get this username's user object
    $user = get_user_by( 'login', $loginusername );

    // only attempt to auto-login at www.site.com/auto-login/ (i.e. www.site.com/?p=1234)
    // and when a user by that username was found
    if ( ! is_page( $loginpageid ) || ! $user instanceof WP_User ) {
        return;
    }

    $user_id = $user->ID;

    // log in as this user
    wp_set_current_user( $user_id, $loginusername );
    wp_set_auth_cookie( $user_id );
    do_action( 'wp_login', $loginusername, $user );

    // redirect to the home page after logging in
    // (i.e. don't show the content of www.site.com/?p=1234)
    wp_redirect( home_url() );
    exit;
}
add_action( 'wp', 'auto_login', 1 );
```

Basically, I want to define, in the functions PHP file, logic that redirects the viewer to the login page if the following conditions are not met: 1. is_page is NOT '1234' and/or wpusername is not found; 2. no user is currently logged in (either wpusername or any WP admin). Again, if either of those conditions is not met, then the script should redirect to 'wp_login'. Can anyone advise what edits can be made to this code to accomplish that goal - thx
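A hedged sketch of one reading of those conditions (the function name is mine, and WordPress is assumed, so this will not run standalone): unless the visitor is on the auto-login page with a valid target user, or is already logged in as that user or as an administrator, send them to the WordPress login screen.

```php
<?php
// Sketch only: one interpretation of the requested redirect logic.
// All WordPress functions used here (get_user_by, is_page, is_user_logged_in,
// wp_get_current_user, current_user_can, wp_redirect, wp_login_url) are core APIs.
function maybe_redirect_to_login() {
    $loginpageid   = '1234';       // Page ID of the auto-login page
    $loginusername = 'wpusername'; // account the script impersonates

    $user = get_user_by( 'login', $loginusername );

    // Condition 1: on the auto-login page with a valid user? Nothing to do here.
    if ( is_page( $loginpageid ) && $user instanceof WP_User ) {
        return;
    }

    // Condition 2: already logged in as the target user or as an admin? Allow through.
    if ( is_user_logged_in() ) {
        $current = wp_get_current_user();
        if ( ( $user instanceof WP_User && $current->ID === $user->ID )
             || current_user_can( 'administrator' ) ) {
            return;
        }
    }

    // Neither condition met: send the viewer to the login page.
    wp_redirect( wp_login_url() );
    exit;
}
add_action( 'wp', 'maybe_redirect_to_login', 2 );
```

This would run alongside the original auto_login hook (priority 2, so after it); adjust the priority and the exact conditions to fit the intended flow.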
  3. Disregard: the code generated by the tool (RFG) is setting the favicon correctly. The issue I'm seeing is that when I minimize the browser itself, THEN that browser's own favicon / title shows, which is correct, since the browser window is the parent containing the multiple tabs inside it. Thanks to those who reviewed the post - topic closed.
  4. Hi, I used Real Favicon Generator to create the required files to show the favicon on web pages, and that works. However, when I minimize the browser (Chrome, Firefox, etc.; in this case Chrome), the website's favicon and title change to the browser's favicon and title. Note: I see other websites whose 'coded' favicon remains even when the browser is minimized. Q: How can I accomplish this, i.e. keep the site's favicon and title showing in the browser tab even when minimized? Thx!
  5. Thx for the responses, requinix and maxxd; both your comments are on point, and I agree. I would prefer to handle this on the backend with PHP, but it's not my call: the lead devs prefer JS, which is why I was inquiring (thx). As for the code copied from the Web: yeah, I'm used to boilerplate customization (certainly not using it 'as is'; thx again for the response). We got it working, though not how I would have preferred, but there are always multiple ways to accomplish similar results.
  6. Hi, I'm working with an example set of code that creates a Stripe token, and that script works great! Example code: https://jsfiddle.net/ywain/5yz4z2yn/ However, I need to capture the token, which is set at line 38 of the JavaScript. My issue: how can I pass that value (result.token.id) to an external PHP file? If you want to see the simulation, just fill out the form (use 4242 4242 4242 4242 for the test card and a current or future two-digit year; the other values are random). In the HTML (the simulated form) you'll see that the 'token' class displays the token when the JS condition (result.token) is met / TRUE:

```html
<div class="success">
  Success! Your Stripe token is <span class="token"></span>
</div>
```

I'll suppress the token from showing once I'm able to grab the value and pass it into the external PHP (the Stripe APIs for processing). Again, here's the snippet in question from the JavaScript at https://jsfiddle.net/ywain/5yz4z2yn/:

```javascript
if (result.token) {
  // Use the token to create a charge or a customer
  // https://stripe.com/docs/charges
  successElement.querySelector('.token').textContent = result.token.id;
  successElement.classList.add('visible');
}
```

Again, I need to somehow grab result.token.id and securely pass that value to an external PHP file. Note: I do not want to use cookies, because that's neither a stable solution across all browsers nor safe. Appreciate any suggestions - thx!
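The usual pattern is to POST the token to the server rather than store it client-side. A minimal hedged sketch, assuming a hypothetical /charge.php endpoint that reads $_POST['stripeToken'] (both names are illustrative, not from the fiddle):

```javascript
// Build the fetch() options for sending the token as URL-encoded form
// data, the shape PHP exposes via $_POST['stripeToken'].
function buildTokenRequest(tokenId) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'stripeToken=' + encodeURIComponent(tokenId),
  };
}

// POST the token to the (hypothetical) server-side endpoint.
// A Stripe token is single-use and short-lived, so sending it in a
// POST body over HTTPS is the standard approach.
function sendToken(tokenId) {
  return fetch('/charge.php', buildTokenRequest(tokenId));
}
```

Inside the fiddle's success branch you would call `sendToken(result.token.id)` instead of (or before) writing the token into the page.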
  7. Hi kicken, my apologies; I can see how that explanation was confusing, so let me provide the answer (it came to me right after I posted for help). Answer: UNION. I joined the two queries with a UNION clause to get the results I needed (it worked perfectly after some adjustments to each query). Yes, you are correct: I was after the member_id, which is the primary key in the table, with receiver_id as the second key. thx & problem solved!
  8. Hi, I need some help defining the correct SQL query (VERSION: 10.0.38-MariaDB) to accomplish the following: select the member_id based on the COUNT function over the receiver_id column in the members table. The conditions are: WHERE astatus = 2 for member_id (this condition filters first, before the next condition is applied), and COUNT(receiver_id) is LESS than 4, or receiver_id IS NULL. The goal is to list the member_ids that meet those two conditions. Now, I have two queries that give me the results I want, but I'm trying to combine them into one query AND add the WHERE condition for astatus = 2. I need help with this part, as I can't see the logic to combine the two queries with an OR and also, for both, add the astatus = 2 condition. Any help surely appreciated - thx! Screen shot of table excerpt below, along with the queries:

```sql
-- Get count of donors assigned to a receiver_id
SELECT m.receiver_id, COUNT(m.receiver_id) AS donor_count
FROM members AS m
WHERE m.receiver_id IS NOT NULL
GROUP BY m.receiver_id
HAVING COUNT(m.receiver_id) < 4
LIMIT 1;

-- Find a member that's not yet assigned a receiver_id
SELECT m.member_id
FROM members AS m
WHERE m.receiver_id IS NULL
LIMIT 4;
```
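A hedged sketch of one way to combine the two queries with UNION; exactly where the astatus = 2 filter belongs depends on the schema (whether it describes the donor row or the receiver), so its placement here is an assumption:

```sql
-- Receivers with fewer than 4 donors, plus members with no receiver
-- assigned, both filtered on astatus = 2 (placement assumed).
SELECT m.receiver_id AS member_id
FROM members AS m
WHERE m.astatus = 2
  AND m.receiver_id IS NOT NULL
GROUP BY m.receiver_id
HAVING COUNT(m.receiver_id) < 4

UNION

SELECT m.member_id
FROM members AS m
WHERE m.astatus = 2
  AND m.receiver_id IS NULL;
```

UNION (as opposed to UNION ALL) also de-duplicates, which is harmless here since the two branches select disjoint rows.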
  9. Yes, there was a problem; I stated the objectives, but I don't think the issues were understood (no worries, and thanks for replying anyway). So I went ahead and applied the changes I thought would correct the issues and produce the expected results, and indeed that fixed them. I removed the L flag from the first RewriteRule, to allow the remaining rules to be parsed for a match, then added that first rule again at the very end of the file, with the L flag, to stop processing should no match be found in the previous rules. Results? Working 100% now. My post was really just to validate what I thought would fix the issue. PROBLEM SOLVED - thx! BTW: I like your post / reply...
  10. Hi, I'm starting a rebuild of an existing website and need to force HTTPS for a subfolder named 'dev', along with redirecting specific HTML files to their new PHP extensions. FYI: only requests to that subfolder are in scope for this particular .htaccess file, i.e. path/dev/.htaccess. I have an .htaccess file in the 'dev' subfolder with the content below, and it is forcing HTTP to HTTPS:

```apacheconf
RewriteEngine on
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}/dev/$1 [R=301,L]

# The remaining rules map old (HTML) file names to their (new) PHP file names in the 'dev' folder
RewriteRule ^contact-us.html$ contact-us.php [L]
RewriteRule ^register.html$ register.php [L]
RewriteRule ^welcome.html$ welcome.php [L]
RewriteRule ^log-in.html$ log-in.php [L]
RewriteRule ^log-out.html$ log-out.php [L]
RewriteRule ^my-account.html$ my-account.php [L]
```

Note: I know there is one immediate error (the L flag stopping the remaining rewrites from firing, right?). However, before I make any more edits, I thought it best to ask here for insight on how to accomplish the following. First, I need to force HTTP to HTTPS for all content going to that subfolder, e.g. http://domain/dev or http://domain/dev/contact-us.html, and that appears to be working based on the first RewriteRule. However, I think that rule is the problem, so I'm asking: Q1: should I remove the L flag, and will that allow the remaining rules to be checked for a match? Q2: if the answer to Q1 is no, then please advise how best to modify this .htaccess file, specifically the one for the 'dev' subfolder where it resides (there are others in / and other subfolders, which should remain as is). Appreciate the input in advance. Craig
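For what it's worth, a hedged sketch of how this file is commonly arranged (same rules, just with the ordering rationale spelled out; a sketch, not a verified drop-in): the R=301,L on the HTTPS rule is generally not what blocks the mappings, because after the external redirect the browser re-requests the page over HTTPS and the mapping rules run on that second pass.

```apacheconf
RewriteEngine on

# Force HTTPS first; the L flag here only ends this (HTTP) pass.
# The browser then re-requests over HTTPS and the rules below apply.
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}/dev/$1 [R=301,L]

# Map old .html names to their .php replacements
# (dots escaped so "." matches literally).
RewriteRule ^contact-us\.html$ contact-us.php [L]
RewriteRule ^register\.html$ register.php [L]
RewriteRule ^welcome\.html$ welcome.php [L]
RewriteRule ^log-in\.html$ log-in.php [L]
RewriteRule ^log-out\.html$ log-out.php [L]
RewriteRule ^my-account\.html$ my-account.php [L]
```

If the mappings still do not fire behind a proxy, it is worth checking which of the two RewriteCond lines is actually matching, since %{HTTPS} and X-Forwarded-Proto can disagree in proxied setups.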
  11. FYI: I left out one important feature of what I'm thinking may be a more suitable DB design: partitioning. Note: I'm thinking of this to address lag / latency if a particular table grows large, i.e. millions or even billions of records. By partitioning the table(s) by week, month, etc., you reduce search latency, limiting the amount of data that has to be scanned and sorted to satisfy the query. Again, the question is which is better, based on the suggestions I've posted: 1. stay with the individual DBs per client, or 2. consolidate all the client data into one MASTER DB, then implement partitioning and ensure proper indexing on tables to segment searches and keep load times to a minimum. Again, thanks for your comments / suggestions!
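In SQL Server terms, monthly partitioning takes a partition function plus a partition scheme; a hedged sketch with hypothetical names, columns, and boundary dates (all illustrative only):

```sql
-- Monthly boundaries; RANGE RIGHT puts each boundary date in the
-- partition to its right, so each partition covers one calendar month.
CREATE PARTITION FUNCTION pfByMonth (datetime2(0))
AS RANGE RIGHT FOR VALUES ('2024-02-01', '2024-03-01', '2024-04-01');

-- Map every partition to the PRIMARY filegroup for simplicity.
CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth ALL TO ([PRIMARY]);

-- Hypothetical table partitioned on its timestamp column; queries
-- that filter on sent_at touch only the matching partitions.
CREATE TABLE dbo.EmailEvents (
    event_id  bigint IDENTITY NOT NULL,
    client_id int             NOT NULL,
    sent_at   datetime2(0)    NOT NULL
) ON psByMonth (sent_at);
```

Partition elimination only helps queries that filter on the partitioning column, so the choice of week vs. month should follow the dominant query patterns.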
  12. Hi, I have some questions about an MS-SQL project I'm now a part of: whether the current DB structure is the most efficient. Basically I'm asking about DB provisioning 101 / theory, as I'm getting pushback on recommended edits to improve the current DB design based on expected future growth.

Here's the current DB structure for the application: there is one MS-SQL database server, running one instance, on which the application currently creates a new database for each new client that signs up for the service, all on that one DB server. The current devs state that they designed the process this way so that each client's data would be completely separate from the other clients' data, making DB administration less hectic and more efficient.

My view: I think there is a better way to address the data requirements; see below, and I'd like feedback on which approach is more efficient, short and long term, in terms of resource requirements and load / latency.

Issue / concern (given a scenario of, say, 1k or even 10k clients at some point, accumulating a total of around 30 GB of data over time): Q: won't having that number of individual DBs, each literally an exact replica of the others, actually create more unnecessary latency / load on the DB server and its resources than just having ONE master database housing ALL the clients' data (with indexing, of course, being crucial to keep that DB optimized for the total load; with proper indexing it's equivalent to multiple DBs in my view, just a more efficient design)?

OR (perhaps an even better approach given the expected growth of data overall): distribute the pertinent data sets by role / service, individually, across separate database servers, so that load / latency is minimized on each DB server. For example, all email-marketing (results) data would be housed on the db1 server, while all client-specific data would be housed on the db2 server. Reason: the bulk email-marketing application demands a tremendous amount of resources and disk space, so isolating those services / operations removes contention between the two DBs (yes / no?). To hook the separate sets of data together when required (client or email data), the SQL queries would simply JOIN / UNION data sets from the specified MS-SQL servers (JOIN / UNION is already how the current BI queries work; I'm just suggesting the data be segmented across servers by role / service to the application).

What I want others' views on is this: Q: does it make sense to have multiple copies of the same exact DB schema for client data, versus one master schema that houses ALL the data, giving (first) less DB administrative work and (second) less load on any one server? Note: I know storage pools can be implemented to expand disk space for the MS-SQL service, but that's another topic, about per-server disk limits as the data grows collectively.

I would appreciate feedback to get others' opinions based on their experience working with global / distributed cloud applications and MS-SQL. I have some ideas:

1. Consolidate the current, individual, client DBs into one 'global' clients DB, then implement a data warehouse (DW) that pulls all data older than seven days into the DW, where I can then manage said data using a variety of external targets: a MySQL + Storage Area Network (SAN) solution, and even Excel spreadsheets (for very old data, older than five years, to free disk space on the DW), etc.

2. Implement a Redis cache / cluster to offload the default (most-requested) queries from both the source DB servers and the DW, so the hottest data sets are in memory for each client. Note: Redis will NOT load a client's data sets until a member of that client group has successfully authenticated into the system.

3. (Still on Redis) Data sets that have sat idle for a specified short number of days would be flushed from cache to free memory. The cluster will be defined with ample redundancy, aside from the persistence of the DW, which will have 50 GB of disk space along with an external SAN housing another 50 GB, acting as source / destination for the Excel files and the oldest sets of MySQL data. Using data-management services, the MS-SQL DW will pull any requested data back into the DW from the SAN to serve the application dynamically.

Thus, this solution would make the current DB design more efficient and robust, able to expand both horizontally and vertically with business / data demands. OK, those are my thoughts on making the current design more efficient, thinking short and long term given the expected growth of data. Let me know your thoughts; suggestions and comments all welcome. Thx in advance for responses!
  13. Hi, and thanks for the response. Maybe my initial comments were a bit unclear, but you gave me the insight I needed to make this work. Yes, I've concluded I need MTA software to manage the pools of IPs, and this setup is not for a forum but a 'bulk email server' currently sending half a million or more transactional emails monthly. I've figured it out - thanks for your response. brgds, Thounda
  14. Hi, can someone advise or provide an example PHP script where, based on a pool of five dedicated IP addresses I have assigned to a virtual private server, I can assign (via a MySQL table) one of those five IP addresses to each 'outgoing' email being sent using the mail function? I've searched the Net and found nothing that really explains how to do this, and no examples. I would appreciate any insight to help me figure out this solution. I know it's possible, because apps like Interspire and PowerMTA do it, and you can assign the block of IPs via their GUIs - just saying, I know it's possible. I found some info regarding Postfix and Exim, but nothing detailed, which is why I'm asking for help here. Objective: I just want to rotate the IP addresses bound to that server as each 'outbound' email is sent from it. thx, Craig
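One caveat worth noting: PHP's mail() hands the message to the local MTA and cannot bind a source IP itself, so the rotation normally lives in the MTA (e.g. Postfix or Exim transport configuration). The selection logic itself is just round-robin, sketched here for illustration (the pool addresses are documentation placeholders, not real IPs):

```javascript
// Round-robin picker over a pool of sending IPs, the same selection
// an MTA makes when rotating dedicated addresses per outbound message.
function makeIpRotator(pool) {
  let next = 0;
  return function nextIp() {
    const ip = pool[next % pool.length];
    next += 1;
    return ip;
  };
}

// Example: five placeholder IPs standing in for the VPS's dedicated pool.
const nextIp = makeIpRotator([
  '203.0.113.1',
  '203.0.113.2',
  '203.0.113.3',
  '203.0.113.4',
  '203.0.113.5',
]);
```

Each outgoing message asks the rotator for the next address; persisting the counter in a MySQL row, as the post suggests, would let the rotation survive restarts.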
