Everything posted by kicken

  1. Try disabling xdebug.remote_connect_back; that is intended for when you can't set xdebug.remote_host to a specific IP. Also make sure you're configuring the correct PHP setup. The PHP run by Apache and the PHP run on the command line can, and usually do, have different configuration files. If you're only editing the CLI configuration but testing via Apache, that would explain things. Check which configuration Apache is using by creating a page that calls phpinfo() and loading it.
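     For reference, a minimal php.ini sketch, assuming Xdebug 2 and an IDE listening at 192.168.1.10 (the host IP and port are placeholders):

         ; Xdebug 2 settings - 192.168.1.10 is a placeholder for your IDE's IP
         xdebug.remote_enable = 1
         xdebug.remote_connect_back = 0
         xdebug.remote_host = 192.168.1.10
         xdebug.remote_port = 9000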
  2. You probably saw the behavior you did due to using sessions. When you start a session, PHP will lock the session data so that it's not affected by other processes. This lock exists until you either call session_write_close or the process ends. So if your long-running process doesn't need to update any session data, call session_write_close prior to starting it (a sketch is below). That said, lots of concurrent long-running processes could block a server. Your HTTP server will process each request using a thread or worker process (depending on the configuration). It will only spin up a certain number of these based on the configuration, and if that limit is reached it'll stop responding to requests. Your long-running processes would tie up some of these threads. The number of threads available on an actual server will probably be relatively high though, so unless you expect a lot of these processes to be running concurrently it likely won't be an issue. If the server is set up with something like PHP-FPM or a CGI setup though, the number of allowed PHP instances may be smaller. Your limit would be the smaller of the HTTP server's limit or PHP's limit. If you want to keep your site responsive, the way to manage that would be to offload the work to a background process so that your website can continue to respond to requests. The user would then go to the page, which would trigger the processing, and you would respond with a message like "We're working on your request, check back in a bit". When the process is complete, give the user the results. There are many ways to accomplish this, such as using services like redis, gearman, beanstalkd, etc., or simply adding records to your database and having a background service check for new records periodically.
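     A minimal sketch of the session_write_close approach; the session key and the sleep standing in for the long-running work are illustrative:

         <?php
         session_start();

         // Read whatever you need from the session first.
         $userId = $_SESSION['userId'] ?? null;

         // Release the session lock so other requests from the same
         // session aren't blocked while the work happens.
         session_write_close();

         // Placeholder for the long-running work.
         sleep(30);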
  3. I'd say it depends on the type of book. For something like a technical reference that's marked up and linked properly, then sure, online may be a better scenario, as people would probably be less inclined to read it start to finish and would instead jump around to the information they need or want. For something more story/narrative that's intended to be read front to back in order, I'd gather most people would not be reading it on a PC. Most readers I know either stick to physical books or their phones/e-readers. I'm not trying to sell stuff, but that's my general point of view on things as well. Most attempts at stopping piracy I've experienced are counter-productive. They don't do a great job at preventing piracy (someone breaks it eventually, and usually in relatively short order), but they limit your non-pirating customers severely. For example, pretty much nothing the movie industry has tried has worked for very long, but their attempts prevent me from easily creating a digital copy of the movies I buy for use on my HTPC, which is a perfectly legal and reasonable thing to do. In my opinion, the best way to try and tackle the problem would be to 1) make sure your product is easy to acquire for those who want it (reasonably priced, no region locks, etc.) and 2) monitor the web for pirated copies and respond appropriately when you find them.
  4. I personally don't and haven't used PostgreSQL (mostly using SQL Server for work), but I see references to it enough that I would expect the community around it to be large enough that support shouldn't be a big issue. The [postgresql] tag on Stack Overflow seems reasonably active at least. Most of my knowledge about it is outdated at this point. I seem to recall back in my early days people recommending it over MySQL because it had proper foreign key and sub-query support, but MySQL has since added both of those. I think I saw something recently about it having a JSON datatype, but I might be mis-remembering. I remember that sounding potentially useful though.
  5. Users being able to download the book for offline reading is probably an important feature that could make or break sales, so it'd be a good idea to support it I think. That also probably means providing PDFs. I'm not familiar with all the options, but I'm not sure there is much you can do to prevent someone from stealing a PDF. I think Adobe's reader supports some kind of DRM system, but I'm not sure, and I'm also not sure how that'd affect other readers. The only thing you could really do, I think, is monitor the internet for copies of the PDF and submit takedown requests when one is found. I suspect you may be worrying about it more than you really need to. In general people aren't that prone to stealing stuff. Most people only resort to that kind of thing if obtaining the item legally is made difficult (ie, too expensive, region locked, etc.). The most likely problem you'd have is just people sharing logins or the PDF among their friends/family, which is something you could probably do even less about. If you could lock down the PDF in some way to a device or something, the user could just let their friend borrow their device. Like with the security stuff, it's all a trade-off, this time between protecting your IP and ease of use. If you wanted to make them searchable, putting them in the database could help, but there are also other solutions for that. As far as the contents of the books go, I don't really see any compelling reason to go one way or the other. It's kind of hard to provide much advice here without knowing more about what exactly you're offering and in what way. For example, if you are going to go the route of PDF downloads, your database could just consist of some metadata regarding the files for your site to use, plus the location of the file on disk for downloading. If you're going to go an HTML route with some templates, then storing the content in the DB may be easier, and you could set up some kind of online editor where you could edit/create new content.
  6. How are you referencing the mp3 files in your detail.php file? You should reference them using a domain-relative URL (with a leading slash); that should remove any need for a rewrite rule for them.

         <audio src="/sf/sound.mp3">
  7. That's the fundamental issue, and very problematic with any shared hosting solution. With shared hosting, the only real option is to store it in a file somewhere, and as with the sessions, if they run everyone's code under the same account then anyone with a site on that server could read that file and get your key. PHP running scripts under the same account was common back in the day when I used the occasional host, but it may not be anymore; you'd want to check with your hosting provider. If they do run your scripts under a unique account, and you're not overly paranoid, you could possibly get by. With a VPS you can still store the key in a file for convenience, but since no one else is sharing the server you don't have to worry about someone getting in that way. Your worry here is faults in the software you run that may allow someone to access the server, and that's pretty much a worry you'll have no matter what you do. Just keep things updated and audit your code to find problems. The more ideal solution is to require the key to be entered any time the system is started, allowing the key to be stored only in the memory of the system and nowhere on disk. Someone would have to be able to gain access to the system memory without causing a reboot to get your key at that point, which is harder to do. This kind of setup could be done with a VPS or dedicated server. If you're really paranoid, a dedicated server is best, as a VPS could technically be paused and have a snapshot taken which would then contain your key. That comes down to trusting your hosting provider.
  8. Everything is a trade-off between security and usability. That's why things like cryptocurrencies haven't really caught on. The security on them is generally pretty good; coins can only be accessed using a practically impossible-to-break private key. The usability is awful for the average person. Didn't back up your key and your HDD caught fire? Well, there goes your life savings up in smoke. The question is where you want to draw the line, and that's usually determined by the risk and consequences of a breach. If you were running a bank, a breach would be bad news. On most sites though, a breach is both unlikely to occur and unlikely to cause major problems. I've known some companies that require you to email/fax a copy of a government-issued ID to their support, and they will only give you access to the account if the name on that matches the name on the account. That'd always be an option; it seems a bit overkill for a simple e-book store site, imo. Again, if it were me, I'd probably solve the above two scenarios by just having the person verify their billing details or some other bit of information. For example, "What's your first & last name, and the last 4 of the card you used?" If they can answer that, it'd probably be ok to just assume they are the owner of the account and change the email. Practically, how likely is it that someone would contact support to try and break into a different account and actually know that info? Fairly unlikely. What's the worst that can happen if someone actually succeeds? They get access to someone else's books, and possibly inconvenience the real owner? Not that big of a deal.
  9. If you're just using the default files handler, it's controlled by the session.save_path setting. You can check what this is by creating a page that calls phpinfo() and loading it up in your browser. You could implement a session handler that stores the session data in a database instead, which could offer a little more control over data access (a sketch is below). However, if the different sites on the host all execute scripts as the same user, then other users could still access your session data. Privacy and shared hosting are generally incompatible. If keeping the information protected is important, you should invest in some non-shared hosting such as a VPS or dedicated server.
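     A minimal sketch of such a handler using PDO; the sessions table, its columns, and the MySQL-specific REPLACE INTO are illustrative assumptions:

         <?php
         // Assumed table: sessions (id VARCHAR PRIMARY KEY, data TEXT, updated DATETIME)
         class PdoSessionHandler implements SessionHandlerInterface
         {
             private $pdo;

             public function __construct(PDO $pdo)
             {
                 $this->pdo = $pdo;
             }

             public function open($savePath, $sessionName)
             {
                 return true;
             }

             public function close()
             {
                 return true;
             }

             public function read($id)
             {
                 $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
                 $stmt->execute([$id]);
                 return (string)$stmt->fetchColumn();
             }

             public function write($id, $data)
             {
                 // REPLACE INTO is MySQL-specific; adjust for other databases.
                 $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, updated) VALUES (?, ?, NOW())');
                 return $stmt->execute([$id, $data]);
             }

             public function destroy($id)
             {
                 $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
                 return $stmt->execute([$id]);
             }

             public function gc($maxLifetime)
             {
                 // Remove sessions that haven't been touched recently.
                 $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated < NOW() - INTERVAL ? SECOND');
                 return $stmt->execute([$maxLifetime]);
             }
         }

         session_set_save_handler(new PdoSessionHandler($pdo), true);
         session_start();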
  10. If your goal is ease of use for the end user to maximize sales, then you could reduce the friction some while still gaining most of the benefits. As was mentioned before, don't make the verification required to process the transaction, but rather do it after the fact. There's really no need for the user to enter a code; most places just send a link with a unique code in the URL (a sketch of this is below). The user clicks the link and the email is verified. Show the user a notice on login that their email still needs to be verified, and possibly lock their account if verification isn't completed after some time. If it were me, I'd probably have a process such as:
      1. User selects a subscription type
      2. Gather account & payment information
      3. Create account, send verification email
      4. Let the user access what they paid for
      5. After 3-7 days, restrict the account if the email is still unverified
      That works around potential email delays and gets the user to their content the quickest. You could do the double email field as an additional measure to try and catch people fat-fingering their email. In most cases though, the user will probably enter it correctly, get the verification email, and click the link on their own time without issue. If the user does fat-finger their email (or intentionally enters an invalid one), they will get their content for a while, but then have to correct the problem. Allow a user to change their email if their account is restricted due to no verification. Depending on the level of security you want, maybe require the password as well to re-start the verification process. A third option to minimize end-user friction would be to let them use something like Google, Facebook or Twitter to log in instead of creating an account. A lot of people prefer that these days, as it's one less account/password to have to remember. I know I personally am far more likely to hit the 'Login with Google' button than I am the 'Create an account' button.
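      A minimal sketch of generating and mailing such a link; the table/column names, URL, and plain mail() call are illustrative assumptions:

          <?php
          // Generate an unguessable token and tie it to the user's account.
          $token = bin2hex(random_bytes(32));

          $stmt = $pdo->prepare('UPDATE users SET verify_token = ? WHERE id = ?');
          $stmt->execute([$token, $userId]);

          // verify.php would look the token up and mark the account
          // verified when it matches.
          $link = 'https://example.com/verify.php?token=' . $token;
          mail($email, 'Verify your email', "Click this link to verify your address: $link");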
  11. I would, possibly. Depends a lot on what you're trying to sell and how badly I need it. I've left quite a few sites that I otherwise might have given money to simply because they required me to create an account. IMO, the best sites are the ones that let you just "get your shit and get out", with an option to create an account later if you so desire. Most of the time I'm just trying to get one thing and have no intention of ever coming back to said site, so making an account is an extra step that provides zero value. If I do end up back at the site a second or third time, then I may go ahead and set up an account. Another downside to requiring even email verification is that email isn't always nearly instant. I've had multiple times where a site sends me an email, whether for verification, password reset, or whatever, and ten minutes later I'm still waiting for it. Somewhere, either on their end or at one of their providers, the email got delayed. If I'm trying to go through your process and that verification email is lost or delayed any longer than about a minute, there's a high probability I won't bother finishing the process. Unless an accurate email/account is vital to the operation of your site, it's best to avoid such things if possible, leaving it to the user to decide whether it's worth it to them to set that stuff up. All that said, if you feel such a task is necessary for your site or want to do it anyway, implementing the actual verification isn't all that hard. As I mentioned above, the best way would be to use some Javascript to trigger the email when they click the button. Using jQuery, such a script might look something like:

          Email: <input type="email" name="email" id="email">
          <button type="button" id="sendVerification">Send verification code</button>
          Enter code: <input type="text" name="verification">

          $('#sendVerification').click(function(){
              var email = $('#email').val();
              $.post('sendVerification.php', {email: email}).then(function(){
                  alert('Email sent');
              });
          });

      Your sendVerification.php script would then generate the random code and send the email. Store the code in $_SESSION, and when the user submits the form you can check the code they entered against the code that was sent.
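      A minimal sketch of what sendVerification.php might look like; the code format and plain mail() call are illustrative assumptions:

          <?php
          // sendVerification.php
          session_start();

          $email = $_POST['email'] ?? '';
          if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
              http_response_code(400);
              exit('Invalid email');
          }

          // Generate a short random code and remember it for when the
          // user submits the form.
          $code = sprintf('%06d', random_int(0, 999999));
          $_SESSION['verificationCode'] = $code;

          mail($email, 'Your verification code', "Your verification code is: $code");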
  12. You should upgrade your PHP install. 5.6 is pretty old and outdated at this point, and is no longer receiving any kind of development work (security or otherwise). For the certificate, you can tell cURL which certificates to trust by setting the CURLOPT_CAINFO option to the name of a file that contains your certificate.

          curl_setopt($ch, CURLOPT_CAINFO, 'certificate.pem');
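      In context, a minimal sketch; the URL and file path are placeholders:

          <?php
          $ch = curl_init('https://example.com/');
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
          // Trust the certificate(s) in this file when verifying the peer.
          curl_setopt($ch, CURLOPT_CAINFO, '/path/to/certificate.pem');

          $response = curl_exec($ch);
          if ($response === false) {
              echo 'cURL error: ' . curl_error($ch);
          }
          curl_close($ch);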
  13. Your webserver is responding to the browser with a redirect (status 302), so the browser is trying to follow that redirect. If you're creating a new resource, you should be responding with status code 201. If you're updating an existing resource, you should respond with status code 200.
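      In PHP that could be as simple as the following sketch ($createdNewResource is illustrative):

          <?php
          if ($createdNewResource) {
              http_response_code(201); // Created
          } else {
              http_response_code(200); // OK, existing resource updated
          }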
  14. I'm not always as good as I should be at keeping up to date on the latest PHP stuff. Having it at hand here might be kind of nice.
  15. Did you obtain an SSL certificate somewhere or generate your own? If you created your own, then it won't be trusted and SSL won't work until you add that certificate as a trusted certificate. If you obtained one from a major known vendor, then the certificate should be fine and trusted, but you may have a configuration problem. What software and versions are you using? OS, webserver, PHP, cURL, etc.
  16. I added a systemd drop-in to create the directory prior to launching the service.

          /etc/systemd/system/php7.3-fpm.service.d/prestart.conf:

          [Service]
          ExecStartPre=/bin/mkdir -p /var/run/php7-fpm
  17. Rewrites are handled by Apache at the early stages of the request, prior to any of the PHP stuff, so there's no need to adjust anything there. Using PHP-FPM, you're effectively separating Apache and PHP. A request comes in and Apache does everything it needs to do up to the point it determines that the request is for a *.php file. Once it determines that, it passes the request over to PHP-FPM, which processes it and passes back the results, which Apache can then run any configured post-processing on before sending them on to the user.
  18. PHP will end up running as user kicken, userx, usery, etc., so the files don't need g+w. The PHP files actually don't even need group www-data, as Apache doesn't really mess with them, but non-PHP files need it plus g+r so they can be served. $pool does resolve to whatever the name of the pool is in the brackets (minus the brackets themselves). It's all there; I just trimmed out the more or less standard/irrelevant stuff. My actual config is much longer, with rewrite rules, redirects, special paths, etc. The .phps handler stuff is there to enable source highlighting; ignore it if you don't need that feature.
  19. It's better to put your Apache configuration inside Apache's main (or vhost) configuration file and then disable .htaccess file processing. However, that's the kind of optimization you only really need if you're trying to run a high-traffic site and need every bit of performance you can get. Making a change such as a new redirect or rewrite rule would then require editing the configuration and reloading Apache. Not a terrible thing, but more cumbersome than editing the site-specific .htaccess. Yes, if you want multiple versions of PHP then you need to have them all installed separately in a way that doesn't conflict. Hopefully your package management system takes care of that for you. For example, I use Ubuntu and the ondrej/php PPA, which installs the configurations into /etc/php with a sub-folder for the version and API type. I use FPM, so my configuration is in /etc/php/7.3/fpm/ with pools defined in /etc/php/7.3/fpm/pools.d/. Yes, you could do a per-site setup, per-user, or something different. That's pretty much up to you and what you find most convenient. What I prefer to do is create a separate pool for each system user that hosts sites. I configure that pool with settings appropriate for their site and also configure it to run under their user account, so there's no real need to deal with permission problems when the site wants to manipulate the files. For example, a typical pool configuration might look something like:

          [kicken]
          prefix = /var/run/php7-fpm
          user = $pool
          group = www-data
          listen.owner = www-data
          listen.group = www-data
          listen.mode = 0600
          listen = $pool.sock
          php_value[error_log] = /var/log/php7-fpm/$pool-error.log
          php_value[session.save_path] = /var/lib/php7/$pool/sessions
          php_flag[log_errors] = on
          php_value[memory_limit] = 32M
          php_value[upload_max_filesize] = 500M
          php_value[post_max_size] = 501M
          php_value[max_input_vars] = 5000

      I made a copy of the default www pool when I first set things up, made some general changes (like error log, session path, using $pool instead of hard-coded names), and saved that as a template. Each time I need to add a new pool I just copy that template, rename the pool, and make tweaks if necessary. I prefer using unix sockets vs TCP sockets, as /var/run/php7-fpm/kicken.sock is easier to remember than 127.0.0.1:9038 or whatever port you want to use for that pool. On the Apache side of things, I have a separate configuration file for all the PHP stuff that looks like:

          Define PHP7_POOL_DEFAULT "proxy:unix:/var/run/php7-fpm/www-data.sock|fcgi://localhost"
          Define PHP7_POOL_KICKEN "proxy:unix:/var/run/php7-fpm/kicken.sock|fcgi://localhost"
          Define PHP7_POOL_USERX "proxy:unix:/var/run/php7-fpm/userx.sock|fcgi://localhost"
          Define PHP7_POOL_USERY "proxy:unix:/var/run/php7-fpm/usery.sock|fcgi://localhost"
          Define PHP7_POOL_USERZ "proxy:unix:/var/run/php7-fpm/userz.sock|fcgi://localhost"

          ScriptAlias /phpsource.cgi /usr/lib/cgi-bin/php7-source
          Action php-source /phpsource.cgi virtual
          <FilesMatch "\.phps">
              SetHandler php-source
          </FilesMatch>
          <Directory /usr/lib/cgi-bin>
              Require all granted
          </Directory>

          <FilesMatch ".+\.ph(ar|p|tml)$">
              SetHandler ${PHP7_POOL_DEFAULT}
          </FilesMatch>

      The pools are set up as variables that can be used in the vhost configurations, and it sets up a default handler. I set mine up to also support highlighted .phps extensions using a traditional CGI setup, but that can be skipped/ignored. In each vhost I set the right pool by giving it its own FilesMatch with the appropriate handler:

          <VirtualHost *:443>
              ServerName aoeex.com
              ServerAlias www.aoeex.com

              <FilesMatch ".+\.ph(ar|p|tml)$">
                  SetHandler ${PHP7_POOL_KICKEN}
              </FilesMatch>
          </VirtualHost>

      Setting up multiple versions is just a matter of installing each version's php-fpm package, configuring a pool for that version, and pointing Apache to whatever socket you set up for that pool. Fairly simple. I used to have a 5.6 setup going as well due to some old third-party software, but I was finally able to move to a single 7.3 version recently. Ideally you'd want only a single version going, and only use multiple versions if you really need to for some specific reason.
  20. Here's info on how to set the lid close option for Ubuntu: How to change lid close action in Ubuntu. Other systemd-based distros are probably the same. Just google "distro_name lid close option" and you can probably find how to do it for your distro.
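      On systemd-based distros the setting usually lives in logind.conf; a minimal sketch, assuming systemd-logind handles the lid switch on your distro:

          # /etc/systemd/logind.conf - keep the machine running when the lid closes
          [Login]
          HandleLidSwitch=ignore

          # Apply the change with: systemctl restart systemd-logind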
  21. If you don't want the page changed, you have to use both Javascript and PHP. PHP has to send the email from the server; Javascript needs to make a request to the server so it knows when to send the email. Your button would trigger a Javascript function that makes a request to the server using AJAX. You would pass whatever information the server needs to send the email (ie, their email address). If you're using jQuery, this is relatively simple using jQuery.post(). If you're using a different library, search for its AJAX functions. If you're not using a library, I suggest you do. If you're stubborn and don't want to use a library, look up XMLHttpRequest. If you don't mind a reload so long as the info is preserved, then post your form data to a PHP script that will send the email. Have that script then output the same form with all the fields pre-filled using the data from the $_POST variables.
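      A minimal sketch of that reload approach; the field names and plain mail() call are illustrative:

          <?php
          // Send the email on POST, then redisplay the form with the values kept.
          if ($_SERVER['REQUEST_METHOD'] === 'POST'
              && filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL)) {
              mail($_POST['email'], 'Subject', $_POST['message'] ?? '');
          }

          $email = htmlspecialchars($_POST['email'] ?? '');
          $message = htmlspecialchars($_POST['message'] ?? '');
          ?>
          <form method="post">
              Email: <input type="email" name="email" value="<?= $email ?>">
              Message: <textarea name="message"><?= $message ?></textarea>
              <button type="submit">Send email</button>
          </form>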
  22. https://github.com/concrete5/concrete5/issues/7671 Apparently a fix was attempted, but it's still goofed up. You could file another bug report on it if you wanted.
  23. You had RewriteEngine On, just inside your directory section instead of the VirtualHost. I didn't think it would have mattered, so I didn't mention it. Not sure if you're saying using Let's Encrypt made you change or not, but I use Let's Encrypt and certbot just fine with the simple redirect.
  24. Your RewriteCond is unnecessary. %{SERVER_NAME} resolves to whatever the ServerName directive is, so that condition is always true. %{HTTP_HOST} would be the name the request uses; unless this is your default vhost, that should also always be equal to test.example.com, as you have no ServerAliases. If you just want every non-https request to go to https, there is a simpler way:

          <VirtualHost *:80>
              ServerName test.example.com
              Redirect permanent / https://test.example.com
          </VirtualHost>

      This is what I use for pretty much all my sites.
  25. Authorizing the servers involves setting up SPF records on the bidsoliciationapp.com domain, which is something you should be able to do yourself. Google has a help article detailing the configuration you need. Setting up the SPF record may or may not make a difference. Spam filters are free to interpret an SPF lookup result with whatever weight they want compared to other factors, but having a positive response can only help. Know that if you send email from places other than Google using the same domain, you'll need to add those server addresses to your SPF record in addition to Google's, otherwise they will become unauthorized and more likely to get rejected as spam. Beyond that, as Psycho suggests, the best thing you can do is just make sure the content of your messages doesn't look like it might be spam. Avoid words/phrases commonly associated with spam if possible. Fun anecdote: GoDaddy's email servers used to flat-out reject all my emails as spam. I had a link to my personal website in my signature, and at the time I was hosting it on a home server. Their spam filter scanned the links in the email, resolved the domains, and checked the IPs against IP blacklists. Since it was a home server, the IP of my domain was in a "Residential ISP" blacklist. Apparently that single link was enough for their system to refuse delivery. I removed the link and suddenly my stuff went through no problem.
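      For reference, a minimal sketch of a DNS TXT record that authorizes Google's servers; the include value is Google's published SPF include, and ~all soft-fails everything else:

          bidsoliciationapp.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"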