Everything posted by kicken
-
Why do you want to try to generate a shared sequence? Why not just use random names and/or give each user their own separate sequence?
-
If you're using MySQL 8.0.19 or newer you can generate your temporary table inline like so:

select userList.flower
from (values row('rose'), row('carnation')) userList (flower)
left join pretty_flowers on pretty_flowers.flower=userList.flower
where pretty_flowers.flower is null

See example.
-
You should specify the full path to your php file in the cron task. Relative paths are not useful as you don't know what the base directory will be when the task is executed. Rather than adding/removing tasks from your cron file via code, manually configure a task to run at a specific interval and then have that task's code check for work either in a DB or in a file somewhere. Then have your code add/remove tasks by updating that DB/file.
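As a rough illustration of that setup, here is a minimal sketch; the script path, table name, and column names below are made up for the example, not anything from your project:

<?php
// process_queue.php - run by a crontab entry that uses full paths, e.g.:
//   */5 * * * * /usr/bin/php /var/www/app/cron/process_queue.php
// The task itself just checks for pending work each time it runs.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pending = $pdo->query("SELECT id, payload FROM job_queue WHERE status = 'pending' ORDER BY id");
foreach ($pending as $job){
    // ...do whatever work $job['payload'] describes...
    $pdo->prepare("UPDATE job_queue SET status = 'done' WHERE id = ?")
        ->execute([$job['id']]);
}

// Your application code then "schedules" work by inserting rows into job_queue
// instead of editing the cron file.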
-
I usually define functions that correspond to the types/blocks in the defined file format and use those to read it piece by piece. For this, I created some readDWORD, readWORD, and readGUID functions. The GUID took a bit to figure out, and I'm still not 100% sure it's correct, but it makes some sense and matches your example.

<?php
$file = 'test.utx';
$fp = fopen($file, 'rb');

$sig = readDWORD($fp);
if ($sig !== 0x9E2A83C1){
    die('Invalid file format');
} else {
    echo 'Valid file.' . PHP_EOL;
}

echo 'Version: ' . ($version = readWORD($fp)) . PHP_EOL;
echo 'License mode: ' . readWORD($fp) . PHP_EOL;
echo 'Package flags: ' . readDWORD($fp) . PHP_EOL;
echo 'Name count: ' . readDWORD($fp) . PHP_EOL;
echo 'Name offset: ' . readDWORD($fp) . PHP_EOL;
echo 'Export count: ' . readDWORD($fp) . PHP_EOL;
echo 'Export offset: ' . readDWORD($fp) . PHP_EOL;
echo 'Import count: ' . readDWORD($fp) . PHP_EOL;
echo 'Import offset: ' . readDWORD($fp) . PHP_EOL;
if ($version >= 68){
    echo 'GUID: ' . readGUID($fp) . PHP_EOL;
}

function readDWORD($fp) : int {
    return read($fp, 4, 'V');
}

function readWORD($fp) : int {
    return read($fp, 2, 'v');
}

function readGUID($fp) : string {
    $time_low = readDWORD($fp);
    $time_mid = readWORD($fp);
    $time_high_and_version = readWORD($fp);
    $clk_seq_hi_res = read($fp, 1, 'C');
    $clk_seq_low = read($fp, 1, 'C');
    $node = fread($fp, 6);

    return sprintf('%s-%s-%s-%s%s-%s'
        , bin2hex(pack('N', $time_low))
        , bin2hex(pack('n', $time_mid))
        , bin2hex(pack('n', $time_high_and_version))
        , bin2hex(pack('C', $clk_seq_hi_res))
        , bin2hex(pack('C', $clk_seq_low))
        , bin2hex($node)
    );
}

function read($fp, int $length, string $code){
    $bytes = fread($fp, $length);
    $parsed = unpack($code . 'parsed', $bytes);
    return $parsed['parsed'];
}

The file specification you linked says the file is encoded in little endian. The UUID RFC says the string representation is big endian, so we read the multi-byte components as little endian and then convert them to big endian for display.

Output:

Valid file.
Version: 127
License mode: 29
Package flags: 33
Name count: 31
Name offset: 72
Export count: 7
Export offset: 753
Import count: 6
Import offset: 711
GUID: e484d857-00b7-4107-a58a-36ff29f6a3a5
-
Cron is fine, you just have to use some care when implementing it. You have to devise some way of passing information to your cron job, for example by writing things out to a DB table which the cron job then checks when it runs. There will be some amount of delay between when you add the task to the queue and when the cron job next runs to process it. Running the job every minute to check for work would minimize that delay. If you're running your job every minute, you need to make sure that while one instance is processing a task, later instances do not attempt to pick up and process the same task. You'll probably also want some way to re-try the task in case the instance processing it crashes and does not complete it. Make sure you use the PHP CLI interface to run your job; it can be configured with different limits, and max execution time is disabled by default for the CLI.
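To sketch the "don't process the same task twice" part, assuming a hypothetical job_queue table with a status column (all names here are placeholders, and SKIP LOCKED needs MySQL 8.0 or newer), one common pattern is to claim a row inside a transaction before working on it:

<?php
// Assumes $pdo is an existing PDO connection.
// Claim one pending job so a second cron instance can't grab the same one.
$pdo->beginTransaction();
$job = $pdo->query("
    SELECT id, payload
    FROM job_queue
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
")->fetch();

if ($job){
    $pdo->prepare("UPDATE job_queue SET status = 'processing', started_at = NOW() WHERE id = ?")
        ->execute([$job['id']]);
}
$pdo->commit();

if ($job){
    // ...do the actual work here...
    $pdo->prepare("UPDATE job_queue SET status = 'done' WHERE id = ?")->execute([$job['id']]);
}

// For the re-try case, a separate query could reset rows that have been
// stuck in 'processing' for longer than some reasonable amount of time.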
-
That's a symptom of session locking. You can work around it by having your long-running process call session_write_close before it starts. Assuming your long delay is an actual problem and not just a symptom of poor code, there are a variety of ways to handle moving the task to a background process. Cron is a common and generally easy way to handle it. Another way is to use a job queue / message queue and separate worker services. Examples would be gearman, beanstalkd, rabbitmq, zeromq, and others.
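For the session_write_close approach, a minimal sketch (startLongRunningTask is just a stand-in for whatever your slow code actually is):

<?php
session_start();

// Read anything you need from the session first.
$userId = $_SESSION['user_id'] ?? null;

// Release the session lock so other requests from the same user aren't blocked.
session_write_close();

// The slow work can now run without holding the lock. Note that any $_SESSION
// writes after this point won't be saved unless you start the session again.
startLongRunningTask($userId); // placeholder for the long-running work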
-
How to send html formatted email with attachment
kicken replied to wezken's topic in PHP Coding Help
Don't try and do this manually; use a library such as PHPMailer or Symfony Mailer. There are a lot of specific formatting rules you need to follow to build proper MIME emails, and it's not worth the time and effort to try and figure it all out yourself when libraries such as those can handle it all for you with an easy to use interface. For example, with Symfony Mailer you would have code like this (using SMTP):

$dsn = 'tls://smtp.example.com:587/';
$mailer = new Mailer((new EsmtpTransportFactory())->create($dsn));

//Create a new email object.
$email = new \Symfony\Component\Mime\Email();
$email->from($from); //Set the from address
//If $from is not an address you control, set the return path to one you do.
//$email->returnPath('system@example.com');
$email->to($to); //Set recipient.
if ($cc){
    $email->cc($cc); //If you need to CC someone
}
if ($bcc){
    $email->bcc($bcc); //If you need to BCC someone.
}

$email->subject($subject); //Subject line
$email->html($message); //HTML Body
//$email->text($message); //Plain-text alternative if you have one, or don't want HTML.

if ($attachment['error'] === UPLOAD_ERR_OK){
    //Attach uploaded file.
    $email->attachFromPath($attachment['tmp_name'], $attachment['name'], $attachment['type']);
}

//Send email
$mailer->send($email);
-
problem with hacker, i need a way to encrypt my gmail or something
kicken replied to alexandre's topic in Miscellaneous
If you think your PC might be compromised in some way, the best thing to do is to wipe your hard drives and re-install your OS and other programs from scratch. Don't bother with trying to find and fix the infection. -
It's unclear to me whether you'd modify that or add a new type since your Permission vs PermissionEnum is confusing to me. I've not used Enums personally yet, so I was unsure if Doctrine had generic support for them or not. From that bug report, it looks like it does and you might just need to downgrade for a bit and wait for the fix to get released.
-
As far as I am aware, no. If you want the numeric value, you access the ->value property. I believe what you'll have to do is change your Doctrine configuration to not use a generic small int type and instead use the specific PermissionEnum type.
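If it helps, here is a rough sketch of what that might look like with a PHP 8.1 backed enum, assuming a Doctrine ORM version that supports the enumType option on column mappings; treat the exact attribute usage and the case names as assumptions for illustration, not something taken from your project:

<?php
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

enum PermissionEnum: int
{
    case Read = 1;
    case Write = 2;
    case Admin = 4;
}

#[ORM\Entity]
class Permission
{
    #[ORM\Id, ORM\GeneratedValue, ORM\Column]
    private ?int $id = null;

    // Stored as a small int in the database, hydrated back to a PermissionEnum case.
    #[ORM\Column(type: Types::SMALLINT, enumType: PermissionEnum::class)]
    private PermissionEnum $level;

    public function getLevel(): PermissionEnum
    {
        return $this->level;
    }
}

// When you need the raw number somewhere, use the ->value property:
// $permission->getLevel()->value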
-
Because you're asking for the number of rows, not the actual data. You want to fetch the row data and use that. Some searching suggests that you'd do that using this code:

return $query->row()->total_paid_amount;

Also, if you're trying to get a sum of all payments, you probably don't want to be using GROUP BY, and you should remove p_id from your select list.
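For example, a rough sketch of that adjusted query, assuming CodeIgniter's $this->db and guessing at the table/column names (payments, amount, and user_id are placeholders for whatever yours actually are):

// Inside a CodeIgniter model/controller; SUM over all matching rows, no GROUP BY needed.
$query = $this->db->query(
    "SELECT SUM(amount) AS total_paid_amount FROM payments WHERE user_id = ?",
    [$user_id]
);
return $query->row()->total_paid_amount;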
-
Making Composer provide untagged git file
kicken replied to NotionCommotion's topic in PHP Coding Help
You can tell composer to pull a specific commit by adding #hash after the version.

"api-platform/core": "3.0.x-dev#14c7cba"

Normally just doing 3.0.x-dev would pull the latest version, but I think because both the main branch and the 3.0 branch use this alias it causes an issue and for some reason composer prefers the main branch.
-
For what it's worth, and maybe it'll help with your conception of this being a "problem", the database won't do some complex re-index operation with each insert. The database maintains the index in a particular sorted structure and just inserts the new row data into that structure where it needs to go. I have a system set up that is pretty much exactly as requinix is describing. When a user visits a particular discussion, I insert a row recording the time they visited that discussion. The table is simple: just the discussion id, login id, and the time, with an index across the three columns.

CREATE TABLE discussion_view_history (
    DiscussionId int NOT NULL,
    LoginId int NOT NULL,
    ViewedOn datetime NOT NULL,
    index IX_last_viewed (DiscussionId, LoginId, ViewedOn)
)

To obtain a list of discussions with a count of total and unread posts, the query looks a little something like this:

select d.Id
    , d.Title
    , COUNT(p.Id) as totalPosts
    , sum(case when vh.lastViewedOn is null or p.PostedOn > vh.lastViewedOn then 1 else 0 end) as totalUnread
from discussion d
inner join forum f on f.DiscussionId=d.Id
inner join discussion_post p on p.DiscussionId=d.Id
left join (
    select DiscussionId, LoginId, MAX(ViewedOn) as lastViewedOn
    from discussion_view_history
    group by DiscussionId, LoginId
) vh on vh.DiscussionId=d.Id and vh.LoginId=$viewerLoginId
where f.Id=$forumId
group by d.Id, d.Title

For some context that might help alleviate that fear as well: my discussion table has 250,050 rows, discussion_view_history has 2,104,958 rows, and discussion_post has 4,226,291 rows, and that query returns results in 44 milliseconds. Focus on making well structured tables with appropriate indexes and good queries. Don't worry about how big your tables or indexes might end up being.
-
Configure PHP Mail to access a server on a different host
kicken replied to Mikheil's topic in Third Party Scripts
Make sure your mail server accepts email via SMTP and then configure phpBB to use SMTP and enter in the details for your server. -
PHP shell_exec uci not working but without errors
kicken replied to stealthrt's topic in PHP Coding Help
You don't have any new lines in your command string, so when shell_exec passes it to the shell it'll be as if you'd typed it all out on a single line in your shell, e.g.:

blah@example:$: uci add firewall rule uci set firewall.@rule[-1].name='my iphone' ... service firewall restart

Presumably that'd just result in some error as it's not really valid. Add some new lines or semicolons to your command string to separate the commands and try again. If it still doesn't work, make an array of individual commands and run them one by one in a loop.
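If you go the loop route, a minimal sketch of that might look like the following (the command list just reuses your example commands; the elided ones stay elided):

<?php
// Run each command as its own shell invocation.
$commands = [
    "uci add firewall rule",
    "uci set firewall.@rule[-1].name='my iphone'",
    // ...the rest of your uci commands...
    "service firewall restart",
];

foreach ($commands as $command){
    // Redirect stderr so any error messages are captured as well.
    $output = shell_exec($command . ' 2>&1');
    echo $command . PHP_EOL . $output . PHP_EOL;
}
-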
<label> is an inline element. Inline elements don't have a specific width; they just take up whatever space is necessary. If you want to assign a width to them, you need to change the display to either block or inline-block.
-
I'd probably just do that if it were me. Regenerating the user on every request isn't really going to be an issue I think, and might be necessary anyway more often than you'd think. I think what this error means is that you're trying to persist the App\Security\TokenUser object, but that object does not correspond to any known entity. Likewise for the other class. I've never used this library so I really don't know anything about it. My initial comment was based on a quick look at the class, where it seemed like it just extracted a string value from a user object. Doing a little more googling, I'm thinking maybe Doctrine's reference proxies might be a solution for you. Since I don't know how things are structured, I can't really provide a good example, but wherever you were passing a fully hydrated user before, you'd change to just passing a reference, something like:

$reference = $em->getReference(User::class, $jwt->getUserIdentifier());
$blamable->setUser($reference);
-
The BlameableListener only wants the username / identifier. If you store that in your JWT, then just pass it along. There's no need to re-generate the full user object just to pass it into the listener.
-
Yea, I usually just do something like this:

$storageDir = '/wherever/you/want';
do {
    $name = bin2hex(random_bytes(16));
} while (file_exists($storageDir.'/'.$name));

The loop probably isn't really necessary; the chance of a collision is statistically insignificant I think, but I throw it in anyway just in case. I'll usually create a few sub-directories as well rather than literally have everything in one directory. Take the first few characters of the name and make a sub-directory based on that. For example, if the result were c6c1ce8c2bcf000daea561cbcca4a671 then I'd end up saving the file to: /wherever/you/want/c6/c1/c6c1ce8c2bcf000daea561cbcca4a671. Keeps the directory sizes more manageable just in case you need to manually browse to some file.
-
It sounds like what you're describing / debating is how to create a multi-tenant application, is that right? Essentially you have an application that is dynamically configured and has separate data for each user instance (i.e., several different wordpress blogs sharing the same code base). I'm not sure I see the need for such complexity based on what is mentioned. That just sounds like basic user management / access control to me. You have your users table and a files table and associate the files with their users. How you physically store the files on disk isn't really that relevant. Just ensure you code your editor so it can only load files the user has access to. Is there some public aspect to this where you need to isolate the user's files based on just the URL and not the user being logged in? Last time I did something like this, it was based on the host name, but a URL directory would work as well. Using the host name could have allowed the end-users to apply their own domain name to the application instead of having to use mine (though that was never actually done). Based on the info provided so far, I'd probably just be doing essentially what I think you're already considering, just without the "create separate folders for users" bit. I'd just store all the files for all users in one directory structure somewhere with random names. The database would contain the original file name/mime type and user association information. I'd use either mod_rewrite or FallbackResource in Apache to handle public access to the files and the normal login procedure for the editing/uploading bits.
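As a rough sketch of the public-access piece, assuming Apache's FallbackResource (or a mod_rewrite rule) routes unmatched URLs to a hypothetical files.php, and a files table holding the random storage name, original name, mime type, owner, and a public flag; every name here is invented for the example:

<?php
// files.php - serves /files/{publicId} style URLs after checking access.
// Assumes $pdo is an existing PDO connection from your bootstrap code.
session_start();

$publicId = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));

$stmt = $pdo->prepare("
    SELECT storage_name, original_name, mime_type, user_id, is_public
    FROM files
    WHERE public_id = ?
");
$stmt->execute([$publicId]);
$file = $stmt->fetch();

$isOwner = $file && isset($_SESSION['user_id']) && $_SESSION['user_id'] == $file['user_id'];
if (!$file || (!$file['is_public'] && !$isOwner)){
    http_response_code(404);
    exit;
}

// Random names stored under sub-directories built from the first characters of the name.
$path = '/wherever/you/want/'
    . substr($file['storage_name'], 0, 2) . '/'
    . substr($file['storage_name'], 2, 2) . '/'
    . $file['storage_name'];

header('Content-Type: ' . $file['mime_type']);
header('Content-Disposition: inline; filename="' . $file['original_name'] . '"');
readfile($path);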
-
This can be a frequent issue, which is why it's important to test on multiple browsers. You can also reference caniuse.com to get an idea of what is supported on which browser versions. If you look up position: sticky there, for example, it says that for Chrome up to 90 (which is your version) it's only partially supported, per the note there. Modern Chrome fully supports it.
-
Not entirely sure what you're asking about; however, it sounds like you're wondering about the variable names? The names you assign in the function are irrelevant to the names (if any) of what you end up passing in when you call the function.

function choose($x, $z){
    //$x is whatever value is passed in first
    //$z is whatever value is passed in second.
}

You can pass in whatever variables you want when you call it, or just some static value and not a variable at all. They don't have to have the same name (though it's common that they do since they usually represent some type of data).

$a = 1;
$b = 3;
choose($a, $b); // $x = 1, $z = 3
choose(6, $a);  // $x = 6, $z = 1

Any changes you make to $x and $z inside the function will not affect the original variables passed into it. Generally that is the desired behavior, and you just return some value from the function as its output. If you have a specific need to change that behavior and want the original variables to be modified, then you must indicate that by declaring the parameter as a reference parameter using &.

function accumulate(&$x, $y){
    $x += $y;
}

$a = 1;
accumulate($a, 2);
echo '$a = '.$a; // $a = 3
accumulate($a, $a);
echo '$a = '.$a; // $a = 6
accumulate($a, 5);
echo '$a = '.$a; // $a = 11

Don't use references unless you really have a specific need to. They can get messy and cause you debugging headaches, just like globals. If you're wondering about the discrete variables vs array thing, the difference is that instead of defining a bunch of variables as parameters, you just define one which receives an array of values. So instead of this:

function doStuff($colA, $colB, $colC, $colD, $colE, $colF){
    //Do stuff with $colA, $colB, $colC, $colD, $colE, $colF
}

$row = $db->query($sql)->fetch();
doStuff($row['colA'], $row['colB'], $row['colC'], $row['colD'], $row['colE'], $row['colF']);

You'd do this:

function doStuff($row){
    //Do stuff with $row['colA'], $row['colB'], $row['colC'], $row['colD'], $row['colE'], $row['colF']
}

$row = $db->query($sql)->fetch();
doStuff($row);

It keeps the argument list smaller, which is good, but more importantly it lets the function be responsible for accessing the data it needs rather than you having to hand it out individually. If in the future you need to add something to doStuff that needs access to colZ, you won't have to go around your application updating all the function calls to include colZ.
-
Php cookies and session data expiring at different times
kicken replied to oz11's topic in PHP Coding Help
Your code can be simplified in many ways. See the comments in the refactored code below.

<?php
function setRememberMeToken($pdo, $user_id) {
    //$length wasn't a great name and is an unnecessary variable.
    $token = bin2hex(random_bytes(25));

    $expirationDate = time() + (86400 * 7); // <-- 7 days later (make sure your comments are accurate)
    setcookie("token", $token, $expirationDate, "/");
    $_COOKIE["token"] = $token;

    //$_COOKIE['remember'] is unnecessary, just get rid of it
    //--deleted

    //You calculated your expiration timestamp above already, no need to do it again.
    $to = date('Y-m-d', $expirationDate);

    //Assuming token_id is an auto increment column, you can just omit it from the insert.
    $sql = "INSERT INTO `user_token` (`user_id`, `expires`, `tokenHash`) VALUES (?, ?, ?);";
    $stmt = $pdo->prepare($sql);
    $stmt->execute([$user_id, $to, sha1($token)]);
}

function getRememberMeCheck($pdo) {
    //I find spacing out your queries makes them easier to read and understand.
    $stmt = $pdo->prepare("
        SELECT users.name, users.user_id
        FROM user_token, users
        WHERE tokenHash = ?
            AND expires > NOW()
            AND users.user_id = user_token.user_id
    ");
    $stmt->execute([sha1($_COOKIE["token"])]);
    $db_query = $stmt->fetch();

    //Your token and expiration date are validated as part of the query.
    //All you need to do is check if you got a result or not.
    if (!$db_query){
        //If you didn't get a result, either the token is invalid or it has expired.
        //header("location: login.php");
        return false;
    }

    //Otherwise, if you did get a result, the token is valid.
    $_SESSION["loggedin"] = true;
    $_SESSION["username"] = $db_query['name'];
    $_SESSION['the_usr_id'] = $db_query['user_id'];
    return true;
}

//This method seems to just be a copy of the method above, why does it exist?
//The only difference is $_SESSION["loggedin"] = true; which you could just do above.
//function setSessionVarables($pdo) {
//...
//}
//--deleted

function isRemembered() {
    //Instead of a separate remember cookie, just check if the token cookie exists.
    //if ($whatever){ return true; } else { return false; } can be simplified to just return $whatever;
    return isset($_COOKIE['token']);
}
-
If you're using the syntax dbconnect::query then there is no need to either pass in your dbconnect instance or extend that class, because you're using the class like a singleton. In that scenario you can just remove the constructor from your secondClass class. Your overall code would look something like this:

class dbconnect {
    private static $mysqli_handler;

    //Private constructor to prevent new dbconnect()
    private function __construct(){
    }

    private static function connect(){
        //Reuse the existing connection if it exists.
        //Use self::$var to reference static variables.
        if (self::$mysqli_handler){
            return self::$mysqli_handler;
        }

        //Otherwise connect.
        try {
            mysqli_report(MYSQLI_REPORT_STRICT);
            self::$mysqli_handler = mysqli_connect(DB_HOSTNAME, DB_USERNAME, DB_PASSWORD, DB_DBNAME);
        } catch (mysqli_sql_exception $e) {
            throw new Exception('Error: Could not make a database link using ' . DB_USERNAME . '@' . DB_HOSTNAME . '!');
        }

        //Other stuff

        return self::$mysqli_handler;
    }

    public static function query($query){
        return self::connect()->query($query);
    }
}

class secondClass {
    public function selectDetails($id){
        $sql = 'some sql here';
        return dbconnect::query($sql)->rows;
    }
}

$sc = new secondClass();
$sc->selectDetails();

This type of code is not ideal, as your secondClass is now directly linked with your dbconnect class, but it's fine for small/simple projects. Passing in your connection and updating the code to reference the provided connection is better, as it allows you to provide alternative connections (such as a mock connection for testing).