Everything posted by DeX

  1. I'm not sure if I phrased that question properly, so I'll give an example: I have a User class and I have a Car class. I have users on a website and they own cars. So if I want to get all users from the database, I use the User class to do that fetch, and the same if I want to get all cars from the database. My question is about when I want to get all cars owned by a specific person. Should I do something like this?
     $user = new User(123);
     $carsOwnedByUser = $user->getCars();
     Or should I do something like this?
     $carsOwnedByUser = \CarNameSpace\getCars(123);
     In the first one I have a function in the User class to get all of their cars, and in the second one I have a function in the Car class that gets cars when you pass in a user ID. Is one more proper than the other? Each one will return an array of Car objects. Thanks a lot.
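     A minimal sketch of the first approach, assuming a PDO connection and a cars table with a user_id column; every name here (tables, columns, the Car constructor) is a hypothetical stand-in:

     <?php
     class Car {
         private $row;
         public function __construct(array $row) { $this->row = $row; }
     }

     class User {
         private $db;
         private $id;

         public function __construct(PDO $db, $id) {
             $this->db = $db;
             $this->id = $id;
         }

         // returns an array of Car objects owned by this user
         public function getCars() {
             $stmt = $this->db->prepare('SELECT * FROM cars WHERE user_id = ?');
             $stmt->execute(array($this->id));
             $cars = array();
             foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                 $cars[] = new Car($row); // hypothetical row-based constructor
             }
             return $cars;
         }
     }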
  2. I have models for all of my database objects. For instance, I have a User class, and in that class I have functions like so:
     getFirstName()
     getLastName()
     getAddress()
     And of course I have setters for each as well. My question is whether I should also have some helper functions inside this model, for something like getFullName(), which would return a concatenation of getFirstName() and getLastName(), or whether we should simply concatenate the data every time we want to display it on each page. We will be displaying the user's name at the top of each page, in their account profile, and maybe some other places in this portal-type system. Is it proper object-oriented programming to use helper functions like this? Should they go into the controller? Should we create separate logic namespaces for each model to control this logic? Another example would be if I wanted to get the most recent user that signed up; where would this function go? In the user model? In the controller? Thanks.
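     A sketch of the helper living in the model, assuming the accessors above; the idea is that getFullName() derives its value purely from fields the model already owns, so every page formats the name the same way:

     <?php
     class User {
         private $firstName;
         private $lastName;

         public function getFirstName() { return $this->firstName; }
         public function getLastName()  { return $this->lastName; }

         // derived value: built only from this model's own fields,
         // so callers never repeat the concatenation logic
         public function getFullName() {
             return trim($this->getFirstName() . ' ' . $this->getLastName());
         }
     }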
  3. Sorry, I should have included that. We will be running reports based on which quotes meet certain criteria:
     - how many are over a certain length
     - how many have item X selected
     - how many of item X we sold
     These are all based on user-entered inputs and wrapped up in the JSON. The reports are generated whenever anyone loads the reports page to view them; they're not run on a schedule like nightly.
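     A sketch of one such report count, assuming MySQL 5.7+ for the JSON functions; the quotes table, its data column, and the $.length key are all assumed names:

     <?php
     // placeholder DSN and credentials
     $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

     // count quotes whose saved JSON says they are over a certain length
     $stmt = $pdo->prepare(
         "SELECT COUNT(*) FROM quotes
          WHERE JSON_EXTRACT(data, '$.length') > :minLength"
     );
     $stmt->execute(array(':minLength' => 40));
     $overLength = $stmt->fetchColumn();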
  4. The inputs are related to quotes being sent to customers, so oftentimes a customer will want to modify their quote, in which case the user will open an old one, load all of the input values from the database, change some, and re-save it with the new inputs. The re-save does not modify the old table row; it simply creates a new row under the same account.
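     A sketch of that revision-style save, assuming a PDO connection and hypothetical table and column names:

     <?php
     $accountId = 123;                        // hypothetical account
     $inputs    = array('length' => 40);      // the ~100 user inputs

     // every save is an INSERT, never an UPDATE, so old quotes stay intact
     $stmt = $pdo->prepare(
         'INSERT INTO quotes (account_id, data, created_at)
          VALUES (:account, :json, NOW())'
     );
     $stmt->execute(array(
         ':account' => $accountId,            // same account as the old quote
         ':json'    => json_encode($inputs),  // the possibly edited inputs
     ));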
  5. I have a system where users enter about 100 inputs; not all are required. We capture every input when the user clicks save, and they have the ability to look back at any save they have and edit the inputs to re-save. Currently we serialize the inputs, wrap them up in a JSON string, and save it into a column in the MySQL table. I'm wondering if there would be a speed / storage advantage to specifying a column for each input and storing each value in its own column. If you're wondering how often the inputs change, it's infrequent: we might add / remove / modify about 20 inputs a year, probably fewer. If you're wondering how often saves are done, it's about 8 users doing 40 saves each, per day.
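     If the question is mostly about report speed, a possible middle ground (assuming MySQL 5.7+) is to keep the JSON blob and expose only the fields the reports filter on as indexed generated columns; all names below are hypothetical:

     <?php
     $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // placeholder

     // derive an indexed column from the JSON so report queries can use it,
     // without giving all ~100 inputs their own columns
     $pdo->exec(
         "ALTER TABLE quotes
            ADD COLUMN quote_length INT UNSIGNED
              GENERATED ALWAYS AS (CAST(JSON_EXTRACT(data, '$.length') AS UNSIGNED)) STORED,
            ADD INDEX idx_quote_length (quote_length)"
     );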
  6. Every item has a corresponding quantity calculation that's input manually by the developer; this calculation never changes. Then, when a final price for the building is needed, it does the following:
     - get all product IDs from the database
     - loop through all the product ID quantity calculations and multiply each quantity by the price in the database (prices are updated once a month)
     - add all these totals together and multiply by a markup percentage to get our final price
     Is that all you needed?
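     That loop as a sketch, assuming the quantity calculations arrive as a product-ID-to-quantity map and a products table with id and price columns; all names are hypothetical:

     <?php
     // multiply each developer-entered quantity by the current database
     // price, sum the line totals, then apply the markup percentage
     function finalPrice(PDO $pdo, array $quantityByProductId, $markup) {
         $stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?');
         $total = 0.0;
         foreach ($quantityByProductId as $productId => $quantity) {
             $stmt->execute(array($productId));
             $total += $quantity * (float) $stmt->fetchColumn();
         }
         return $total * (1 + $markup); // e.g. 0.15 for a 15% markup
     }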
  7. There are only 400 items total in the products table, plus however many of these items there are.
  8. I have a Bill Of Materials (BOM), and we buy a building product from a third-party supplier that custom-makes it for us. They charge us a certain price based on these factors of the product:
     - width of building (in static increments: 24, 30, 36, 42, 44, 50...)
     - building type it's being used in (each type requires certain design specifications to meet code requirements)
     - snow load (depending on where in the country the building will be, we need to make it more durable if there will be a higher snow load on the building)
     Based on all of these, they build the product for us and charge us accordingly. I'm wondering if I need to have 400 items in my BOM to account for each combination of width, building type and snow load for this one product. Is this normal practice? The price differences don't have a pattern and don't overlap in any way; the pricing is set independently for each combination. It's not like the price goes up $30 for each width; the pricing gaps are all different for each setting.
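     One alternative to 400 BOM line items, sketched with hypothetical names: a single BOM item plus a price matrix keyed by the three factors, with one row per combination since the prices follow no pattern:

     <?php
     // hypothetical matrix table, one row per (width, type, snow load) combination:
     //   CREATE TABLE product_prices (
     //     width INT, building_type VARCHAR(32), snow_load INT,
     //     price DECIMAL(10,2),
     //     PRIMARY KEY (width, building_type, snow_load)
     //   );
     $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // placeholder
     $stmt = $pdo->prepare(
         'SELECT price FROM product_prices
           WHERE width = ? AND building_type = ? AND snow_load = ?'
     );
     $stmt->execute(array(30, 'storage', 50)); // placeholder combination
     $price = $stmt->fetchColumn();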
  9. Never mind. I was trying to run the script on the wrong server, it appears it was working the whole time on the proper server.
  10. bitnami@ip-172-26-***.***:~$ echo "subjectt" | sudo -u daemon mail -s "bodyy" d*****@******.com
     /home/bitnami/Maildir/sent: Permission denied
     Failed to save message in "/home/bitnami/Maildir/sent" - message not sent
  11. Is this correct? Daemon is the web user for Bitnami Apache.
     bitnami@ip-172-26-***-***:~$ sudo -u daemon echo "subjectt" | mail -s "bodyy" d*****@*****.com
  12. How can I do that if the command is being run through PHP and Apache?
  13. I just set up my first mail server ever using Postfix, and when I run
     echo testing | mail -s subjecttttt da***@*********.com
     it sends fine and I receive the email at that address. The problem is that when I create a PHP page on the server and write
     mail("da***@*****.com", "test", "test");
     then run the page in my browser, it doesn't send the email. From what I read online, it's because the root user on the command line has permission to send mail while the www-data user in the browser does not. Is there a way I can specify which user Postfix uses? Any other suggestions? Thanks a lot.
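     One sketch worth trying: mail()'s fifth argument passes extra flags to the sendmail binary, and -f sets the envelope sender explicitly instead of letting it default to www-data@hostname, which some Postfix setups reject; the addresses below are placeholders:

     <?php
     $ok = mail(
         'da***@*****.com',            // recipient (placeholder)
         'test',                       // subject
         'test',                       // body
         'From: www@yourdomain.com',   // header From: (placeholder address)
         '-f www@yourdomain.com'       // envelope sender handed to sendmail
     );
     // true only means PHP handed the message to sendmail, not that it was
     // delivered; check /var/log/mail.log for what Postfix actually did
     var_dump($ok);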
  14. I actually figured it out after 3 weeks of frustration. I disabled a NAT policy on my firewall for outbound requests and it solved everything. The script works great now and I can ping without issue.
  15. More discoveries. I remembered I still have the old virtual machine we were previously using as our web server; I have since replaced it with a new virtual machine on the same VMware server. Both virtual servers are almost identical; the only difference is the new one has more disk space. I spun up the old server, copied the test script to it, and it runs flawlessly every time. I'm also able to ping out of that server no problem, so it's some setting in the new server which is preventing me from making outbound requests. I'm going to continue my search there and not mess with any firewall settings.
  16. I get it, so I need to forward ports 80 and 443 from the ISP 2 static IP as well. Is that what you're saying? I thought a ping would always come through but I guess that makes sense. Something else I'm exploring is the fact that I can't really ping anything from this server which leads me to think it's something with the DNS or gateway not being set properly. That way inbound traffic works fine but outbound not so much. I'm just trying to figure out what I would set the gateway, DNS and subnet mask to right now to see if that's the issue. I'm not sure if it's the settings I got from my ISP or if it's an internal default for the firewall.
  17. It sounds like you're referring to incoming connections, is that right? Everything incoming is perfect because the domain points to the ISP 1 static IP and the router (in the firewall) forwards all port 443 and 80 requests to that specific server internally. Perfect. The issue is with the outbound requests: I can't ping anything from this server, and the Google API requests only succeed if the packet goes out through X1 on the firewall (ISP 1). I have the load balancing set to Round Robin, which means packets get sent out randomly through whichever ISP line happens to pick them up, and that seems to affect whether or not these API requests succeed. I don't know why. I feel like setting up a whole new Ubuntu server to see if that can ping. Very important: I just discovered something else. I copied the script to a separate EC2 server I have with Amazon, which is completely unrelated to our network here at the office, and the script works 100% of the time on that server. It works perfectly; I just need to figure out why it is not working from my office server.
  18. New discoveries. The /etc/network/interfaces file is unmodified, which makes me think I never set any sort of static IP for this server; I just set port forwarding for port 80 on the firewall to go to this machine. It is in the DMZ, which led me to believe it may not have permission to access the WAN, but I ran the same ping test (ping google.ca) on a Windows machine in the DMZ and it returned all packets without problem. When I run the ping test on the web server (the one in question), it loses all packets. I can't ping anything from the web server. This is weird. And just to clarify, this is our production web server; it's working inside and outside the network perfectly otherwise.
  19. Our IP is static: we have a static IP from ISP 1 and a different static IP from ISP 2. I only have the static IP from ISP 1 set on the server, along with the ISP 1 specific nameservers. The script connects 100% of the time when only on ISP 1 and 0% of the time when only on ISP 2. Can I change the DNS and still get access to the internet? I thought I had to use the DNS provided by my ISP. The server is running Ubuntu 16.04 and /etc/resolv.conf contains the following, with numbers instead of *:
     # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
     # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
     nameserver 64.*.135.*
     nameserver 207.*.226.*
     nameserver 64.*.128.*
     I'll have to run that last test for you outside work hours.
  20. We currently have 2 separate ISPs providing us internet at our office, and I set up our firewall to load-balance the traffic across both networks. We have a production server behind the firewall, and it's set to a static IP given to us by ISP 1. I set up a PHP script on the server to connect to our Google Drive through their API, and it works great 50% of the time. The other 50% it times out, so I tried taking ISP 2 out of the load balancer and having the company run completely on ISP 1; this caused the script to work 100% of the time and complete in under 1 second. When I remove ISP 1 and have the company running on just ISP 2, the script fails 100% of the time with the same timeout, saying it couldn't resolve the hostname of the Google Drive server. While running on either network, our internet works just fine, no problems at all. I run speed tests and they're perfect: we're supposed to get 10 down from one ISP and 20 down from the other, but the tests show around 28 down at each terminal. How do I set the server to always make its outbound API requests on ISP 1? Can I add a second static IP for ISP 2, and will that work? I don't know what to do next, thanks. API error:
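     One application-layer option, as a sketch: if the server has a local address that routes out via ISP 1, cURL can bind an individual request to it with CURLOPT_INTERFACE (note this pins the TCP connection, not the DNS lookup, so it won't fix nameserver problems by itself); the URL and IP below are placeholders:

     <?php
     $ch = curl_init('https://www.googleapis.com/drive/v3/files');
     curl_setopt($ch, CURLOPT_INTERFACE, '192.168.1.50'); // local IP routed over ISP 1 (placeholder)
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
     curl_setopt($ch, CURLOPT_TIMEOUT, 10);
     $response = curl_exec($ch);
     if ($response === false) {
         echo 'cURL error: ' . curl_error($ch);
     }
     curl_close($ch);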
  21. I got it, though it's incredibly slow and sometimes times out. I'll leave it here because I know others will be looking for this in the future.
     0. Install the Google API client library using Composer; this is quite easy with some online tutorials.
     1. Create a service account inside the Google Administrator interface. There are lots of other resources on this and it's easy.
     2. Go to admin.google.com, where you manage API client access. Enter your service account client ID in the first box and "https://www.googleapis.com/auth/drive" in the second box. This is for full Drive access; you can also restrict the access by using a different scope.
     3. Then you run your code. This is what I used:

     <?php
     include_once __DIR__ . '/vendor/autoload.php';

     // location of the credentials file downloaded from Google
     // (you manually copy this file to your server)
     $credentialsFile = '/var/client-service.json';

     // fail if our configuration file does not exist
     if (!file_exists($credentialsFile)) {
         throw new RuntimeException('Service account credentials not found!');
     }

     putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $credentialsFile);

     $client = new Google_Client();
     $client->useApplicationDefaultCredentials();

     // add the full Drive scope; other options are read-only or metadata
     $client->addScope('https://www.googleapis.com/auth/drive');
     $client->setAccessType('offline');

     // select the user you want to impersonate
     $client->setSubject('your_user@email.com');

     $httpClient = $client->authorize();
     $service = new Google_Service_Drive($client);

     // specify the parameters we want to work with, including the folder;
     // xxxxxxxxxxxxxxxxxx is the ID of the folder you want to read from --
     // get it by logging into that user's Google Drive
     $optParams = array(
         'pageSize' => 10,
         'fields' => "nextPageToken, files(contentHints/thumbnail,fileExtension,iconLink,id,name,size,thumbnailLink,webContentLink,webViewLink,mimeType,parents)",
         'q' => "'xxxxxxxxxxxxxxxxxx' in parents"
     );

     // list the files in the specified folder
     $files = $service->files->listFiles($optParams);

     // for testing, print the file data so we can see it works
     foreach ($files as $file) {
         print_r($file);
     }

     // if desired, create a folder in this user's Drive
     // (the parent folder is set via 'parents' in the Drive v3 API)
     $fileMetadata = new Google_Service_Drive_DriveFile(array(
         'name' => 'Invoices3',
         'mimeType' => 'application/vnd.google-apps.folder',
         'parents' => array('xxxxxxxxxxxxxxxxxx')
     ));
     $newFile = $service->files->create($fileMetadata, array('fields' => 'id'));
     ?>

     You have to create that service account, and I believe I also created a project to link it to; these steps are in just about every tutorial I followed. What I could not find previously was how to impersonate a user so you weren't just interacting with the service account's own Drive, and this code shows how to do that. After creating your service account, it forces you to download the JSON file, and this is the client-service.json file you will need to copy to your server at the location you specify near the top. Good luck, it's working for me.
  22. Close, but that's for creating a client ID for an Installed Application. I've already gotten it working that way, but the access token keeps expiring and the user needs to log in every time. I'm trying to create a service account, which has the benefit of automatic login without user interaction. With the service account specifically, there is no place to add acceptable redirect URIs.
  23. Slight problem. I created the service account, downloaded the JSON key file and pointed my application to it. Now when I try to open my web application, I get the error "Error: redirect_uri_mismatch." This is quite a common error on user accounts where you have not set your redirect URI in the permissions for that client ID, and you can easily fix it by adding the redirect URI; however, for a service account I cannot find anywhere to add redirect URIs. Where do I add them? Should I even be getting this error on a service account?
  24. It appears I need to set up a service account. I was trying to do it on my main user account.