gizmola

Administrators
  • Content Count

    4,817
  • Joined

  • Last visited

  • Days Won

    39

gizmola last won the day on January 2

gizmola had the most liked content!

Community Reputation

152 Excellent

1 Follower

About gizmola

  • Rank
    Prolific Member

Contact Methods

  • AIM
    gizmoitus
  • Website URL
    http://www.gizmola.com/

Profile Information

  • Gender
    Male
  • Location
    Los Angeles, CA USA


  1. Integers are the most obvious and performant solution for primary keys in small to mid-size systems where you can run a single database, and MySQL makes them easy with AUTO_INCREMENT. Make sure you always define these columns as UNSIGNED so you don't waste half your key space. Assuming MySQL, in most cases the INT type is perfect, using 4 bytes and supporting up to 4,294,967,295 keys. If you have small support tables, you can save space (and increase overall performance) by using the TINYINT, SMALLINT or MEDIUMINT types. The more compact your db, the better it will perform over time (assuming proper indexing of required values). You do not need to add separate indexes for primary key columns; the primary key is itself an index. Also, make sure you use InnoDB for all your tables.
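A minimal sketch of that advice as MySQL DDL; the table and column names here are hypothetical:

```sql
-- Hypothetical schema: UNSIGNED AUTO_INCREMENT primary key,
-- a compact TINYINT key to a small lookup table, InnoDB engine.
CREATE TABLE user (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    status_id TINYINT UNSIGNED NOT NULL,  -- lookup table key: 1 byte vs 4
    username  VARCHAR(50) NOT NULL,
    PRIMARY KEY (id)                      -- implicitly indexed; no extra index needed
) ENGINE=InnoDB;
```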
  2. Pretty sure. The only people who would know for a fact are the people who already built the sites you referenced. With that said, as an example of something similar, I was the lead engineer for a project that built a social network application some years ago. It was similar in some ways to Instagram, in that users could share streams of media with friends, and was designed as a mobile application, although it also had a web client. As part of the architecture we accepted music, video and images. The subsystem that handled this accepted almost every type of video there is, then processed and transcoded it to mp4 video at various bitrates. It also extracted a number of different screenshots from each video in order to have cover images in the stream, and it accepted images and sound files. It was essentially a version of what YouTube and other streaming video sites do. The primary tool that did the complicated video transcoding was ffmpeg. Because the site backend was written in PHP, I found and enhanced a PHP component library to help with the integration and make it easier to fit into our overall backend infrastructure. The library, which has been enhanced and added to in the years since, is called php-ffmpeg. At the time there were a number of things the library didn't have that I wanted, so I forked it on GitHub and added the things I needed. Primarily it just automated the various command line settings you can send to ffmpeg and called exec(), but when you're building something that you want to be scalable, there's a lot of work that typically needs to go into making it work beyond the actual generation of transcoded video files. There were queues, organized storage of the files, and other things needed so that the individual conversion jobs could run asynchronously and be kept separate.
Without a doubt, ffmpeg was the foundation of the pipeline, but there was also a lot of work put into all the associated requirements that come along with having a business or service, as well as the DevOps work you need to make sure your system performs to an acceptable degree when you have more than one user. When people ask about frameworks, my standard answer at this point is to use either Laravel or Symfony. You could certainly use Yii, but it is not one of the PHP standard bearers at this point. I don't advise people to use Cake, CodeIgniter or Zend Framework either.
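At its core, the pattern described above (assemble an ffmpeg command line, then call exec()) can be sketched like this; the paths, codec settings and error handling are hypothetical, and a real pipeline would run this from a queue worker rather than inline in a web request:

```php
<?php
// Hypothetical transcode step: build an ffmpeg command and shell out.
$source = '/uploads/clip.mov';
$target = '/transcoded/clip.mp4';

$cmd = sprintf(
    'ffmpeg -i %s -c:v libx264 -b:v 1500k -c:a aac %s 2>&1',
    escapeshellarg($source),
    escapeshellarg($target)
);

exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    // In a real system you would requeue or flag the failed job here.
    error_log('transcode failed: ' . implode("\n", $output));
}
```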
  3. When you GROUP BY you will get one row per group in the result. I like to think of this as compacting all the rows that were part of the group into one row, with the property that summary/group operators can be used against the underlying grouping to compute things like SUM, AVG, COUNT etc. A classic example is a sales table where you GROUP BY YEAR(sales_date) and use SUM(order_total) AS total_sales to get a report of gross sales by year. If I understand your question correctly, you want to ORDER the entire result set in a way that is incompatible with the natural behavior of ORDER BY. What you would like is: ALL results ORDER BY campus_name_assignment_table.edited_timestamp DESC, except that once a particular campus_id has been displayed, you want the other campus_name_assignment_table rows for that same campus_id to follow it. This can't be done without employing some form of manipulation. The simple solution using ORDER BY is to order results using your join and ORDER BY campus_name_assignment_table.campus_id, campus_name_assignment_table.edited_timestamp DESC. This gets you the most recent campus_name_assignment rows first for any one campus_id, but the result will be ordered by campus_id first. Am I right in assuming that this is the problem you are having?
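The classic grouping example above, written out as a full query; the sales table and column names are illustrative:

```sql
-- One row per year; SUM and COUNT are computed per group.
SELECT YEAR(sales_date)  AS sales_year,
       SUM(order_total)  AS total_sales,
       COUNT(*)          AS order_count
FROM sales
GROUP BY YEAR(sales_date)
ORDER BY sales_year;
```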
  4. Does this provide the ordering you are trying to achieve?
  5. I think it's safe to assume a few things here: These sites are using a cross-platform mobile app development framework. There is quite a list of these, many of which use standard web assets (HTML, CSS and JavaScript) for their source. This is an excellent wrap-up and summary of the many cross-platform mobile development options out there, along with pros and cons for each. They have an account system that handles the customization options, storage of assets, builds, developer accounts etc. They are wrapping up and automating the process of building an app in their system, and they have canned solution code that can be integrated into the application. Can PHP be used to create a software-as-a-service site like this? Absolutely. What framework should you use? By far the two leading modern PHP frameworks are Symfony and Laravel. Either one is capable. Requirements for your server(s)? I would suggest that you build around a cloud service that will allow you to expand and contract your capacity as needed, most likely with queues and a microservice architecture so that individual tasks can be performed independently. For example, since your system would need to compile/bundle/process at various times, you probably want that to be asynchronous. The meat of your system will be built around whatever framework you choose, and the build tools they provide running in the OS. You will also need to store data for each client, so a distributed storage system like Amazon S3 would be extremely valuable for keeping track of assets, source and generated application files on a per-user basis. Without a doubt, in order to create a system like this, you will need to be an expert in the use of the cross-platform framework in order to understand how to automate its use.
  6. I use a commercial solution now mainly because I was never happy with the OSX options for Keepass once I started using a Macbook as my workstation. With that said, I stored the file in a shared dropbox folder, but really any cloud file storage service will work.
  7. In general, this is called a "diff", or "diffing". You will probably have more luck in the future when searching for tools if you reference "diff tool" or "visual diff tool".
  8. The KeePass database employs strong encryption. Assuming you used a good password, storing the db in a cloud service should not be a problem.
  9. gizmola

    super newbie here

    Hey Jim, Don't worry about your learning curve -- this community thrives on questions, and without them, it would have no reason to exist.
  10. Try platformio-ide-terminal instead. Does it work?
  11. Rather than trying to invent your own solution to stateless REST API tokens, I would suggest you take a look at JSON Web Tokens (JWT). Here are some resources to help you understand what they are: https://jwt.io/ Integrating into PHP: https://www.sitepoint.com/php-authorization-jwt-json-web-tokens/ (Don't get too caught up in the specific libraries he used.) Another PHP Article by a PHP JWT Library author: https://dev.to/robdwaller/how-to-create-a-json-web-token-using-php-3gml
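To demystify what those libraries do under the hood, here is a minimal HS256 sketch in plain PHP; the secret and claims are hypothetical, and for a real project you should use a maintained library rather than rolling your own:

```php
<?php
// Base64url encoding as used by JWTs (RFC 7519).
function base64url(string $data): string {
    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
}

// Build header.payload.signature, signed with HMAC-SHA256.
function makeJwt(array $claims, string $secret): string {
    $header  = base64url(json_encode(['alg' => 'HS256', 'typ' => 'JWT']));
    $payload = base64url(json_encode($claims));
    $sig     = base64url(hash_hmac('sha256', "$header.$payload", $secret, true));
    return "$header.$payload.$sig";
}

// Recompute the signature and compare in constant time.
function verifyJwt(string $jwt, string $secret): bool {
    [$header, $payload, $sig] = explode('.', $jwt);
    $expected = base64url(hash_hmac('sha256', "$header.$payload", $secret, true));
    return hash_equals($expected, $sig);
}

$token = makeJwt(['sub' => 42, 'exp' => time() + 3600], 'change-this-secret');
```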
  12. gizmola

    MyISAM to InnoDB?

    Great tips from Barand. You will also want to make sure that your InnoDB configuration parameters are set to take advantage of InnoDB's architecture. In particular, InnoDB differs from MyISAM from a performance standpoint largely because it has a data cache, whereas MyISAM does not. Basically, when you run queries, the actual result data is stored in this cache, and repeated queries can use the cache rather than reading the data from disk again. Whether this benefits you depends on your setup, where you are running MySQL, and your ability to reconfigure MySQL to make use of it effectively, but for many small to medium size data sets it is not unusual for people to have enough memory allocated to MySQL to keep the entire data set in cache 99% of the time. This cache is called the InnoDB Buffer Pool, and this article and calculator can help you figure out what you might be able to do.
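As a rough illustration, you can inspect the buffer pool and (on MySQL 5.7+) resize it at runtime; the 5 GB figure below is purely hypothetical and should be sized against your own data set and available memory:

```sql
-- Current buffer pool size in bytes:
SELECT @@innodb_buffer_pool_size;

-- MySQL 5.7+ supports online resizing (rounded up to chunk multiples):
SET GLOBAL innodb_buffer_pool_size = 5 * 1024 * 1024 * 1024;
```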
  13. Not that I'm aware of. With that said, docker only uses the .env file to set variables that are used during the docker-compose pre-processing stage. You also might be well advised to have some directory structure where your laravel app code is not in the root of the project where it would conflict with docker files. I would also suggest taking a look at how Laradock does things.
  14. gizmola

    multiple INSERT commands

    I thought that Barand did a pretty good job showing how to do this with the PDO API. If you don't understand some part of it, you should reply with that portion of his code. Here are the underlying fundamentals from a SQL standpoint. Insert 2 values (name, age) into a table: INSERT INTO some_table (name, age) VALUES ('Bob', 22) The obvious syntactic matching is that after some_table you have a list of the columns you will be specifying values for: (name, age). Then you have the VALUES keyword and a parens set containing the values in the order specified in the column list: VALUES ('Bob', 22) SQL will also allow you to specify more than one set of values, which is what you are looking to do. In order to do so it's as simple as adding additional parens sets, separated by commas: VALUES (name1, age1), (name2, age2), (name3, age3) .... INSERT INTO some_table (name, age) VALUES ('Bob', 22), ('Fred', 25), ('Sam', 19) So the question now becomes: how can you do this with PDO using a prepared query? Well, PDO is nice in that it will let you pass an array variable with the values, so Barand's code is creating 2 queries. His code assumes that your form provides a form element for each name using the attribute name="firstname[]" When you utilize this, PHP will automatically put multiple values into a $_POST['firstname'] variable in the form of an array. He then traverses that array and sets up an accompanying placeholder in the query: $params[] = '(?)'; In the same loop he sticks the data into the $data array. To finalize everything, he uses join to turn the array of placeholders into a string that matches the number of names you had. Let's assume you had 'Manny', 'Moe', 'Jack': $sql becomes $sql = "INSERT INTO tablename (first_name) VALUES (?),(?),(?)"; This gets prepared and then executed with the array of actual values. Try implementing this code with your form processing script, and if you have problems let us know.
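Putting those pieces together, here is a runnable sketch of the same technique; it uses an in-memory SQLite database so it works anywhere PDO is available (with MySQL only the DSN changes), and the table and names are illustrative:

```php
<?php
// Multi-row prepared INSERT: one (?) placeholder group per row.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE tablename (first_name TEXT)');

$names = ['Manny', 'Moe', 'Jack'];   // e.g. from $_POST['firstname']

$params = [];
$data   = [];
foreach ($names as $name) {
    $params[] = '(?)';  // one placeholder set per name
    $data[]   = $name;
}

$sql = 'INSERT INTO tablename (first_name) VALUES ' . implode(',', $params);
// $sql is now: INSERT INTO tablename (first_name) VALUES (?),(?),(?)

$stmt = $pdo->prepare($sql);
$stmt->execute($data);  // all three rows inserted in one statement
```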
  15. The Hackernoon link is probably the least authoritative article you will find -- written by someone who admitted they were new to Python. I would try this package instead, which essentially turns Atom into a Python IDE: https://atom.io/packages/ide-python