Posts posted by Drongo_III
-
I'm working on a new project and need to create DTOs for use in views (this is a Laravel project, so MVC). The DTO will be created from subsets of values from two other models (Price, Quote). Obviously it wouldn't be ideal to build the DTO from those models in multiple places, so I want to house that logic in a specific class. I figure one approach is to use a factory, i.e. as a place to capture the creation of the DTO. But when I read up on Factory Method, Abstract Factory and Simple Factory, this use case doesn't sound like it fits any of them.
So my questions are:
- Is a factory the right pattern?
- If so, which flavour of factory would you suggest?
- Is this the way you would approach it? I appeal to the more experienced minds here for some insight. I work alone so it's hard sometimes to figure out the right approach.
thanks,
Drongo
-
Hello
Trying to find the MaxRequestWorkers Apache setting but I cannot locate it anywhere. I have tried grepping the entire httpd directory but there is no mention of it.
Running Apache/2.4.48 with the mpm_prefork module.
So two questions:
- Can Apache configuration values be left unset (i.e. never explicitly set anywhere)?
- In the event that a value isn't explicitly set, does Apache fall back to an 'invisible' default?
- Is there a way to list specific Apache config settings from the command line?
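On the default question: Apache does not write compiled-in defaults into any config file, so a directive that is never set simply takes its built-in default (for mpm_prefork, the documented MaxRequestWorkers default is 256). On the server itself, `httpd -V` (build settings) and `httpd -M` (loaded modules) help, though neither dumps effective directive values. A hedged, self-contained sketch of the grep-then-fall-back-to-default approach (the config tree and sample file are invented for illustration):

```shell
# Create a throwaway config tree standing in for /etc/httpd/conf.d
confdir=$(mktemp -d)
cat > "$confdir/mpm.conf" <<'EOF'
# MaxRequestWorkers intentionally not set here
ServerLimit 256
EOF

# Grep every config file for an explicit setting...
value=$(grep -rhi '^[[:space:]]*MaxRequestWorkers' "$confdir" | awk '{print $2}')

# ...and fall back to the documented compiled-in default when absent
echo "MaxRequestWorkers=${value:-256 (compiled-in default)}"
```

The point being: a grep turning up nothing doesn't mean the directive has no value, only that the compiled-in default applies.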
Pulling my hair out so any help is gratefully received.
Thanks,
Drongo
-
So in my example above then the story would be marked as having 19 points because that's the sum of each respective staff member's estimate. Is that conventional?
-
Not sure if this the place to ask or not. Hoping someone can give me a steer.
I'm implementing agile scrum for the first time at my organisation. I'm trying to understand story point estimation but it's a little confusing.
I've read many articles stating that you take a reference story (one which project participants can easily reason about and estimate accurately), place points on that reference story, and use it as a yardstick for estimating further tickets. The points are usually selected from the Fibonacci sequence.
The bit I don't understand is that many guides suggest the teams all put in their estimates. Dev may say a story is 8 points, QA may say the same story is 3 points, System Architects may say it's 8 points.
What do you place on the final story? Is it 8 points because that's the highest estimate? Or do you sum all the points (in this case 19 points) so the story is representative of the collective effort?
-
Thanks guys - it helps to get another perspective. I quite like the idea of view-models/business-models/DTOs whatever you want to call them as I too subscribe to the idea that views should do as little thinking as possible - especially when the view is comprised of multiple components from different database models. But I take your points that coupling has to happen somewhere and database change is likely to be infrequent. I suppose it's a matter of consistency on the project too so for this project I'll stick to the current method.
-
Hi there
I've recently been working on a Laravel project. It's a project I've been drafted in to work on and I've not used Laravel before.
My question is one of what is considered good practice. In this project the controllers pass whole models back to the view. For instance, it's the norm on the project to pass back a whole booking model just to get at a couple of attributes like booking->id and booking->name. As these attributes represent database columns, it feels a bit wrong to pass them around because you're effectively coupling the view to the structure of the database. I know that behind the scenes Laravel uses a magic method to access the properties, so you could argue they are wrapped in a method and therefore don't expose the direct database structure to views.
So the question is, would you pass database models directly to views in this way? Or is it 'better' to map those models to an intermediary model that is dedicated to the view (a sort of dto)?
Thanks,
Drongo
-
3 hours ago, requinix said:
Well there ya go. The system temp directory should be writable by virtually everyone - it's the temp directory, it's supposed to be usable for creating temporary files.
That's great. But I was hoping to gain some insight into what drives that buffering behaviour. Does it just kick in after post size is over a certain amount? I can't seem to find any settings that state they make post data buffer.
-
Hi Requinix
Thanks for the fast response.
I'm not too worried about the error as it sounds self explanatory to fix. The full error was:
PHP Warning: Unable to create temporary file, Check permissions in temporary files directory. in Unknown
PHP Warning: Unknown: POST data can't be buffered; all data discarded in Unknown
There's no temp directory set in php.ini so it seems to be falling back to the system temp directory, for which Apache doesn't have permissions.
But the essence of what I'm trying to figure out is what may influence it buffering post data to a temp directory. As this may in turn let me figure out the best course of action for fixing the issue.
Also, I'm using mod_php.
-
Hi Guys
I'm getting the following error when sending large post data to a PHP 7 application:
PHP Warning: Unknown: POST data can’t be buffered; all data discarded in Unknown on line 0
I think I have a fix for this now, so the error isn't the source of my question. It just got me wondering two things:
- Does PHP always buffer POST data to a temporary file? Or only when the post data is over a certain size?
- Are there any php.ini settings that directly influence this behaviour? I have trawled through the common ini settings but nothing (that I can find) mentions buffering.
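For reference, a hedged php.ini sketch of directives that plausibly bear on this behaviour. The directive names are real PHP settings; the values are purely illustrative, and whether buffering to disk actually engages depends on the request (file/multipart uploads in particular) and on a usable temp directory:

```ini
; illustrative values only
upload_tmp_dir = /var/tmp/php   ; where uploaded files are buffered to disk
sys_temp_dir   = /var/tmp/php   ; general temp-dir override (PHP >= 5.5),
                                ; a fallback when upload_tmp_dir is unset
post_max_size  = 64M            ; POST bodies above this are discarded outright
enable_post_data_reading = On   ; Off disables automatic POST reading entirely
```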
Thanks,
Drongo
-
Sorry if this post doesn't belong here or anywhere else on this site.
I'm trying to build a Jenkins pipeline using a declarative Jenkinsfile. I've used the Git SCM plugin to pull down the main GitHub repo for the build, triggered off a GitHub webhook. This works great. The next thing I want to do is pull in another repository in the same build step, containing the build code (i.e. the steps for building the application that came down with the original Git SCM trigger). I want to separate it because the build code is common to a number of applications I'll be building, so it makes no sense to store it with each application.
The problem comes when I try to check out the other repo in my build step. It effectively wipes out the repository files from the original SCM checkout that happened when the build was triggered. What I was expecting was to be able to check out the additional build repository files to a sub-directory, leaving the original files intact. But that doesn't happen. Am I using this incorrectly? Or is it just how the plugin works, that you're only allowed to check out one repo at a time?
I've added an excerpt from my Jenkinsfile below. It's the checkout step below that seems to wipe out the repo that was pulled down when the build was triggered.
stages {
    stage('Building') {
        steps {
            sh "mkdir test && cd test"
            // use this to checkout the repo for the build script
            checkout([$class: 'GitSCM',
                branches: [[name: '*/master']],
                extensions: scm.extensions,
                userRemoteConfigs: [[
                    url: 'git@github.com:XXX/build-code.git',
                    credentialsId: 'XXX-XXX-XXX-XXX-XXX'
                ]]
            ])
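For what it's worth, the Git SCM plugin has a RelativeTargetDirectory extension that checks a repo out into a sub-directory, which sounds like the behaviour being expected here. A hedged sketch (URL and credentials ID are the placeholders from the excerpt; the directory name is invented). Note too that `sh "mkdir test && cd test"` won't affect later steps, since each `sh` step runs in its own shell:

```groovy
stages {
    stage('Building') {
        steps {
            // Check out the shared build-code repo into a sub-directory,
            // leaving the originally triggered checkout in place
            checkout([$class: 'GitSCM',
                branches: [[name: '*/master']],
                extensions: [[$class: 'RelativeTargetDirectory',
                              relativeTargetDir: 'build-code']],
                userRemoteConfigs: [[
                    url: 'git@github.com:XXX/build-code.git',
                    credentialsId: 'XXX-XXX-XXX-XXX-XXX'
                ]]
            ])
        }
    }
}
```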
-
1) Is this in fact the best practice way of building containers and environments?
Yes. The property of a container is that it is lightweight. So you want it to be self contained and good at doing the one thing you need it to do.
2) How do they work together and talk to each other to form an environment?
The same way that you would have them talk and work together if they were on a network: via Docker networking and Docker DNS. You can also create a docker-compose file, which can be used with docker swarm in a production environment to orchestrate a group of containers that are designed to work together.
3) Should you have multiple dockerfiles, one for each container housing a service (e.g. apache+php)?
No. You can create any number of containers from an image. Images are built from dockerfiles. A single image can support a variety of options as well. You see this when you specify a tag when starting a container.
4) Should your application code go on the apache+php container? Or should that be a container in itself?
It should be in a Docker volume, just as you use docker volumes for things like the filesystem you are going to attach to your running mysql server.
Hi Gizmola
Thank you for your detailed response. I really appreciate the help as wrapping my head around this is taking precious time.
I'm still a little confused on the 'single responsibility' approach to creating services. For instance, looking on Dockerhub there are images for CentOS, httpd and PHP. And presumably it's preferable to use the official Dockerhub images rather than spinning up CentOS and installing apache/php myself?
Assuming using the Dockerhub images is the correct method then should I have three dockerfiles as follows?
- one dockerfile with: FROM centos:7
- one dockerfile with: FROM httpd:latest
- one dockerfile with: FROM PHP:7
Then I just network these (guessing a bridge network) and the services automatically know how to talk to each other? Sorry if I'm labouring this one, I just want to get the concepts right before diving in.
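As a concrete (and purely illustrative) sketch of the usual shape: you generally wouldn't layer three base images by hand. The official php image has an -apache variant bundling Apache and mod_php, and a Compose file puts the containers on one default network where service names resolve via Docker DNS. A hypothetical docker-compose.yml, with image tags, ports and paths as assumptions:

```yaml
services:
  web:
    image: php:7.4-apache        # Apache + mod_php in one official image
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html      # application code mounted as a volume
  db:
    image: mysql:8.0             # reachable from 'web' at hostname 'db'
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```

From PHP, the MySQL host would then simply be `db`.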
Thanks,
Drongo
-
Hi Guys
I have a basic question around Docker, which I'm right at the start of learning. I'm building a basic LAMP environment and I've come across a few pointers in the docs which seem to indicate (though don't explicitly state) that you should create containers so they run a single service per container.
What this appears to mean is that for a LAMP environment I should have Apache+PHP on one container and MySQL on another container. So my questions...
1) Is this in fact the best practice way of building containers and environments?
If yes,
2) How do they work together and talk to each other to form an environment?
3) Should you have multiple dockerfiles, one for each container housing a service (e.g. apache+php)?
4) Should your application code go on the apache+php container? Or should that be a container in itself?
Sorry if these are really noob questions but I'm more used to the paradigm of VMs, so Docker is a bit confusing.
Thanks,
Drongo
-
Hi Guys
I'm about to start a project building a lot of APIs. I've built basic REST APIs in the past but I mostly made up the standards for the API myself.
I want to do this project properly from the start, but I don't know whether there is an official 'standard' you should follow when creating REST APIs. JavaScript has ECMA, but I can't seem to find an API governing body as such, and to be honest I've never known of one even after many years as a dev.
Alternatively if that doesn't exist can anyone recommend good websites which highlight best practice?
Thanks,
Drongo
-
Hello
I'm really green when it comes to bash...
Trying to create a script which will SSH into multiple servers and report back a list showing the status of a deployment on each. The intention is for the script to run from my local machine.
I read that using a heredoc is the way to go, but I don't think it's possible to store variables which are accessible outside of the sub-session, i.e. you can't set variables inside the heredoc and have them accessible outside it.
So my question is: does anyone know a good way to get the variables from the heredoc into the 'higher level' bash script, so I can aggregate the results from each server?
Pseudo code to illustrate what I'm aiming for:
ssh me@11.11.11.11 << EOF
    # Get server status into a variable
EOF

ssh me@22.22.22.22 << EOF
    # Get server status into a variable
EOF

echo $SOME_VARIABLE_OF_STATUS_FROM_EACH_SERVER
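One way out: variables assigned inside an `ssh host <<EOF` heredoc exist only in the remote shell, but the remote command's stdout comes back to the local script, so command substitution captures it. A minimal sketch, with a local `sh -c` standing in for the real `ssh` calls (hostnames and the status command are placeholders) so it can run anywhere:

```shell
# Stand-in for: ssh "$host" "$command" -- swap this back for real ssh
remote() { sh -c "$1"; }

# Capture each server's status locally via command substitution
status1=$(remote 'echo deployed-v1.2')   # would be: ssh me@11.11.11.11 '...'
status2=$(remote 'echo deployed-v1.3')   # would be: ssh me@22.22.22.22 '...'

# Aggregate the results in the local script
printf 'server1: %s\nserver2: %s\n' "$status1" "$status2"
```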
-
Hello
I want to deploy an application hosted on GitHub. It's my first time doing it this way.
I've cloned the application on my web server and everything works great. I've made some additional changes locally and pushed them up to GitHub along with a tag (called testTag). I understand tags are common practice to mark a release point.
The bit I'm confused by is how I deploy that specific tag to my server. I want to be able to check out the tag to my server's master branch, but when I check out the tag (code below), it kicks me off the branch. Not what I was hoping for.
git fetch --all --tags --prune
git checkout testTag
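The "kicked off the branch" part is expected: `git checkout <tag>` always detaches HEAD, because a tag is a fixed pointer rather than a branch. If a branch positioned at the tag is wanted, `git checkout -B <branch> <tag>` creates or resets one (for deployments, a detached HEAD is often fine too). A self-contained demonstration in a throwaway repo (the `deploy` branch name is illustrative):

```shell
# Build a throwaway repo with one commit and a tag
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=a@b.c -c user.name=tmp commit -q --allow-empty -m 'v1'
git tag testTag

git checkout -q testTag            # detaches HEAD at the tag
git checkout -q -B deploy testTag  # creates/resets branch 'deploy' at the tag

git symbolic-ref --short HEAD      # now on a branch again
```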
Any advice appreciated! And I know this isn't strictly PHP but I couldn't see a forum relevant to this. If it's any consolation it is a PHP application I'm trying to deploy
Drongo
-
Thanks for the reply Jacques.
I'll certainly give your advice some consideration. Out of interest though, and in an effort to understand lookbehinds more, can you answer me:
1) Is it possible to follow a lookbehind with a character class? Or is this not permitted?
2) What do you mean when you say the regex I've used will always be true? Doesn't that regex mean 'match a slash, then match anything that is not preceded by admin'? Therefore /admin/somepage shouldn't match. I say this because I thought lookbehinds acted at the point where they are defined, so in my example why does the opening slash have any impact on the lookbehind and the expression that follows it?
Thanks,
Drongo
You first consume a slash, and then you check whether the previous characters aren't “admin”. Since a slash isn't “admin”, that whole check is obviously always true.
Why do you even want a lookbehind? Just use a lookahead to exclude paths starting with “/admin”, then match all other paths:
$path_pattern = '~\\A(?!/admin)/[a-z\\d/-]*~';
Note that you're also going to exclude paths like “/administrative_leave” which just happen to start with “/admin”, which may or may not be what you want. In any case, I would strongly recommend you avoid complex regex gymnastics which even you as the application author barely understand. Keep it simple. Admin pages in particular should be on a separate (sub)domain, because then you can properly isolate them from the public site.
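The suggested lookahead can be sanity-checked with GNU grep's -P (PCRE) mode, the same regex family PHP's preg_* functions use (the test paths are invented; grep -P is assumed available):

```shell
# \A anchors at the start; (?!/admin) fails before anything is consumed
# when the path begins with /admin
pattern='\A(?!/admin)/[a-z\d/-]*'

matches=$(printf '%s\n' /foo/bar/ /admin/foo/bar /administrative_leave |
  grep -P "$pattern")

echo "$matches"    # only /foo/bar/ survives
```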
-
Hi Guys
Stuck on what is probably a very easy thing.
I'm adding a wildcard route to an application and I'm using a regex to match anything UNLESS it's preceded by /admin/, in which case I don't want the match to succeed.
I want to match: /foo/bar/
I don't want to match: /admin/foo/bar
I figured this was a job for Negative Lookbehind but it doesn't work when I follow the lookbehind with a character class like so:
^\/(?<!admin)[a-z0-9\-\/]*
This doesn't work as /admin/test/ still matches the full string - even though I would expect anything following the word 'admin' to fail.
So either you cannot use a character class after a lookbehind, or I'm doing it wrong. Any advice would be most welcome!
Drongo
-
Thanks Requinix
That makes a lot of sense. Appreciate the advice.
Drongo
-
Having to maintain a mapping is cumbersome. Do you have any attachment to being able to separate the model from the form? Would it be so bad to consider the model (this model) as a "form model" and have form fields map directly to object properties?
Thanks for picking this up Requinix
Well the thinking was I could use mapping classes to:
1) Shuttle data from forms to the model (and model to forms)
2) Shuttle DB model data to an instance of a model
And thereby have it all sort of neatly managed in one spot.
I guess the thing that drove me towards considering a dedicated mapper class is that the model would need to retain references to form field names which won't always have a 1:1 relationship with model property names. And the question in my mind was 'should the model need to know anything about the form?' because there will be a few properties on the model which don't all come from the form. This sort of left the model feeling like a mixed bag.
Is it not considered good practice to utilise mapping classes? And if you do use a mapping class is it ok to list out the getters and setters as I have done?
-
Hello...(again)
I'm hoping for some advice from the good folks here. Sorry I keep asking questions but I work alone so it's kind of hard to code review your own ideas...
Anyway, I have a User Model and I want to be able to pass data submitted via $_POST (which has been validated) to the model and map each piece of form data to its intended property.
I was thinking of creating a mapping class (basic outline below). Essentially I'd have an array with form field names as keys, each mapping to a sub-array naming its counterpart setter/getter methods on the user model.
In one respect I think this is good in that the model remains ignorant of the form. On the other hand, the array could become quite cumbersome for forms with lots of questions.
Does anyone have a view on whether this is a good/bad approach? And whether there's actually a better way to shuttle $_POST data to a model and getting form data back from a model?
/**
 * Array with each form field name as its key, mapping to the
 * equivalent getter/setter on the desired model.
 */
private $formFieldToModelMapping = array(
    'name' => array(
        'modelSetter' => 'setName',
        'modelGetter' => 'getName'
    ),
    'email' => array(
        'modelSetter' => 'setEmail',
        'modelGetter' => 'getEmail'
    ),
    /* Repeated for each form field which has an associated property on the user model */
);

/**
 * @param array $data  validated $_POST data, e.g. $_POST['name'], $_POST['email']
 * @param object $model some User Model
 */
public function transferFormDataToModel($data, $model)
{
    foreach ($this->formFieldToModelMapping as $formField => $accessor) {
        if (isset($data[$formField])) {
            $methodName = $accessor['modelSetter'];
            $model->$methodName($data[$formField]);
        }
    }
}
// NOTE: THIS IS JUST TYPED IN AND NOT TESTED CODE
-
I recently went through this.
In respect of speeding up load time you may wish to consider some of these too:
- Place your scripts at the bottom of your html above the closing body tag
- Consider using the 'async' attribute on scripts which aren't essential
- Consider aggregating your scripts into a single file for production (something like gulp could be set up to automate this). This means fewer server calls.
- Create image sprites where appropriate and aggregate SVGs into an icon font (sites like icomoon are handy here). Again, fewer server calls.
- Consider loading assets from CDN because some browsers can only maintain a certain number of parallel resource calls against the same domain. So distributing your resource calls across multiple domains/subdomains can speed up load times
- Run all image assets through something like https://tinypng.com/ (or equivalents for jpeg etc.) as this can strip huge amounts from image file sizes.
- Make sure you have the right cache controls as these can have a huge impact.
I know some of that is off point but might be helpful.
Drongo
-
If I were to do as you are trying to do, I would pass the object in rather than have to add another stage of fetching the data from the object to then pass into the table layer.
One thing which you should certainly change is that you should use bind variables for your PDO statements! Do not just dump data directly into any SQL statement.
Thanks Nigel. The example was simplified but I'd definitely bind the variables.
-
Well, if you are GETTING then you can just easily SET the data. If you set the data then you can easily save the data.
Doing it this way
class userModel {
    private $name;
    private $email;
    /* lots more properties for user */

    public function getName() {
        return $this->name;
    }

    public function getEmail() {
        return $this->email;
    }
}
would be more secure in my opinion.
I might not have been very clear. I was specifically interested in whether it's best to pass the model data as an array to the Table Data Gateway, or whether it's better to pass the model and use its getters. I'm not sure I follow why the setters in the model (which were omitted for brevity) have a bearing on this. Maybe I'm missing something?
-
Hello
I have what might be a really basic question.
Let's say I have a table data gateway which is dedicated to a 'users' table.
When it comes to saving a user, is it better to pass the User Model to the database layer, or to collapse the User Model into an associative array and pass that instead? Code example below (just typed out as an example); the methods insert and insert2 demonstrate the two options.
In one respect I think collapsing the model to an array makes it a little less coupled (which seems like a good thing) but on the other hand passing an associative array still feels somewhat error prone and possibly harder to understand for anyone editing the code in the future.
So my question is, what would you advise is the best practice in this scenario?
<?php

class userTableGateway
{
    /*
     * Insert option one: just pass in an array of data
     */
    public function insert($data)
    {
        $sql_statement = "INSERT INTO userTable (name, email) VALUES (:name, :email)";
        /* PDO prepare, bind :name / :email from $data, execute, etc. */
    }

    /*
     * Insert option two: pass the model and use its getters
     */
    public function insert2(userModelInterface $model)
    {
        $sql_statement = "INSERT INTO userTable (name, email) VALUES (:name, :email)";
        /* PDO prepare, bind from $model->getName() / $model->getEmail(), execute, etc. */
    }

    public function update() { /* ... */ }
    public function delete() { /* ... */ }
}

interface userModelInterface
{
    /* some interface for the user model */
}

class userModel implements userModelInterface
{
    private $name;
    private $email;
    /* lots more properties for user */

    public function getName()
    {
        return $this->name;
    }

    public function getEmail()
    {
        return $this->email;
    }
}
To factory or not to factory?
I actually dusted off my copy of Patterns of Enterprise Application Architecture and there was a chapter on DTOs. In that chapter it recommends using a 'DTO Assembler', which is a type of Mapper, i.e. an intermediary that keeps the DTO ignorant of the Models that supply its data. And this seems to solve the very problem I'm trying to solve; I just didn't have a name for it and incorrectly thought 'factory must be what I need'.