
Drongo_III

Members

  • Posts: 576
  • Joined
  • Last visited

Profile Information

  • Gender: Male

Drongo_III's Achievements

  • Advanced Member (4/5)
  • Reputation: 3

  1. So in my example above, the story would be marked as 19 points because that's the sum of each respective staff member's estimate. Is that conventional?
  2. Not sure if this is the place to ask or not - hoping someone can give me a steer. I'm implementing agile Scrum for the first time at my organisation. I'm trying to understand story point estimation, but it's a little confusing. I've read many articles stating that you take a reference story (one which project participants can easily reason about and estimate accurately), place points on that reference story, and use it as a yardstick for estimating further tickets. The points are usually selected from the Fibonacci sequence. The bit I don't understand is that many guides suggest the whole team puts in estimates. Dev may say a story is 8 points, QA may say the same story is 3 points, and the system architects may say it's 8 points. What do you place on the final story? Is it 8 points because that's the highest estimate? Or do you sum all the points (in this case 19) so the story is representative of the collective effort?
  3. Thanks guys - it helps to get another perspective. I quite like the idea of view-models/business-models/DTOs, whatever you want to call them, as I too subscribe to the idea that views should do as little thinking as possible, especially when the view is composed of multiple components from different database models. But I take your points that coupling has to happen somewhere and that database change is likely to be infrequent. I suppose it's a matter of consistency on the project too, so for this project I'll stick to the current method.
  4. Hi there I've recently been working on a Laravel project. It's a project I've been drafted in to work on and I've not used Laravel before. My question is one of what is considered good practice. In this project the controllers pass whole models back to the view. For instance, it's the norm on the project to pass back a whole booking model just to get at a couple of attributes like booking->id and booking->name. As these attributes represent database columns, it feels a bit wrong to pass them around, because you're effectively coupling the view to the structure of the database. I know that behind the scenes Laravel uses a magic method to access the properties, so you could argue they are wrapped in a method and therefore don't expose the direct database structure to views. So the question is, would you pass database models directly to views in this way? Or is it 'better' to map those models to an intermediary model that is dedicated to the view (a sort of DTO)? (A sketch of this kind of mapping follows after this list.) Thanks, Drongo
  5. That's great. But I was hoping to gain some insight into what drives that buffering behaviour. Does it just kick in once the POST size is over a certain amount? I can't seem to find any settings that say they buffer POST data.
  6. Hi Requinix Thanks for the fast response. I'm not too worried about the error, as it sounds self-explanatory to fix. The full error was:
         PHP Warning: Unable to create temporary file, Check permissions in temporary files directory. in Unknown
         PHP Warning: Unknown: POST data can't be buffered; all data discarded in Unknown
     There's no temp directory set in php.ini, so it seems to be falling back to the system temp directory, for which Apache doesn't have permissions. But the essence of what I'm trying to figure out is what may influence it buffering POST data to a temp directory, as this may in turn let me figure out the best course of action for fixing the issue. Also, I'm using mod_php.
  7. Hi Guys I'm getting the following error when sending large POST data to a PHP 7 application:
         PHP Warning: Unknown: POST data can't be buffered; all data discarded in Unknown on line 0
     I think I have a fix for this now, so the error isn't the source of my question. It just got me wondering two things: 1) Does PHP always buffer POST data to a temporary file, or only when it is over a certain size? 2) Are there any php.ini settings that directly influence this behaviour? I have trawled through the common ini settings but nothing (that I can find) mentions buffering. (A diagnostic sketch follows after this list.) Thanks, Drongo
  8. Sorry if this post doesn't belong here or anywhere else on this site. I'm trying to build a Jenkins pipeline using a declarative Jenkinsfile. I've used the Git SCM plugin to pull down the main GitHub repo for the build, triggered off a GitHub webhook. This works great. The next thing I want to do is pull in another repository in the same build step, containing the build code (i.e. the steps for building the application that came down with the original Git SCM trigger). I want to separate it because the build code is common to a number of applications I'll be building, so it makes no sense to store it with each application. The problem comes when I try to check out another repo in my build step. It effectively wipes out the repository files from the original SCM checkout that happened when the build was triggered. What I was expecting was to be able to check the additional build repository files out into a sub-directory, leaving the original files intact. But that doesn't happen. Am I using this incorrectly? Or is it just how the plugin works, in that you're only allowed to check out one repo at a time? I've added an excerpt from my Jenkinsfile below. It's the checkout step that seems to wipe out the repo that was pulled down when the build was triggered. (A sketch of a possible workaround follows after this list.)
         stages {
             stage('Building') {
                 steps {
                     sh "mkdir test && cd test"
                     // use this to checkout the repo for the build script
                     checkout([$class: 'GitSCM',
                         branches: [[name: '*/master']],
                         extensions: scm.extensions,
                         userRemoteConfigs: [[
                             url: 'git@github.com:XXX/build-code.git',
                             credentialsId: 'XXX-XXX-XXX-XXX-XXX'
                         ]]
                     ])
                 }
             }
         }
  9. Hi Gizmola Thank you for your detailed response. I really appreciate the help, as wrapping my head around this is taking precious time. I'm still a little confused about the 'single responsibility' approach to creating services. For instance, looking on Docker Hub there are images for CentOS, httpd and PHP. And presumably it's preferable to use the official Docker Hub images rather than spinning up CentOS and installing Apache/PHP myself? Assuming using the Docker Hub images is the correct method, should I have three Dockerfiles as follows?
         one Dockerfile with: FROM centos:7
         one Dockerfile with: FROM httpd:latest
         one Dockerfile with: FROM php:7
     Then I just network these (guessing a bridge network) and the services automatically know how to talk to each other? Sorry if I'm labouring this one, just want to get the concepts right before diving in. Thanks, Drongo
  10. Hi Guys I have a basic question around Docker, which I'm right at the start of learning. I'm building a basic LAMP environment, and I've come across a few pointers in the docs which seem to indicate (though don't explicitly state) that you should create containers so they run a single service per container. What this appears to mean is that for a LAMP environment I should have Apache+PHP in one container and MySQL in another container. So my questions:
          1) Is this in fact the best-practice way of building containers and environments?
          2) If yes, how do they work together and talk to each other to form an environment?
          3) Should you have multiple Dockerfiles, one for each container housing a service (e.g. Apache+PHP)?
          4) Should your application code go on the Apache+PHP container? Or should that be a container in itself?
      Sorry if these are really noob questions, but I'm more used to the paradigm of VMs, so Docker is a bit confusing. (A sketch of one way to wire this up follows after this list.) Thanks, Drongo
  11. Hi Guys I'm about to start a project building a lot of APIs. I've built basic REST APIs in the past, but I mostly made up the standards for the API as I went. I want to do this project properly from the start, but I don't know whether there is an official 'standard' you should follow when creating a REST API. JavaScript has ECMA, but I can't seem to find an API governing body as such, and to be honest I've never known of one even after many years as a dev. Alternatively, if that doesn't exist, can anyone recommend good websites which highlight best practice? Thanks, Drongo
  12. Hello I'm really green when it comes to bash... I'm trying to create a script which will SSH into multiple servers and report back a list showing the status of a deployment on each. The intention is for the script to run from my local machine. I read that using a heredoc is the way to go, but I don't think it's possible to store variables which are accessible outside of the sub-session - i.e. you can't set variables inside the heredoc and have them accessible outside. So my question is: does anyone know a good way to get the variables from the heredoc into the 'higher level' bash script, so I can aggregate the results from each server? Pseudo code to illustrate what I'm aiming for (a sketch of one alternative follows after this list):
          ssh me@11.11.11.11 << EOF
          # Get server status into a variable
          EOF

          ssh me@22.22.22.22 << EOF
          # Get server status into a variable
          EOF

          echo $SOME_VARIABLE_OF_STATUS_FROM_EACH_SERVER
  13. Hello I want to deploy an application hosted on GitHub. It's my first time doing it this way. I've cloned the application onto my web server and everything works great. I've done some additional changes locally and pushed them up to GitHub along with a tag (called testTag). I understand tags are common practice to mark a release point. The bit I'm confused by is how to deploy that specific tag to my server. I want to be able to check the tag out onto my server's master branch, but when I check out the tag (commands below), it kicks me off the branch. Not what I was hoping for.
          git fetch --all --tags --prune
          git checkout testTag
      Any advice appreciated! (A sketch of one approach follows after this list.) And I know this isn't strictly PHP, but I couldn't see a forum relevant to this. If it's any consolation, it is a PHP application I'm trying to deploy. Drongo
  14. Thanks for the reply Jacques. I'll certainly give your advice some consideration. Out of interest though, and in an effort to understand lookbehinds more, can you answer me: 1) Is it possible to follow a lookbehind with a character class, or is this not permitted? 2) What do you mean when you say the regex I've used will always be true? Doesn't that regex mean 'match a slash, then match anything that is not preceded by admin'? Therefore /admin/somepage shouldn't match. I say this because I thought lookbehinds acted at the point where they are defined, so in my example why does the opening slash have any impact on the lookbehind and the expression that follows it? Thanks, Drongo
  15. Hi Guys Stuck on what is probably a very easy thing. I'm adding a wildcard route to an application and I'm using a regex to match anything UNLESS it's preceded by /admin/, in which case I don't want the match to succeed. I want to match: /foo/bar/ I don't want to match: /admin/foo/bar I figured this was a job for a negative lookbehind, but it doesn't work when I follow the lookbehind with a character class, like so:
          ^\/(?<!admin)[a-z0-9\-\/]*
      This doesn't work: /admin/test/ still matches the full string, even though I would expect anything following the word 'admin' to fail. So either you cannot use a character class after a lookbehind, or I'm doing it wrong. Any advice would be most welcome! (A sketch of an alternative follows after this list.) Drongo
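
For the Laravel question above about passing models to views: a minimal sketch of what a dedicated view object could look like, assuming a standard Laravel setup. BookingView, BookingController::show() and the 'bookings.show' template name are illustrative assumptions, not the project's actual code.

    <?php

    // Hypothetical view object holding only what the template needs.
    class BookingView
    {
        public $id;
        public $name;

        public function __construct(int $id, string $name)
        {
            $this->id   = $id;
            $this->name = $name;
        }
    }

    // In a controller action, map the Eloquent model before handing it to the view.
    class BookingController extends Controller
    {
        public function show(int $id)
        {
            $booking = Booking::findOrFail($id);

            $viewModel = new BookingView($booking->id, $booking->name);

            // The template now depends on BookingView, not on the bookings table.
            return view('bookings.show', ['booking' => $viewModel]);
        }
    }

The trade-off is a little mapping boilerplate per view in exchange for column renames touching only the mapping, not every template.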
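
For the POST-buffering warnings above: PHP can spill large request bodies and file uploads to a temporary file, so the directory it resolves to, and whether the web server user can write there, is usually what matters. Which directive is responsible for that specific warning isn't confirmed here; this sketch simply reports the usual suspects so they can be compared against the server's php.ini.

    <?php

    // Report the ini settings commonly involved in temporary/request storage,
    // plus the directory PHP falls back to and whether it is writable.
    $settings = [
        'upload_tmp_dir',       // temp dir for uploaded files
        'sys_temp_dir',         // general temp dir override (PHP 5.5+)
        'post_max_size',        // maximum accepted POST body size
        'upload_max_filesize',  // maximum size of a single uploaded file
    ];

    foreach ($settings as $name) {
        $value = ini_get($name);
        printf("%-20s %s\n", $name, ($value === '' || $value === false) ? '(not set)' : $value);
    }

    printf("%-20s %s\n", 'sys_get_temp_dir()', sys_get_temp_dir());
    printf("%-20s %s\n", 'writable?', is_writable(sys_get_temp_dir()) ? 'yes' : 'no');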
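
For the Jenkins question above about the second checkout wiping the workspace: one common pattern is to wrap the extra checkout in a dir() step so it lands in a sub-directory rather than the workspace root (the GitSCM RelativeTargetDirectory extension is another way to do the same). A sketch only, not a verified fix for that pipeline; the sub-directory name is an assumption, and the placeholder URL/credentials are carried over from the post.

    stages {
        stage('Building') {
            steps {
                // Check the shared build-code repo out into a sub-directory,
                // leaving the files from the triggering SCM checkout in place.
                dir('build-code') {
                    checkout([$class: 'GitSCM',
                        branches: [[name: '*/master']],
                        userRemoteConfigs: [[
                            url: 'git@github.com:XXX/build-code.git',
                            credentialsId: 'XXX-XXX-XXX-XXX-XXX'
                        ]]
                    ])
                }
            }
        }
    }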
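
For the Docker/LAMP questions above: a minimal docker-compose sketch of the one-service-per-container idea, with Apache+PHP in one container, MySQL in another, and the application code mounted into the web container from the host. Image tags, service names and the throwaway password are illustrative assumptions for local development only.

    version: "3.8"

    services:
      web:
        image: php:7.4-apache          # Apache + PHP together in one official image
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html        # application code mounted from the host
        depends_on:
          - db

      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example # throwaway value for local use only
          MYSQL_DATABASE: app
        volumes:
          - db_data:/var/lib/mysql

    volumes:
      db_data:

Compose puts both services on a shared network and registers each service name in DNS, so inside the web container the database is reachable at host "db" on port 3306.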
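
For the bash question above about getting status out of a heredoc: rather than trying to export variables from the remote sub-session, one common approach is to capture the remote command's stdout on the local side with command substitution. A sketch assuming key-based SSH access; 'systemctl is-active myapp' is a made-up placeholder for whatever actually reports the deployment.

    #!/usr/bin/env bash
    set -u

    # Hosts to check; adjust to the real servers.
    servers=("me@11.11.11.11" "me@22.22.22.22")

    results=()

    for server in "${servers[@]}"; do
        # Run the status command remotely and capture its stdout locally.
        if ! status=$(ssh -o BatchMode=yes "$server" 'systemctl is-active myapp' 2>/dev/null); then
            status=${status:-unreachable}
        fi
        results+=("$server: $status")
    done

    # Aggregated report, one line per server.
    printf '%s\n' "${results[@]}"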
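
For the GitHub tag question above: checking out a tag puts the working copy into a detached HEAD state, which is why it looks like being kicked off the branch. If the goal is simply to have the server's files at the tagged commit, deploying from a detached HEAD is fine; if the server's master branch itself should move to the tag, a fast-forward merge is one option. A sketch, assuming the tag is reachable from master by fast-forward and nothing edits the server checkout by hand.

    # Fetch branches and tags.
    git fetch --all --tags --prune

    # Option 1: deploy the tag directly; a detached HEAD is fine for a deploy checkout.
    git checkout testTag

    # Option 2: stay on master and fast-forward it to the tagged commit
    # (fails instead of creating a merge if master has diverged).
    git checkout master
    git merge --ff-only testTag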
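
For the lookbehind questions above: a lookbehind asserts about the text immediately to the left of the current position, so in ^\/(?<!admin) it runs just after the leading slash, where the text to the left is only "/" and never "admin"; the negative assertion therefore always succeeds, which is why /admin/test/ still matches. "Anything except paths starting with /admin" is more naturally a negative lookahead. A sketch in PHP; the sample paths and the exact character class are assumptions carried over from the post.

    <?php

    // (?!admin(/|$)) looks ahead from just after the leading slash and
    // rejects the match only when the next segment is exactly "admin".
    $pattern = '#^/(?!admin(/|$))[a-z0-9\-/]*$#';

    $paths = ['/foo/bar/', '/admin/foo/bar', '/administrator/page', '/admin'];

    foreach ($paths as $path) {
        printf("%-22s %s\n", $path, preg_match($pattern, $path) === 1 ? 'match' : 'no match');
    }

The (/|$) part keeps paths such as /administrator/page matchable while still rejecting /admin and anything under it.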