
Making a web application secure


Darghon


Hello all,

I have this web application which is basically a management tool for several servers.

It has access to the applications on each server, manages the databases, has functionality for mass updates, etc.,

and can even validate sensitive information and force takeover of users on the other applications.

Because of the functionality this application provides, we need to make it as secure as possible.

But we do not have a static IP address, so only allowing access from one or a few IPs would eventually lock us out.

 

The current security redirects all traffic to the login page unless you are logged into the system.

Passwords are hashed with a salt.

 

Any suggestions on what I can and should do to make this application as secure as possible?

 

Thanks in advance.


It's hard to give a catch-all solution.

 

SSL is huge (HTTPS). Getting a security expert to audit your code is a great idea. Following the advice given in the article in my signature is another. Making sure there are no injection holes, and that any output that comes from the database or the user is escaped (or has its HTML tags stripped) before display, is another good step.
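For example, a minimal output-escaping sketch in PHP (the helper name e() is just illustrative; htmlspecialchars() is the actual built-in call):

// Escape a value for output in an HTML context. ENT_QUOTES also
// converts single quotes, so the result is safe inside attributes too.
function e($value) {
	return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

// Anything that originally came from the user or the database goes through e():
echo '<td>' . e($row['username']) . '</td>';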



I agree with Xyph. You can spend hundreds of hours hardening your application against attacks, both external and internal. It all depends on what level of security you want to achieve and how much you are willing to invest. Since you want to limit access by IP address but cannot, I assume the user base is fairly small and you are mostly concerned about rogue users attempting to access your application. I have a few suggestions if that is the case:

 

1. Go ahead and implement an IP check. If the IP is one the user has used previously, let them log in normally with username/password. If it is a different IP, make the user go through an extra level of authentication, e.g. send a passcode to their email address that they have to enter in addition to their username/password to authenticate and to register the new IP for that user (see the sketch after this list).

 

2. Be sure to have a lockout process if a particular user fails login x number of times in a row.

 

3. Check whether a particular IP address has had x login failures (for any username) within a time period, then block any further submissions from that IP address.
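A rough PHP sketch of suggestion 1 (the $pdo connection, the known_ips and pending_ip_codes tables, and the column names are all hypothetical; random_bytes() needs PHP 7+):

// Has this user ever logged in from this IP before? (known_ips is a
// hypothetical table of previously confirmed user_id/ip pairs.)
$stmt = $pdo->prepare('SELECT 1 FROM known_ips WHERE user_id = ? AND ip = ?');
$stmt->execute(array($userId, $ip));

if ($stmt->fetchColumn() === false) {
	// Unknown IP: generate a one-time passcode and mail it to the user.
	$code = bin2hex(random_bytes(4)); // 8 hex characters, PHP 7+
	$stmt = $pdo->prepare(
		'INSERT INTO pending_ip_codes (user_id, ip, code_hash, expires)
		 VALUES (?, ?, ?, DATE_ADD(NOW(), INTERVAL 15 MINUTE))'
	);
	$stmt->execute(array($userId, $ip, sha1($code)));
	mail($user['email'], 'Login verification', "Your passcode: $code");
	// ...then redirect to a form that asks for the passcode before
	// completing the login and adding the IP to known_ips.
}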


2. Be sure to have a lockout process if a particular user fails login x number of times in a row.

 

I prefer the throttling method. Keep track of the last failed log-in time, and don't allow another attempt until 1 second has elapsed.

 

Rather than an attacker just sending 4 bad requests to get a user locked out for 15 minutes, the attacker has to constantly send bad requests to keep the user locked out. Combine this with your step 3, and it becomes very easy to weed out those kinds of goofs.

 

Only allowing 1 log-in attempt per second is enough to defeat most brute-force attacks, leaving you open only to a slow dictionary attack. These can be defeated by keeping a list of the 100,000 or so most common passwords and not allowing them (http://download.openwall.net/pub/wordlists/).
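A sketch of that throttle in PHP (the failed_logins table and the $pdo connection are assumptions for illustration, not the poster's actual schema):

// Refuse a new attempt if the last failed one for this account
// was less than a second ago.
$stmt = $pdo->prepare('SELECT last_attempt FROM failed_logins WHERE username = ?');
$stmt->execute(array($username));
$last = $stmt->fetchColumn();

if ($last !== false && time() - strtotime($last) < 1) {
	exit('Please wait a moment before trying again.');
}

// And, separately, blocking the most common passwords at registration
// time, using a wordlist like the Openwall one linked above
// (one password per line):
$common = file('common_passwords.txt', FILE_IGNORE_NEW_LINES);
if (in_array(strtolower($newPassword), $common, true)) {
	exit('That password is far too common; please pick another.');
}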

 

 



#2 isn't primarily directed at scripted/automated attacks. It is designed to prevent a person (external or internal) from trying to get into someone else's account, which is pretty standard in the industry. A coworker could try to get into a person's account using known information such as an anniversary date, kids' names, etc. After three attempts the account would be locked (while also logging the offending IP address).


Only allowing 1 log-in attempt per second is enough to defeat most brute-force attacks, leaving you open only to a slow dictionary attack. These can be defeated by keeping a list of the 100,000 or so most common passwords and not allowing them (http://download.openwall.net/pub/wordlists/).

Not entirely true. There is something called themed combolists, which are usually very effective, even more so if the list was recently updated and/or you can somehow get a local copy of the members list.

 

If it's a big site with many members, and you have the entire members list and a good, recently updated themed combolist, it's extremely easy to brute-force a site that allows only one attempt per second; it usually takes only a few attempts. :) Of course, if you enforce strict password policies, that usually breaks these combolists, because people will need to create a new password just for your login. The rules would need to be unusual, yet not too limiting (see the sketch below). Using an e-mail address as the login name, rather than the usual public username, is also clever.
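A simple policy check along those lines (all thresholds here are illustrative, not a recommendation):

// Require a minimum length plus mixed character classes. Rules like
// these break most pre-built combolists, as described above.
function passesPolicy($password) {
	return strlen($password) >= 10
		&& preg_match('/[a-z]/', $password)         // a lowercase letter
		&& preg_match('/[A-Z]/', $password)         // an uppercase letter
		&& preg_match('/[0-9]/', $password)         // a digit
		&& preg_match('/[^a-zA-Z0-9]/', $password); // a symbol
}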


I understand. What I'm saying is that a malicious user could intentionally lock other people's accounts. This can be annoying.

 

Yes it is, but I've been through a number of security audits with companies that use our software to store their customers' financial data. They would rather a user be inconvenienced than expose the data. It's all a matter of how sensitive the data is and what lengths you feel are necessary to safeguard it.


Wow, thanks to everyone for the wonderful ideas about the security issue.

 

So I'm using a username instead of an email, and there is no password policy. But no biggie, because the application is for three people who know they need to pick a complex password.

The solution(s) that I'll be using are:

The lockout procedure, for starters.

The 1-second delay between login attempts.

Logging of failed logins/IPs, with IP blocking if an IP has more than x failures in x time.

Linking an IP address to a user by adding an additional auth layer when a new IP is detected for that user.

 

I especially like that last one.

And I believe that when all those are implemented, the application will be as secure as it needs to be :)

 

Thanks for the responses.


Just as important as all of the above is making sure that you've validated all input and escaped all output. This ensures there is no way anyone can run an injection attack against the site to insert malware into the HTML, gain access to the database, or do whatever else you don't want happening.

While the actions you've described are fairly straightforward and easy to implement, general security relies a lot more upon extensive knowledge of the language and environment. That, and lazy developers, is why so many sites are attacked using these methods.

 

So if you want to write a really secure web app, you need to study, and study a lot. Security is an ongoing process in which you constantly need to keep up to date; it's not a product to be tacked on afterwards, as some people think.


I do agree, and I've checked each input field with mysql_real_escape_string().

I also use a database wrapper, so I first retrieve the specific user object and match the passwords afterwards, which in my opinion is safer than escaping the username and password and checking whether a record with both values exists.
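Roughly, that fetch-then-verify pattern looks like this (getFirstRow() and the sha1-with-salt scheme are assumptions for illustration, not details confirmed in this thread):

// Fetch the user row by username only...
$db = new Database();
$db->setQuery(sprintf(
	'SELECT * FROM user WHERE username="%s"',
	mysql_real_escape_string($_POST['username'])
))->execute();
$user = $db->getFirstRow(); // hypothetical accessor on the wrapper

// ...then compare the salted hash in PHP, instead of matching both
// columns inside the SQL statement.
if ($user !== null && $user['password'] === sha1($user['salt'] . $_POST['password'])) {
	// authenticated
}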

 

I've also implemented a password policy.

The internal application isn't as secure and doesn't check each input field, because it's only for internal use.

As long as I'm able to keep external people from logging into the system, nothing can happen.

 

The "application routing" that handles where your request takes you is the following:

public function deploy(){
	//render main part
	ob_start();
	$db = new Database();
	//count failed attempts from this IP in the past day; block after 10
	$db->setQuery(sprintf('SELECT * FROM failed_attempt WHERE ip="%s" and timestamp >= "%s"',Config::getIP(),date('Y-m-d H:i:s',strtotime('-1 days'))))->execute();
	if($db->getNumRows() >= 10){
		Route::deploy('login','ip_blocked');
	}
	else{
		//make sure every action is secure
		if(Session::get('_someinternalvarname/name',null) !== null && Session::get('_someinternalvarname/ip_valid', false) == true){
			$module = isset($_GET['m']) ? $_GET['m'] : 'home';
			$action = isset($_GET['a']) ? $_GET['a'] : 'index';
			Route::deploy($module,$action);
		}
		else{
			if(Session::get('_someinternalvarname/name',null) !== null && Session::get('_someinternalvarname/ip_valid', false) == false){
				Route::deploy('login','registerip');
			}
			else{
				//redirect to login
				Route::deploy('login','index');
			}
		}
	}

	$this->main_buffer = ob_get_contents();
	@ob_clean();
	$this->renderTemplate();
}

 

I think this will block all unwanted requests :)


You seem to have thought about the basics at least, yep. :)

 

Though, there are a couple of things in that code that might be problematic, depending upon how you've handled them in the called functions.

The first item is whether or not you've validated the IP before using it in your system; since this is hidden in the Config::getIP() call, I can't say either way. The second item is the $module and $action variables, seeing as you're (apparently) using the raw $_GET values without any validation/sanitation.

 

Just two areas I'd have a closer look at, to be on the safe side, based upon the code above. ;)

 

PS: You can use ob_get_clean() instead of ob_get_contents(); ob_end_clean();. It shortens the code a bit and removes a function call. Though if you ask me, you should be doing this without using output buffering at all. :P
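On that second item, one way to whitelist the raw $_GET values before they reach Route::deploy() might be the following (the allowed-module list is of course hypothetical; fill in your real modules):

// Only accept module/action values from a known-good list; anything
// else silently falls back to the defaults.
$allowed_modules = array('home', 'servers', 'users', 'login');
$module = isset($_GET['m']) && in_array($_GET['m'], $allowed_modules, true)
	? $_GET['m'] : 'home';
$action = isset($_GET['a']) && preg_match('/^[a-z_]+$/', $_GET['a'])
	? $_GET['a'] : 'index';
Route::deploy($module, $action);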


Well, the output buffering allows me to break the request "mid-parse" and redirect through a header to a different page.

I do indeed use the "raw" values for module and action, but again, they are only used after you have logged in as a valid user, so the only problem is if one of my co-workers tries to mess things up. But then again, the tool itself allows a lot more screwing up than the manager does :) (the tool lets you run any SQL statement across several servers/databases at the same time, to name just one piece of functionality).

 

About the Config::getIP() function, I use the following:

 

public static function getIP(){
	return $_SERVER['REMOTE_ADDR'];
}

 

I could possibly use filter_var($_SERVER['REMOTE_ADDR'], FILTER_VALIDATE_IP, FILTER_FLAG_IPV4), but I don't think it's really needed; the IP address is only used to group attempts together and to link a user to an IP through email validation.


Hmm, sounds like you might be open to CSRF attacks, and to SQL injection via the getIP() function. So I would strongly recommend validating the IP address, to prevent spoofed invalid IPs at least.

 

Remember: even if you trust your users, you should never trust everything that happens on their computers.
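For instance, a validated version of that helper might look like this (a sketch; returning null on failure is my choice here, so callers would need to handle it):

// Inside the Config class: only return REMOTE_ADDR when it is a
// well-formed IP address; filter_var() returns false otherwise.
public static function getIP(){
	$ip = filter_var($_SERVER['REMOTE_ADDR'], FILTER_VALIDATE_IP);
	return $ip === false ? null : $ip;
}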

