Fixing insecure Web code is up to site owners
The Web servers are all patched and configured to perfection, with strong passwords and encrypted communication all around. Nothing but a divine act is going to compromise the security of the Web site. That is, nothing except a Web browser and a well-placed single quote.
That is precisely how hackers plugged themselves into the Web sites of Guess Jeans, PetCo, Tiffany's and scores of others. Unfortunately for PetCo, 500,000 customers' credit-card numbers were left vulnerable. After 90 days of embarrassing media coverage and apologetic press releases, the Federal Trade Commission served PetCo with a Civil Investigative Demand.
These attacks were not the result of a virus or worm, or even of a failure to patch against well-known security glitches. The vulnerabilities were entirely within the control, and the responsibility, of the Web sites themselves.
First, let's understand how these attacks work. Web sites respond to users who request URLs. A URL specifies what piece of information the user wants. The Web site's job is to fulfill the request. For example, let's say a user wants to read a particular news article. A typical URL would look like this: http://www.foo.com/news.asp?story=100
The URL essentially asks the "news.asp" Web application, located on www.foo.com, for news story 100 in the database. Breaking the URL down to its parts, "news.asp" is the actual software that's being invoked. To identify the requested article, "story" is a variable name with a value of 100.
After the Web application receives the request, the news.asp software will use the story value to create a SQL statement for querying the news database. A typical example of such a statement in its simplest form would be:
SELECT story from news where id ='100'
This SQL statement requests the story, which has an ID of 100, from the news database. Once news.asp receives the data, the Web page is created and sent to the user.
It is entirely possible that the news.asp Web application did not properly sanity-check the incoming "100" value, meaning the application did not verify whether 100 fell in the range of expected values. If the user-supplied data is not properly sanity-checked, an attacker could alter the resulting SQL statement to one of his choosing. Repercussions of this oversight could include complete access to the back-end database, including credit-card numbers.
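The flaw described above can be sketched in a few lines. This is a hypothetical Python stand-in for the news.asp handler (the function name is illustrative, not from the article): the user-supplied "story" value is pasted directly into the SQL text with no check at all.

```python
# Hypothetical sketch of the flaw: the handler builds the SQL statement by
# string concatenation, with no sanity check on the user-supplied value.
def build_news_query(story_value):
    # Vulnerable: whatever the user sent becomes part of the SQL text.
    return "SELECT story from news where id ='" + story_value + "'"

# A normal request produces the expected statement...
print(build_news_query("100"))
# ...but attacker-controlled input rewrites the statement itself.
print(build_news_query("100' UNION SELECT number from creditcards where type='visa"))
```

Because the attacker controls part of the statement's text rather than just a value inside it, the query's meaning is no longer decided by the developer.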
Let's analyze this further and see how this incident could happen. Instead of just a value of 100, what happens if an attacker were to put the following string into the story value:
http://www.foo.com/news.asp?story=100' UNION SELECT number from creditcards where type='visa
This effectively makes the SQL statement:
SELECT story from news where id='100' UNION SELECT number from creditcards where type='visa'
The resulting SQL statement asks not only for story 100, but also for all the Visa credit-card numbers. Because the attacker's user-supplied input was not properly sanity-checked before being used, the SQL statement was maliciously altered.
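The whole attack can be reproduced against a toy database. The sketch below uses an in-memory SQLite database whose table and column names mirror the article's example (the data is invented for illustration); the same concatenation flaw lets the injected UNION pull card numbers out alongside the news story.

```python
import sqlite3

# Toy in-memory database standing in for the site's back end.
# Table, column names and data are illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE news (id TEXT, story TEXT);
    CREATE TABLE creditcards (number TEXT, type TEXT);
    INSERT INTO news VALUES ('100', 'Local team wins');
    INSERT INTO creditcards VALUES ('4111111111111111', 'visa');
""")

def vulnerable_lookup(story_value):
    # Same flaw as in the article: unchecked input concatenated into the SQL.
    sql = "SELECT story from news where id ='" + story_value + "'"
    return [row[0] for row in db.execute(sql)]

# A normal request returns only the story.
print(vulnerable_lookup("100"))
# The injected UNION returns the story AND the credit-card numbers.
payload = "100' UNION SELECT number from creditcards where type='visa"
print(vulnerable_lookup(payload))
```

No server bug is involved; the database faithfully executes exactly the statement the application handed it.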
Patching a Web server against the buffer overflow vulnerability of the week will not protect your clients against becoming another PetCo. The Web server software vendors, such as Apache and Microsoft, do not supply the online banking software. For many organizations on the Web, Web site code is unique and custom-written. This means Web-server vendors are not able to supply a security patch for buggy Web-application code. The application developers must write and implement their own fixes.
Web-site security administrators are left to defend themselves without any resources beyond their own development group. So with firewalls, SSL and patches only offering limited protection for the Web application, what's the diligent course of action? The answer is simple, yet not easy to implement. Web site code should at least be reasonably secure before being released to the public and, of course, should remain secure once it gets there.
When you get right down to it, there simply is no substitute for secure coding practices. An ounce of prevention is worth a pound of cure, and software development is no different. We need to approach the development process pragmatically when it comes to security. Web-application code must undergo quality assurance (QA) testing for security as well as usability. This includes code already serving customers. If you're not sure of the security of your Web site, security assessments can be a low-impact and inexpensive way to estimate today's risk.
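What does "reasonably secure" look like for the example above? One common remedy, sketched here in Python against the same hypothetical schema, is to combine a sanity check on the input with a parameterized query, so the database driver treats the value strictly as data rather than as SQL text.

```python
import sqlite3

# Same toy schema as the article's example; data is invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE news (id TEXT, story TEXT);
    CREATE TABLE creditcards (number TEXT, type TEXT);
    INSERT INTO news VALUES ('100', 'Local team wins');
    INSERT INTO creditcards VALUES ('4111111111111111', 'visa');
""")

def safe_lookup(story_value):
    # Sanity check: a story id must be a plain unsigned integer.
    if not story_value.isdigit():
        raise ValueError("invalid story id")
    # Parameterized query: '?' is filled in by the driver, which never
    # interprets the value as SQL, so a stray quote cannot alter the statement.
    sql = "SELECT story FROM news WHERE id = ?"
    return [row[0] for row in db.execute(sql, (story_value,))]

print(safe_lookup("100"))  # ['Local team wins']
payload = "100' UNION SELECT number from creditcards where type='visa"
try:
    safe_lookup(payload)
except ValueError:
    print("rejected")  # the injection never reaches the database
```

Either defense alone would have stopped this particular attack; using both means a slip in one layer is caught by the other.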
Organizations handle QA testing either with internal staff or with external consultants. Developers should not perform QA on their own code; a fresh set of eyes is essential for an independent security review of software. Whoever is tasked with the security QA function should be an experienced Web-security expert. This ensures that during the review, the Web site gets the most bang for the buck. As a lasting measure, organizations should have independent experts perform regular security assessments on their Web sites as the code is updated. These assessments help prevent the negative security impact of new code by presenting a current hacker's-eye view of the Web site.
Hackers can take all day and all night, for as long as they want, to find a single chink in the armor. Web-site owners, on the other hand, must protect against every conceivable attack, all the time. It's not hard to see why systems are constantly compromised. Given the circumstances, no one is asked to be perfect, just to do the best they can to limit the risks. But nothing should be as easy as what happened in the cases of Guess, PetCo and Tiffany's.
For the moment, we still have the opportunity to dictate our own security policies before they are forced upon us by an unfortunate incident. Fixing insecure Web code is a corporate responsibility, not a vendor expectation.
Jeremiah Grossman is the founder and CEO of WhiteHat Security. He can be reached at email@example.com.