Virus attacks continue to plague businesses, and most of the solutions available rely on snapshots of the signatures of the bad code. Here, a Cryptzone exec outlines how techniques such as application whitelisting can be used to prevent virus attacks.-- Jennifer Bosavage, editor
Thirty years ago, IBM launched the XT 5160 – the first hard-drive, DOS-based PC. In less than three decades, the "personal computer" has evolved to form the backbone of the networked world that we all rely on.
The computer virus, nowadays so seemingly tied to the PC, was actually in existence almost a decade earlier. It took until 1986 for these two threads to come together, when the first PC virus, "Brain," was born. By 2000, networks were spreading and so were worms such as ILOVEYOU, which was considered one of the most damaging.
[Related: How To Maintain Security In a BYOD world]
So where are we today? Many would argue that nothing much has changed in the last 10 years, with new vulnerabilities appearing regularly, patching becoming the norm and anti-virus doing its best to keep our networked world working. However, there has been a big change, or, perhaps, more of an addition: targeted attacks and the involvement of nation states. Those two additions have often been linked because of viruses such as Stuxnet and DuQu, but each points to separate new and worrying developments.
Targeted attacks can be engineered to seek out a very specific machine, infrastructure or geography. They could be used to target one company, perhaps with the intention of stealing trade secrets or discrediting that company. For example, look at the infection map for Flame; it is tightly grouped around the Gulf states. The other development is the apparent involvement of the nation state. The scale of the enterprise behind such attacks is worrisome: In less than 20 years, the resources that can be deployed have grown from "two brothers" to "nation state."
Today, most of us use anti-virus software that relies on a snapshot of the signatures of the bad stuff. Its major disadvantage is that it does not know about new viruses, which is why some of these targeted attacks have managed to be so successful.
So what of the future? There is some light at the end of the tunnel in the form of application whitelisting. The technique has two parts. First, a snapshot of the computer is made that will contain signatures for all the programs, operating system elements, drivers, etc., installed at that time. Second, an agent is installed that checks everything just before it runs to make sure it was in the snapshot. Although that technique still uses signatures, it has the major advantage of being able to block unknown code and prevent "zero-day threats."
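The two-part technique can be sketched in miniature. The snippet below is a simplified illustration, not any vendor's implementation: it uses SHA-256 file hashes as stand-ins for the signatures, builds the snapshot, and performs the agent's pre-execution check.

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file -- our stand-in 'signature'."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_snapshot(paths):
    """Part one: snapshot the signatures of everything installed right now."""
    return {path: sha256_of(path) for path in paths}

def may_run(path, snapshot):
    """Part two: the agent's check just before execution -- the file must be
    in the snapshot and still match its recorded signature."""
    return snapshot.get(path) == sha256_of(path)

# Demo: snapshot a "program," then tamper with it.
with open("app.bin", "wb") as f:
    f.write(b"original program code")
snapshot = build_snapshot(["app.bin"])
print(may_run("app.bin", snapshot))   # unmodified: allowed to run

with open("app.bin", "wb") as f:
    f.write(b"unknown injected code")  # e.g. a zero-day the snapshot never saw
print(may_run("app.bin", snapshot))   # blocked: signature no longer matches
```

Note that the check blocks the tampered file without ever having seen a signature for the "bad stuff" – which is exactly the zero-day advantage described above.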
Why do we still have to put up with anti-virus when we have application whitelisting? Both techniques use signatures (in part) and signatures need to be generated and managed – so what is the issue?
Back in 1986, when the first PC virus came along, PCs had been around for a few years and already contained thousands of executables. Each PC at a company probably contained different executables, because one was in engineering and another in finance, for example. So, do you look for the one piece of static bad stuff that is the same everywhere, or a variable amount of good stuff that is different everywhere?
Two changes have occurred in the last couple of decades that point toward reconsidering the options. First, the numbers game: The amount of bad stuff grows daily, and some anti-virus signature files contain approximately 20 million signatures. The good stuff has not grown as fast; a signature file for a standard operating system such as Windows XP Professional will contain nearly 50,000 signatures. Second, the rate of change has increased. Viruses used to be static, but nowadays they are written to self-adapt or to operate in a command-and-control mode where they can be remotely updated.
Now what? Does a solution provider look for the 50,000 relatively static signatures of the good stuff or the growing 20 million adapting signatures of the bad stuff?
Most companies hope they never see any bad stuff and have no expertise in the dark science of understanding it. So, it is sensible that both the generation and updating of anti-virus signatures be "outsourced" to the experts, and that is how the industry has developed. Application whitelisting appears to require the opposite approach. Because PCs are unique to every organization, the organization itself would be required to both generate and update the signatures of the good stuff. That might take quite a lot of time and effort – and appears counter to the current trend of increasing IT outsourcing. There is also the issue of diversity: With anti-virus, the same signature file can be applied to every machine, but with application whitelisting the worst-case scenario is that the signature file of every PC is different.
Today the concept of "signing" software is becoming commonplace. A signature contains metadata such as the software author, a checksum to verify that the object has not been altered, and versioning information. Signing involves a process using a pair of keys, similar to SSL or SSH sessions. The private key used to sign the code is unique to a developer or company. Those keys can be self-generated or obtained from a trusted certificate authority (CA). When the public key used to authenticate the code signature can be traced back to a trusted root CA using secure public key infrastructure (PKI), the user knows that the code is genuine. We see this most commonly today in environments where the source of a given piece of code may not be immediately evident – for example, a Java Web Start application accessed from your browser.
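The key-pair mechanics can be demonstrated with standard OpenSSL commands. This sketch uses a self-generated key, as the article mentions; in a full trust chain the certificate holding the public key would instead be issued by a CA, and the file names here are illustrative.

```shell
# Generate a key pair (self-generated here; a trusted CA could issue
# a certificate instead, anchoring the signature to a root of trust)
openssl genrsa -out signer.key 2048
openssl rsa -in signer.key -pubout -out signer.pub

# A stand-in for the code object being signed
printf 'program bytes' > app.bin

# Sign: hash the code and sign the digest with the private key
openssl dgst -sha256 -sign signer.key -out app.bin.sig app.bin

# Verify: anyone holding the public key can confirm the code is unaltered
openssl dgst -sha256 -verify signer.pub -signature app.bin.sig app.bin
```

If a single byte of app.bin changes after signing, the final command reports a verification failure instead of "Verified OK" – the checksum property described above.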
In the context of application whitelisting, the most interesting use of signed code is to provide updates and patches for software. Most OS manufacturers now provide signed updates to ensure that bad stuff cannot be distributed via the patching system.
That same signing process can now be used by application whitelisting solutions, such as Cryptzone's SE46. The agent, which checks everything just before it runs, clearly trusts the signatures generated for that PC in the first place, especially if they have been signed in a way similar to the above. But the trust model can be extended to include other signing authorities. That means it is now possible to have a Windows PC whose trust model is extended to include, for example, Microsoft, Adobe and Cryptzone, so it can self-update without any need to manage the changing signatures in-house. Effectively, the management of the signatures of the good stuff has now been outsourced in much the same way as for anti-virus.
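The extended trust model amounts to a small change in the agent's check: allow a file either because it matches the local snapshot, or because it carries a valid signature from a trusted publisher. The sketch below is illustrative only; `publisher_of` is a hypothetical hook that, in a real agent, would validate the file's code signature via PKI and return the signing publisher's name.

```python
import hashlib

# Extended trust model: publishers whose signed code may run and self-update
TRUSTED_PUBLISHERS = {"Microsoft", "Adobe", "Cryptzone"}

def sha256_of(path):
    """SHA-256 hex digest of a file -- the local 'signature'."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_run(path, snapshot, publisher_of):
    """Allow execution if the file matches the local snapshot, OR if it is
    validly signed by a publisher in the trust model. `publisher_of` is a
    hypothetical callback returning the verified publisher name, or None."""
    if snapshot.get(path) == sha256_of(path):
        return True                                  # known-good local file
    return publisher_of(path) in TRUSTED_PUBLISHERS  # trusted signer

# Demo: a vendor-signed update that is NOT in the local snapshot
with open("update.bin", "wb") as f:
    f.write(b"new patch from the vendor")
snapshot = {}                            # the update is unknown locally
fake_sigs = {"update.bin": "Microsoft"}  # stand-in for real signature checks
print(may_run("update.bin", snapshot, fake_sigs.get))  # True: trusted signer
```

The whitelist itself never has to be regenerated when a trusted vendor ships a patch, which is the outsourcing benefit described above.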
Who is in control of your infrastructure today? With certificate-based application whitelisting, we have a way of replacing anti-virus without imposing a significant time and management overhead. So, the answer would be: just you and any developers you choose to allow -- and that's it!