Virus attacks continue to plague businesses, and most of the solutions available rely on snapshots of the signatures of the bad code. Here, a Cryptzone exec outlines how techniques such as application whitelisting can be used to prevent virus attacks.-- Jennifer Bosavage, editor
Thirty years ago, IBM launched the XT (model 5160) – the first DOS-based PC with a hard drive. In less than three decades, the "personal computer" has evolved to form the backbone of the networked world that we all rely on.
The computer virus, nowadays so seemingly tied to the PC, was actually in existence almost a decade earlier. It took until 1986 for these two threads to come together, and the first PC virus, "Brain," was born. By 2000, networks were spreading and so were worms such as ILOVEYOU, which was considered one of the most damaging.
[Related: How To Maintain Security In a BYOD world]
So where are we today? Many would argue that nothing much has changed in the last 10 years, with new vulnerabilities appearing regularly, patching becoming the norm and anti-virus doing its best to keep our networked world working. However, there has been a big change, or, perhaps, more of an addition: targeted attacks and the involvement of nation states. Those two additions have often been linked because of viruses such as Stuxnet and DuQu, but each points to a separate new and worrying development.
Targeted attacks can be engineered to seek out a very specific machine, infrastructure or geography. They could be used to target one company, perhaps with the intention of stealing trade secrets or discrediting that company. For example, look at the infection map for Flame; it is tightly grouped around the Gulf states. The other development is the apparent involvement of the nation state. The scale of the enterprise behind such attacks is worrisome: In less than 20 years, the resources that can be deployed have grown from "two brothers" to "nation state."
Today, most of us use anti-virus that relies on a snapshot of the signatures of known bad code. Its major disadvantage is that it cannot recognize new viruses, which is how some of these targeted attacks have managed to be so successful.
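At its simplest, signature-based detection amounts to comparing a file's fingerprint against a blacklist of fingerprints already known to be malicious. The following is a minimal sketch of that idea, not any real anti-virus engine; the signature list and file contents are purely illustrative, and real products also use partial and heuristic matching.

```python
import hashlib
from pathlib import Path

# Hypothetical blacklist of SHA-256 digests of known-bad files.
# Real anti-virus ships databases of millions of such signatures.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_bad(path: str) -> bool:
    """Flag a file only if its digest matches a signature already on the list.

    The weakness described in the article is visible here: a brand-new
    virus produces a digest nobody has seen, so it passes unchallenged.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SIGNATURES
```

A file containing the known-bad bytes is flagged; any novel malware, by definition absent from the list, is not.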
So what of the future? There is some light at the end of the tunnel in the form of application whitelisting. The technique has two parts. First, a snapshot of the computer is made that will contain signatures for all the programs, operating system elements, drivers, etc., installed at that time. Second, an agent is installed that checks everything just before it runs to make sure it was in the snapshot. Although that technique still uses signatures, it has the major advantage of being able to block unknown code and prevent "zero-day threats."
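The two-step technique described above can be sketched in a few lines. This is a simplified illustration, not a production whitelisting agent: the function names are invented for the example, and a real agent would hook program launch at the operating-system level rather than be called explicitly.

```python
import hashlib
from pathlib import Path

def file_signature(path: str) -> str:
    """Return a SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_whitelist(approved_paths) -> dict:
    """Step 1: the snapshot -- record a signature for every program,
    driver and OS element installed at snapshot time."""
    return {str(p): file_signature(p) for p in approved_paths}

def is_approved(path: str, whitelist: dict) -> bool:
    """Step 2: the agent check, run just before execution.

    Unknown code -- including a zero-day -- has no entry in the snapshot,
    so it is blocked by default; a tampered file fails the signature match.
    """
    expected = whitelist.get(str(path))
    return expected is not None and file_signature(path) == expected
```

Note the inversion relative to anti-virus: instead of matching against known bad signatures, anything not matching a known good signature is refused.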
Why do we still have to put up with anti-virus when we have application whitelisting? Both techniques use signatures (in part) and signatures need to be generated and managed – so what is the issue?
Back in 1986, when the first PC virus came along, PCs had been around for a few years and already contained thousands of executables. Each PC at a company probably contained different executables, because one was in engineering and another in finance, for example. So, do you look for the one piece of static bad stuff that is the same everywhere, or a variable amount of good stuff that is different everywhere?