

Tuesday 27 December 2011

Explained: How Windows protects your PC


Your Windows computer should, if you're sensible, already have formidable defences in the form of advanced anti-malware.

However, we're now entering an age of cloud computing, filled with downloadable applets, at a time when we're also fighting an intense arms race against ever more ingenious malware writers.

Because of this, your PC's defences have to handle code from many sources and still protect you from outside threats.

How is this possible, and what ways have the bad guys already found to thwart these efforts? Most importantly, how can you be sure your computer only runs what you think it's running?

Signing in


When you download applets or freeware written by individuals, rather than applications bought from trusted vendors, there's no way the average user (or even a security professional) can be sure that the software is really what it claims to be and not something else entirely.

With the arms race between developers and hackers now gathering pace, it could take more than just advanced antivirus software to match the state of the art in malicious programming. Luckily, Windows has an ingenious way to protect itself. This is Microsoft's Authenticode system.

You'll have seen evidence of Authenticode in action when you try to install software that hasn't been through a process known as signing. A warning popup appears, explaining that the installation program's publisher could not be verified and offering you a chance to stop it running.
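If you're curious, you can query a file's signature status for yourself. The short Python sketch below, a hedged illustration in which the file path is only a placeholder, shells out to PowerShell's built-in Get-AuthenticodeSignature cmdlet:

```python
import subprocess

# Ask Windows for the Authenticode status of a file via PowerShell's
# built-in Get-AuthenticodeSignature cmdlet. The path below is a
# placeholder; point it at any .exe or .dll on your system.
target = r"C:\path\to\installer.exe"
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"(Get-AuthenticodeSignature '{target}').Status"],
    capture_output=True, text=True,
)
# Prints 'Valid' for correctly signed code, 'NotSigned' for unsigned
# code, or 'HashMismatch' if the file was altered after signing.
print(result.stdout.strip())
```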

Unique signature

Programs are signed using a public-key algorithm such as RSA. The publisher computes a cryptographic hash that uniquely describes the program, then signs that hash with a private key known only to the publisher. The matching public key is freely available, so anyone can check the result.

To verify the program, the operating system obtains the public key, uses it to recover the hash from the supplied signature, and compares that with a hash it computes itself from the program. If the two match, the code is what the supplier says it is, and it can be allowed to run.
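The whole exchange is easy to demonstrate. Here's a minimal Python sketch using the third-party cryptography package; it illustrates the general sign-and-verify idea, not Microsoft's actual Authenticode implementation:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Generate a throwaway RSA key pair: the private half signs,
# the public half verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

program = b"pretend these bytes are an installer"

# Sign a SHA-256 hash of the program with the private key.
signature = private_key.sign(program, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can now check the signature.
try:
    public_key.verify(signature, program, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: code is unmodified")
except InvalidSignature:
    print("Signature invalid: code has been tampered with")

# Flip even one byte and verification fails.
try:
    public_key.verify(signature, program + b"!", padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("Tampered copy correctly rejected")
```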

The public keys themselves are distributed by several large and trusted security companies, known as certificate authorities, in the form of publicly accessible digital certificates. A certificate contains the details of the company that published the program your operating system is trying to verify, along with that company's public key.

If you're a developer, having a certificate authority generate a digital certificate to show that you and your code can be trusted is usually a costly and involved process. Obtaining a commercial certificate for your company means proving beyond any doubt that you are who you claim to be.

According to Microsoft, your physical presence may be requested as a representative of your business to verify your identity against photo ID. The business itself must also have a suitable Dun & Bradstreet rating. This rating is a measure of your company's financial stability and indicates, among other things, that the company is still in business. This prevents hackers from simply posing as a company that has quietly stopped trading in order to gain a false certificate for malicious purposes.

Finally, applicants must also pledge not to distribute malware. Whether this final measure is any more than lip service is up for debate.

Individual developers can also obtain a personal certificate to sign their products. In this case, no Dun & Bradstreet rating is required, but your credentials will be checked against consumer databases to make sure you are who you say you are. In both cases, obtaining a certificate will usually cost you a fair amount of money.

With so many certificate authorities, it pays to shop around for the best deal. A one-year Microsoft Authenticode certificate from VeriSign, for example, will cost you $499 (about £250). If you expect people to download and install your software beyond one year, you'll have to renew the certificate or your code will become unsigned again.
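To see what a certificate actually binds together, namely an identity, a public key and an expiry date, here's a hedged Python sketch that builds a self-signed test certificate with the cryptography package. Windows won't trust it for real distribution, but note the one-year validity, which mirrors the renewal requirement above:

```python
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Self-signed: the subject and the issuer are the same made-up publisher.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Test Publisher")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                # who the certificate identifies
    .issuer_name(name)                 # who vouches for it (ourselves here)
    .public_key(key.public_key())      # the public key being certified
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))  # one year
    .sign(key, hashes.SHA256())
)

print(cert.subject, cert.not_valid_after)
```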

Public keys


Eagle-eyed readers might have spotted that public keys are the same general mechanism used to encrypt email and to secure websites with SSL, where they prove that the server to which you're making a secure, encrypted connection is the real thing and not a dummy that's been set up to skim usernames and passwords.
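You can watch the same verification happen in Python's standard ssl module, which refuses the connection outright if the server's certificate doesn't check out. The host below, example.com, is just a convenient public test site:

```python
import socket
import ssl

# create_default_context() enables certificate verification and
# hostname checking, just as a browser would.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # If we get here, the certificate chained up to a trusted
        # authority and matched the hostname.
        cert = tls.getpeercert()
        print("Issued to:", dict(x[0] for x in cert["subject"]))
        print("Issued by:", dict(x[0] for x in cert["issuer"]))
        print("Expires:", cert["notAfter"])
```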

The main problem with signing code is that the average user has no idea what the associated popup means when the code's publisher cannot be verified. There's a tendency to take a chance on becoming infected and to leave everything to the antivirus software, but what if this fails to spot malicious code?


In the sand


One solution is to use a 'sandbox' to quarantine running code, so that any attempt to make unauthorised changes to the system can be caught before it is carried out. Some antivirus software (even free versions) now provides the facility to automatically run suspect or unsigned programs in a sandbox.

Perhaps more importantly, web browsers are also beginning to use sandboxes, which is another good reason to abandon that old version of Internet Explorer. Sandboxing greatly reduces the ability of malicious or hijacked sites to silently install code on your computer simply because you surfed to them.

A sandbox operates a little like a virtual machine, in that it provides the running code with a virtual environment containing everything it needs to believe it's running on real hardware. In fact, it runs in a carefully crafted simulation with severe limits placed on it, and changes that would affect the real operating system are never allowed to propagate beyond the sandbox.

The sandbox used in Google's Chrome browser is a good example of the concept in action. Rather than write a complete virtualisation product, the developers used Windows' own security model to help Chrome achieve its renowned speed.

Chrome's sandbox works because malware needs to write to unauthorised parts of RAM or to the hard disk to install itself so that it can run again after a reboot. In Windows, this can only be done using a system call to the kernel's I/O functions, all of which check the privileges of the process calling them. Chrome's sandbox is set up so that write operations never have the correct privileges and therefore fail. Return codes are faked, so the malware believes it's installing itself, but never does.
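As a toy illustration of that last trick, here's a hedged Python sketch, emphatically not how Chrome is really implemented: a pretend I/O broker that checks a privilege flag before every write and fakes a success code when the caller has none, so the 'malware' believes its install worked.

```python
class SandboxBroker:
    """Toy stand-in for a kernel I/O layer that checks caller privileges."""

    def __init__(self, can_write: bool):
        self.can_write = can_write
        self.disk = {}  # pretend file system

    def write_file(self, path: str, data: bytes) -> int:
        # Real writes only happen for privileged callers.
        if self.can_write:
            self.disk[path] = data
            return 0  # genuine success
        # Unprivileged (sandboxed) caller: do nothing, but fake the
        # success code so the caller can't tell it was blocked.
        return 0

# 'Malware' running inside the sandbox thinks its install succeeded...
sandbox = SandboxBroker(can_write=False)
status = sandbox.write_file(r"C:\autorun\evil.exe", b"payload")
print("malware saw status:", status)       # 0, i.e. 'success'
print("actually on disk:", sandbox.disk)   # {} - nothing was written
```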

For developers, Chrome's sandbox is particularly useful because it isn't deeply embedded in the browser. Developers can use it to test their own programs and make sure that they don't try to do anything they shouldn't, or which could be construed as malicious.

Chrome tarnished

Chrome's sandbox is among the most secure. For the first three years after the browser's release, it resisted all attempts at subversion during the prestigious Pwn2Own hacking competition. Held during the annual CanSecWest security conference, the competition has seen IE8 and Firefox hacked wide open. However, Chrome's sandbox may now have been breached, if the claims of one French security company are true.

Researchers at VUPEN Security recently issued a security advisory giving details of what it claims is a simple, two-step process for breaking out of Chrome's sandbox and making unauthorised changes to the operating system. The news of Chrome's 'pwning' via its sandbox has been met with concern in the online security community, not least because VUPEN Security has chosen not to share its findings with Google, as would be more usual.

When a security researcher finds an exploitable bug, he or she usually contacts the developer with the details and perhaps a suggested fix. Only when the developer has implemented a fix and issued new code does the researcher exercise their bragging rights by publishing full details of the bug online.

However, in a statement, VUPEN Security says that, "We did not alert Google as we only share our vulnerability research with our Government customers for defensive and offensive security". This stance hints at the commercialisation of so-called 'zero day' exploits – those not reported to the developer so that they can be fixed but instead kept for private exploitation or sale.

At a time when governments are talking openly about their preparations for cyber warfare, exploitable bugs, packaged and ready to use with exploit code, can command serious money. However, several pundits have questioned VUPEN Security's announcement. If, they ask, VUPEN is planning to sell its Chrome exploit to a government customer to use as a weapon, why publicise it and put potential adversaries on their guard?

Take the blue pill

Virtualbox

We take the idea of virtualisation for granted as a cheap or free path to creating entire networks of machines on a single physical computer, but it is far from being a software-only technique. Since the mid-2000s, Intel and AMD have included hardware virtualisation support inside their chips.

Both companies aimed to make the creation of virtual machine software easier, but their technologies had an unexpected side effect. When you create and run a virtual computer in a package like Oracle's free VirtualBox, for example, the entire simulation runs under the control of a process known as a hypervisor.

The hypervisor (which is also referred to as a virtual machine manager) makes sure that the entire physical computer is apparently available to the virtual machine. It handles access to everything from the BIOS to the USB ports, and resolves any resource access conflicts with other virtual machines it may be controlling at the same time.

However, not long after AMD and Intel released chips supporting virtualisation, Polish security researcher Joanna Rutkowska demonstrated an ingenious hacking technique that lets an attacker see everything the chip is doing, including everything you type.

For this, Rutkowska created a thin hypervisor and instructed the processor to run under its control. Crucially, neither the chip nor the operating system running on it has any way of knowing that it has been slipped under this malicious hypervisor. They simply continue running as if nothing had happened.

Rutkowska called her approach the Blue Pill, after the concept of the same name in the cult sci-fi movie The Matrix. Running the Blue Pill exploit drops the running operating system into a simulation of the computer that is indistinguishable from the real thing. Once inside this simulation, everything the operating system does is laid bare.

Because the simulation is indistinguishable from the real thing, malware using the same concept (like a rootkit, for example) can be created that, potentially, cannot be detected. Other researchers have pointed out flaws in Rutkowska's approach, but there's no doubting that the sheer convenience of virtualisation may ultimately prove the downfall of current processor protection measures, and lead to ever more ingenious defence mechanisms.


