Cssembly · 2015/03/03 10:59

0x00 Preface


This is a translation of an article by James Forshaw on Project Zero about the principles behind CVE-2015-0002. The original is at http://googleprojectzero.blogspot.com/2015/02/a-tokens-tale_9.html.

I really enjoy the process of bug research, and sometimes there is a significant gap between how hard a vulnerability is to find and how hard it is to exploit. The Project Zero blog contains many complex exploits of seemingly trivial bugs. You may ask: why do we go to the trouble of proving that vulnerabilities are exploitable, and do we really need to? Hopefully, by the end of this post, you will have a better understanding of why we always go to great lengths to develop a proof of concept (PoC) to demonstrate a security problem.

Our primary audience for PoCs is the vendor, but developing them has other benefits as well. Customers using a vendor's systems can use a PoC to test whether their systems are vulnerable and ensure that all patches are in place. Security vendors can use it to develop mitigations and vulnerability signatures, even if the vendor is unwilling or unable to patch. If no PoC is provided, only the people who reverse engineer the patch are likely to know about the vulnerability, and they may not have your best interests in mind.

I don’t want this post to get into too much technical detail about the vulnerability (CVE-2015-0002). Instead, I will focus on the exploitability of a relatively simple vulnerability and the PoC development process. The PoC needs to be just good enough for the vendor to make a reasonable assessment of the reported vulnerability, reducing their triage workload. I’ll also explain the various shortcuts I took in PoC development, and why I took them.

0x01 Reporting the Vulnerability


The biggest problem with vulnerability research on closed or proprietary systems is the reporting process needed to get vulnerabilities fixed, especially for complex or non-obvious vulnerabilities. If the system is open source, you can develop and submit a patch, which is itself an opportunity to get the issue fixed. For closed-source systems, you have to go through the reporting process. To understand this, consider what a typical large vendor might need to do when receiving an external report of a security vulnerability.

This is a very simplified view of vulnerability response handling, but it suffices to explain the principles. For a company that develops most of its software in-house, I can’t influence the patching cycle, but I can influence the triage cycle. The easier I make triage for the vendor, the shorter that cycle, and the faster a patch can be released. Everyone benefits except those who were already exploiting the bug. And don’t forget: just because I didn’t know about the bug before doesn’t mean nobody did.

In an ideal bug research world (one where I do the minimum amount of non-research work), if I found a bug all I would have to do is write up some notes and send them to the vendor; they know their system, so they would take immediate action to develop a patch, and the job would be done. Of course, it doesn’t work that way, and getting the vendor to recognize that an issue is a security issue is an important first step. This can be a major barrier to moving from the triage cycle to the patching cycle, especially since the two are often handled by separate teams within the company. To get the best possible result, I can do two things:

  • Provide sufficient detail in the report so that the vendor can understand the vulnerability
  • Develop a PoC that clearly demonstrates the security impact of the vulnerability

0x02 Writing the Report


Although not sufficient in many cases, the report is critical to getting a vendor to fix a security issue. As you can imagine, a report like “Bug in ahcache.sys, please fix it, LOL” doesn’t really help the vendor. At a minimum, I need to provide some background, such as which systems the vulnerability does (and does not) affect, what its impact is (to the best of my knowledge), and where in the system the problem exists.

Why isn’t a report enough? Think about how large, modern software products are developed. A product may have been built module by module by independent team members. Depending on how long the buggy code has existed, the original developer may have moved on to other projects or left the company altogether. Even for relatively new code written by people you can still talk to, there is no guarantee they remember how it works. Anyone who develops software of any size will come across code they wrote a month, a week, or even a day ago and have no idea how it works. There is a real possibility that a security researcher who has spent time analyzing the software, instruction by instruction, knows it better than anyone else in the world.

You can also think about the report in a scientific sense: it states a vulnerability hypothesis. Some vulnerabilities are directly provable, such as a buffer overflow, often almost mathematically: you cannot fit 10 items into a space that only holds five. But in many cases, there is nothing better than developing a proof of exploitability. Done correctly, it gives both the reporter and the vendor an experiment to run, and that is the value of a proof of concept. A properly developed PoC lets the vendor observe the result of the experiment, turning the hypothesis into a theory that no one can refute.

0x03 Proving Exploitability by Experiment


The hypothesis is that the vulnerability has a real security impact, and we use the PoC to prove this objectively. To do so, we need to give the vendor not only a mechanism for proving the vulnerability is real, but also a clear view of why it is a security issue.

What needs to be observed depends on the type of vulnerability. For a memory corruption vulnerability, it may be enough to show that the application crashes in response to certain input. But not always: some memory corruption does not give the attacker any useful control. In that case you need to demonstrate control over the flow of execution, ideally by showing control of the EIP register.

Logic bugs can be more subtle, such as writing a file to a location that should not be writable, or running a calculator program with elevated privileges. There is no one-size-fits-all approach, but at the very least the PoC should show a security impact that can be observed objectively.

Keep in mind that I am not developing the PoC as a usable exploit (from an attacker’s point of view), just as enough of one to prove there is a security issue and get it fixed. Unfortunately, the two are not easy to tell apart, and sometimes the severity of an issue doesn’t get the attention it deserves without a demonstration of local privilege elevation or remote code execution.

0x04 Developing a proof of concept


Now let’s look at the challenges I faced in developing the PoC for the ahcache bug I discovered. Don’t forget there is a tradeoff between the time spent developing a PoC and the chance that the bug will be fixed. If I don’t spend enough time developing a working PoC, the vendor may not fix the bug; on the other hand, the longer I take, the longer users remain exposed to it.

0x05 Technical Details of the Vulnerability


A little background on the vulnerability will help the discussion that follows. Here (https://code.google.com/p/google-security-research/issues/detail?id=118) you can see the vulnerability report and the additional PoC I sent to Microsoft. The vulnerability is in the ahcache.sys driver, which was introduced in Windows 8.1, but it really lies in the Windows system call NtApphelpCacheControl, which this driver implements. The system call handles the local cache of application compatibility information, which corrects application behavior on newer versions of Windows. You can read more about application compatibility here (https://technet.microsoft.com/en-us/windows/jj863248).

Some operations of this system call are privileged, so the driver checks the calling application to ensure it has administrator rights. This is done in the function AhcVerifyAdminContext, which looks something like the following code:

```c++
BOOLEAN AhcVerifyAdminContext()
{
    BOOLEAN CopyOnOpen;
    BOOLEAN EffectiveOnly;
    SECURITY_IMPERSONATION_LEVEL ImpersonationLevel;
    PACCESS_TOKEN token = PsReferenceImpersonationToken(
        NtCurrentThread(),
        &CopyOnOpen,
        &EffectiveOnly,
        &ImpersonationLevel);

    if (token == NULL) {
        token = PsReferencePrimaryToken(NtCurrentProcess());
    }

    PSID user = GetTokenUser(token);
    if (RtlEqualSid(user, LocalSystemSid) || SeTokenIsAdmin(token)) {
        return TRUE;
    }
    return FALSE;
}
```

This code first checks whether the current thread is impersonating another user. Windows allows a thread to impersonate other users on the system so that security operations can be evaluated correctly. If the thread is impersonating, a pointer to the access token is returned. If PsReferenceImpersonationToken returns NULL, the code instead queries the access token of the current process. Finally, the code checks whether the token’s user is the local system user, or whether the token is a member of the Administrators group. If the function returns TRUE, the privileged operation is allowed to continue.

That all seems correct, so what’s the problem? While full impersonation is a privileged operation limited to users who hold the impersonation privilege, an ordinary user is still permitted to impersonate other users for non-security-related purposes. The kernel distinguishes privileged from unprivileged impersonation by assigning a security level to the token when impersonation is enabled. To understand this vulnerability, only two levels matter: SecurityImpersonation means the impersonation is privileged, and SecurityIdentification means it is not.

If the token is assigned SecurityIdentification, only operations that query token information, such as the token’s user, can be performed. If you try to open a protected resource, such as a file, the kernel denies access. Here is the vulnerability: looking at the code, PsReferenceImpersonationToken returns a copy of the token along with its assigned security level, but the code never verifies that the level is SecurityImpersonation. This means that an ordinary user who can obtain a local system token can impersonate it at SecurityIdentification level and still pass the check, because the check only needs to query the token’s user.

0x06 Proving the Basic Exploit


To exploit this vulnerability, you need to capture a local system access token, impersonate it, and then invoke the system call with appropriate parameters. All of this must be possible from a normal user account, otherwise it isn’t a security hole. The system call is undocumented, so perhaps we can take a shortcut: maybe it’s enough to show that we can capture the token, and leave it at that?

No. Such a PoC would only prove that something already documented as possible is indeed possible: a normal user capturing a system token and impersonating it at identification level is how impersonation is designed to work, and is not in itself a security issue. I already knew that COM supports impersonation and that there are a number of complex system-privileged services (such as BITS) that a normal user can interact with, and which will impersonate our application when we communicate with them. But that alone does not prove we can reach the vulnerable AhcVerifyAdminContext check in the kernel, let alone bypass it.

So began a long process of reverse engineering to determine how the system call works and what parameters need to be passed to make it do something useful. Some existing work from other researchers helped (e.g. http://www.alex-ionescu.com/?p=39), but certainly nothing was ready to use. The system call supports many different operations, and not all of them require complex parameters. For example, the AppHelpNotifyStart and AppHelpNotifyStop operations are easy to call, and their authorization relies on the AhcVerifyAdminContext function. We can now construct a PoC that verifies the check is bypassed just by looking at the system call’s return code:

```c++
BOOL IsSecurityVulnerability()
{
    ImpersonateLocalSystem();

    NTSTATUS status = NtApphelpCacheControl(AppHelpNotifyStop, NULL);
    return status != STATUS_ACCESS_DENIED;
}
```

Is that enough to prove the vulnerability can be exploited? History tells me it is not. For example, this issue (https://code.google.com/p/google-security-research/issues/detail?id=127) involved almost exactly the same trick: impersonation used to bypass an administrator check. In that case, I didn’t have enough evidence that it caused any problem beyond information disclosure, so it was not fixed, even though it is a genuine security issue. To prove exploitability, we need to spend more time on the PoC.

0x07 Improved proof of concept


To improve on the first PoC, I needed to better understand what the system call is doing. The application compatibility cache stores query results from the application compatibility database. The database contains rules that tell the application compatibility system which executables need shims applied to implement custom behavior, such as lying about the operating system version number to work around faulty version checks. The query is performed at process creation time; if a suitable match is found, it is applied to the new process, and the new process then looks up the shim data it needs to apply from the database.

Because this happens every time a new process is created, querying the database file each time would incur a significant performance overhead. The cache reduces this cost: database query results can be added to the cache, so if the same executable is launched again later, the cached result quickly settles whether or not to apply a set of shims, eliminating the time-consuming database query.

So we should be able to add a cached copy of an existing query and have it applied to an arbitrary executable. I spent some time working out the format of the system call’s parameters so that I could add my own cached query. For 32-bit Windows 8.1, the structure looks like this:

```c++
struct ApphelpCacheControlData {
    BYTE           unk0[0x98];
    DWORD          query_flags;
    DWORD          cache_flags;
    HANDLE         file_handle;
    HANDLE         process_handle;
    UNICODE_STRING file_name;
    UNICODE_STRING package_name;
    DWORD          buf_len;
    LPVOID         buffer;
    BYTE           unkC0[0x2C];
    UNICODE_STRING module_name;
    BYTE           unkF4[0x14];
};
```

You can see there are a lot of unknowns in the structure. This would be a problem if we wanted the PoC to work on Windows 7 (where the structure is slightly different) or on 64-bit (where the structure is a different size), but it isn’t important for our purposes. We don’t need exploit code that works on all versions of Windows; we just need to prove to the vendor that this is a security issue. We can do that by simply stating the PoC’s limitations in the report (the vendor will notice them anyway). The vendor should be able to determine whether the issue spans operating system versions; it is their product, after all.

So now that we can add an arbitrary cache entry, what should we actually add? I can only add cache entries for queries that already exist in the database. You could modify the database itself to do things like runtime code patching (the application compatibility system also supports patch shims), but that requires administrator privileges. So I needed an existing shim to reuse.

I built a copy of the SDB Explorer tool (https://github.com/evil-e/sdb-explorer) so that I could dump any useful shims from the existing database. I found a shim for 32-bit programs that causes process creation to launch the executable regsvr32.exe instead, passing along the original command line. regsvr32.exe loads a DLL passed on the command line and executes a specific exported method, so if we can control the command line of a privileged process, we can redirect it to elevate our privileges.

This again limits the PoC to 32-bit processes, but that’s fine. The final step is choosing a process to redirect. I spent a lot of time looking for ways to start a privileged process while controlling its command-line arguments. One way I already knew of is UAC auto-elevation. Auto-elevation was added in Windows 7 to reduce the number of UAC dialogs a typical user sees. The operating system defines a fixed list of applications that are allowed to auto-elevate: with UAC at its default setting and the user an administrator, elevation requests for these applications do not show a dialog. I can abuse this by adding a cache entry for an existing auto-elevating application (I chose ComputerDefaults.exe) and asking for the application to be run elevated. The elevated application is redirected to regsvr32.exe with a command line we fully control, regsvr32.exe loads my DLL, and we now have code executing with elevated permissions.
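Putting the steps above together, the chain the PoC drives looks like this (my own summary of the flow described in this post):

```
1. Capture a local system token (e.g. from a privileged service such as
   BITS) and impersonate it at SecurityIdentification level.
2. Call NtApphelpCacheControl to add a cache entry for
   ComputerDefaults.exe that applies the regsvr32 redirection shim;
   the flawed AhcVerifyAdminContext check lets this through.
3. Ask Windows to run ComputerDefaults.exe elevated; auto-elevation
   means no UAC prompt is shown (default UAC settings, admin user).
4. The app-compat cache redirects the elevated process to regsvr32.exe,
   which receives our fully controlled command line.
5. regsvr32.exe loads our DLL, which now runs with elevated privileges.
```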

Admittedly, the PoC doesn’t achieve anything that isn’t already possible through various UAC bypass mechanisms (such as the Metasploit modules at https://github.com/rapid7/metasploit-framework/tree/master/external/source/exploits/bypassuac), but those don’t work in every configuration. By providing an observable result (arbitrary code running as an administrator), the problem was made plain enough that Microsoft could reproduce and fix it.

0x08 A Final Interesting Point


Since it would be easy to mistake this for nothing more than a UAC bypass, I decided to spend a little time developing a new PoC that gains local system privileges without relying on UAC. Sometimes I like writing exploits just to prove it can be done. To convert the original PoC into one that yields local system privileges, I needed a different application to redirect. The most likely targets seemed to be registered scheduled tasks, since you can sometimes pass arbitrary arguments to the task handler. The target task therefore has three requirements: an ordinary user must be able to start it, it must start a process with local system privileges, and that process must accept an arbitrary user-specified command line. After a bit of searching, I found my ideal target: the Windows Store Maintenance Task (WSTask). As we can see, it runs as the local system user.

By looking at the task file’s DACL with a tool such as icacls, we can confirm that ordinary users can start it. Notice in the screenshot below that NT AUTHORITY\Authenticated Users has read and execute (RX) permissions.

Finally, by examining the task’s XML file, we can check whether an ordinary user can pass arbitrary parameters to the task. WSTask uses a custom COM handler (https://msdn.microsoft.com/en-us/library/windows/desktop/aa381370(v=vs.85).aspx), but it allows the user to specify two command-line parameters. This causes the executable C:\Windows\System32\taskhost.exe to be run as the local system user with arbitrary command-line arguments.

All that was needed was to modify the PoC to add a cache entry for taskhost.exe, then start the task with the path to our DLL as a parameter. This has some limitations, notably that it only works on 32-bit Windows 8.1 (there is no 32-bit taskhost.exe to redirect on 64-bit platforms), but I’m sure that with some effort it could be made to work on 64-bit as well. Since the vulnerability has now been fixed, I have attached this new PoC to the original issue (https://code.google.com/p/google-security-research/issues/detail?id=118#c159).

0x09 Conclusion


I hope I’ve demonstrated some of the lengths bug researchers go to to make sure bugs get fixed. Ultimately, it is a trade-off between the time spent developing the PoC and the likelihood of the vulnerability being fixed, especially when the vulnerability is complex or non-obvious.

In this case, I think I made the right tradeoff. Although the PoC I sent to Microsoft was, on its face, just a UAC bypass, combined with the report it allowed them to determine the true severity and develop a patch. Of course, if they had pushed back and claimed it was not exploitable, I would have developed a more powerful PoC. As a further demonstration of the severity, I also developed the exploit that gains local system access from a regular user account.

Disclosing PoCs is also valuable to users and security companies, who can use them to develop mitigations for open vulnerabilities. Without a PoC, it is difficult to verify that a security problem has been fixed or mitigated. It also helps educate researchers and developers about the kinds of problems to look out for when developing security-sensitive applications. Finding bugs isn’t the only way Project Zero helps improve software security; education is just as important.

Project Zero’s mission is to tackle software vulnerabilities, and developing proofs of concept that help software vendors and open source projects take sensible action to fix bugs is an important part of that mission.