Original address: www.objc.io/issues/6-bu…

Original author: twitter.com/chriseidhof

Release date: November 2013

What does a compiler do?

In this article, we’ll look at what compilers do and how we can use them to our advantage.

Roughly speaking, the compiler has two tasks: converting our Objective-C code to low-level code, and analyzing our code to make sure we’re not making any obvious mistakes.

These days, Xcode ships with Clang as the compiler. Wherever we write "the compiler," you can think of it as Clang. Clang is the tool that takes Objective-C code, analyzes it, and converts it into a lower-level representation that resembles assembly code: the LLVM Intermediate Representation (LLVM IR). LLVM IR is low level and operating system independent. LLVM takes these instructions and compiles them into native bytecode for the target platform. This can be done either just-in-time or at compile time.

The advantage of having LLVM IR is that you can generate and run it on any platform supported by LLVM. For example, if you write your iOS app, it will automatically run on two very different architectures (Intel and ARM), and LLVM takes care of translating the IR into native bytecode for each platform.
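As a rough sketch of this separation (the target triples below are purely illustrative), you can emit the IR once and then use LLVM's llc tool to lower the same file to assembly for different targets:

% clang -S -emit-llvm hello.c -o hello.ll
% llc -mtriple=x86_64-apple-macosx10.9.0 hello.ll -o hello-x86_64.s
% llc -mtriple=armv7-apple-ios7.0 hello.ll -o hello-armv7.s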

The big advantage of LLVM is its three-phase design: the first phase supports many input languages (such as C, Objective-C, and C++, but also Haskell), the second phase is a shared optimizer that works on LLVM IR, and the third phase emits code for many different targets (such as Intel, ARM, and PowerPC). If you want to add a language, you can focus on the first phase; if you want to add another compilation target, you don't have to worry much about the input languages. In the book The Architecture of Open Source Applications, LLVM creator Chris Lattner wrote a wonderful chapter on LLVM's architecture.

When compiling a source file, the compiler goes through several stages. To understand the different phases, we can ask clang what it does when it compiles the hello.m file.

% clang -ccc-print-phases hello.m

0: input, "hello.m", objective-c
1: preprocessor, {0}, objective-c-cpp-output
2: compiler, {1}, assembler
3: assembler, {2}, object
4: linker, {3}, image
5: bind-arch, "x86_64", {4}, image

In this article, we will focus on the first and second phases. Daniel covers phases three and four in his article on the Mach-O executable format.

Preprocessing

When you compile a source file, the first thing the compiler does is preprocess it. The preprocessor handles a macro language, which means it replaces the macros in your text with their definitions. For example, if you write the following:

#import <Foundation/Foundation.h>

The preprocessor will take this line and replace it with the contents of the file. If the header contains any other macro definitions, they will also be replaced.

That’s why people tell you to try not to import headers inside other headers: as soon as you import something, the compiler has to do more work. For example, in your header file, don’t write the following:

#import "MyClass.h"

Instead, you can write:

@class MyClass;

By doing so, you promise the compiler that there will be a class called MyClass. In the implementation file (the .m file), you can then import MyClass.h and use it.
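A minimal sketch of this pattern (the MyViewController class and its property are made up for illustration):

// MyViewController.h: forward-declare instead of importing
@class MyClass;

@interface MyViewController : NSObject
@property (nonatomic, strong) MyClass *model;
@end

// MyViewController.m: import the real header only where it is needed
#import "MyViewController.h"
#import "MyClass.h"

@implementation MyViewController
// ... use MyClass here ...
@end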

Now suppose we have a very simple pure C program called hello.c.

#include <stdio.h>

int main() {
  printf("hello world\n");
  return 0;
}

We can run a preprocessor on it and see what happens.

clang -E hello.c | less

Now, look at that code. It is 401 lines long. If we also add the following line to it:

#import <Foundation/Foundation.h>

We can run the command again and see that our file has expanded to an astonishing 89,839 lines. Some entire operating systems have fewer lines of code.

Fortunately, the situation has recently improved. There is now a feature called modules that makes this process a bit more high-level.
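With modules enabled (the -fmodules compiler flag), the import above can be written as a module import instead of a textual inclusion; a quick sketch:

@import Foundation;   // with -fmodules, pulls in the precompiled Foundation module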

Custom macros

Another example is when you define or use custom macros, like this.

#define MY_CONSTANT 4

Now, as long as you write MY_CONSTANT after this line, it will be replaced by 4 before the rest of the compilation starts. You can also define more interesting macros with parameters.

#define MY_MACRO(x) x

This article is too short to discuss the full scope of the preprocessor, but it is a very powerful tool. One common use of the preprocessor is inlining code. We strongly advise against this. For example, suppose you have the following seemingly innocent program:

#define MAX(a,b) a > b ? a : b

int main() {
  printf("largest: %d\n", MAX(10.100));
  return 0;
}

That works fine. But what about the program below?

#define MAX(a,b) a > b ? a : b

int main() {
  int i = 200;
  printf("largest: %d\n", MAX(i++,100));
  printf("i: %d\n", i);
  return 0;
}

If we compile this with clang max.c, we get the following result:

largest: 201
i: 202

Why this happens becomes obvious when we run the preprocessor and expand all the macros by issuing clang -E max.c:

int main() {
  int i = 200;
  printf("largest: %d\n", i++ > 100 ? i++ : 100);
  printf("i: %d\n", i);
  return 0;
}

In this case, it’s an obvious example of how macros can go wrong, but things can also go wrong in more unexpected and hard-to-debug ways. Instead of using macros, you should use static inline functions:

#include <stdio.h>

static const int MyConstant = 200;

static inline int max(int l, int r) {
   return l > r ? l : r;
}

int main() {
  int i = MyConstant;
  printf("largest: %d\n", max(i++,100));
  printf("i: %d\n", i);
  return 0;
}

This prints the correct result (i: 201). Because the code is inlined, it has the same performance as the macro variant, but it is much less error-prone. In addition, you can set breakpoints, you get type checking, and you avoid unexpected behavior.

The only time macros are a reasonable solution is for logging, because you can use __FILE__ and __LINE__ as well as assertion macros.
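A minimal sketch of the kind of logging macro meant here (the name MYLog is made up), which picks up the file and line of the call site:

#define MYLog(fmt, ...) NSLog((@"%s:%d " fmt), __FILE__, __LINE__, ##__VA_ARGS__)

// Usage: prints something like "MyClass.m:42 count = 3"
MYLog(@"count = %d", 3);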

Tokenization (Lexing)

After preprocessing, every .m source file is a big chunk of text. Next, this text is converted from a string into a stream of tokens. For example, take a simple Objective-C hello world program:

int main() {
  NSLog(@"hello, %@", @"world");
  return 0;
}

Let’s ask clang to dump the tokens of this program:

% clang -Xclang -dump-tokens hello.m

int 'int'        [StartOfLine]  Loc=<hello.m:4:1>
identifier 'main'        [LeadingSpace] Loc=<hello.m:4:5>
l_paren '('      Loc=<hello.m:4:9>
r_paren ')'      Loc=<hello.m:4:10>
l_brace '{'      [LeadingSpace] Loc=<hello.m:4:12>
identifier 'NSLog'       [StartOfLine] [LeadingSpace]   Loc=<hello.m:5:3>
l_paren '('      Loc=<hello.m:5:8>
at '@'   Loc=<hello.m:5:9>
string_literal '"hello, %@"'     Loc=<hello.m:5:10>
comma ','        Loc=<hello.m:5:21>
at '@'   [LeadingSpace] Loc=<hello.m:5:23>
string_literal '"world"'         Loc=<hello.m:5:24>
r_paren ')'      Loc=<hello.m:5:31>
semi ';'         Loc=<hello.m:5:32>
return 'return'  [StartOfLine] [LeadingSpace]   Loc=<hello.m:6:3>
numeric_constant '0'     [LeadingSpace] Loc=<hello.m:6:10>
semi ';'         Loc=<hello.m:6:11>
r_brace '}'      [StartOfLine]  Loc=<hello.m:7:1>
eof ''   Loc=<hello.m:7:2>

As we can see, each token consists of a piece of text and a source location. The source location is before the macro extension, so Clang can point you to the correct location if something goes wrong.

Parsing

Now the fun part begins: our stream of tokens is parsed into an abstract syntax tree (AST). Because Objective-C is a rather complex language, parsing is not always easy. After parsing, the program is available as an abstract syntax tree: a tree that represents the original program. Suppose we have a program hello.m:

#import <Foundation/Foundation.h>

@interface World
- (void)hello;
@end

@implementation World
- (void)hello {
  NSLog(@"hello, world");
}
@end

int main() {
   World* world = [World new];
   [world hello];
}

When we issue the command clang -Xclang -ast-dump -fsyntax-only hello.m, we get the following result:

@interface World- (void) hello;
@end
@implementation World
- (void) hello (CompoundStmt 0x10372ded0 <hello.m:8:15, line:10:1>
  (CallExpr 0x10372dea0 <line:9:3, col:24> 'void'
    (ImplicitCastExpr 0x10372de88 <col:3> 'void (*)(NSString *, ...)' <FunctionToPointerDecay>
      (DeclRefExpr 0x10372ddd8 <col:3> 'void (NSString *, ...)' Function 0x1023510d0 'NSLog' 'void (NSString *, ...)'))
    (ObjCStringLiteral 0x10372de38 <col:9, col:10> 'NSString *'
      (StringLiteral 0x10372de00 <col:10> 'char [13]' lvalue "hello, world"))))
@end
int main() (CompoundStmt 0x10372e118 <hello.m:13:12, line:16:1>
  (DeclStmt 0x10372e090 <line:14:4, col:30>
    0x10372dfe0 "World *world =
      (ImplicitCastExpr 0x10372e078 <col:19, col:29> 'World *' <BitCast>
        (ObjCMessageExpr 0x10372e048 <col:19, col:29> 'id':'id' selector=new class='World'))")
  (ObjCMessageExpr 0x10372e0e8 <line:15:4, col:16> 'void' selector=hello
    (ImplicitCastExpr 0x10372e0d0 <col:5> 'World *' <LValueToRValue>
      (DeclRefExpr 0x10372e0a8 <col:5> 'World *' lvalue Var 0x10372dfe0 'world' 'World *'))))

Each node in the abstract syntax tree is labeled with its original source location, so that Clang can warn your program later if something goes wrong and give you the correct location.

See also

  • Introduction to the Clang AST

Static analysis

Once the compiler has an abstract syntax tree, it can perform analyses on that tree to help you find errors. One of these is type checking, where it checks whether your program is correctly typed. For example, when you send a message to an object, it checks whether the object actually implements that message. In addition, Clang does more advanced analyses, in which it walks through your program to make sure you're not doing anything strange.

Type checking

Any time you write code, Clang helps check that you haven't made obvious mistakes. One of the obvious things to check is whether your program sends the right messages to the right objects and calls the right functions with the right values. If you have a plain NSObject*, you can't just send it a hello message; Clang will report an error. In addition, if you create a class Test that subclasses NSObject, like this:

@interface Test : NSObject
@end

And if you try to assign that object to a different type, the compiler will help you and warn you that what you’re doing may not be correct.
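For example, a small sketch of the kind of diagnostics meant here (the exact warning text depends on your Clang version):

Test *test = [[Test alloc] init];
NSString *string = test;   // warning: incompatible pointer types
[test hello];              // error/warning: Test does not declare a 'hello' method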

There are two flavors of typing: dynamic and static. Dynamic typing means that types are checked at runtime, while static typing means that types are checked at compile time. In the old days, you could send any message to any object, and at runtime the object would determine whether it responds to the message. When this is checked only at runtime, it is called dynamic typing.

With static typing, this is checked at compile time. When you use ARC, the compiler checks more types at compile time, because it needs to know which objects it is working with. For example, you cannot write the following code

[myObject hello]

if there is no hello method defined anywhere in your program.

Other analyses

There are many other analyses that Clang does for you. If you clone the Clang repository and go to lib/StaticAnalyzer/Checkers, you will see all of the static checkers. For example, there is ObjCUnusedIVarsChecker.cpp, which checks for ivars that are never used. And there is ObjCSelfInitChecker.cpp, which checks whether you called [self initWith…] or [super init] before starting to use self in an initializer. Some other checks happen in other parts of the compiler. For example, in lib/Sema/SemaExprObjC.cpp, around line 2,534, you can see the following line:

 Diag(SelLoc, diag::warn_arc_perform_selector_leaks);

This generates the dreaded warning “performSelector may cause a leak because its selector is unknown.”
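As a quick sketch of what triggers that particular warning under ARC (the selector name here is made up):

id target = [NSObject new];
SEL action = NSSelectorFromString(@"doSomething");
[target performSelector:action];   // warning: performSelector may cause a leak
                                   // because its selector is unknown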

Code generation

Now, once your code has been fully tokenized, parsed, and analyzed by Clang, it can generate LLVM code for you. To see what happens, we can look again at the program hello.c:

#include <stdio.h>

int main() {
  printf("hello world\n");
  return 0;
}

To compile it into LLVM IR, we can issue the following command.

clang -O3 -S -emit-llvm hello.c -o hello.ll

This will generate a hello.ll file, which gives us the following output.

; ModuleID = 'hello.c'
target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.9.0"

@str = private unnamed_addr constant [12 x i8] c"hello world\00"

; Function Attrs: nounwind ssp uwtable
define i32 @main() #0 {
  %puts = tail call i32 @puts(i8* getelementptr inbounds ([12 x i8]* @str, i64 0, i64 0))
  ret i32 0
}

; Function Attrs: nounwind
declare i32 @puts(i8* nocapture readonly) #1

attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { nounwind }

!llvm.ident = !{!0}
!0 = metadata !{metadata !"Apple LLVM version 6.0 (clang-600.0.41.2) (based on LLVM 3.5svn)"}

As you can see, main has only two lines: one to print the string and one to return 0.

It is also very interesting to do the same for a very simple Objective-C program, five.m, which we can compile to bitcode and then inspect with llvm-dis < five.bc | less:

#include <stdio.h>
#import <Foundation/Foundation.h>

int main() {
  NSLog(@"%@", [@5 description]);
  return 0;
}

There’s a lot more, but this is main.

define i32 @main() #0 {
  %1 = load %struct._class_t** @"\01L_OBJC_CLASSLIST_REFERENCES_$_", align 8
  %2 = load i8** @"\01L_OBJC_SELECTOR_REFERENCES_", align 8, !invariant.load !5
  %3 = bitcast %struct._class_t* %1 to i8*
  %4 = tail call %0* bitcast (i8* (i8*, i8*, ...)* @objc_msgSend to %0* (i8*, i8*, i32)*)(i8* %3, i8* %2, i32 5)
  %5 = load i8** @"\01L_OBJC_SELECTOR_REFERENCES_2", align 8, !invariant.load !5
  %6 = bitcast %0* %4 to i8*
  %7 = tail call %1* bitcast (i8* (i8*, i8*, ...)* @objc_msgSend to %1* (i8*, i8*)*)(i8* %6, i8* %5)
  tail call void (i8*, ...)* @NSLog(i8* bitcast (%struct.NSConstantString* @_unnamed_cfstring_ to i8*), %1* %7)
  ret i32 0
}

The most important lines are the one defining %4, which creates the NSNumber object; the one defining %7, which sends the description message to that number object; and the NSLog call, which logs the string returned by description.
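Roughly, those two message sends correspond to calls through the Objective-C runtime's objc_msgSend. Here is a simplified, hand-written sketch (the numberWithInt: selector is an assumption based on the i32 5 argument; your compiler may lower the @5 literal differently):

#import <Foundation/Foundation.h>
#import <objc/message.h>

// What the two objc_msgSend calls in the IR correspond to, approximately:
id number = ((id (*)(Class, SEL, int))objc_msgSend)([NSNumber class], @selector(numberWithInt:), 5);
id desc   = ((id (*)(id, SEL))objc_msgSend)(number, @selector(description));
NSLog(@"%@", desc);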

Optimizations

To see what optimizations LLVM and Clang can do, take a look at a slightly more complex C example, the recursively defined factorial function.


#include <stdio.h>

int factorial(int x) {
  if (x > 1) return x * factorial(x - 1);
  else return 1;
}

int main() {
  printf("factorial 10: %d\n", factorial(10));
}

To compile without optimization, run the following command.

clang -O0 -S -emit-llvm factorial.c -o factorial.ll

The interesting part is to look at the generated code for the factorial function:

define i32 @factorial(i32 %x) #0 {
  %1 = alloca i32, align 4
  %2 = alloca i32, align 4
  store i32 %x, i32* %2, align 4
  %3 = load i32* %2, align 4
  %4 = icmp sgt i32 %3, 1
  br i1 %4, label %5, label %11

; <label>:5                                       ; preds = %0
  %6 = load i32* %2, align 4
  %7 = load i32* %2, align 4
  %8 = sub nsw i32 %7, 1
  %9 = call i32 @factorial(i32 %8)
  %10 = mul nsw i32 %6, %9
  store i32 %10, i32* %1
  br label %12

; <label>:11                                      ; preds = %0
  store i32 1, i32* %1
  br label %12

; <label>:12                                      ; preds = %11, %5
  %13 = load i32* %1
  ret i32 %13
}

As you can see, at the line marked %9, the function recursively calls itself. This is quite inefficient, because every recursive call grows the stack. To turn on optimizations, we can pass the -O3 flag to clang:

clang -O3 -S -emit-llvm factorial.c -o factorial.ll

Now, the code for the factorial function looks like this.

define i32 @factorial(i32 %x) #0 {
  %1 = icmp sgt i32 %x, 1
  br i1 %1, label %tailrecurse, label %tailrecurse._crit_edge

tailrecurse:                                      ; preds = %tailrecurse, %0
  %x.tr2 = phi i32 [ %2, %tailrecurse ], [ %x, %0 ]
  %accumulator.tr1 = phi i32 [ %3, %tailrecurse ], [ 1, %0 ]
  %2 = add nsw i32 %x.tr2, -1
  %3 = mul nsw i32 %x.tr2, %accumulator.tr1
  %4 = icmp sgt i32 %2, 1
  br i1 %4, label %tailrecurse, label %tailrecurse._crit_edge

tailrecurse._crit_edge:                           ; preds = %tailrecurse, %0
  %accumulator.tr.lcssa = phi i32 [ 1, %0 ], [ %3, %tailrecurse ]
  ret i32 %accumulator.tr.lcssa
}

Even though our function was not written tail-recursively, Clang can still optimize it, and it has now become a simple loop. There are many more optimizations like this, and Clang will keep optimizing your code. A great example of what compilers such as GCC can do can be found on ridiculousfish.com.
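To make the transformation more concrete, here is a hand-written sketch of what the optimizer effectively turned the function into: an accumulator-passing version whose tail call can be lowered to a loop (this is an illustration, not Clang's actual output):

static int factorial_acc(int x, int acc) {
  if (x > 1) return factorial_acc(x - 1, acc * x);  // tail call: nothing left to do afterward
  return acc;
}

int factorial(int x) {
  return factorial_acc(x, 1);
}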

Read more

  • LLVM blog: Articles marked as “optimized”
  • LLVM blog: Vectorizing improvements
  • LLVM blog: Greedy register allocation
  • Polly project

How to take advantage of this

Now that we’ve seen what a complete compilation looks like, from preprocessing through tokenization and parsing to the abstract syntax tree, analysis, and code generation, we might wonder: why should we care?

Using libclang or Clang plugins

The cool thing about Clang is that it’s open source and a very complete project: almost everything is built as a library. This means you can create your own version of Clang and change only the parts you need. For example, you can change the way Clang generates code, add better type checking, or do your own analyses. There are many ways to do this; the simplest is to use a C library called libclang. libclang provides a simple C API on top of Clang that you can use to analyze all of your source code. However, in my experience, libclang quickly becomes too limiting once you want to do something more advanced. There is also ClangKit, an Objective-C wrapper around some of the functionality provided by Clang.
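As a quick, minimal sketch of what using libclang looks like (assuming the libclang headers and library are installed; the file name is illustrative), this parses a file and prints the spelling of every cursor in its AST:

#include <clang-c/Index.h>
#include <stdio.h>

// Visitor: print the name of every node and keep recursing.
static enum CXChildVisitResult visit(CXCursor cursor, CXCursor parent, CXClientData data) {
  CXString spelling = clang_getCursorSpelling(cursor);
  printf("%s\n", clang_getCString(spelling));
  clang_disposeString(spelling);
  return CXChildVisit_Recurse;
}

int main(void) {
  CXIndex index = clang_createIndex(0, 0);
  CXTranslationUnit tu = clang_parseTranslationUnit(index, "hello.m", NULL, 0, NULL, 0, CXTranslationUnit_None);
  clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, NULL);
  clang_disposeTranslationUnit(tu);
  clang_disposeIndex(index);
  return 0;
}

Compile it against libclang (for example with -lclang and the appropriate include/library paths) and point it at any source file you like.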

Another approach is to use LibTooling, which means working directly with the C++ libraries provided by Clang. This is a lot more work and involves C++, but it gives you the full power of Clang. You can perform any kind of analysis, and you can even rewrite programs. If you want to add custom analyses to Clang, write your own refactorer, rewrite a lot of code, or generate charts and documentation from your project, LibTooling is your friend.

Writing an analyzer

Follow the LibTooling tutorial instructions to build LLVM, Clang, and clang-tools-extra. Be sure to allow some time for compilation: even though my machine is really fast, I could still do the dishes in the time it took LLVM to compile.

Next, go into your LLVM directory: cd ~/llvm/tools/clang/tools/. In this directory you can create your own standalone Clang tools. As an example, we created a small tool that helps us detect the correct use of a library. Clone the example repository into this directory and type make. This will give you a binary called example.

For example, suppose we have an Observer class that looks like this:

@interface Observer
+ (instancetype)observerWithTarget:(id)target action:(SEL)selector;
@end

Now, every time this class is used, we want to check whether the action actually exists on the target object. We can write a quick C++ function that does this (note that this is pretty much the first C++ I’ve written, so it is definitely not idiomatic):

virtual bool VisitObjCMessageExpr(ObjCMessageExpr *E) {
  if (E->getReceiverKind() == ObjCMessageExpr::Class) {
    QualType ReceiverType = E->getClassReceiver();
    Selector Sel = E->getSelector();
    string TypeName = ReceiverType.getAsString();
    string SelName = Sel.getAsString();
    if (TypeName == "Observer" && SelName == "observerWithTarget:action:") {
      Expr *Receiver = E->getArg(0)->IgnoreParenCasts();
      ObjCSelectorExpr* SelExpr = cast<ObjCSelectorExpr>(E->getArg(1)->IgnoreParenCasts());
      Selector Sel = SelExpr->getSelector();
      if (const ObjCObjectPointerType *OT = Receiver->getType()->getAs<ObjCObjectPointerType>()) {
        ObjCInterfaceDecl *decl = OT->getInterfaceDecl();
        if (!decl->lookupInstanceMethod(Sel)) {
          errs() << "Warning: class " << TypeName << " does not implement selector " << Sel.getAsString() << "\n";
          SourceLocation Loc = E->getExprLoc();
          PresumedLoc PLoc = astContext->getSourceManager().getPresumedLoc(Loc);
          errs() << "in " << PLoc.getFilename() << " <" << PLoc.getLine() << ":" << PLoc.getColumn() << ">\n";
        }
      }
    }
  }
  return true;
}

This method first looks for message expressions whose receiver is the Observer class and whose selector is observerWithTarget:action:, then looks at the target and checks whether the method passed as the action actually exists on it. Of course, this is a slightly contrived example, but if you ever want to mechanically verify something against the AST of your code base, this is the way to do it.

More Clang possibilities

There are many other ways to leverage Clang. For example, you can write a compiler plugin (say, with the same checker as above) and load it dynamically into your compiler. I haven’t tested it myself, but it should work with Xcode. For example, if you want warnings that enforce your code style, you could write a Clang plugin for it. (For a simpler check, see the article on the build process.)
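A rough sketch of how such a plugin gets loaded on the command line, following the Clang plugin documentation (the library and plugin names here are made up):

% clang -c MyFile.m \
    -Xclang -load -Xclang MyStyleChecker.dylib \
    -Xclang -add-plugin -Xclang my-style-checker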

Alternatively, if you need to do extensive refactoring of your code base, and the generic refactoring tools built into Xcode or AppCode are not enough, you can write a simple refactoring tool using Clang. This may sound daunting, but as you’ll see in the tutorials linked below, it’s not too hard.

Finally, if you really want to, you can compile your own Clang and instruct Xcode to use it. Again, it’s not as hard as it sounds, and it’s definitely fun.

Read more

  • Clang tutorial
  • X86_64 assembly language tutorial
  • Custom Clang builds with Xcode (I) and (II).
  • Clang Tutorial (1), (2), (3)
  • Clang plugin tutorial
  • LLVM Blog: What Every C Programmer Should Know (I), (II), and (III)
