On a recent Swift project, compile times were surprisingly slow. Every time Git switched branches, the project took a long time to build and often got stuck. Even changing one small thing to debug and see the effect meant sitting through another long compile, which was really unbearable. So I decided to do some research and see whether there were good ways to optimize Xcode compile times.

The measurements in this article are all simple tests run on an existing project, so treat the optimization results as reference points only 😅.

The first step is to figure out how to measure compile time; once that is in place, we can get to the point.

Checking the compile time
  1. Enter the following command in Terminal. After that, whenever Xcode finishes a build, the compile time is displayed next to the "Succeeded" message at the top of the window.
defaults write com.apple.dt.Xcode ShowBuildOperationDuration YES
  2. With the GitHub project BuildTimeAnalyzer-for-Xcode, you can also see the build time of each file; the snippet below shows the compiler flag it relies on.
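As far as I know, BuildTimeAnalyzer-for-Xcode relies on per-function timing information that the Swift compiler only emits when asked for it. A minimal sketch of the extra flag, added to Other Swift Flags for the Debug configuration (treat the exact setup as an assumption and check the project's README):

```
// Build Settings -> Other Swift Flags (Debug): ask the Swift frontend to report
// how long each function body takes to compile.
OTHER_SWIFT_FLAGS = $(inherited) -Xfrontend -debug-time-function-bodies
```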

I. Improve Xcode compilation efficiency

1. Whole Module Optimization

A module is a collection of Swift files that is compiled into a framework or an executable. By default, the Swift compiler compiles each file in the module separately, then links the results together and outputs the framework or executable.

Because compilation happens one file at a time, optimizations that need to cross function boundaries, such as function inlining and basic block merging, are limited. As a result, compile times end up longer.

With whole module optimization, the compiler treats all the files in the module as one unit and compiles them together, which greatly speeds up compilation. Because the compiler can see the implementation of every function in the module, it can perform cross-function optimizations such as function inlining and function specialization.

In addition, with whole module optimization the compiler can see every use of a non-public function. Non-public functions can only be called from inside the module, so the compiler knows all of their call sites; if a non-public function or method is never used at all, the compiler can treat it as dead code and remove it.
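A minimal sketch of this idea, with hypothetical helper names:

```swift
// utils.swift (hypothetical) — `internal` is a non-public access level, so these
// helpers can only be called from inside this module.
internal func usedHelper() -> Int {
    return 42
}

internal func unusedHelper() -> Int {
    // No call site anywhere in the module: with whole module optimization the
    // compiler sees every reference, can prove this function is dead, and strip it.
    return 0
}

// The only reference the compiler will find in the module.
let answer = usedHelper()
```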

#### Example of function specialization

Function specialization is when the compiler creates a new version of a function that optimizes performance for a particular call context. It is common in Swift to specialize generic functions for various concrete types.

main.swift

func add(c1: Container<Int>, c2: Container<Int>) -> Int {
  return c1.getElement() + c2.getElement()
}

utils.swift

struct Container<T> {
  var element: T

  func getElement() -> T {
    return element
  }
}

With single-file compilation, when the compiler optimizes main.swift it does not know how getElement is implemented, so all it can do is emit a call to getElement. Conversely, when the compiler optimizes utils.swift it does not know which concrete types getElement is called with, so it can only generate a generic version of the function, which is much slower than type-specialized code.

Even just to return the value from getElement, the compiler has to consult the type's metadata to figure out how to copy the element. It might be a simple Int, but it could also be a complex type that involves reference-counting operations. With single-file compilation the compiler has no way to know, let alone optimize.

With whole module compilation, however, the compiler can specialize the generic function:

utils.swift

struct Container {
  var element: Int

  func getElement() -> Int {
    return element
  }
}

After every place that calls getElement has been specialized, the generic version of the function can be removed, and the compiler can perform further optimizations on the specialized getElement.
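Conceptually, after specialization and inlining, the call in main.swift boils down to something like the following. This is only an illustration of the effect, not actual compiler output:

```swift
// Illustrative only: what add() effectively becomes once getElement() has been
// specialized for Int and inlined under whole module optimization.
func add(c1: Container<Int>, c2: Container<Int>) -> Int {
  return c1.element + c2.element
}
```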

Enabling whole module optimization with SWIFT_WHOLE_MODULE_OPTIMIZATION

Open the target's Build Settings, then from the menu bar choose Editor -> Add Build Setting -> Add User-Defined Setting, and add SWIFT_WHOLE_MODULE_OPTIMIZATION as the key with YES as the value.
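If you prefer configuration files, here is a minimal sketch of the same setting in an xcconfig file attached to the Debug configuration; keeping the optimization level at -Onone (the second line) is my assumption so that debugging stays usable:

```
// Debug.xcconfig (sketch): compile the module as a whole, but keep the code
// optimizer itself disabled for debuggable builds.
SWIFT_WHOLE_MODULE_OPTIMIZATION = YES
SWIFT_OPTIMIZATION_LEVEL = -Onone
```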

Why doesn't the Swift compiler enable whole module optimization by default?

Swift's default for Debug builds is to compile one file at a time and to build the active architecture only, which is also Xcode's default; you can check the latter under Build Settings -> Build Active Architecture Only.

Compiling each file separately lets the compiler cache the compilation result of every file. The advantage is that if you have built once before and only a few files change afterwards, the remaining files do not need to be recompiled, so incremental builds are faster.

Now look at the overall pipeline of whole module optimization: parsing, type checking, SIL optimization, and the LLVM back end. Most of the time, the first two phases are very fast. The main SIL optimizations are the function inlining and function specialization mentioned above. The LLVM back end then compiles the output of the SIL optimizer on multiple threads and generates the machine code.

Setting SWIFT_WHOLE_MODULE_OPTIMIZATION = YES raises the granularity of incremental compilation from the file level to the module level. If you modify any file in the project and want to build to debug, the files are merged and compiled from scratch again. In theory, an LLVM thread whose input has not changed could still reuse the previous cache for speed, but in practice parsing, type checking and the SIL optimizations are always re-executed, and in most cases the LLVM back end has to re-run as well, taking roughly as long as the first compile.

Note, however, that libraries brought in through CocoaPods, storyboards and XIB files are not affected by this.

2. Generate a dSYM file (dSYM Generation)

dSYM files store debug symbol information, which is what lets you symbolicate crash logs; Fabric, for example, can automatically process the dSYM files in a project.

In new projects the default is that Debug builds do not generate dSYM files, but this setting sometimes gets changed during development so crash logs can be symbolicated. Generating dSYM files takes a lot of time, so if you do not need them, switch Debug Information Format back; dSYM files are only produced with the "DWARF with dSYM File" option.
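As a sketch, the same change expressed as a build setting in a Debug xcconfig file (values as Xcode stores them):

```
// Debug.xcconfig (sketch): plain DWARF, no dSYM bundle, to save build time.
// Keep "dwarf-with-dsym" in the Release configuration for crash symbolication.
DEBUG_INFORMATION_FORMAT = dwarf
```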

3. Use the new Xcode 9 build system

In Xcode 9, Apple quietly introduced a new build system, which can be found on GitHub. It is still a preview, so it is not enabled by default in Xcode. The new system changes the way dependencies are handled when compiling Swift, in an effort to speed up builds, but it is not perfect yet: it can lead to odd behavior while writing code and to long compile times. Sure enough, when I tried it, building was slower than before.

If you want to try it out, go to **File -> Workspace Settings -> Build System** and choose New Build System (Preview).

Build time records

| Generate dSYM | Whole Module Optimization | Second compile (after adding a blank line) | First compile | New Build System | Total compile time |
| --- | --- | --- | --- | --- | --- |
| ✔ |  |  | ✔ |  | 8m 42s |
|  |  |  | ✔ |  | 8m 18s |
| ✔ | ✔ |  | ✔ |  | 2m 2s |
|  | ✔ |  | ✔ |  | 1m 36s |
| ✔ |  | ✔ |  |  | 0m 38s |
|  |  | ✔ |  |  | 0m 16s |
| ✔ | ✔ | ✔ |  |  | 1m 26s |
|  | ✔ | ✔ |  |  | 0m 55s |
|  |  |  | ✔ | ✔ | 9m 24s |
|  | ✔ |  | ✔ | ✔ | 1m 46s |

II. Optimize Swift code

1. Reduce type inference

let array = ["a"."b"."c"."d"."e"."f"."g"]
Copy the code

This way of writing is more concise, but the compiler has to run type inference to work out the exact type of the array, so the better approach is to write the type out explicitly and avoid the inference.

let array: [String] = ["a", "b", "c", "d", "e", "f", "g"]

2. Reduce use of the ternary operator

let letter = someBoolean ? "a" : "b"

The ternary operator is more concise, but increases the compile time. If you want to reduce the compile time, you can rewrite it as follows.

var letter = ""
if someBoolean { 
  letter = "a"
} else {
  letter = "b"
}

3. Reduce use of the nil coalescing operator

let string = optionalString ?? ""

This is convenient Swift syntax for supplying a default value when an optional is nil, but under the hood it is essentially another ternary operator:

let string = optionalString != nil ? optionalString! : ""

Therefore, if you want to save compile time, you can rewrite it as:

if let string = optionalString {
    print("\(string)")
} else {
    print("")
}

4. Improve string concatenation

let totalString = "A" + stringB + "C"

This way of concatenating strings works, but the Swift compiler is not fond of it, so try rewriting it with string interpolation instead.

let totalString = "A\(stringB)C"

5. Improve string conversion

let stringA = String(intA)

This conversion works too, but again the Swift compiler is not fond of it, so try rewriting it with string interpolation.

let StringA = "\(IntA)"
Copy the code

6. Calculate in advance

if time > 14 * 24 * 60 * 60 {}

Writing it this way is more readable, but it is a surprisingly heavy burden on the compiler. You can do the arithmetic yourself and keep the details in a comment, like this:

if time > 1209600 {} // 14 * 24 * 60 * 60

Build time records

Reduce type inference

In one file, two uses of type inference were removed, saving 0.3 ms in total. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 135.3 ms |
| After the change | 135.0 ms |

As you can see, Xcode handles type inference quite well, and inferring types in simple declarations is not actually hard, so declaring types explicitly has little effect on compile time.

Reduce the ternary operator

In one file, two uses of the ternary operator were removed, saving 51.2 ms in total. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 229.2 ms |
| After the change | 178.0 ms |

As you can see, the ternary operator does affect compile speed, so when compile time matters and the ternary form is not strictly necessary, you can fall back to if-else statements.

Reduce the use of nil coalescing operator

In one file, five uses of the nil coalescing operator were removed, saving 2.8 ms in total. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 386.4 ms |
| After the change | 383.6 ms |

According to the results, the optimization effect is not significant. The nil coalescing operator is essentially built on the ternary operator, so why does removing it help less than removing the ternary operator? My guess is that the ternary operator can simply be rewritten as an if-else statement, whereas removing the nil coalescing operator usually means first declaring a var with a default value and then reassigning it inside an if-else, so overall the gain is small.

String concatenation

In one file, seven string concatenations were rewritten, saving 73 ms in total. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 696.1 ms |
| After the change | 623.1 ms |

The improved string concatenation is clearly effective and also reads more like idiomatic Swift, so why not use it?

String conversion

In one file, five conversions were rewritten, saving 4952.5 ms in total, which is a very significant effect. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 5106.2 ms |
| After the change | 153.7 ms |

Calculate in advance

In one file, the change from the example above was made, saving 843.2 ms in total, which is also quite significant. The improvement is as follows:

|  | Total time |
| --- | --- |
| Before the change | 1034.7 ms |
| After the change | 191.5 ms |


References

  1. Whole-Module Optimization in Swift 3
  2. How to enable build timing in Xcode? – Stack Overflow
  3. Speed up Swift compile time