Tracing just-in-time compilation
Tracing just-in-time compilation is a technique used by virtual machines to optimize the execution of a program at runtime. This is done by recording a linear sequence of frequently executed operations, compiling it to native machine code and executing it. This contrasts with traditional just-in-time (JIT) compilers, which work on a per-method basis.
Just-in-time compilation is a technique to increase the execution speed of programs by compiling parts of a program to machine code at runtime. One way to categorize different JIT compilers is by their compilation scope. Whereas method-based JIT compilers translate one method at a time to machine code, tracing JITs use frequently executed loops as their unit of compilation. Tracing JITs are based on the assumptions that programs spend most of their time in a few loops ("hot loops") and that subsequent loop iterations often take similar paths. Virtual machines that have a tracing JIT are often mixed-mode execution environments, meaning that they have either an interpreter or a method compiler in addition to the tracing JIT.
A tracing JIT compiler goes through various phases at runtime. First, profiling information for loops is collected. After a hot loop has been identified, a special tracing mode is entered, which records all executed operations of that loop. This sequence of operations is called a trace. The trace is then optimized and compiled to machine code. When the loop is executed again, the compiled trace is called instead of its interpreted counterpart.
These phases are explained in detail below.
Profiling phase
The goal of profiling is to identify hot loops. This is often done by counting the number of iterations of every loop. Once the count for a loop exceeds a certain threshold, the loop is considered hot, and tracing mode is entered.
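The counting scheme described above can be sketched as follows. This is a minimal illustration; the hook name, counter placement, and threshold value are assumptions for the sketch, not any particular VM's implementation:

```python
# Hypothetical sketch of hot-loop detection: count backward jumps
# per loop header and mark the loop hot once a threshold is crossed.
HOT_THRESHOLD = 1000  # assumed value; real VMs tune this empirically

loop_counters = {}
hot_loops = set()

def on_backward_jump(loop_header_pc):
    """Called by the interpreter each time a loop jumps back to its start.

    Returns True exactly once, when the loop first becomes hot, signalling
    that the interpreter should switch to tracing mode for this loop.
    """
    count = loop_counters.get(loop_header_pc, 0) + 1
    loop_counters[loop_header_pc] = count
    if count >= HOT_THRESHOLD and loop_header_pc not in hot_loops:
        hot_loops.add(loop_header_pc)
        return True
    return False
```

A real VM would attach such a counter to the backward-jump bytecode of each loop and choose the threshold empirically.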
Tracing phase
In the tracing phase, execution of the loop proceeds normally, but in addition every executed operation is recorded into a trace. The recorded operations are often stored in the form of an intermediate representation. Tracing follows function calls, so they become inlined into the trace. Tracing continues until the loop reaches its end and jumps back to the start.
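Trace recording can be sketched roughly like this. The (result, op, args) tuple format and the operation names are illustrative assumptions, not a real intermediate representation:

```python
# Minimal sketch of trace recording: the interpreter executes each
# operation normally but also appends it to a trace buffer, giving
# every result a fresh SSA-style name.
trace = []

def record(op, *args):
    """Append an executed operation to the trace; return its result name."""
    result = f"v{len(trace)}"
    trace.append((result, op, args))
    return result

# Recording one iteration of  y += square(i)  with square inlined:
t1 = record("int_mul", "i0", "i0")   # x * x, inlined from square
t2 = record("int_add", "y0", t1)     # y += i * i
```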
Since the trace is recorded by following one concrete execution path of the loop, later executions of that trace can diverge from that path. To identify the places where that can happen, special guard instructions are inserted into the trace. One example for such a place are if statements. The guard is a quick check to determine whether the original condition is still true. If a guard fails, the execution of the trace is aborted.
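A guard can be pictured as a cheap runtime check compiled into the trace. The following sketch (all names assumed for illustration) shows how a recorded if statement might become a guard that aborts trace execution when the condition diverges from the recorded path:

```python
# Illustrative guard semantics: the recorded trace baked in the concrete
# outcome of the if statement; a guard re-checks it on every later run.
class GuardFailed(Exception):
    """Raised when a guard fails, aborting execution of the trace."""

def guard_false(value):
    """Abort trace execution if the condition is unexpectedly true."""
    if value:
        raise GuardFailed

def run_trace_iteration(i, y, limit=100000):
    """One iteration of the example loop, as a recorded trace would run it."""
    y = y + i * i
    guard_false(y > limit)   # recorded path assumed the loop continues
    return i + 1, y
```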
Since tracing is done during execution, the trace can be made to contain runtime information (e.g. type information). This information can later be used in the optimization phase to increase code efficiency.
Optimization and code-generation phase
Traces are easy to optimize, since they represent only one execution path: there is no control flow to handle. Typical optimizations include common-subexpression elimination, dead-code elimination, register allocation, loop-invariant code motion, constant folding, and escape analysis.
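Two of these optimizations, constant folding and dead-code elimination, are easy to demonstrate on a linear trace. The sketch below assumes a hypothetical (result, op, args) trace format and is not any real optimizer:

```python
# Constant folding and dead-code elimination over a linear trace of
# (result, op, args) tuples. Because the trace has no control flow,
# each pass is a single forward or backward walk.
FOLDABLE = {"int_add": lambda a, b: a + b, "int_mul": lambda a, b: a * b}

def constant_fold(trace, consts):
    """Evaluate operations whose arguments are all known constants."""
    out = []
    for result, op, args in trace:
        vals = [consts.get(a, a) if isinstance(a, str) else a for a in args]
        if op in FOLDABLE and all(isinstance(v, int) for v in vals):
            consts[result] = FOLDABLE[op](*vals)   # fold and drop the op
        else:
            out.append((result, op, tuple(vals)))  # keep, with constants inlined
    return out, consts

def dead_code_elim(trace, live):
    """Walk backwards, keeping only operations whose results are used."""
    out = []
    for result, op, args in reversed(trace):
        if result in live:
            live |= {a for a in args if isinstance(a, str)}
            out.append((result, op, args))
    return list(reversed(out))
```

Both passes rely on the trace's linearity: a single forward walk suffices for folding and a single backward walk for liveness, with no merge points to consider.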
After the optimization, the trace is turned into machine code. Similarly to optimization, this is easy due to the linear nature of traces.
Execution phase
After the trace has been compiled to machine code, it can be executed in subsequent iterations of the loop. Trace execution continues until a guard fails.
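A rough sketch of this execution model, with a Python function standing in for the compiled machine code and a guard failure returning control to the interpreter (all names hypothetical):

```python
# Mixed-mode execution sketch: run the compiled trace for the hot loop;
# when a guard fails, leave the trace and resume ordinary interpretation.
class GuardFailed(Exception):
    """Carries the live values (i, y) at the failing guard."""

def compiled_trace(i, y):
    """Stands in for machine code compiled from one loop iteration."""
    y += i * i
    if y > 100000:
        raise GuardFailed(i, y)   # guard_false(b1) failed: exit the trace
    return i + 1, y

def run_loop():
    i, y = 0, 0
    try:
        while True:                  # stay on the compiled fast path
            i, y = compiled_trace(i, y)
    except GuardFailed as e:
        i, y = e.args                # interpreter resumes here; it executes
    return i, y                      # the break of the original program
```

Running run_loop() gives the same final values as interpreting the example program in the next section; the guard failure is the loop's normal exit path here.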
While the idea of JIT compilation reaches back to the 1960s, tracing JITs have come into wider use only more recently. The first mention of an idea similar to today's tracing JITs was in 1970, when it was observed that compiled code could be derived from an interpreter at run time simply by storing the actions performed during interpretation.
The first implementation of tracing was Dynamo, "a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor". To do this, the native instruction stream is interpreted until a "hot" instruction sequence is found. For this sequence an optimized version is generated, cached and executed.
Dynamo was later extended to DynamoRIO. One DynamoRIO-based project was a framework for interpreter construction that combines tracing and partial evaluation. It was used to "dynamically remove interpreter overhead from language implementations".
In 2006, HotpathVM, the first tracing JIT compiler for a high-level language, was developed. This VM was capable of dynamically identifying frequently executed bytecode instructions, which were traced and then compiled to machine code using static single assignment (SSA) construction. The motivation for HotpathVM was to have an efficient JVM for resource-constrained mobile devices.
Another project that utilizes tracing JITs is PyPy. It enables the use of tracing JITs for language implementations that were written with PyPy's translation toolchain, thus improving the performance of any program that is executed using such an interpreter. This is achieved by tracing the interpreter itself, instead of the program that is executed by the interpreter.
Example of a trace
Consider the following program that computes a sum of squares of successive whole numbers until that sum exceeds 100000:
def square(x):
    return x * x

i = 0
y = 0
while True:
    y += square(i)
    if y > 100000:
        break
    i = i + 1
A trace for this program could look something like this:
loopstart(i1, y1)
i2 = int_mul(i1, i1)        # x * x
y2 = int_add(y1, i2)        # y += i * i
b1 = int_gt(y2, 100000)
guard_false(b1)
i3 = int_add(i1, 1)         # i = i + 1
jump(i3, y2)
Note how the function call to square is inlined into the trace and how the if statement is turned into a guard_false operation.
See also
- Dalvik (software)
- Just-in-time compilation
- Profile-guided optimization
References
- "Allocation Removal by Partial Evaluation in a Tracing JIT" Carl Friedrich Bolz, Antonio Cuni, Maciej Fijałkowski, Michael Leuschel, Samuele Pedroni, Armin Rigo - PEPM '11 Proceedings of the 20th ACM SIGPLAN workshop on Partial evaluation and program manipulation - doi:10.1145/1929501.1929508. Retrieved April 24, 2012.
- "The Design and Construction of Flexible and Efficient Interactive Programming Systems" James G. Mitchell - Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, 1970.
- "Dynamo: A Transparent Dynamic Optimization System" Vasanth Bala, Evelyn Duesterwald, Sanjeev Banerjia - PLDI '00 Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation - pages 1 to 12 - doi:10.1145/349299.349303. Retrieved March 28, 2012
- "Dynamic native optimization of interpreters" Gregory T. Sullivan, Derek L. Bruening, Iris Baron, Timothy Garnett, Saman Amarasinghe - Proceeding IVME '03 Proceedings of the 2003 workshop on Interpreters, virtual machines and emulators doi:10.1145/858570.858576. Retrieved March 21, 2012
- "HotpathVM: An Effective JIT Compiler for Resource-Constrained Devices" Andreas Gal, Christian W. Probst, Michael Franz - VEE '06 Proceedings of the 2nd international conference on Virtual execution environments - doi:10.1145/1134760.1134780.
- "Trace-based Just-in-Time Type Specialization for Dynamic Languages" A. Gal, M. Franz, B. Eich, M. Shaver, and D. Anderson - Proceedings of the ACM SIGPLAN 2009 conference on Programming language design and implementation, 2009 doi:10.1145/1542476.1542528.
- "Tracing the Meta-Level: PyPy’s Tracing JIT Compiler" Carl Friedrich Bolz, Antonio Cuni, Maciej Fijałkowski, Armin Rigo - ICOOOLPS '09 Proceedings of the 4th workshop on the Implementation, Compilation, Optimization of Object-Oriented Languages and Programming Systems - pages 18 to 25 - doi:10.1145/1565824.1565827. Retrieved March 21, 2012
- "SPUR: A Trace-Based JIT Compiler for CIL" M. Bebenita et al. - Proceedings of the ACM international conference on Object oriented programming systems languages and applications doi:10.1145/1869459.1869517.