In computing, just-in-time (JIT) compilation, also known as dynamic translation, is compilation done during execution of a program – at run time – rather than prior to execution. Most often this consists of translation to machine code, which is then executed directly, but can also refer to translation to another format.
JIT compilation is a combination of the two traditional approaches to translation to machine code – ahead-of-time compilation (AOT) and interpretation – and combines some advantages and drawbacks of both. Roughly, JIT compilation combines the speed of compiled code with the flexibility of interpretation, at the cost of an interpreter's overhead plus the additional overhead of compiling (not just interpreting). JIT compilation is a form of dynamic compilation, and allows adaptive optimization such as dynamic recompilation – thus in theory JIT compilation can yield faster execution than static compilation. Interpretation and JIT compilation are particularly suited to dynamic programming languages, as the runtime system can handle late-bound data types and enforce security guarantees.
JIT compilation can be applied to an entire program, or only to certain parts of it, particularly dynamic features such as regular expressions. For example, a text editor may compile a regular expression provided at runtime to machine code to allow faster matching – this cannot be done ahead of time, as the pattern is only provided at run time. Several modern runtime environments rely on JIT compilation for high-speed code execution, most significantly most implementations of Java, together with Microsoft's .NET Framework. Similarly, many regular expression libraries ("regular expression engines") feature JIT compilation of regular expressions, either to bytecode or to machine code.
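As an illustration, PCRE2 is one widely used regular expression engine with an optional JIT (built on sljit, listed under external links below) that can be enabled per pattern. A minimal sketch, with an illustrative pattern and subject string, of compiling a pattern supplied at run time and then JIT-compiling it to machine code (link with -lpcre2-8):

```c
/* Build with: cc regex_jit.c -lpcre2-8 */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *pattern = "[0-9]+";     /* imagine this arrived at run time */
    const char *subject = "build 4711";
    int errcode;
    PCRE2_SIZE erroffset;

    pcre2_code *re = pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
                                   0, &errcode, &erroffset, NULL);
    if (re == NULL)
        return 1;

    /* Translate the compiled pattern into native machine code. */
    pcre2_jit_compile(re, PCRE2_JIT_COMPLETE);

    pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
    /* pcre2_match transparently uses the JIT-compiled code when present. */
    int rc = pcre2_match(re, (PCRE2_SPTR)subject, strlen(subject),
                         0, 0, md, NULL);
    printf("match: %s\n", rc >= 0 ? "yes" : "no");

    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}
```

If pcre2_jit_compile fails or is skipped, pcre2_match simply falls back to the interpretive matcher, so the JIT acts as a transparent speedup.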
A common implementation of JIT compilation is to first have AOT compilation to bytecode (virtual machine code), known as bytecode compilation, and then have JIT compilation to machine code (dynamic compilation), rather than interpretation of the bytecode. This improves the runtime performance compared to interpretation, at the cost of lag due to compilation. JIT compilers translate continuously, as with interpreters, but caching of compiled code minimizes lag on future execution of the same code during a given run. Since only part of the program is compiled, there is significantly less lag than if the entire program were compiled prior to execution.
In a bytecode-compiled system, source code is translated to an intermediate representation known as bytecode. Bytecode is not the machine code for any particular computer, and may be portable among computer architectures. The bytecode may then be interpreted by, or run on, a virtual machine. The JIT compiler reads the bytecode in sections (rarely in full) and compiles these sections dynamically into machine code so the program can run faster. This can be done per file, per function, or even on an arbitrary code fragment; the code can be compiled when it is about to be executed (hence the name "just-in-time"), then cached and reused later without needing to be recompiled.
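A minimal sketch of this compile-on-first-use caching in C; the jit_compile stub here is hypothetical and simply returns a prebuilt function, where a real implementation would emit machine code into executable memory (see the mmap/mprotect sketch in the security section below):

```c
#include <stdio.h>
#include <stddef.h>

typedef int (*native_fn)(void);

/* Hypothetical lowering step: a real JIT would translate the bytecode
 * into machine code in executable memory and return a pointer to it. */
static int forty_two(void) { return 42; }
static native_fn jit_compile(const unsigned char *bc, size_t len) {
    (void)bc; (void)len;
    return forty_two;
}

enum { MAX_FUNCS = 256 };
static native_fn cache[MAX_FUNCS];   /* NULL until first execution */

/* Compile a fragment the first time it is about to run ("just in
 * time"); later calls reuse the cached machine code. */
static int call(int idx, const unsigned char *bc, size_t len) {
    if (!cache[idx])
        cache[idx] = jit_compile(bc, len);
    return cache[idx]();
}

int main(void) {
    static const unsigned char bc[] = { 0x2A };
    printf("%d\n", call(0, bc, sizeof bc));  /* compiles, then runs */
    printf("%d\n", call(0, bc, sizeof bc));  /* cache hit, no recompile */
    return 0;
}
```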
In contrast, a traditional interpreted virtual machine simply interprets the bytecode, generally with much lower performance. Some interpreters even interpret source code, without the step of first compiling to bytecode, with even worse performance. Statically compiled code, or native code, is compiled prior to deployment. A dynamic compilation environment is one in which the compiler can be used during execution. For instance, most Common Lisp systems have a compile function which can compile new functions created during the run. This provides many of the advantages of JIT, but the programmer, rather than the runtime, is in control of which parts of the code are compiled. This approach can also compile dynamically generated code, which can, in many scenarios, provide substantial performance advantages over statically compiled code, as well as over most JIT systems.
A common goal of using JIT techniques is to reach or surpass the performance of static compilation, while maintaining the advantages of bytecode interpretation:
- Much of the "heavy lifting" of parsing the original source code and performing basic optimization is often handled at compile time, prior to deployment: compilation from bytecode to machine code is much faster than compiling from source.
- The deployed bytecode is portable, unlike native code.
- Since the runtime has control over the compilation, like interpreted bytecode, it can run in a secure sandbox.
- Compilers from bytecode to machine code are easier to write, because the portable bytecode compiler has already done much of the work.
JIT-compiled code generally offers far better performance than interpretation. In addition, it can in some cases offer better performance than static compilation, as many optimizations are only feasible at run time:
- The compilation can be optimized to the targeted CPU and the operating system model where the application runs. For example, a JIT compiler can choose SSE2 vector CPU instructions when it detects that the CPU supports them (see the dispatch sketch after this list). To obtain this level of optimization specificity with a static compiler, one must either compile a binary for each intended platform/architecture, or else include multiple versions of portions of the code within a single binary.
- The system is able to collect statistics about how the program is actually running in the environment it is in, and it can rearrange and recompile for optimum performance. However, some static compilers can also take profile information as input.
- The system can do global code optimizations (e.g. inlining of library functions) without losing the advantages of dynamic linking and without the overheads inherent to static compilers and linkers. Specifically, when doing global inline substitutions, a static compilation process may need to insert run-time checks to ensure that a virtual call occurs if the actual class of the object overrides the inlined method, and bounds checks on array accesses may need to be processed within loops. With just-in-time compilation, in many cases this processing can be moved out of loops, often giving large increases of speed.
- Although this is possible with statically compiled garbage collected languages, a bytecode system can more easily rearrange executed code for better cache utilization.
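A minimal sketch of the CPU-specific dispatch decision mentioned in the first item, using the GCC/Clang x86 builtins __builtin_cpu_init and __builtin_cpu_supports; the sum_sse2 stand-in is illustrative, since a real JIT would emit the vector instructions itself rather than choose among prebuilt functions:

```c
#include <stdio.h>

static int sum_scalar(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* Stand-in for a version using SSE2 vector instructions; here it just
 * delegates to the scalar loop so the example stays self-contained. */
static int sum_sse2(const int *a, int n) { return sum_scalar(a, n); }

int main(void) {
    int data[] = { 1, 2, 3, 4 };
    __builtin_cpu_init();   /* initialize the CPU-feature probe (GCC/Clang, x86) */
    int (*sum)(const int *, int) =
        __builtin_cpu_supports("sse2") ? sum_sse2 : sum_scalar;
    printf("%d\n", sum(data, 4));
    return 0;
}
```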
Startup delay and optimizations
JIT causes a slight to noticeable delay in the initial execution of an application, due to the time taken to load and compile the bytecode. Sometimes this delay is called "startup time delay". In general, the more optimization JIT performs, the better the code it will generate, but the initial delay will also increase. A JIT compiler therefore has to make a trade-off between the compilation time and the quality of the code it hopes to generate. However, much of the startup time is sometimes due to I/O-bound operations rather than JIT compilation (for example, the rt.jar class data file for the Java Virtual Machine (JVM) is 40 MB, and the JVM must seek through this comparatively huge file to find the data it needs).
One possible optimization, used by Sun's HotSpot Java Virtual Machine, is to combine interpretation and JIT compilation. The application code is initially interpreted, but the JVM monitors which sequences of bytecode are frequently executed and translates them to machine code for direct execution on the hardware. For bytecode which is executed only a few times, this saves the compilation time and reduces the initial latency; for frequently executed bytecode, JIT compilation is used to run at high speed, after an initial phase of slow interpretation. Additionally, since a program spends most time executing a minority of its code, the reduced compilation time is significant. Finally, during the initial code interpretation, execution statistics can be collected before compilation, which helps to perform better optimization.
The correct tradeoff can vary due to circumstances. For example, Sun's Java Virtual Machine has two major modes—client and server. In client mode, minimal compilation and optimization is performed, to reduce startup time. In server mode, extensive compilation and optimization is performed, to maximize performance once the application is running by sacrificing startup time. Other Java just-in-time compilers have used a runtime measurement of the number of times a method has executed combined with the bytecode size of a method as a heuristic to decide when to compile. Still another uses the number of times executed combined with the detection of loops. In general, it is much harder to accurately predict which methods to optimize in short-running applications than in long-running ones.
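A minimal sketch of such an invocation-counter heuristic; the threshold value and the stubbed interpreter and compiler back ends are illustrative, not taken from any particular JVM:

```c
#include <stdio.h>
#include <stddef.h>

#define COMPILE_THRESHOLD 10000   /* illustrative value, not from a real JVM */

/* Stubs standing in for the two execution back ends. */
static int interpret(const unsigned char *bc, size_t len) {
    (void)bc; (void)len;          /* a real interpreter would execute the bytecode */
    return 0;
}
static int run_compiled(void) { return 0; }
static int (*jit_compile(const unsigned char *bc, size_t len))(void) {
    (void)bc; (void)len;          /* a real JIT would emit machine code here */
    return run_compiled;
}

typedef struct {
    const unsigned char *bytecode;
    size_t bytecode_len;
    long invocations;
    int (*compiled)(void);        /* NULL while still interpreted */
} method;

/* Interpret cold methods; compile one once its counter shows it is hot.
 * Weighting the threshold by bytecode size, as some Java JITs have done,
 * would let small, cheap-to-compile methods graduate sooner. */
static int invoke(method *m) {
    if (m->compiled)
        return m->compiled();
    if (++m->invocations >= COMPILE_THRESHOLD)
        m->compiled = jit_compile(m->bytecode, m->bytecode_len);
    return interpret(m->bytecode, m->bytecode_len);
}

int main(void) {
    static const unsigned char bc[] = { 0x00 };
    method m = { bc, sizeof bc, 0, NULL };
    for (long i = 0; i < 20000; i++)
        invoke(&m);
    printf("compiled: %s\n", m.compiled ? "yes" : "no");
    return 0;
}
```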
Native Image Generator (Ngen) by Microsoft is another approach to reducing the initial delay. Ngen pre-compiles (or "pre-JITs") bytecode in a Common Intermediate Language image into machine-native code. As a result, no runtime compilation is needed. The .NET Framework 2.0, shipped with Visual Studio 2005, runs Ngen on all of the Microsoft library DLLs right after installation. Pre-jitting provides a way to improve startup time. However, the quality of code it generates might not be as good as JIT-compiled code, for the same reasons why code compiled statically, without profile-guided optimization, cannot be as good as JIT-compiled code in the extreme case: the lack of profiling data to drive, for instance, inline caching.
The earliest published JIT compiler is generally attributed to work on LISP by McCarthy in 1960. In his seminal paper Recursive functions of symbolic expressions and their computation by machine, Part I, he mentions functions that are translated during runtime, thereby sparing the need to save the compiler output to punch cards (although this would be more accurately known as a "Compile and go system"). Another early example was by Ken Thompson, who in 1968 gave one of the first applications of regular expressions, here for pattern matching in the text editor QED. For speed, Thompson implemented regular expression matching by JITing to IBM 7094 code on the Compatible Time-Sharing System. An influential technique for deriving compiled code from interpretation was pioneered by Mitchell in 1970, which he implemented for the experimental language LC².
In 1974, the Works Records System, an early interactive IBM mainframe spreadsheet at Imperial Chemical Industries in the UK, was claimed to use both JIT techniques and memoization to dynamically execute formulae (created by chemical engineers, who were non-programmers) on the fly as concatenated machine code snippets.
Smalltalk (c. 1983) pioneered new aspects of JIT compilation. For example, translation to machine code was done on demand, and the result was cached for later use. When memory became scarce, the system would delete some of this code and regenerate it when it was needed again. Sun's Self language improved these techniques extensively and was at one point the fastest Smalltalk system in the world, achieving up to half the speed of optimized C but with a fully object-oriented language.
Self was abandoned by Sun, but the research went into the Java language. The term "just-in-time compilation" was borrowed from the manufacturing term "just in time" and popularized by Java, with James Gosling using the term from 1993. Currently JITing is used by most implementations of the Java Virtual Machine, as HotSpot builds on, and extensively uses, this research base.
The HP project Dynamo was an experimental JIT compiler in which the 'bytecode' format and the machine code format were the same; the system turned HPA-6000 machine code into HPA-8000 machine code. Counterintuitively, this resulted in speedups, in some cases of 30%, since it permitted optimizations at the machine-code level: for example, inlining code for better cache usage, optimizing calls to dynamic libraries, and many other run-time optimizations which conventional compilers are not able to attempt.
JIT compilation fundamentally uses executable data, and thus poses security challenges and possible exploits.
Implementation of JIT compilation consists of compiling source code or bytecode to machine code and executing it. This is generally done directly in memory: the JIT compiler outputs the machine code directly into memory and immediately executes it, rather than outputting it to disk and then invoking the code as a separate program, as in usual ahead-of-time compilation. On modern architectures this runs into a problem due to executable space protection: arbitrary memory cannot be executed, as otherwise there is a potential security hole. Thus the memory must be marked as executable; for security reasons this should be done only after the code has been written to it, and the memory should then be marked read-only, as memory that is simultaneously writable and executable is a security hole (see W^X).
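A minimal sketch of this write-then-execute sequence for x86-64 on a POSIX system; the machine-code bytes (mov eax, 42; ret) and the single-page allocation are illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Allocate a page that is writable but NOT executable. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, code, sizeof code);

    /* W^X: only after writing, flip the page to read+execute. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0)
        return 1;

    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());   /* prints 42 */

    munmap(buf, 4096);
    return 0;
}
```

Note that the page is never writable and executable at the same time: it is mapped writable, filled with code, and only then flipped to read+execute, as W^X requires.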
JIT spraying is a class of computer security exploits that use JIT compilation for heap spraying – the resulting memory is then executable, which allows an exploit if execution can be moved into the heap.
- Binary translation
- Common Language Runtime
- Crusoe, a microprocessor that essentially performs just-in-time compilation from x86 code to microcode within the microprocessor
- GNU lightning — A library that generates assembly language code at run-time
- Self-modifying code
- Tracing just-in-time compilation
- Aycock 2003.
- Haase, Chet (May 2007). "Consumer JRE: Leaner, Meaner Java Technology". Sun Microsystems. Retrieved 2007-07-27.
At the OS level, all of these megabytes have to be read from disk, which is a very slow operation. Actually, it's the seek time of the disk that's the killer; reading large files sequentially is relatively fast, but seeking the bits that we actually need is not. So even though we only need a small fraction of the data in these large files for any particular application, the fact that we're seeking all over within the files means that there is plenty of disk activity.
- "The Java HotSpot Performance Engine Architecture". Oracle.com. Retrieved 2013-07-05.
- Schilling, Jonathan L. (February 2003). "The simplest heuristics may be the best in Java JIT compilers" (PDF). SIGPLAN Notices 38 (2): 36–46. doi:10.1145/772970.772975.
- Toshio Suganuma, Toshiaki Yasue, Motohiro Kawahito, Hideaki Komatsu, Toshio Nakatani, "A dynamic optimization framework for a Java just-in-time compiler", Proceedings of the 16th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications (OOPSLA '01), pp. 180–195, October 14–18, 2001.
- Matthew Arnold, Michael Hind, Barbara G. Ryder, "An Empirical Study of Selective Optimization", Proceedings of the 13th International Workshop on Languages and Compilers for Parallel Computing-Revised Papers, pp. 49–67, August 10–12, 2000.
- "Native Image Generator (Ngen.exe)". Msdn2.microsoft.com. Retrieved 2013-07-05.
- Matthew R. Arnold, Stephen Fink, David P. Grove, Michael Hind, and Peter F. Sweeney, "A Survey of Adaptive Optimization in Virtual Machines", Proceedings of the IEEE, 92(2), February 2005, pp. 449–466.
- Aycock 2003, 2. JIT Compilation Techniques, 2.1 Genesis, p. 98.
- McCarthy, J. (April 1960). "Recursive functions of symbolic expressions and their computation by machine, Part I". Communications of the ACM 3 (4): 184–195. doi:10.1145/367177.367199. CiteSeerX 10.1.1.111.8833.
- Thompson 1968.
- Aycock 2003, 2. JIT Compilation Techniques, 2.2 LC², p. 98–99.
- Mitchell, J.G. (1970). "The design and construction of flexible and efficient interactive programming systems".
- Mais, Robert (1974). The Works Record System, Imperial Chemical Industries (ICI), section 3.1. Hardcopy in the Computer History Museum, CA 94043-1311, Catalogue Accession Number 102746930.
- Deutsch, L.P.; Schiffman, A.M. (1984). "Efficient implementation of the Smalltalk-80 system" (PDF). POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages: 297–302. doi:10.1145/800017.800542. ISBN 0-89791-125-3.
- Aycock 2003, 2.14 Java, p. 107, footnote 13.
- "Dynamo: A Transparent Dynamic Optimization System" Vasanth Bala, Evelyn Duesterwald, Sanjeev Banerjia - PLDI '00 Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation - pages 1 to 12 - doi:10.1145/349299.349303. Retrieved March 28, 2012
- John Jannotti. "HP's Dynamo - Page 1 - (3/2000)". Ars Technica. Retrieved 2013-07-05.
- "How to JIT – an introduction", Eli Bendersky, November 5th, 2013 at 5:59 am
- Aycock, J. (June 2003). "A brief history of just-in-time". ACM Computing Surveys 35 (2): 97–113. doi:10.1145/857076.857077. CiteSeerX 10.1.1.97.3985.
- Thompson, K. (1968). "Programming Techniques: Regular expression search algorithm". Communications of the ACM 11 (6): 419–422. doi:10.1145/363347.363387.
- Free Online Dictionary of Computing entry
- libJIT at Freecode — A library by Rhys Weatherley, Klaus Treichel, Aleksey Demakov, and Kirill Kononenko for development of just-in-time compilers in virtual machine implementations, dynamic programming languages, and scripting languages.
- SoftWire — A library by Nicolas Capens that generates assembly language code at run-time (thesis)
- CCG by Ian Piumarta
- JatoVM, a Java JIT-only VM
- OVPsim, an embedded-core JIT tool that converts ARM, MIPS, and other ISA instructions to x86 for execution/simulation
- AsmJit — A complete x86/x64 JIT assembler library for C++ by Petr Kobalíček
- Xbyak — An x86/x64 JIT assembler for C++ by Herumi
- sljit — A platform-independent assembly language by Zoltan Herczeg. Sljit can generate code for 32/64-bit x86, ARM, PPC, MIPS and SPARC.
- Profiling Runtime Generated and Interpreted Code using the VTune Performance Analyzer