Intel C++ Compiler
|Stable release||14.0.3 (XE 2013 SP1 Update 3) / April 28, 2014|
|Operating system||Linux, OS X, Windows|
|License||Commercial, academic, evaluation and, for Linux, non-commercial|
The compilers generate optimized code for IA-32 and Intel 64 architectures, but non-optimized code on non-Intel but compatible processors, such as certain AMD processors. A specific release of the compiler (11.1) is available for development of Linux-based applications for IA-64 (Itanium 2) processors.
The 14.0 compiler added support for Intel-based Android devices and improved vectorization, including tuning for the SSE instruction family. The 13.0 release added support for the Intel Xeon Phi coprocessor. It continues support for automatic vectorization, which can generate SSE, SSE2, SSE3, SSSE3, SSE4, AVX and AVX2 SIMD instructions, as well as the embedded variants for Intel MMX and MMX 2. Using these instructions through the compiler can improve the performance of some applications on IA-32 and Intel 64 architectures, compared to the same applications built with compilers that do not support these instructions.
Intel compilers continue support for Cilk Plus, which is a capability for writing vectorized and parallel code that can be used on IA-32 and Intel 64 processors or which can be offloaded to Xeon Phi coprocessors. They also continue support for OpenMP 3.1, symmetric multiprocessing, automatic parallelization, and Guided Auto-Parallelization (GAP). With the add-on Cluster OpenMP capability, the compilers can also automatically generate Message Passing Interface calls for distributed memory multiprocessing from OpenMP directives.
Intel C++ is compatible with Microsoft Visual C++ on Windows and integrates into Microsoft Visual Studio. On Linux and OS X, it is compatible with GNU Compiler Collection (GCC) and the GNU toolchain. Intel C++ Compiler for Android is hosted on Windows, OS X or Linux and is compatible with the Android NDK, including gcc and the Eclipse IDE. Intel compilers are known for the application performance they can enable as measured by benchmarks, such as the SPEC CPU benchmarks.
Intel compilers are optimized for computer systems using processors that support Intel architectures. They are designed to minimize stalls and to produce code that executes in the fewest possible cycles. The Intel C++ Compiler supports three separate high-level techniques for optimizing the compiled program: interprocedural optimization (IPO), profile-guided optimization (PGO), and high-level optimizations (HLO). The Intel C++ compiler in the Parallel Studio XE 2013 products also supports tools, techniques and language extensions, such as Cilk Plus, for adding and maintaining application parallelism on IA-32 and Intel 64 processors, and enabling application offloading to Intel coprocessors, such as the Intel Xeon Phi coprocessor.
Cilk Plus adds language extensions to C++ to express data and task parallelism. The cilk_spawn and cilk_sync keywords enable task parallelism, and the cilk_for keyword enables parallelization of for loops. Cilk Plus also provides vector capabilities through array notation and elemental functions.
Profile-guided optimization refers to a mode of optimization where the compiler is able to access data from a sample run of the program across a representative input set. These data indicate which areas of the program are executed more frequently and which are executed less frequently. Optimizations benefit from profile-guided feedback because the compiler relies less on heuristics when making compilation decisions.
High-level optimizations are optimizations performed on a version of the program that more closely represents the source code. This includes loop interchange, loop fusion, loop unrolling, loop distribution, data prefetch, and more.
Interprocedural optimization applies typical compiler optimizations (such as constant propagation) but using a broader scope that may include multiple procedures, multiple files, or the entire program.
With the September 5, 2012 launch (the 13.0 launch), the Windows-based releases of Intel Parallel Studio XE and Intel C++ Studio XE, each of which include Intel C++, also include a performance guide. This is a GUI-based compiler tool that provides step-by-step advice concerning changes to code that could result in improved application performance.
Description of Packaging
Except for Intel C++ Compiler for Android, Intel compilers are not available in standalone form. They are available in packages, such as Intel Parallel Studio XE and Intel C++ Studio, which include other build tools, such as libraries, and threading-diagnostic and performance-analysis tools. Intel C++ Composer XE and Intel Composer XE (the latter of which includes Intel Fortran) do not include the threading-diagnostic or performance-analysis tools. Intel compilers are also included in Intel Cluster Studio and Intel Cluster Studio XE, the latter of which includes the diagnostic and analysis tools. Packages that include Intel C++ also include the Math Kernel Library (Intel MKL), Integrated Performance Primitives (Intel IPP) and Threading Building Blocks (Intel TBB); Fortran-only packages include only MKL. The Intel C++ Compiler for Android is a compiler-only package available for hosted development on Windows, Linux or OS X. It is source-compatible with the Android NDK, including gcc, and generates code only for Intel-based Android devices.
Ten-year version history
|Compiler version||Release date||Major new features|
|Intel C++ Compiler for Android (compiler 14.0.1)||November 12, 2013||Hosted on Windows, Linux, or OS X, compatible with Android NDK tools including the gcc compiler and Eclipse|
|Intel C++ Composer XE 2013 SP1 Update 1 (compiler 14.0.1)||October 18, 2013||Japanese localization of 14.0; Windows 8.1 and Xcode 5.0 support|
|Intel C++ Composer XE 2013 SP1 (compiler 14.0)||September 4, 2013||Online installer; support for Intel Xeon Phi coprocessors; preview Win32 only support for Intel graphics; improved C++11 support|
|Intel C++ Composer XE 2013 (compiler 13.0)||September 5, 2012||Linux-based support for Intel Xeon Phi coprocessors, support for Microsoft Visual Studio 12 (Desktop), support for gcc 4.7, support for Intel AVX 2 instructions, updates to existing functionality focused on improved application performance.|
|Intel C++ Composer XE 2011 Update 6 and above (compiler 12.1)||September 8, 2011||Cilk Plus language extensions updated to support specification version 1.1 and available on Mac OS X in addition to Windows and Linux, Threading Building Blocks updated to support version 4.0, Apple blocks supported on Mac OS X, improved C++11 support including support for Variadic templates, OpenMP 3.1 support.|
|Intel C++ Composer XE 2011 up to Update 5 (compiler 12.0)||November 7, 2010||Cilk Plus language extensions, Guided Auto-Parallelism, Improved C++11 support.|
|Intel C++ Compiler 11.1||June 23, 2009||Support for latest Intel SSE SSE4.2, AVX and AES instructions. Parallel Debugger Extension. Improved integration into Microsoft Visual Studio, Eclipse CDT 5.0 and Mac Xcode IDE.|
|Intel C++ Compiler 11.0||November 2008||Initial C++11 support. VS2008 IDE integration on Windows. OpenMP 3.0. Source Checker for static memory/parallel diagnostics.|
|Intel C++ Compiler 10.1||November 7, 2007||New OpenMP-compatibility runtime library: with the new OpenMP RTL, libraries and objects built by Visual C++ can be mixed and matched. To use the new libraries, pass "/Qopenmp /Qopenmp-lib:compat" on Windows or "-openmp -openmp-lib:compat" on Linux. This version supports more intrinsics from Visual Studio 2005. VS2008 support was command-line only in this release; IDE integration was not yet supported.|
|Intel C++ Compiler 10.0||June 5, 2007||Improved parallelizer and vectorizer, Streaming SIMD Extensions 4 (SSE4), new and enhanced optimization reports for advanced loop transformations, new optimized exception handling implementation.|
|Intel C++ Compiler 9.0||June 14, 2005||AMD64 architecture (for Windows), software-based speculative pre-computation (SSP) optimization, improved loop optimization reports.|
|Intel C++ Compiler 8.1||September 2004||AMD64 architecture (for Linux).|
|Intel C++ Compiler 8.0||December 15, 2003||Precompiled headers, code-coverage tools.|
Flags and manuals
Documentation can be found at the Intel Software Technical Documentation site.
|Windows||Linux & Mac OS X||Comment|
|/O1||-O1||Optimize for size|
|/O2||-O2||Optimize for speed and enable some optimizations (the default)|
|/O3||-O3||Enable all O2 optimizations plus more aggressive loop optimizations|
|/QxO||-xO||Enable SSE, SSE2 and SSE3 instruction-set optimizations, including on non-Intel CPUs|
|/fast||-fast||Shorthand. On Windows this equates to "/O3 /Qipo /QxHost /no-prec-div" ; on Linux "-O3 -ipo -static -xHOST -no-prec-div". Note that the processor specific optimization flag (-xHOST) will optimize for the processor compiled on—it is the only flag of -fast that may be overridden.|
|/Qprof-gen||-prof_gen||Compile the program and instrument it for a profile-generating run.|
|/Qprof-use||-prof_use||May only be used after running a program that was previously compiled using prof_gen. Uses profile information during each step of the compilation process.|
The Intel compiler provides debugging information that is standard for the common debuggers (DWARF 2 on Linux, similar to gdb, and COFF for Windows). The flags to compile with debugging information are /Zi on Windows and -g on Linux. Debugging is done on Windows using the Visual Studio debugger and, on Linux, using gdb.
While the Intel compiler can generate a gprof compatible profiling output, Intel also provides a kernel level, system-wide statistical profiler as a separate product called VTune. VTune features an easy-to-use GUI (integrated into Visual Studio for Windows, Eclipse for Linux) as well as a command-line interface.
Intel also offers a tool for memory and threading error detection called Intel Inspector XE. For memory errors, it helps detect memory leaks, memory corruption, allocation/deallocation API mismatches and inconsistent memory API usage. For threading errors, it helps detect data races (both heap and stack), deadlocks, and thread and sync API errors.
Intel and third parties have published benchmark results to substantiate performance leadership claims over other commercial, open source and AMD compilers and libraries on Intel and non-Intel processors. Intel and AMD have documented flags to use on the Intel compilers to get optimal performance on Intel and AMD processors. Nevertheless, the Intel compilers have been known to produce sub-optimal code for processors from vendors other than Intel. For example, Steve Westfield wrote in a 2005 article at the AMD website:
|“||Intel 8.1 C/C++ compiler uses the flag -xN (for Linux) or -QxN (for Windows) to take advantage of the SSE2 extensions. For SSE3, the compiler switch is -xP (for Linux) and -QxP (for Windows). [...] With the -xN/-QxN and -xP/-QxP flags set, it checks the processor vendor string—and if it's not "GenuineIntel," it stops execution without even checking the feature flags. Ouch!||”|
The Danish developer and scholar Agner Fog wrote in 2009:
|“||The Intel compiler and several different Intel function libraries have suboptimal performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string is "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.||”|
This vendor-specific CPU dispatching decreases the performance on non-Intel processors of software built with an Intel compiler or an Intel function library, possibly without the programmer's knowledge. This has allegedly led to misleading benchmarks. A legal battle between AMD and Intel over this and other issues was settled in November 2009. In late 2010, Intel settled an antitrust investigation by the US Federal Trade Commission.
The FTC settlement included a disclosure provision where Intel must:
|“||...publish clearly that its compiler discriminates against non-Intel processors (such as AMD's designs), not fully utilizing their features and producing inferior code.||”|
In compliance with this rule, Intel added an "optimization notice" to its compiler descriptions stating that they "may or may not optimize to the same degree for non-Intel microprocessors" and that "certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors." It says that:
|“||Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.||”|
Intel was caught in a case of suspected "benchmarksmanship", when it was shown that the object code produced by the Intel compiler for the AnTuTu Mobile Benchmark omitted portions of the benchmark in order to show increased performance compared to ARM platforms.
- Intel Debugger
- Cilk Plus
- Threading Building Blocks (TBB)
- Integrated Performance Primitives (IPP)
- Math Kernel Library (MKL)
- VTune Amplifier
- Intel Fortran Compiler
- Intel Developer Zone (Intel DZ; support and discussion)
- "Intel C++ Composer XE 2013 SP1 Release Notes".
- "Non-Commercial Software Development". Developer Zone. Intel. Retrieved 11 October 2012.
- "Intel C++ Compiler for Android documentation".
- A. J. C. Bik, The Software Vectorization Handbook (Intel Press, Hillsboro, OR, 2004), ISBN 0-9743649-2-4.
- The Software Optimization Cookbook, High-Performance Recipes for IA-32 Platforms, Richard Gerber, Aart J.C. Bik, Kevin B. Smith, and Xinmin Tian, Intel Press, 2006
- Intel C++ Compiler XE 13.0 User and Reference Guides
- The pitfalls of verifying floating-point computations, by David Monniaux, also printed in ACM Transactions on programming languages and systems (TOPLAS), May 2008; section 4.3.2 discusses nonstandard optimizations.
- Intel Software Products site provides more information
- Intel C++ Composer XE 2013 Release Notes http://software.intel.com/en-us/articles/intel-c-composer-xe-2013-release-notes/
- This note is attached to the release in which Cilk Plus was introduced. This URL points to current documentation: http://software.intel.com/en-us/intel-composer-xe/
- "Intel® Compilers | Intel® Developer Zone". Intel.com. 1999-02-22. Retrieved 2012-10-13.
- Your Processor, Your Compiler, and You: The Case of the Secret CPUID String
- Intel's "cripple AMD" function
- "Intel and U.S. Federal Trade Commission Reach Tentative Settlement". Newsroom.intel.com. 2010-08-04. Retrieved 2012-10-13.
- FTC, Intel Reach Settlement; Intel Banned From Anticompetitive Practices
- "Optimization Notice". Intel Corporation. Retrieved 11 December 2013.
- Intel C++ Compiler for Android
- Compilers in Parallel Studio XE 2013
- Cilk Plus Open Source Site
- TBB Open Source Site
- Free download of Intel compilers for non-commercial use