Standard Performance Evaluation Corporation

Formation: 1988
Type: Not-for-profit
Headquarters: Gainesville, Virginia
Membership: Hardware & software vendors, universities, research centers

The Standard Performance Evaluation Corporation (SPEC) is an American non-profit organization that aims to "produce, establish, maintain and endorse a standardized set" of performance benchmarks for computers.[1]

SPEC was founded in 1988.[2][3] SPEC benchmarks are widely used to evaluate the performance of computer systems; the test results are published on the SPEC website. Results are sometimes informally referred to as "SPECmarks" or just "SPEC".

SPEC has evolved into an umbrella organization encompassing four diverse groups: the Graphics and Workstation Performance Group (GWPG), the High Performance Group (HPG), the Open Systems Group (OSG) and, the newest, the Research Group (RG). More details are available on SPEC's website.


Membership

Membership in SPEC is open to any interested company or entity that is willing to commit to SPEC's standards. Membership allows:

  • Participation in benchmark development
  • Participation in review of results
  • Complimentary software based on group participation

The list of members is available on SPEC's membership page.[3]

Membership Levels

  • Sustaining Membership requires dues payment and typically includes hardware or software companies.
  • SPEC "Associates" pay a reduced fee and typically include universities.
  • SPEC "Supporting Contributors" are invited to participate in development of a single benchmark, and do not pay fees.

SPEC Benchmark Suites

The benchmarks aim to test "real-life" situations. There are several benchmarks testing Java scenarios, from simple computation (SPECjbb) to a full system with Java EE, database, disk, and network (SPECjEnterprise).

The SPEC CPU suites test CPU performance by measuring the run time of several programs such as the compiler GCC, the chemistry program gamess, and the weather program WRF. The various tasks are equally weighted; no attempt is made to weight them based on their perceived importance. An overall score is based on a geometric mean.
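As a sketch of how such a score is formed: each benchmark's ratio is its reference run time divided by its measured run time, and the overall score is the geometric mean of those ratios. The run times below are hypothetical illustrations, not published SPEC results.

```python
from math import prod

def spec_ratio(reference_time, measured_time):
    """Per-benchmark ratio: how many times faster the system under
    test ran the program than the reference machine did."""
    return reference_time / measured_time

def overall_score(ratios):
    """Geometric mean of the per-benchmark ratios; every task
    contributes equally, matching SPEC's equal weighting."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical run times in seconds (reference machine, system under test).
runs = {
    "gcc":    (8050, 575),
    "gamess": (9360, 720),
    "wrf":    (11120, 856),
}

ratios = [spec_ratio(ref, measured) for ref, measured in runs.values()]
print(round(overall_score(ratios), 2))  # geometric mean of 14.0, 13.0, ~12.99
```

The geometric mean is used rather than the arithmetic mean so that a large speedup on one benchmark cannot dominate the overall score.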


SPEC benchmarks are written in a portable programming language (usually C, C++, Java or Fortran); interested parties may compile the code using whatever compiler they prefer for their platform, but may not change the code. Manufacturers have been known to optimize their compilers to improve performance of the various SPEC benchmarks, so SPEC has rules that attempt to limit such optimizations.


To use a benchmark, a license must be purchased from SPEC; the cost varies from test to test, typically ranging from several hundred to several thousand dollars. This pay-for-license model might seem to violate the GPL, as the benchmarks include software such as GCC that is licensed under the GPL. However, the GPL does not require software to be distributed for free, only that recipients be allowed to redistribute any GPL-licensed software they receive; the SPEC license agreement specifically exempts items under "licenses that require free distribution", and those files are placed in a separate part of the overall software package.


Current Benchmarks

  • SPECapc for 3ds Max™ 2011, performance evaluation software for systems running Autodesk 3ds Max 2011.
  • SPECapcSM for Lightwave 3D 9.6, performance evaluation software for systems running NewTek LightWave 3D v9.6 software.
  • SPEC CPU2006, combined performance of CPU, memory and compiler. Designed to provide performance measurements that can be used to compare compute-intensive workloads on different computer systems, SPEC CPU2006 contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive integer performance, and CFP2006 for measuring and comparing compute-intensive floating point performance.
    • CINT2006 ("SPECint"), testing integer arithmetic, with programs such as compilers, interpreters, word processors, chess programs etc.
    • CFP2006 ("SPECfp"), testing floating point performance, with physical simulations, 3D graphics, image processing, computational chemistry etc.
  • SPEC CPUv6, the designation for the next compute-intensive benchmark suite. Through the CPU Search Program, SPEC encourages those outside of SPEC to help locate applications that could be used in the suite.
  • SPECjbb2013, evaluates the performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The SPECjbb2013 benchmark was developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.
  • SPECjEnterprise2010, a multi-tier benchmark for measuring the performance of Java Enterprise Edition (Java EE) technology-based application servers. SPECjEnterprise2010 measures full-system performance for Java EE 5 or later application servers, databases and supporting infrastructure, and expands the scope of the SPECjAppServer2004 benchmark.
  • SPECjms2007, Java Message Service performance. SPECjms2007 is the first industry-standard benchmark for evaluating the performance of enterprise message-oriented middleware servers based on JMS (Java Message Service). It provides a standard workload and performance metrics for competitive product comparisons, as well as a framework for in-depth performance analysis of enterprise messaging platforms.
  • SPECjvm2008, measuring basic Java performance of a Java Runtime Environment (JRE) on a wide variety of client and server systems. The suite contains several real-life applications and benchmarks focusing on core Java functionality; its workload mimics a variety of common general-purpose application computations.
  • SPECapc, performance of several popular 3D-intensive applications on a given system.
  • SPEC MPI2007, for evaluating performance of parallel systems using MPI (Message Passing Interface) applications.
  • SPEC OMP2001 V3.2, for evaluating performance of parallel systems using OpenMP applications.


  • SPECpower_ssj2008, evaluates the energy efficiency of server systems. SPECpower_ssj2008 is the first industry-standard SPEC benchmark that evaluates the power and performance characteristics of volume server class computers. The initial benchmark addresses the performance of server-side Java, and additional workloads are planned.
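SPECpower_ssj2008's headline metric, overall ssj_ops/watt, aggregates throughput and power across a series of graduated load levels. A minimal sketch of that aggregation, with hypothetical numbers rather than measured results (this is not the benchmark's actual harness code):

```python
# Overall ssj_ops/watt: throughput (ssj_ops) summed over the graduated
# load levels, divided by average power summed over the same levels plus
# active idle. All numbers below are hypothetical.
load_levels = [  # (target load, ssj_ops achieved, average watts)
    (1.0, 300_000, 250.0),
    (0.5, 150_000, 180.0),
    (0.1,  30_000, 120.0),
    (0.0,       0, 100.0),  # active idle: draws power but does no work
]

total_ops = sum(ops for _, ops, _ in load_levels)
total_watts = sum(watts for _, _, watts in load_levels)
overall = total_ops / total_watts
print(round(overall, 1))  # overall ssj_ops/watt
```

Including active idle in the denominator rewards servers that throttle power consumption when lightly loaded, not just those that are fast at peak.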

Other SPEC benchmarks incorporating power measurement
Network File System

  • SPECsfs2008, measuring file-server throughput and response time for servers accessed via the NFSv3 and CIFS (SMB) protocols.
  • SPECsip_Infrastructure2011, SIP server performance

Graphics and Workstation Performance

  • SPECviewperf® 12
  • SPECviewperf® 11, performance of an OpenGL 3D graphics system, tested with various rendering tasks from real applications
  • SPECapcSM for 3ds Max™ 2015
  • SPECapcSM for Maya® 2012
  • SPECapcSM for PTC Creo 2.0
  • SPECapcSM for Siemens NX 8.5
  • SPECapcSM for SolidWorks 2013
  • Previous versions of SPECapc and SPECviewperf benchmarks

Virtualization

  • SPECvirt_sc2013 ("SPECvirt"), evaluates the performance of datacenter servers used in virtualized server consolidation environments. It measures the end-to-end performance of all system components, including the hardware, virtualization platform, and the virtualized guest operating system and application software. The benchmark supports hardware virtualization, operating system virtualization, and hardware partitioning schemes.

High Performance Computing, OpenMP, MPI, OpenACC, OpenCL

  • SPEC ACCEL, tests performance with a suite of computationally intensive parallel applications running under the OpenCL and OpenACC APIs. The suite exercises the performance of the accelerator, host CPU, memory transfer between host and accelerator, support libraries and drivers, and compilers.
  • SPEC MPI2007, SPEC's benchmark suite for evaluating MPI-parallel, floating-point, compute-intensive performance across a wide range of cluster and SMP hardware. The suite consists of the initial MPIM2007 suite and MPIL2007, which contains larger working sets and longer run times than MPIM2007.
  • SPEC OMP2012, the successor to OMP2001, designed for measuring performance using applications based on the OpenMP 3.1 standard for shared-memory parallel processing. OMP2012 also includes an optional metric for measuring energy consumption.

SIP

  • SPECsip_Infrastructure2011, SPEC's benchmark designed to evaluate a system's ability to act as a SIP server supporting a particular SIP application. The application modeled is a VoIP deployment for an enterprise, telco, or service provider, where the SIP server performs proxying and registration.

SPEC Tools

  • Server Efficiency Rating Tool (SERT), created by SPEC at the request of the US Environmental Protection Agency (EPA). It is intended to measure server energy efficiency, initially as part of the second generation of the EPA ENERGY STAR for Computer Servers program. Designed to be simple to configure and use via a comprehensive graphical user interface, the SERT uses a set of synthetic worklets to test discrete system components such as memory and storage, providing detailed power consumption data at different load levels.

  • Chauffeur Worklet Development Kit (WDK). Chauffeur was designed to simplify the development of workloads for measuring both performance and energy efficiency. Because Chauffeur contains functions common to most workloads, developers of new workloads can focus on the actual business logic of the application and take advantage of Chauffeur's capabilities for configuration, run-time, data collection, validation, and reporting. Chauffeur was initially designed to meet the requirements of the SERT, but SPEC recognized that the framework would also be useful for research and development, so it is now available as the Chauffeur WDK. The kit can be used to develop new workloads ("worklets" in Chauffeur terminology), and researchers can configure worklets to run in different ways to mimic the behavior of different types of applications, which is useful in developing and assessing new technologies such as power management capabilities.

  • PTDaemon. The SPEC PTDaemon software is used to control power analyzers in benchmarks that contain a power measurement component.

Committees for Future Benchmarks


  • Handheld: SPEC has formed a committee chartered for the development of, and support for, a compute-intensive benchmark suite for handheld devices.


  • SOA: SPEC has formed a subcommittee to develop standard methods of measuring performance for typical middleware, database and hardware deployments of applications based on the Service-Oriented Architecture (SOA).

Retired Benchmarks

  • SPEC CPU2000
  • SPEC CPU95
  • SPEC CPU92
  • SPEC 2001
  • SPEC HPC2002
  • SPEC HPC96
  • SPECjAppServer2004
  • SPECjAppServer2002
  • SPECjAppServer2001
  • SPECjbb2005
  • SPECjbb2000
  • SPECjvm98
  • SPECmail2009
  • SPECmail2008
  • SPECmail2001
  • SPEC SDM91
  • SPEC SFS97_R1 (3.0)
  • SPEC SFS97 (2.0)
  • SPECvirt_sc2010
  • SPECweb2009
  • SPECweb2005
  • SPECweb96
  • SPECweb99
  • SPECweb99_SSL


SPEC attempts to create an environment where arguments are settled by appeal to notions of technical credibility, representativeness, or the "level playing field". SPEC representatives are typically engineers with expertise in the areas being benchmarked. Benchmarks include "run rules", which describe the conditions of measurement and documentation requirements. Results that are published on SPEC's website undergo a peer review by members' performance engineers.


References

  1. ^ "SPEC Frequently Asked Questions". Retrieved 15 March 2010.
  2. ^ "The SPEC Organization". Retrieved 15 March 2010. 
  3. ^ "SPEC Membership". Retrieved 15 March 2010. 
  • Kant, Krishna (1992). Introduction to Computer System Performance Evaluation. New York: McGraw-Hill Inc. pp. 16–17. ISBN 0-07-033586-9. 

External links