SPECjvm2008
Frequently Asked Questions

Version 1.3
Last modified: March 22, 2018



Benchmark description

Q1.1: What is SPECjvm2008?
Q1.2: What are the differences between SPECjvm98 and SPECjvm2008?
Q1.3: Does SPECjvm2008 replace SPECjvm98?
Q1.4: Is all source code included in SPECjvm2008?
Q1.5: How much does the SPECjvm2008 benchmark cost?
Q1.6: What does SPECjvm2008 test?
Q1.7: Is this a Java EE benchmark?
Q1.8: How long does one run of one of the benchmarks in SPECjvm2008 take?
Q1.9: What is the runtime of the full SPECjvm2008 suite?
Q1.10: Why is an operation not interrupted directly when the measurement period is over?
Q1.11: What is the reasoning around having an operation length in seconds in SPECjvm2008?

Rules and Reporting

Q2.1: What is the performance metric for SPECjvm2008?
Q2.2: Where can I find published results for SPECjvm2008?
Q2.3: How can I publish SPECjvm2008 results?
Q2.4: How much does it cost to publish results?
Q2.5: Can I compare SPECjvm2008 results with SPECjvm98 results?
Q2.6: Can I compare SPECjvm2008 results with other SPEC benchmarks?
Q2.7: Can I extrapolate estimates from existing results?
Q2.8: Can I report using vendor A hardware, vendor B OS, and vendor C JRE?
Q2.9: Are the results independently audited?
Q2.10: Can I announce my results without a review by the SPEC Java subcommittee?

Configuration

Q3.1: What is the minimum hardware configuration necessary to test this benchmark?
Q3.2: What software is required to run the benchmark?
Q3.3: How many different HW, OS, and JVM configurations have you tested?
Q3.4: How scalable is the benchmark?
Q3.5: Can I run SPECjvm2008 with multiple JVM instances?
Q3.6: Can I run the benchmark while not starting from the SPECjvm2008 root folder?

Troubleshooting

Q4.1: What should I do if the run fails in some way?
Q4.2: Why does my run take longer than 20 seconds, even if I specified 10 seconds on the command line?
Q4.3: What should I do if I get an OutOfMemoryError?
Q4.4: Why do I get a failure in the check test due to a Compiler version test failure?
Q4.5: Why do I get a build failure, failing to find symbol javax.tools.JavaFileManager?
Q4.6: Why do xml.transform, derby and compiler take so long before they start?
Q4.7: Why do I get a null pointer exception running the xml.validation benchmark?
Q4.8: Why won't SPECjvm2008 run with Java SE 8, Java SE 9, or later?

Miscellaneous

Q5.1: How do I obtain the SPECjvm2008 benchmark?
Q5.2: Who developed SPECjvm2008?




Benchmark description


Q1.1: What is SPECjvm2008?

SPECjvm2008 is a benchmark suite containing several real-life applications and benchmarks focusing on core Java functionality. The main purpose of SPECjvm2008 is to measure the performance of a Java Runtime Environment (JRE). It also measures the performance of the operating system and hardware in the context of executing the JRE. It focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has low dependence on file I/O and includes no network I/O across machines. The SPECjvm2008 workload mimics a variety of common general-purpose application computations. These characteristics reflect the intent that this benchmark be applicable to measuring basic Java performance on a wide variety of both client and server systems running Java. SPEC also considers the user experience of Java important, so the suite includes startup benchmarks and has a required run category called base, which must be run without any tuning of the JVM, to measure out-of-the-box performance.


Q1.2: What are the differences between SPECjvm98 and SPECjvm2008?

SPECjvm2008 is a new suite of benchmarks that tests systems running the latest generation of JVMs on workloads selected from common use cases in 2008, on both the server and the client side of Java. The run mode has changed from measuring end-to-end time to timed runs, where as much work as possible should be done inside a measurement interval. The workloads are executed in parallel to utilize today's systems. In addition to measuring throughput, the benchmark also focuses on the user experience of Java, which is addressed by adding startup benchmarks and by requiring a base submission where tuning is not allowed. There is also a different set of run rules and a different set of metrics compared to SPECjvm98.


Q1.3: Does SPECjvm2008 replace SPECjvm98?

Yes. SPEC is immediately replacing SPECjvm98 with SPECjvm2008. SPECjvm98 can still be purchased from SPEC, but no further benchmark results will be reviewed and accepted by SPEC. Note that SPECjvm2008 results are not comparable to SPECjvm98 results.


Q1.4: Is all source code included in SPECjvm2008?

Yes, all source code is included and available for review and analysis. The components are shipped under different licenses; see the license file.


Q1.5: How much does the SPECjvm2008 benchmark cost?

The SPECjvm2008 benchmark is freely downloadable from SPEC (still subject to the SPEC license). It is also possible to buy a CD with the benchmark from SPEC.


Q1.6: What does SPECjvm2008 test?

SPECjvm2008 is designed to test the performance of a JRE on typical Java applications, including Java libraries such as JAXP (XML) and Crypto, with workloads common to both client-side and server-side Java applications. Other factors include the application environment, e.g. hardware and OS. For more information on what is specifically tested, see the benchmark documentation for each benchmark.


Q1.7: Is this a Java EE benchmark?

No, this is not a Java EE benchmark, and it does not measure the performance of Enterprise JavaBeans (EJBs), servlets, JavaServer Pages (JSPs), etc.


Q1.8: How long does one run of one of the benchmarks in SPECjvm2008 take?

One benchmark run has a 2-minute warmup period and a 4-minute measured runtime. During this time several operations will be performed. An operation is never interrupted, so all threads will continue to run until every thread has completed the operation it started inside the measurement interval. This means that a 4-minute measurement period will take at least 4 minutes to complete, and in some cases a noticeable amount of time more.


Q1.9: What is the runtime of the full SPECjvm2008 suite?

The runtime will be a bit more than 2 hours. The suite contains 21 benchmarks (counting each sub-benchmark as one benchmark) plus the startup benchmarks. This means that execution will take at least 21 * 6 = 126 minutes, plus the startup benchmarks, plus the extra overhead of gathering all the threads after each benchmark and of bringing the harness up and down.


Q1.10: Why is an operation not interrupted directly when the measurement period is over?

An operation will continue to run even when the measurement period is over. This is done to ensure that each operation started inside the measurement period contributes to it, but only by as much as was executed inside it. So an operation that is started but not completed inside the measurement period will contribute between 0 and 1 operations, depending on how much of its execution time fell inside the measurement period; for example, an operation with half of its execution time inside the interval contributes 0.5 operations. In order not to give any advantage to work executed outside the measurement period, the harness will continue to execute threads until all operations started inside the measurement period are completed.


Q1.11: What is the reasoning around having an operation length in seconds in SPECjvm2008?

In the Java world it is very common to run short operations. A server often acts on requests that take milliseconds to perform, but there are usually many of these requests. Both the SPECjbb2005 and the SPECjAppServer2004 benchmarks are built on this concept. The operations in SPECjvm2008 are actually somewhat longer than those in the mentioned benchmarks, but in turn shorter than in other existing workloads. By running multiple operations next to each other, many of them for an extended period of time, several "side effects" occur that are typical JVM-specific problems (or VEE-specific, to be more correct). An example of such a problem is load on the memory system (including both allocation and the garbage collector), which is key to Java performance and is exercised by continuous work. When running the benchmark operations over and over for 4 minutes, the "side effects" that a JVM has to handle do occur.

For several of the benchmarks the SPECjvm2008 team tested increasing and decreasing the workloads and found that, other than the time to execute the operation, the characteristics were basically the same, so in these scenarios nothing was gained by having a larger (or smaller) workload input. A workload size was then selected inside the expected and realistic range that consisted of a decent amount of work, similar to most of the other benchmarks.

Some of the sub-benchmarks are real-world applications, and their workload size was chosen by the problem being solved. Others are not full real-world applications, but code that exercises core functionality of larger enterprise systems. Improving the performance of these benchmarks will improve the performance of that functionality and thereby contribute to improving the performance of applications using it. In a suite together with other workloads, they do provide value.



Rules and Reporting


Q2.1: What is the performance metric for SPECjvm2008?

SPECjvm2008 produces throughput metrics measured in operations per minute (ops/m): SPECjvm2008 base ops/m for the base category, where no JVM tuning is allowed, and SPECjvm2008 peak ops/m for the peak category, where JVM tuning is allowed.


Q2.2: Where can I find published results for SPECjvm2008?

SPECjvm2008 results are available on SPEC's web site: http://www.spec.org/jvm2008/results/


Q2.3: How can I publish SPECjvm2008 results?

In order to submit a result to SPEC for review, you need to first obtain the benchmark, which can be downloaded from the SPEC web site. Then follow the user guide on how to produce and submit results.


Q2.4: How much does it cost to publish results?

In order to publish a result on the SPEC web site, you must pay a review fee of $500 or become a member of the SPEC organization.


Q2.5: Can I compare SPECjvm2008 results with SPECjvm98 results?

NO. The two benchmarks are not comparable. In addition, comparing them would be a violation of the run rules.


Q2.6: Can I compare SPECjvm2008 results with other SPEC benchmarks?

NO. SPECjvm2008 is not comparable with any other benchmark. In addition, comparing them would be a violation of the run rules.


Q2.7: Can I extrapolate estimates from existing result?

NO. Run rules for SPECjvm2008 do not allow estimated results to be publicly disclosed. Extrapolated results are considered to be estimates.


Q2.8: Can I report using vendor A hardware, vendor B OS, and vendor C JRE?

Yes, the SPECjvm2008 run rules do not preclude third-party submission of benchmark results. However, result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor.


Q2.9: Are the results independently audited?

Results published on SPEC pages are all reviewed by the SPEC Java subcommittee. See SPEC Results Disclaimer.


Q2.10: Can I announce my results without a review by the SPEC Java subcommittee?

Yes, you can publish compliant results that have not been reviewed by SPEC. Should you do so, any SPEC member may request a full disclosure result, which you are obliged to provide within ten business days, and which may be reviewed by SPEC. You can find more details in the SPECjvm2008 run rules.



Configuration


Q3.1: What is the minimum hardware configuration necessary to test this benchmark?

It is possible to run SPECjvm2008 on a machine with one hardware thread (CPU/Socket/Core).
The recommended minimum amount of memory needed is 512 MB for a small system.
The recommended minimum amount of disk space needed is 256 MB (including installation).

SPECjvm2008 is designed to scale up the workload when a larger machine (with a higher number of logical CPUs) is used. In most of the workloads this means more live data will be required, and the minimum amount of memory mentioned above will not be enough. In derby this also means that even more disk space is required.

In order to use as few resources as possible, run with only one benchmark thread by using the option "-bt 1". This will, however, affect the benchmark result.
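As a sketch, a minimal-resource run could then look like the line below; it assumes the benchmark is started with the standard SPECjvm2008.jar launcher from the install directory, so check the User Guide for the exact invocation on your system:

    java -jar SPECjvm2008.jar -bt 1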


Q3.2: What software is required to run the benchmark?

SPECjvm2008 requires a Java Runtime Environment supporting a complete implementation of the classes referenced by this benchmark as defined in the Java SE 1.5.0, Java SE 6 or Java SE 7 specifications. Other Java SE specifications are not supported (as stated in SPECjvm2008 Run Rules, section 2.1).

You may try to run with JDK 8 or later, but no warranty is made. See Troubleshooting, Q4.8.
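To see which Java SE version your JRE implements before running the benchmark, you can check the version string it reports, for example:

    java -version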


Q3.3: How many different HW, OS and JVM configurations have you tested?

Several combinations of the following products have been tested and verified to show no issues in the benchmark. Some Known Issues have been reported; see the Known Issues document.

Java Virtual Machines:

Operating Systems:

Hardware Architectures:

Scalability:


Q3.4: How scalable is the benchmark?

There is no hard limit in the benchmark harness or the workloads when it comes to scaling up the number of benchmark threads, and this has been verified in testing. The size of the workload does increase with the number of benchmark threads. While some workloads are embarrassingly parallel and will scale well long past the 64 benchmark threads they have been verified with, others intentionally introduce scalability problems, for example by using Java locks, or by heavy work that stresses the hardware and the memory bus as well as the JVM, through heavy allocation and large-scale garbage collection work.


Q3.5: Can I run SPECjvm2008 with multiple JVM instances?

No, SPECjvm2008 is designed to work within a single JVM instance.


Q3.6: Can I run the benchmark while not starting from the SPECjvm2008 root folder?

Yes, the benchmark is designed to support that. Specify the system property "specjvm.home.dir" to point to where the SPECjvm2008 installation is located. See the User Guide for more info and examples.
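As a sketch, a run started from another directory could look like the line below; the install path and the SPECjvm2008.jar launcher name are assumptions here, so check the User Guide for the exact command:

    java -Dspecjvm.home.dir=/opt/SPECjvm2008 -jar /opt/SPECjvm2008/SPECjvm2008.jar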



Troubleshooting


Q4.1: What should I do if the run fails in some way?

It depends on how it fails. The first step is to look in the Known Issues document. If someone else has run into this issue, it is likely to be in this document.

If you believe it is a bug in one of the products used, for example the JVM tested, it is recommended to see if this is a known issue there and then contact the vendor responsible for the product. A good way to determine whether this is a problem with the product tested is to test with another vendor's product, or with another version of it, to see if the issue occurs there too. If it only occurs with one vendor's product or one version of it, it is likely to be a vendor-specific issue.

If you believe it is a bug in the SPECjvm2008 benchmark suite, download the latest version of the benchmark to see if it is fixed. If not, contact [email protected] and describe the issue.


Q4.2: Why does my run take longer than 20 seconds, even if I specified 10 seconds on the command line?

An operation within the sub-benchmark has not completed and SPECjvm2008 does not allow operations to be interrupted. See Q1.10 for the reasoning around this.

It could also be because the result of the warmup period indicates that the iteration time will not be enough to finish five operations. In that case the iteration time is increased, and the harness prints a message when this occurs. A run should always finish at least 5 operations; otherwise the run is considered too small.


Q4.3: What should I do if I get an OutOfMemoryError?

An Out of Memory Error can be thrown. If and when this occurs depends on the JVM being used and the platform it runs on, in particular the number of logical CPUs. The error is thrown because the amount of live data in the benchmark is larger than the JVM can fit on the heap. The amount of live data grows with the number of benchmark threads, so the larger the machine, the more likely this is to happen. See the Known Issues document about OOME for more info and the recommended workaround.
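As a sketch, on JVMs that accept the common -Xmx flag a larger maximum heap can be requested on the command line; the 4g value below is only an illustration and the SPECjvm2008.jar launcher name is an assumption, so treat the Known Issues document as the authoritative source for the recommended workaround, and note that the run rules restrict JVM tuning in the base category:

    java -Xmx4g -jar SPECjvm2008.jar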


Q4.4: Why do I get a failure in the check test due to a Compiler version test failure?

There is a javac version included in SPECjvm2008, and the check benchmark verifies that the correct one is used, since the compiler benchmark uses it. There is a known issue on this for one vendor, including a workaround. See the Known Issues document for more info.


Q4.5: Why do I get a build failure, failing to find symbol javax.tools.JavaFileManager?

There is a javac version included in SPECjvm2008, since the compiler benchmark uses it. There is a known issue for building, depending on the order of classes on the bootclasspath, for one vendor, including a workaround. See the Known Issues document for more info.


Q4.6: Why do xml.transform, derby and compiler take so long before they start?

They need to perform some preparatory work before the measurement interval, which is done in a setup phase before the warmup and measurement periods. Derby initializes and populates a database. The xml.transform benchmark performs one operation before measurement starts to fully verify the results, which includes canonicalization of the output. The compiler benchmark gathers file information to reduce the impact of file I/O during the measurement interval.


Q4.7: Why do I get a null pointer exception running the xml.validation benchmark?

This could be the known issue with direct paths in JDK 1.5.0. See the Known Issues document for more info.


Q4.8: Why won't SPECjvm2008 run with Java SE 8, Java SE 9, or later?

Java SE 8 and later specifications are not supported. See Configuration, Q3.2.

For Java SE 8 and later:

For Java SE 9 and later:



Miscellaneous


Q5.1: How do I obtain the SPECjvm2008 benchmark?

You can download the benchmark from http://www.spec.org/jvm2008/. You can also buy a CD from the SPEC office.


Q5.2: Who developed SPECjvm2008?

SPECjvm2008 was developed by the Java subcommittee's core design team. AMD, BEA, HP, IBM, Intel and Sun participated in the design, implementation, and testing phases of the product.




Copyright © 2008 Standard Performance Evaluation Corporation



