SPECjbb2013 Result File Fields

Last updated: Jan 08 2013

To check for possible updates to this document, please see http://www.spec.org/jbb2013/docs/SPECjbb2013-Result_File_Fields.html

ABSTRACT
This document describes the various fields in the result file making up the complete SPECjbb2013 result disclosure.


Table of Contents

1. SPECjbb2013 Benchmark

2. Top Bar

2.1 Headline

2.2 Test sponsor

2.3 SPEC license #

2.4 Hardware Availability

2.5 Tested by

2.6 Test Location

2.7 Software Availability

2.8 Test Date

2.9 Publication Date

2.10 INVALID or WARNING or COMMENTS

3. Benchmark Results Summary

3.1 Category

3.2 Result Chart

4. Overall SUT (System Under Test) Description Overview

4.1.1 Vendor

4.1.2 System Vendor URL

4.1.3 System Source

4.1.4 System Designation

4.1.5 Total System Count

4.1.6 All SUT Systems Identical

4.1.7 Total Node Count

4.1.8 All Nodes Identical

4.1.9 Nodes Per System

4.1.10 Total Chips

4.1.11 Total Cores

4.1.12 Total Threads

4.1.13 Total Memory (GB)

4.1.14 Total OS Images

4.1.15 SW Environment

4.2 Hardware

4.2.1 HW Vendor

4.2.2 HW Vendor URL

4.2.3 HW Available

4.2.4 Model

4.2.5 Number of Systems

4.2.6 Form Factor

4.2.7 Nodes Per System

4.2.8 CPU Name

4.2.9 CPU Characteristics

4.2.10 Chips Per System

4.2.11 Cores Per System

4.2.12 Cores Per Chip

4.2.13 Threads Per System

4.2.14 Threads Per Core

4.2.15 CPU Frequency (MHz)

4.2.16 Primary Cache

4.2.17 Secondary Cache

4.2.18 Tertiary Cache

4.2.19 Other Cache

4.2.20 Disk Drive

4.2.21 File System

4.2.22 Memory Amount (GB)

4.2.23 # and size of DIMM(s)

4.2.24 Memory Details

4.2.25 # and type of Network Interface Cards (NICs) Installed

4.2.26 Power Supply Quantity and Rating (W)

4.2.27 Other Hardware

4.2.28 Cabinet/Housing/Enclosure

4.2.29 Shared Description

4.2.30 Shared Comment

4.2.31 Tuning

4.2.32 Notes

4.3 Other Hardware/Software

4.3.1 Vendor

4.3.2 Vendor URL

4.3.3 Version

4.3.4 Available

4.3.5 Bitness

4.3.6 Notes

4.4 Operating System

4.4.1 OS Vendor

4.4.2 OS Vendor URL

4.4.3 OS Version

4.4.4 OS Available

4.4.5 Bitness

4.4.6 Notes

4.5 Java Virtual Machine (JVM)

4.5.1 JVM Vendor

4.5.2 JVM Vendor URL

4.5.3 JVM Version

4.5.4 JVM Available

4.5.5 JVM Notes

4.6 Other Software

4.6.1 SW Vendor

4.6.2 SW Vendor URL

4.6.3 Version

4.6.4 Available

4.6.5 Bitness

4.6.6 Notes

5. Results Details

5.1 max-jOPS

5.2 critical-jOPS

5.3 Last Success jOPS/First Failure jOPS for SLA points Table

5.4 Number of probes

5.5 Request Mix Accuracy

5.6 Rate of non-critical failures

5.7 Delay between performance status pings

5.8 IR/PR accuracy

6. Topology

7. SUT or Driver configuration

7.1 Hardware

7.1.1 OS Images

7.1.2 Hardware Description

7.1.3 Number of Systems

7.1.4 SW Environment

7.1.5 Tuning

7.1.6 Notes

7.2 OS image

7.2.1 JVM Instances

7.2.2 OS Image Description

7.2.3 Tuning

7.2.4 Notes

7.3 JVM Instance

7.3.1 Parts of Benchmark

7.3.2 JVM Instance Description

7.3.3 Command Line

7.3.4 Tuning

7.3.5 Notes

8. Run Properties

9. Validation Details

9.1 Validation Reports

9.1.1 Compliance

9.1.2 Correctness

9.2 Other Checks


1. SPECjbb2013 Benchmark

SPECjbb2013 (Java Server Benchmark) is SPEC's benchmark for evaluating the performance of server-side Java. Like its predecessors, SPECjbb2000 and SPECjbb2005, SPECjbb2013 evaluates the performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The benchmark exercises the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads and some aspects of the operating system. It also measures the performance of CPUs, caches, the memory hierarchy and the scalability of shared memory processors (SMPs). In addition, the benchmark measures response time while gradually increasing the load, and it reports not only full system capacity throughput but also throughput under a response time constraint.

The benchmark suite consists of three separate software modules: the Controller, the Transaction Injector(s) and the Backend(s) (see section 7.3.1).

These modules work together in real time to collect server performance data by exercising the system under test (SUT) with a predefined workload.


2. Top Bar

The top bar shows the measured SPECjbb2013 result and gives some general information regarding this test run.

2.1 Headline

The headline of the performance report includes one field displaying the hardware vendor and the name of the system under test. If this report is for a historical system, the designation "(Historical)" must be added to the model name. A second field shows the max-jOPS and critical-jOPS metrics, prefixed by an "Invalid" indicator if the current result does not pass the validity checks implemented in the benchmark.

2.2 Test sponsor

The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder.

2.3 SPEC license #

The SPEC license number of the organization or individual that ran the benchmark.

2.4 Hardware Availability

The date when all the hardware necessary to run the result is generally available. For example, if the CPU is available in Aug-2007, but the memory is not available until Oct-2007, then the hardware availability date is Oct-2007 (unless some other component pushes it out farther).

2.5 Tested by

The name of the organization or individual that ran the test and submitted the result.

2.6 Test Location

The city, state and country where the test took place. If there are installations in multiple geographic locations, they must also be listed in this field.

2.7 Software Availability

The date when all the software necessary to run the result is generally available. For example, if the operating system is available in Aug-2007, but the JVM is not available until Oct-2007, then the software availability date is Oct-2007 (unless some other component pushes it out farther).

2.8 Test Date

The date when the test is run. This value is automatically supplied by the benchmark software; the time reported by the system under test is recorded in the raw result file.

2.9 Publication Date

The date when this report is published after the review has finished. This date is automatically filled in with the correct value by the submission tool provided by SPEC. By default this field is set to "Unpublished" by the software generating the report.

2.10 INVALID or WARNING or COMMENTS

Any inconsistencies with the run and reporting rules that cause a failure of one of the validity checks implemented in the report generation software are reported here, and all pages of the report file are stamped with an "Invalid" watermark if this happens. The printed text shows more details about which of the run rules was not met and why. A more detailed explanation may also appear at the end of the report in the sections "Run Properties" or "Validation Details". If there are any special waivers or other comments from the SPEC editor, those are also listed here.


3. Benchmark Results Summary

This section presents the result details: a graph (jOPS and response time), the SPECjbb2013 category, the number of groups, and links to other sections of the report.

3.1 Category

The header of this section indicates which SPECjbb2013 category was run and how many groups were configured via the property "specjbb.group.count".
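
For example, a MultiJVM run configured with the property specjbb.group.count=4 runs four groups, where each group consists of one Backend and its associated Transaction Injector(s); the value 4 here is purely illustrative.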

3.2 Result Chart

The raw data for this graph can be found by clicking on the graph. The graph only shows the Response-Throughput (RT) phase of the benchmark. The initial phase that finds the High Bound Injection Rate (HBIR) (an approximate upper bound of throughput) and the validation phase at the end of the run are not part of this graph. The x-axis shows jOPS (Injection Rate: IR) as the system is tested at gradually increasing RT step levels in increments of 1% of HBIR. The y-axis shows response time (min, various percentiles, max); the 99th percentile determines the critical-jOPS metric, shown as a yellow vertical line. The last successful RT step level before the "First Failure" of an RT step level is marked with a red vertical line and reflects the max-jOPS metric of the benchmark. The benchmark continues to test a few RT step levels beyond the "First Failure" RT step level. Normally very few RT step levels should pass beyond the "First Failure" RT step level; otherwise it indicates that with more tuning the system should be able to reach a higher max-jOPS. To view details about levels beyond the "First Failure" RT step level, a user needs to look at either controller.out or the level-1 report output.


4. SUT (System Under Test) Description

The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported benchmark with the level of detail required to reproduce this result.

4.1.1 Vendor

The company which sells the system.

4.1.2 System Vendor URL

The URL of the system vendor.

4.1.3 System Source

Single Supplier or Parts Built

4.1.4 System Designation

Possible values for this property are:

4.1.5 Total System Count

The total number of configured systems.

4.1.6 All SUT Systems Identical

[YES / NO].

4.1.7 Total Node Count

The total number of configured nodes. Please refer to the Run and Reporting Rules document for the definitions of system and node. For example, a rack-based blade system can be one system with many blade nodes, all running under a single OS image or each running its own OS image.

4.1.8 All Nodes Identical

[YES / NO].

4.1.9 Nodes Per System

The number of nodes configured on each system.

4.1.10 Total Chips

The total number of chips installed on all system(s) in the overall SUT(s).

4.1.11 Total Cores

The total number of cores installed on all system(s) in the overall SUT(s).

4.1.12 Total Threads

The total number of hardware threads on all system(s) in the overall SUT(s).

4.1.13 Total Memory (GB)

The total amount of memory installed on all system(s) in the overall SUT(s).

4.1.14 Total OS Images

The total number of OS images installed on all system(s) in the overall SUT(s).

4.1.15 SW Environment

The environment mode: [Virtual / Non-virtual].

4.2 Hardware

The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported benchmark with the level of detail required to reproduce this result. The same fields are also used for the hardware and software description of the driver system(s). For a driver system, some fields (for example memory) may not need to be described in as much detail as for the SUT.

4.2.1 HW Vendor

The name of the company which sells the system.

4.2.2 HW Vendor URL

The URL of the system vendor.

4.2.3 HW Available

The HW availability (month-year) of the system.

4.2.4 Model

The model name identifying the system under test.

4.2.5 Number of Systems

The number of systems under test.

4.2.6 Form Factor

The form factor for this system.
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".

4.2.7 Nodes Per System

The number of nodes per system.

4.2.8 CPU Name

A manufacturer-determined processor formal name.

4.2.9 CPU Characteristics

Technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc.
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, this field should also list the feature and the maximum frequency it enables on that CPU (e.g.: "Intel Turbo Boost Technology up to 3.46GHz").
If this CPU clock feature is present but is disabled, no additional information is required here.

4.2.10 Chips Per System

The number of chips per system.

4.2.11 Cores Per System

The number of Cores Per System.

4.2.12 Cores Per Chip

The number of Cores Per Chip.

4.2.13 Threads Per System

The number of Threads Per System.

4.2.14 Threads Per Core

The number of Threads Per Core.

4.2.15 CPU Frequency (MHz)

The nominal (marked) clock frequency of the CPU, expressed in megahertz.
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, then the CPU Characteristics field must list additional information, at least the maximum frequency and the use of this feature.
Furthermore if the enabled/disabled status of this feature is changed from the default setting this must be documented in the System Under Test Notes field.

4.2.16 Primary Cache

Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache".

4.2.17 Secondary Cache

Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache".

4.2.18 Tertiary Cache

Description (size and organization) of the CPU's tertiary, or "L3" cache.

4.2.19 Other Cache

Description (size and organization) of any other levels of cache memory.

4.2.20 Disk Drive

A description of the disk drive(s) (count, model, size, type, rotational speed and RAID level if any) used to boot the operating system and to hold the benchmark software and data during the run.

4.2.21 File System

The file system used.

4.2.22 Memory Amount (GB)

Total size of memory in the SUT in GB.

4.2.23 # and size of DIMM(s)

Number and size of memory modules used for testing.

4.2.24 Memory Details

Detailed description of the system main memory technology, sufficient for identifying the memory used in this test.
The recommended format is described below.

Format:
ggggg eRxff PCy-wwwwm ECC CLa; slots k, ... l populated


Example:
2GB 2Rx4 PC2-5300F ECC CL5; slots 1, 2, 3, and 4 populated

Where:

ggggg - the size of a single memory module (DIMM), e.g. 2GB
eRxff - the number of ranks (e) and the device organization/width (ff) of the DRAM chips, e.g. 2Rx4
PCy-wwwwm - the module standard name: memory generation (y), bandwidth class (wwww) and an optional module-type suffix (m), e.g. PC2-5300F
ECC - present if the memory supports error checking and correction
CLa - the CAS latency (a)
slots k, ... l populated - the identifiers of the memory slots that are populated with DIMMs
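
As an illustration only (this code is not part of the benchmark kit; the regular expression and class name are ours), a memory description string can be checked against the recommended format shown above:

// Illustrative sketch: validating a memory description string against the
// recommended "ggggg eRxff PCy-wwwwm ECC CLa; slots ... populated" format.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DimmFormatCheck {
    // Groups: size, ranks, device width, generation, bandwidth class,
    // optional module-type suffix, CAS latency, slot list (free text).
    private static final Pattern DIMM = Pattern.compile(
        "(\\d+[GM]B) (\\d)Rx(\\d+) PC(\\d)-(\\d+)([A-Z]?) ECC CL(\\d+); slots (.+) populated");

    public static void main(String[] args) {
        String example = "2GB 2Rx4 PC2-5300F ECC CL5; slots 1, 2, 3, and 4 populated";
        Matcher m = DIMM.matcher(example);
        if (m.matches()) {
            System.out.println("DIMM size: " + m.group(1));
            System.out.println("Ranks:     " + m.group(2));
            System.out.println("CAS:       CL" + m.group(7));
        } else {
            System.out.println("String does not follow the recommended format");
        }
    }
}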

4.2.25 # and type of Network Interface Cards (NICs) Installed

A description of the network controller(s) (number, manufacturer, type, ports and speed) installed on the SUT.

4.2.26 Power Supply Quantity and Rating (W)

The number of power supplies that are installed in this node and the power rating for each power supply. Both entries should show "None" if the node is powered by a shared power supply.

4.2.27 Other Hardware

Any additional equipment added to improve performance and required to achieve the reported scores.

4.2.28 Cabinet/Housing/Enclosure

The model name identifying the enclosure housing the tested nodes.

4.2.29 Shared Description

Additional descriptions about the shared HW.

4.2.30 Shared Comment

Description of additional performance-relevant components not covered in the fields above.

4.2.31 Tuning

Tuning information.

4.2.32 Notes

Additional Notes.

4.3 Other Hardware/Software

Other hardware, such as network switch(es), or other software.

4.3.1 Vendor

The name of the company that sells the item listed under Other Hardware/Software.

4.3.2 Vendor URL

The URL of the company that sells the item listed under Other Hardware/Software.

4.3.3 Version

Other Hardware/Software Version.

4.3.4 Available

The HW/SW availability (month-year).

4.3.5 Bitness

The bitness of the other hardware or software, if applicable; otherwise "n/a".

4.3.6 Notes

Additional Notes.

4.4 Operating System

System OS Section

4.4.1 OS Vendor

The OS vendor name.

4.4.2 OS Vendor URL

The OS vendor URL.

4.4.3 OS Version

The OS version.

4.4.4 OS Available

The OS availability (month-year).

4.4.5 Bitness

The bitness (32 or 64) of the operating system.

4.4.6 OS Notes

Additional OS Notes.

4.5 Java Virtual Machine (JVM)

JVM Section

4.5.1 JVM Vendor

The JVM vendor name.

4.5.2 JVM Vendor URL

The JVM vendor URL.

4.5.3 JVM Version

The JVM version.

4.5.4 JVM Available

The JVM availability (month-year).

4.5.5 JVM Notes

Additional JVM Notes.

4.6 Other Software

Other Software Section

4.6.1 SW Vendor

The software vendor name.

4.6.2 SW Vendor URL

The software vendor URL.

4.6.3 Version

The version of the other software.

4.6.4 Available

The availability (month-year) of the other Software.

4.6.5 Bitness

The bitness of the other software, if applicable.

4.6.6 Notes

Additional notes about the other software.


5. Results Details

Details about max-jOPS and critical-jOPS calculations.

5.1 max-jOPS

Shows the last few RT (Response-Throughput) step levels close to max-jOPS. "Pass" means that the RT step level passed and "fail" means the system did not pass that RT step level. The last successful RT step level before the first failed RT step level is chosen as max-jOPS.
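
A minimal sketch of this selection rule, with hypothetical step level data (this code is not part of the benchmark kit):

// Illustrative sketch: choosing max-jOPS as the last passing RT step level
// before the first failed step level. The data values are hypothetical.
import java.util.LinkedHashMap;
import java.util.Map;

public class MaxJopsExample {
    public static void main(String[] args) {
        // RT step level (jOPS) -> pass/fail outcome, in increasing order of load
        Map<Integer, Boolean> stepLevels = new LinkedHashMap<>();
        stepLevels.put(9800, true);
        stepLevels.put(9900, true);
        stepLevels.put(10000, false);  // first failure
        stepLevels.put(10100, true);   // later passes do not count

        int maxJops = 0;
        for (Map.Entry<Integer, Boolean> level : stepLevels.entrySet()) {
            if (!level.getValue()) {
                break;                 // stop at the first failed step level
            }
            maxJops = level.getKey();  // last success seen so far
        }
        System.out.println("max-jOPS = " + maxJops);  // prints 9900
    }
}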

5.2 critical-jOPS

This is a more complex calculation. jOPS values at various SLAs (Service Level Agreements) are calculated using the data shown in the table "Last Success jOPS/First Failure jOPS for SLA points", as well as the RT step levels in between those two levels. The geometric mean of the jOPS values at these SLAs is the critical-jOPS metric. This metric can be 0 if the jOPS value for one or more of the five SLAs (10ms, 50ms, 100ms, 200ms, 500ms), detailed later in the report, is 0.
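
A minimal sketch of the final aggregation step, assuming the per-SLA jOPS values have already been derived as described above (the values and the class are hypothetical, not part of the benchmark kit):

// Illustrative sketch: critical-jOPS as the geometric mean of the jOPS values
// achieved at the five SLA response time thresholds. The inputs are hypothetical.
public class CriticalJopsExample {
    static long geometricMean(long[] slaJops) {
        double logSum = 0.0;
        for (long jops : slaJops) {
            if (jops == 0) {
                return 0;              // any missing SLA point forces the metric to 0
            }
            logSum += Math.log(jops);
        }
        return Math.round(Math.exp(logSum / slaJops.length));
    }

    public static void main(String[] args) {
        // jOPS at the 10ms, 50ms, 100ms, 200ms and 500ms SLAs (hypothetical values)
        long[] slaJops = {2100, 3500, 4200, 5000, 6400};
        System.out.println("critical-jOPS = " + geometricMean(slaJops));
    }
}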

5.3 Last Success jOPS/First Failure jOPS for SLA points

The first column lists the SLA points (different response time thresholds), while the first title row lists response time percentiles. The entry for a given SLA (for example 10000 us = 10ms) and percentile (for example the 99th percentile) has two data values in the format [Last Success jOPS/First Failure jOPS]. Last Success jOPS is the last successful RT step level whose 99th percentile response time over all samples was 10ms or less. If the 99th percentile response time was never 10ms or below, the value is "-". First Failure jOPS is the first RT step level where the 99th percentile response time over all samples was more than 10ms. The data points with a red background are used in the calculation of the critical-jOPS metric.

5.4 Number of probes

This is one of the validation criteria. The graph only shows RT phase step levels. jOPS for the RT step levels is on the x-axis and the number of probes as a percentage of total jOPS is on the y-axis (logarithmic scale). Two horizontal lines show the limits. To have good confidence in the response time data, a sufficient percentage of total jOPS must be issued as probes. For more details, please refer to the validation section of the Run and Reporting Rules document.

5.5 Request Mix Accuracy

This is one of the validation criteria. The total requests are issued so as to maintain a defined request mix. The graph only shows RT phase step levels. jOPS for the RT step levels is on the x-axis, and the y-axis shows the difference (actual % in the mix - expected % in the mix). For more details about the passing criteria, please refer to the validation section of the Run and Reporting Rules document.
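
A minimal sketch of the quantity plotted on the y-axis, using hypothetical request counts and an assumed expected mix (not part of the benchmark kit):

// Illustrative sketch: per-request-type deviation of the actual mix from the
// expected mix. All request type names, counts and percentages are hypothetical.
public class RequestMixDeviation {
    public static void main(String[] args) {
        String[] types    = {"TypeA", "TypeB", "TypeC"};
        long[]   issued   = {54_800, 30_300, 14_900};  // requests actually issued
        double[] expected = {55.0, 30.0, 15.0};        // expected share of the mix in %

        long total = 0;
        for (long count : issued) {
            total += count;
        }
        for (int i = 0; i < types.length; i++) {
            double actualPct = 100.0 * issued[i] / total;
            double deviation = actualPct - expected[i];   // value plotted on the y-axis
            System.out.printf("%s: actual %.2f%%, deviation %.2f%%%n",
                              types[i], actualPct, deviation);
        }
    }
}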

5.6 Rate of non-critical failures

This is one of the validation criteria. If there are no non-critical failures during the RT phase, only a message is printed. If the number of non-critical failures during the RT phase is greater than 0, a graph is shown. In the graph, jOPS for the RT step levels is on the x-axis and the number of non-critical failures for each RT step level is on the y-axis. Transaction Injectors (TxI) issue requests for the Backend(s) to process. For various reasons, a TxI may time out after waiting for a threshold; this is counted as a non-critical failure. For more details about the passing criteria, please refer to the validation section of the Run and Reporting Rules document.

5.7 Delay between performance status pings

This is one of the validation criteria. The x-axis is time in milliseconds (msec). The y-axis shows the delay time in msec. The validation criterion applies to the whole RT phase and not to individual RT step levels. Also, the minimum y-axis value is 5 sec, as that is the passing criterion, chosen to reduce the size of the .raw file for submission. If a user wants to see y-axis data starting from 0, the report needs to be generated at level-1, which contains the detailed graph. For more details about the passing criteria, please refer to the validation section of the Run and Reporting Rules document.

5.8 IR/PR accuracy

This graph shows the relationship between IR (Injection Rate), aIR (actual Injection Rate) and actual PR (Processed Rate). The graph shows all phases, starting from the HBIR (High Bound Injection Rate) search, through RT phase warm-up and the RT phase, to the validation phase at the end. The x-axis shows the iteration number, where an iteration is a time period for which IR/aIR/PR are evaluated. IR is the target injection rate, actual IR is the injection rate that could actually be issued for a given iteration, and PR is the total processed rate for that iteration. To pass an iteration, IR, aIR and PR must be within a certain percentage of each other. The y-axis shows how far actual IR and actual PR are from IR as the base. If they are within the low and high bounds, the iteration passed; otherwise it failed. A user will see many failures during the HBIR search. During the RT phase, until max-jOPS is found, there can be some failures, as a certain number of retries are allowed. For more details about the passing criteria, please refer to the Run and Reporting Rules document.
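
A minimal sketch of the per-iteration comparison, using a placeholder tolerance; the actual low and high bounds are defined in the Run and Reporting Rules document (this code is not part of the benchmark kit):

// Illustrative sketch of the per-iteration check: actual IR and PR are compared
// to the target IR as the base. The 2% tolerance used here is a placeholder only.
public class IrPrAccuracyCheck {
    static boolean iterationPasses(double targetIr, double actualIr, double processedRate) {
        double tolerance = 0.02;                     // placeholder tolerance
        double low  = 1.0 - tolerance;
        double high = 1.0 + tolerance;
        double irRatio = actualIr / targetIr;        // plotted relative to IR as base
        double prRatio = processedRate / targetIr;
        return irRatio >= low && irRatio <= high
            && prRatio >= low && prRatio <= high;
    }

    public static void main(String[] args) {
        System.out.println(iterationPasses(10_000, 9_950, 9_980));  // true
        System.out.println(iterationPasses(10_000, 9_950, 9_500));  // false: PR too low
    }
}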


6. Topology

This section covers the topology of the SUT and the driver system (Distributed category only). The first part shows a summary of the deployment of JVM and OS images across hardware systems. The later sub-sections detail the JVM instances across the OS images deployed for each hardware configuration of the SUT and the driver system (Distributed category only).


7. SUT or Driver configuration

This section covers how JVM instances are deployed inside OS images and how those OS images are deployed across hardware systems.

7.1 Hardware

For a given system, the hardware configuration describes the OS images deployed on it.

7.1.1 OS Images

The format is OS_image_type (number of them deployed on this system). OS_image_type should match one of the OS image configurations described in section 7.2.

7.1.2 Hardware Description

The name of the hardware as used in the product description section, for example "HW_1".

7.1.3 Number of Systems

The number of systems using exactly the same deployment.

7.1.4 SW Environment

Virtual or non-virtual.

7.1.5 Tuning

Any tuning.

7.1.6 Notes

Any notes.

7.2 OS image

For a given OS image, describes the JVM instances deployed in it.

7.2.1 JVM Instances

The format is a list of JVM instances in the form JVM_image_type (number of them deployed in this OS image). JVM_image_type should match one of the JVM instance configurations described in section 7.3.

7.2.2 OS Image Description

The name of the OS product as used in the product description section, for example "OS_1".

7.2.3 Tuning

Any tuning.

7.2.4 Notes

Any Notes.

7.3 JVM Instance

Describes a JVM Instance.

7.3.1 Parts of Benchmark

The name of the benchmark agent this JVM instance runs. It can be Composite (for the Composite category), or, for the MultiJVM and Distributed categories, Controller, TxInjector or Backend.

7.3.2 JVM Instance Description

The name of the JVM product as used in the product description section, for example "jvm_1".

7.3.3 Command Line

The command line parameters used to launch this JVM instance.
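
As an illustration only (the exact launcher options are documented in the SPECjbb2013 User Guide; the heap sizes and group/JVM identifiers below are hypothetical), a Backend JVM in a MultiJVM run might be launched with a command line similar to:

Example:
java -Xms24g -Xmx24g -jar specjbb2013.jar -m backend -G GRP1 -J JVM1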

7.3.4 Tuning

Any tuning.

7.3.5 Notes

Any notes.


8. Run Properties

This section lists the run properties that were set by the user.


9. Validation Details

Details about validation are listed here.

9.1 Validation Reports

Provides details about the different types of validation.

9.1.1 Compliance

The SPECjbb2013 Run and Reporting Rules document specifically lists the properties which can be set by the user. If the user-settable properties are set within the compliant range, this section prints the message "PASSED" for all agents.
If a user sets a non-settable property to something other than its default and/or sets a user-settable property outside the compliant range, that property and the affected agent are listed here along with a message that the run is INVALID.

9.1.2 Correctness

The benchmark also has data structures which must remain synchronized and must match certain criteria. If these criteria are not met, the run is declared INVALID.

9.2 Other Checks

Lists other checks for compliance, as well as the High Bound maximum and High Bound settled values from the HBIR (High Bound Injection Rate) search phase.


Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright 2007-2013 Standard Performance Evaluation Corporation (SPEC).
All Rights Reserved