(To check for possible updates to this document, please see http://www.spec.org/accel/Docs/ )
Overview
Click one of the following to go to the detailed contents about that item:
I. Introduction
II. Config file options for runspec
III. Config file options for specmake
IV. Config file options for the shell
V. Config file options for readers
VI. Using Feedback Directed Optimization (FDO)
VII. The config file preprocessor
VIII. Output files - and how they relate to your config file
IX. About Alternate Sources
X. Troubleshooting
Contents
I. Introduction
A. What is a config file? (Background: benchmark philosophy.)
B. What does a config file affect?
1. runspec
2. specmake
3. The shell
4. Readers of the results
5. The config file preprocessor
C. Config file structure
1. Comments and whitespace
2. Header section
3. Named sections, section markers, section specifiers
a. Precedence for the benchmark specifier
Order of differing sections does not matter
Order of the same section does matter
b. Precedence for the tuning specifier
c. Precedence for the extension specifier
Extension found in config file
Extension not found in config file
d. Combining specifier types
e. Precedence among section types
4. MD5 section
5. Shell-like "here documents" and backslash continued lines
6. Included files
D. Variable substitution
1. By runspec
a. At startup: $[variable]
b. During a run: $variable and ${variable}
c. Example: cleaning files before a training run
d. Example: submit to multiple nodes, Tru64 Unix
2. By the shell \$VARIABLE
a. Protecting shell variables
3. By specmake $(VARIABLE)
4. Limitations on variable substitution
5. Unsetting a variable with "%undef%"
II. Config file options for runspec
A. Precedence: config file vs. runspec command line
B. Options
action allow_extension_override backup_config basepeak build_in_build_dir check_md5 check_version command_add_redirect current_range delay deletework device difflines env_vars expand_notes expid ext fail fail_build fail_run feedback flagsurl http_proxy http_timeout idle_current_range ignore_errors ignore_sigint info_wrap_columns inherit_from iterations keeptmp line_width locking log_line_width log_timestamp mach mail_reports mailcompress mailmethod mailport mailserver mailto make make_no_clobber makeflags mean_anyway minimize_builddirs minimize_rundirs no_input_handler no_monitor nobuild notes_wrap_columns notes_wrap_indent output_format output_root plain_train platform power power_analyzer preenv rebuild reportable runlist section_specifier_fatal sendmail setprocgroup size src.alt strict_rundir_verify sysinfo_program table teeout temp_meter train_with tune use_submit_for_speed verbose version_url voltage_range
C. About sysinfo
1. The example sysinfo program
a. Location and updates
b. Turning the feature on or off
c. Output
d. Avoiding warnings about conflicting fields
e. Example: where does the example sysinfo program send its output?
2. Writing your own sysinfo program
III. Config file options for specmake
CC, CXX, FC
CLD, CXXLD, FLD
ONESTEP
OPTIMIZE, COPTIMIZE, CXXOPTIMIZE, FOPTIMIZE
PORTABILITY, CPORTABILITY, CXXPORTABILITY, FPORTABILITY...
RM_SOURCES
PASSn_CFLAGS, PASSn_CXXFLAGS, PASSn_FFLAGS
The syntax "+=" is available (but should be used with caution)
IV. Config file options for the shell
A. Options
bench_post_setup fdo_pre0 fdo_preN fdo_make_cleanN fdo_pre_makeN fdo_make_passN fdo_post_makeN fdo_runN fdo_postN post_setup submit
B. Using and Changing fdo Options
C. Using submit
1. Basic Usage
2. Useful features related to submit
3. Continuation of Submit Lines
Sidebar: About Quoting and Submit
4. Reporting of submit usage
V. Config file options for readers
A. Descriptive fields
hw_accel_connect hw_accel_desc hw_accel_ecc hw_accel_model hw_accel_name hw_accel_type hw_accel_vendor hw_avail hw_cpu_name hw_cpu_char hw_cpu_mhz hw_cpu_max_mhz hw_disk hw_fpu hw_memory hw_model hw_nchips hw_ncores hw_ncoresperchip hw_ncpuorder hw_nthreadspercore hw_ocache hw_other hw_pcache hw_power_{id}_cal_date hw_power_{id}_cal_label hw_power_{id}_cal_org hw_power_{id}_met_inst hw_power_{id}_connection hw_power_{id}_label hw_power_{id}_model hw_power_{id}_serial hw_power_{id}_setup hw_power_{id}_vendor hw_psu hw_psu_info hw_scache hw_tcache hw_temperature_{id}_connection hw_temperature_{id}_label hw_temperature_{id}_model hw_temperature_{id}_serial hw_temperature_{id}_setup hw_temperature_{id}_vendor hw_vendor license_num prepared_by sw_accel_driver sw_avail sw_base_ptrsize sw_compiler sw_file sw_os sw_state sw_other sw_peak_ptrsize tester test_sponsor
B. Field scoping and numbered continuation lines
C. Additional notes for the reader
1. Notes sections
notes_comp_NNN notes_port_NNN notes_base_NNN notes_peak_NNN notes_submit_NNN notes_os_NNN notes_plat_NNN notes_part_NNN notes_NNN
2. Note numbering
3. Additional tags
4. Links in notes sections
5. Attachments
VI. Using Feedback Directed Optimization (FDO)
A. The minimum requirement: PASSn_<language>FLAGS, PASSn_<language>OPTIMIZE, or fdo*n
B. Combining PASS*n and fdo*n; fake is your friend
C. Interaction with the config file feedback option
D. If the config file feedback option is used at multiple levels
E. Interaction with runspec --feedback
VII. The config file preprocessor
A. Defining macros
B. Un-doing macro definition
C. Using macros
D. Conditionals
1. %ifdef .. %endif
2. %ifndef .. %endif
3. %if .. %endif
4. %else
5. %elif
E. Informational directives
1. %warning
2. %error
VIII. Output files - and how they relate to your config file
A. Automatic backup of config files
B. The log file and verbosity levels
1. Useful Search Strings
2. About Temporary Debug Logs
3. Definitions of verbosity levels
C. Log file example: Feedback-directed optimization.
D. Help, I've got too many logs
E. Finding the build directory
F. Files in the build directory
G. For more information
IX. About Alternate Sources
A. Example: Applying a src.alt
B. Developing a src.alt (brief introduction)
X. Troubleshooting
Note: links to SPEC documents on this web page assume that you are reading the page from a directory that also contains the other SPEC documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents at one of the following locations:
SPEC config files provide very detailed control of testing. Before learning about these details, most users will find it helpful to begin with:
The runspec document discusses the primary user interface for running SPEC ACCEL; with this document, attention turns more toward how things work inside.
Q. This document looks big and intimidating. Where do I start?
A. Don't start here. Start with runspec.html. But, once you do read this document, please be sure to notice:
If you keep track of which options are addressed to which consumer, you will considerably ease your learning curve.
A config file contains:
A key decision that must be made by designers of a benchmark suite is whether to allow the benchmark source code to be changed when the suite is used.
If source code changes are allowed:
+ The benchmark can be adapted to the system under test.
+ Portability may be easier.
– But it may be hard to compare results between systems, unless some formal audit is done to ensure that comparable work is done.
If source code changes are not allowed:
+ Results may be easier to compare.
– It may take more time and effort to develop the benchmark, because portability will have to be built in ahead of time.
– Portability may be hard to achieve, at least for real applications. Simple loops of 15 lines can port with little effort, and such benchmarks have their uses. But real applications are more complex.
SPEC has chosen not to allow source code changes for SPEC ACCEL, except under very limited circumstances. Please see the run rules about when changes are allowed and for which benchmark suite.
By restricting source code changes, SPEC separates the activity of porting benchmarks, which has a goal of being performance neutral, from the activity of using the benchmarks, where the goal is getting the best score possible.
Are source code changes ever allowed? Normally, no. But if you discover a reason why you believe such a change is essential, SPEC wants to hear about it, and will consider such requests for a future revision of the suite. SPEC will normally not publish SPEC ACCEL results using modified source code, unless such modifications are unavoidable for the target environment, are reviewed by SPEC, are made available to all users of the suite, and are formally approved by a vote.
So, if source code changes are not allowed, but the benchmarks must be compiled in a wide variety of environments, can the users at least write their own makefiles, and choose -D options to select different environments? The answers to these two questions are "no" and "yes", respectively:
You do this in the config file, which contains a centralized collection of all the portability options and optimization options for all the benchmarks in the suite. The SPEC tools then automatically generate the makefiles for you.
The config file contains places where you can specify the characteristics of both your compile time and run time environments. It allows the advanced user to perform detailed manipulation of makefile options, but retains all the changes in one place so that they can be examined and reproduced.
The config file is one of the key ingredients in making results reproducible. For example, if a customer would like to run the SPEC ACCEL suite on her own SuperHero Model 4 and discover how close results are in her environment to the environment used when the vendor published a SPEC ACCEL result, she should be able to do that using only 3 ingredients:
A config file contains options targeting five distinct consumers:
To understand how to write a config file effectively, you need to understand which consumer you are addressing at any given point.
The above point seems worth emphasizing:
To understand how to write a config file effectively, you need to understand which consumer you are addressing at any given point.
This section gives you an overview of the consumers; more detail is included in later sections.
Various aspects of the operation of runspec can be affected by setting options within a config file. You'll find a list of these options in the table of contents for section II - including some that are available both on the runspec command line and some that can only be set within a config file.
For example, if michael.cfg includes the lines:
output_format = text,ps
tune = base
reportable = 1
runlist = opencl
then the defaults for the runspec command would change as specified. A user who types either of the following two commands would get precisely the same effect:
runspec --config=michael
runspec --config=michael --output=text,ps --tune=base --reportable opencl
The tool specmake is simply GNU make renamed to avoid any possible conflicts with other versions of make that may be on your system. The options commonly used for specmake are listed in the table of contents for section III.
For example, these config file lines:
CC = cc
CPORTABILITY = -DSPEC_LP64
OPTIMIZE = -O4
are written to the makefile set that is ultimately used to build the benchmark, and are interpreted by specmake.
Some config file lines define commands that are handed off to the shell or, on Windows, the command interpreter. The list of these is in the table of contents for section IV.
For example, consider a config file that contains:
fdo_pre0 = mkdir /tmp/joydeep/feedback; rm -f /tmp/joydeep/feedback/*
When using this config file, runspec will pass the above command to the shell prior to running a training run for feedback directed optimization. It is the shell that actually carries out the requested commands, not runspec, so the syntax of the command depends on whether you are using the Unix shell (/bin/sh) or the Windows command interpreter (cmd.exe).
Because runspec can cause arbitrary commands to be executed, it is important to read a config file you are given before using it.
If a SPEC ACCEL result is published (for example, at http://www.spec.org/), it is expected to contain all the information needed for a reader to understand exactly what was tested. Fields that are informative for readers are listed in the table of contents for section V.
For example, config file lines such as these are addressed to the human reader:
hw_avail = Aug-2012
sw_avail = Jul-2012
notes_base_015 = Note: Bios was set to use performance option.
In addition, for results published by SPEC, the config file itself is available to readers at http://www.spec.org/accel/. The config file is presented as you wrote it, with three exceptions (protected comments, the MD5 section, and rawfile corrections for reader fields). The config file is made available because it is so important to reproducing results, as described in the Introduction. The config file is saved on every run, as a compressed portion of the rawfile, and can be accessed with runspec --rawformat --output_format=config <rawfile>.
There is also a config file preprocessor, which is addressed via lines that begin with % in the first column.
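As a minimal sketch of what such lines look like (the macro name smp and the flags are illustrative, not taken from the suite; the full syntax is described in section VII):

```
%define smp
%ifdef %{smp}
OPTIMIZE = -O3 -parallel
%else
OPTIMIZE = -O3
%endif
```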
A config file contains:
About "scope": Every line of the config file is considered to be within the scope of one of the above three. Lines prior to the first section marker are in the scope of the header section. All other lines are either in the scope of the most recently preceding user-defined section marker, or else in the MD5 section.
A line within the scope of a named section may be overridden by a line within the scope of a different named section, according to rules that are described below.
Comment lines begin with a # (also known as a "hash mark"), and can be placed anywhere in a config file. You can also add comments at the end of a line, as shown below.
#
# This config file contains new tuning for the new C++ compiler
#
iterations = 1    # For quick testing. Reportable runs use 3.
OPTIMIZE = -O11   # Most go up to ten. These go to eleven.
If you need to include a hash mark that you do NOT want to be treated as the start-of-comment delimiter, put a backslash in front of it:
hw_model = Mr. Lee's \#1 Semi-Autonomous Unit
When the config file is saved as an encoded portion of the rawfile, the comments are included. But if a comment line begins with #>, it is a "protected comment" that will not be saved in the rawfile. Thus you could use # for most of your comments, and use #> for proprietary information, such as:
#> I didn't use the C++ beta version because of Bob's big back-end bug.
Blank lines can be placed anywhere in a config file.
Trailing spaces and tabs are stripped, unless they are preceded by a backslash:
CC_PATH=/path/with/no/trailing/spaces
That is turned into "/path/with/no/trailing/spaces". To preserve those trailing spaces, you'd simply add a backslash:
CC_PATH=/path/with/trailing/spaces\
That is turned into "/path/with/trailing/spaces ".
Spaces within a line are usually ignored. Of course, you wouldn't want to spell OPTIMIZE as OPT I MIZE, but you are perfectly welcome to do either of the following:
OPTIMIZE=-O2
OPTIMIZE = -O2
One place where spaces are considered significant is in notes, where the tools assume you are trying to line up your comments in the full disclosure reports. (Notes are printed in a fixed-width font.)
Spaces at the beginning of lines are ignored, except when attempting to address the preprocessor. Preprocessor directives always begin with a percent sign (%) in the first column. You can put spaces after the percent sign, if you wish, as shown by the examples below.
The header section is simply the first section, prior to the first occurrence of a named section.
Most attempts to address runspec itself must be done in the header section. For example, if you want to set reportable=1, you must do so before any occurrences of section markers.
Relaxed Header Restrictions: Options that are restricted to appear only in the header section may nevertheless be entered in a named section, if and only if that section's name is default=default=default=default: (or a shorter spelling thereof, such as just default:).
It is, usually, still a good idea to put these options up top, since they tend to be global in nature and config files are typically more maintainable when global options are kept together.
Nevertheless, there may be circumstances where it is more convenient to, effectively, 'return' to the header section, for example, if multiple preprocessor macros want to write text into both the header section and named sections.
(Internally, the tools still think of your option as a header section option, even if you take advantage of this relaxed syntax and enter it into default=default=default=default: instead. Therefore this document also retains the "header section" terminology.)
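For example, here is a sketch of how the relaxed syntax might be used (option values are illustrative):

```
reportable = 1                    # ordinary header section

openacc=default=default=default:
OPTIMIZE = -O3

default:                          # short spelling of default=default=default=default:
output_format = text              # a header-style option, accepted here too
```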
A named section is a portion of the config file that begins with a section marker and continues until the next section marker or the MD5 section is reached. The contents of the named section are applied based upon the precedence rules described in the following sections.
A "section marker" is a one- to four-part string of the form:
benchmark[,...]=tuning[,...]=extension[,...]=machine[,...]:
These are referred to below as the 4 "section specifiers". The allowed values for the section specifiers are:
benchmark:
    default
    opencl, openacc, openmp
    Any individual benchmark, such as 350.md
    A list of benchmarks, such as 350.md,124.hotspot
    A benchmark set, using one of the bset names found in $SPEC/benchspec/ACCEL or %SPEC%\benchspec\ACCEL
tuning:
    default
    base
    peak
    A list of tuning levels, such as base,peak
extension:
    default
    An arbitrary string, such as "cloyce-naturalblonde"
    A list of extensions, separated by commas
machine:
    default
    An arbitrary string [*]
    A list of machine types, separated by commas
Trailing default section specifiers may be omitted from a section marker. Thus all three of these section markers are equivalent:
363.swim=base=default=default:
363.swim=base=default:
363.swim=base:
[*] The "machine" specifier works similarly to the extension specifier, but it does not affect the name of the executable produced, which, in many environments, makes it less useful than the extension specifier. This document does not describe details of the usage of the "machine" specifier, other than to note that it exists; if you feel particularly courageous, you may experiment with it.
Warning: Please be aware that if you use a single config file to build with two different machine settings, you will likely overwrite the original binaries each time you change the machine, since the machine specifier does not affect the name of the generated executable.
Section markers can be entered in any order. Section markers can be repeated; material from identical section markers will automatically be consolidated. That is, you are welcome to start one section, start a different one, then go back and add more material to the first section.
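For instance, the following sketch (flags illustrative) has the same effect as writing both 352.ep=base: lines together in a single section:

```
352.ep=base:
OPTIMIZE = -xO4

openacc=base:
COPTIMIZE = -w

352.ep=base:                      # same marker as above; contents are consolidated
CC = cc
```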
By constructing section markers, you specify how you would like your options applied, with powerful defaulting and overriding capabilities. The next several sections walk through examples to demonstrate precedence, including how sections interact with each other.
For the benchmark specifier, the precedence is:
highest   named benchmark(s)
          suite name
lowest    default
Using default as the benchmark specifier
For example, consider this config file that only mentions default for the benchmark specifier:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
default=default=default=default:
OPTIMIZE = -xO1 -w
$ runspec --config=tmp | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO1 -w randdp.c
$
The config file above is designed for quick, simple testing: it runs only one benchmark, namely 352.ep, using the smallest (test) workload, runs it only once, uses only base tuning, outputs only the text-format (ASCII) report, and displays the build commands to the screen (teeout). To use it, we issue a runspec command, and pipe the output to grep to search for the actual generated compile command. (Alternatively, on Windows, we could use findstr on the generated log file).
The careful reader may ask, "Why does the runlist reference ep rather than 352.ep?" The answer is that the runlist can use benchmark numbers or any abbreviation that is sufficient for uniqueness; any of the following would have the same effect: "352.ep", "352", "ep", "352.e". This is the same rule as for the corresponding option on the command line.
The results show that the tuning applied was the expected -xO1 -w (which requests optimization level 1 and suppresses warnings, for the Oracle Solaris Studio Compilers). The tools have automatically added -c -o randdp.o to specify where the object file is to be written. The switches -DSPEC_ACCEL -DSPEC -DNDEBUG were also added automatically. The first two may enable benchmark code changes from SPEC's porting process (if any), and the last turns off C language "assert" statements (if any).
Using a named suite as the benchmark specifier
The next example differs from the previous one by adding a section marker with openacc, for the OpenACC suite, as the first section specifier:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
default=default=default=default:
OPTIMIZE = -xO2 -w
openacc=default=default=default:
OPTIMIZE = -xO3 -w
$ runspec --config=tmp | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO3 -w randdp.c
$
The second OPTIMIZE line is used above because the reference to the openacc suite is considered to be more specific than the overall default.
You can also mention the "bsets" found in $SPEC/benchspec/ACCEL or %SPEC%\benchspec\ACCEL. For example, you could reference all the C++ benchmarks by saying:
all_cpp=default=default=default:
OPTIMIZE = -xO3 -w
You can mention multiple sets:
all_c,all_cpp=default=default=default:
OPTIMIZE = -xO3 -w
Using a named benchmark as the benchmark specifier
Furthermore, we can add a specifier that mentions 352.ep by name:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
default=default=default=default:
OPTIMIZE = -xO2 -w
openacc=default=default=default:
OPTIMIZE = -xO3 -w
352.ep=default=default=default:
OPTIMIZE = -xO4 -w
$ runspec --config=tmp | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO4 -w randdp.c
$
The third OPTIMIZE line wins above, because it is included in the section that is considered to be the most specific.
You can name more than one benchmark if you wish, as in:
314.omriq,352.ep=default=default=default:
OPTIMIZE = -xO4 -w
Order of differing sections does not matter:
Let's change the example from the previous section to a different order.
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
352.ep=default=default=default:
OPTIMIZE = -xO4 -w
default=default=default=default:
OPTIMIZE = -xO2 -w
openacc=default=default=default:
OPTIMIZE = -xO3 -w
$ runspec --config=tmp | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO4 -w randdp.c
$
Notice above that the order of entry is not significant; it's the order of precedence from least specific to most specific.
Order of the same section does matter:
When a specifier is listed more than once at the same descriptive level, the last instance of the specifier is used. Consider this case:
352.ep=default=default=default:
OPTIMIZE = -xO4
350.md=default:
OPTIMIZE = -fast
352.ep=default=default=default:
OPTIMIZE = -xO3
The ending value of OPTIMIZE for 352.ep is -xO3, not -xO4.
For the tuning specifier, either base or peak has higher precedence than default.
Here is an example of its use:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base,peak
output_format = text
teeout = 1
default=default=default=default:
CC = /opt/SUNWspro/bin/cc -w
default=base=default=default:
CC = /update1/bin/cc -w
default=peak=default=default:
CC = /update2/bin/cc -w
$ runspec --config=tmp | grep randdp.c
/update1/bin/cc -w -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG randdp.c
/update2/bin/cc -w -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG randdp.c
$
In the above example, we compile ep twice: once for base tuning, and once for peak. Notice that in both cases the compilers defined by the more specific section markers have been used, namely /update1/bin/cc and /update2/bin/cc, rather than /opt/SUNWspro/bin/cc from default=default=default=default.
For the extension specifier, any named extension is at a higher precedence level than the default.
Using an extension found in the config file
The next example builds with either the library for the "bsdmalloc" memory allocation routines or the alternative multi-threaded "mtmalloc" routines.
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
default=default=default:
LIBS = -lslowmalloc
default=default=myke:
LIBS = -lbsdmalloc
default=default=yusuf:
LIBS = -lthread -lmtmalloc
$
$ runspec --config=tmp --extension=myke | grep randdp.o
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG randdp.c
cc ep.o print_results.o randdp.o c_timers.o wtime.o -lm -lbsdmalloc -o ep
$
$ runspec --config=tmp --extension=yusuf | grep randdp.o
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG randdp.c
cc ep.o print_results.o randdp.o c_timers.o wtime.o -lm -lthread -lmtmalloc -o ep
$
$ cd $SPEC/benchspec/ACCEL/352.ep/exe
$ ls -lt | head -3
total 1872
-rwxrwxr-x 1 myke staff 1909232 Dec 23 14:11 ep_base.myke
-rwxrwxr-x 1 myke staff 1909616 Dec 23 14:13 ep_base.yusuf
$
Notice above that two different versions of ep were built from the same config file, and neither one used slowmalloc, since the named extension is higher priority than the default. Both executables are present in the exe directory for 352.ep.
Using an extension that is not found in the config file
The previous section demonstrated use of the runspec switch --extension to select among extensions defined in the config file. But what if the extension on the command line is not mentioned in the config file? The example below continues immediately from the example just above:
$ runspec --config=tmp --extension=yusoff
runspec v1698 - Copyright 1999-2012 Standard Performance Evaluation Corporation
...
ERROR: The extension 'yusoff' defines no settings in the config file!
If this is okay and you'd like to use the extension to just change the
extension applied to executables, please put
    allow_extension_override = yes
into the header section of your config file.
By default, if you mention an extension on the runspec command line that does not exist in the config file, the tools refuse to build.
Extension override
But if you add allow_extension_override=yes to the top of the config file, then the tools will build or run with the extension you specified, using the same settings as they would have used if no extension had been entered on the runspec command line.
The next example continues with a config file that adds allow_extension_override to the previous example config file:
$ diff tmp.cfg tmp2.cfg
0a1
> allow_extension_override=yes
$
$ runspec --config=tmp2 --extension=yusoff | grep randdp.o
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG randdp.c
cc ep.o print_results.o randdp.o c_timers.o wtime.o -lm -lslowmalloc -o ep
$
$ cd $SPEC/benchspec/ACCEL/352.ep/exe
$ ls -lt | head -2
total 1872
-rwxrwxr-x 1 myke staff 1905232 Dec 23 14:18 ep_base.yusoff
Notice above that although the tools now consent to build with the requested extension, the library setting falls back to the default, since "yusoff" does not match any section markers.
If more than one section applies to a particular benchmark without disagreement among them, then all are applied.
Consider this example:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
default=default=default=default:
OPTIMIZE = -xO2 -w
CC = /opt/SUNWspro/bin/cc
LIBS = -lbsdmalloc
352.ep=default=default=default:
OPTIMIZE = -xO4 -w
default=peak=default=default:
CC = /update1/bin/cc
default=default=mt=default:
LIBS = -lthread -lmtmalloc
$ runspec --config=tmp --tune=peak --ext=mt | grep randdp.o
/update1/bin/cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO4 -w randdp.c
/update1/bin/cc ep.o print_results.o randdp.o c_timers.o wtime.o -lm -lthread -lmtmalloc -o ep
$
Notice above that all three sections applied: the section specifier for 352.ep, the specifier for peak tuning, and the specifier for extension mt.
If sections conflict with each other, the order of precedence is:
highest   benchmark
          suite
          tuning
lowest    extension
And this order can be demonstrated as follows:
$ cat tmp.cfg
runlist = ep
size = test
iterations = 1
tune = base
output_format = text
teeout = 1
makeflags = -j30
default=default=default=default:
OPTIMIZE = -xO1 -w
openacc=default=default=default:
OPTIMIZE = -xO2 -w
352.ep=default=default=default:
OPTIMIZE = -xO3 -w
default=peak=default=default:
OPTIMIZE = -xO4 -w
default=default=mt=default:
OPTIMIZE = -xO5 -w
357.csp=peak=default=default:
OPTIMIZE = -xO6 -w
357.csp=default=mt=default:
OPTIMIZE = -xO7 -w
$
$ runspec --config=tmp ep | grep randdp.c                           [1]
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO3 -w randdp.c
$ runspec --config=tmp --tune=peak ep | grep randdp.c               [2]
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO3 -w randdp.c
$ runspec --config=tmp --extension=mt ep | grep randdp.c            [3]
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO3 -w randdp.c
$
$ runspec --config=tmp --tune=base olbm | grep lbm.c                [4]
cc -c -o lbm.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO2 -w lbm.c
$ runspec --config=tmp --tune=peak nab | grep lbm.c                 [5]
cc -c -o lbm.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO2 -w lbm.c
$ runspec --config=tmp --extension=mt nabp | grep lbm.c             [6]
cc -c -o lbm.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO2 -w lbm.c
$
$ runspec --config=tmp --tune=base csp | grep sp.c                  [7]
cc -c -o sp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO2 -w sp.c
$ runspec --config=tmp --tune=peak csp | grep sp.c                  [8]
cc -c -o sp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO6 -w sp.c
$ runspec --config=tmp --extension=mt csp | grep sp.c               [9]
cc -c -o sp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO7 -w sp.c
$ runspec --config=tmp --tune=peak --extension=mt csp | grep sp.c   [10]
cc -c -o sp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO6 -w sp.c
$
Notice above that the named benchmark always wins: lines [1], [2], and [3]. If there is no section specifier that names a benchmark, but there is a section specifier that names a suite, then the suite wins: lines [4], [5], and [6]. If no section names the benchmark in combination with the requested tuning or extension, the suite default applies: line [7]. When a section does name the benchmark together with the requested tuning or extension, it applies: lines [8] and [9]. If sections for both tuning and extension apply to a named benchmark, tuning wins: line [10].
The final section of your config file is generated automatically by the tools when the benchmarks are compiled, and looks something like this:
__MD5__
352.ep=base=pgi-13.10=default:
# Last updated Mon Dec 23 14:13:36 2013
optmd5=a2822826b30e9fae4225ad414dcac759
baggage=
compile_options=\
@eNqNUF1rgzAUfc+vCPd5KaPQh0ktaHStmxqh9WUvwcV0ZFNT/Cjs3+/aVdYONhoCuefmfpxzUtuw\
uvjQe1Npag+9sU3nkK5vjeplOzSlaeVRt2b/6cI9EAw7LHFh/gCEiyRz6OFNKcrw2ql/ZikLtlnI\
pcd5GJ8BPmkQ+vl6wpQVY6eYU9YXbnM0pSnu1FAWi9mC/j7Lzg6t0ivCHcq5C+NWOAHhP4ls58IV\
BSDIjGf5Y+ytt/j3D51TqR+mfCOvqoEIh+LkKIleQkz+TRZIHKXPZyduVLW0r+9a9d0KY1bVU/pH\
wagtDiaht1PBtsTbbWQc+aOUqobvSSIfPbow6AuPco3k
exemd5=5e4d524cd3e300aafeb4b7a96a3421a4
The "MD5" is a checksum that ensures that the binaries referenced in the config file are in fact built using the options described therein. For example, if you edit the config file to change the optimization level for 352.ep, the next time the file is used for ep, the tools will notice the change and will recompile it.
You can optionally disable this behavior, but doing so is strongly discouraged. See the acerbic remarks in the description of check_md5, below.
If you would like to see which portions of your config file are used in computing the MD5 hash, run runspec with --debug=30 or higher, and examine the log file.
For published results, the published config file (from rawformat --output_format=config) does not include the MD5 section.
Shell-style "here documents" are supported for setting variables to multi-line values. Backslash-continued lines are also supported:
$ cat tmp2.cfg
expand_notes = 1
size = test
runlist = ep
iterations = 1
output_format = text
foo =<<EOT
This +
is a +
test +
EOT
bar = \
and +\
so +\
is +\
this+
notes01 = $foo
notes02 = $bar
$ runspec --config=tmp2 | grep txt
format: ASCII -> /ailuropoda/raj/spec/result/ACCEL_ACC.017.test.txt
$ grep + ../result/*017*txt
This +
is a +
test +
and + so + is + this+
$
Note: although the above forms of continued lines are supported, they are rarely used. The more common method of continuation is by appending a number to a field, as described in the section "Field scoping and numbered continuation lines".
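For reference, the numbered-continuation style looks like this (a hypothetical fragment; the field name and text are illustrative only):

```
notes_comp_001 = The compiler was installed from the vendor's
notes_comp_002 = December update DVD. The numeric suffixes
notes_comp_003 = order the lines of the multi-line field.
```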
It is possible to include another file in your config file. A typical use for this feature might be to keep all the software information in the main config file, but to include the hardware information about the current System Under Test (SUT) in another file. For example:
$ cat tmp.cfg
output_format = text
iterations = 1
size = test
sw_compiler = myC V1.4
sw_avail = Mar-2014
runlist = ep
include: SUT.inc
default=base:
OPTIMIZE = -O
$ cat SUT.inc
hw_model = SuperHero IV
hw_avail = Feb-2014
$ runspec --config tmp | grep txt
format: ASCII -> /manas/spec/result/ACCEL_ACC.160.test.txt
$ grep avail ../result/*160.test.txt
[...] Hardware availability: Feb-2014
[...] Software availability: Mar-2014
$
Notice above that the report mentions both the hardware and software dates.
You can do variable substitution using your config file. But, as described in the Introduction, the contents of a config file are directed to various consumers. Therefore, effective use of variable substitution requires you to be aware of which software is doing the substitution. Differing syntax is used for each.
Q. Wait a minute... all these choices for substitution? Which one do I want?

A. You probably want the preprocessor. Have a look at the example at the top of Section VII. If that looks like what you want, you're all set; otherwise, you'll have to think through which consumer you are addressing (runspec, specmake, or the shell) and pick your syntax accordingly.
Substitution by runspec itself happens in two rounds: the first occurs immediately after the config file is read; the second uses Perl variable interpolation during the run.
Substitution for variables of the form $[variable] happens immediately after the config file is read. Any value that's set in the config file and is visible in the scope where the variable is used can be substituted. Because of the named section scoping restriction, if you want to use variable substitution to note your optimization flags, the notes for the individual benchmarks must be in those benchmarks' sections:
350.md=peak=default=default:
PEAKFLAG=-gofast
# The following will turn into what you expect
notes_peak_350_1=I use $[PEAKFLAG]
350.md=base=default=default:
BASEFLAG=-besafe
# The following will turn into what you expect:
notes_base_350_1=I use $[BASEFLAG]
# The following will NOT WORK:
notes_base_350_2=My brother likes $[PEAKFLAG]
You can't substitute variables that don't exist or whose values aren't known when the config file is read.
Wrong:
ext = foo
default:
OPTIMIZE = -xO2
notes01 = my ext is $[ext]
default=default=bar:
OPTIMIZE = -xO1
notes02 = my ext is $[ext]
This doesn't work because the sorting of which extensions to use doesn't happen until after the config file is processed. In this particular example, it's obvious (to you) what the value should be, but the tools aren't as clever as you are.
Perhaps the most useful variable is the one for the top of the SPEC ACCEL tree, $[top], often found in contexts such as:
flagsurl = $[top]/myflagsdir/myflagsfile.xml
Variables of possible interest might include:
configpath | The location of your config file |
---|---|
dirprot | protection that is applied to directories created by runspec |
endian | 4321 for big endian, 1234 for little |
flag_url_base | directory where flags files are looked up |
OS | unix or windows |
os_exe_ext | exe for windows, nil elsewhere |
realuser | the user name according to the OS |
top | the top directory of your installed SPEC ACCEL tree |
username | the username for purposes of tagging run directories |
uid | the numeric user id |
You can substitute for most options that you enter into the config file, including: action, allow_extension_override, backup_config, basepeak, check_md5, check_version, command_add_redirect, config, copies, delay, deletework, device, difflines, env_vars, expand_notes, expid, ext, fake, feedback, flagsurl, http_proxy, http_timeout, ignore_errors, ignore_sigint, info_wrap_columns, iterations, line_width, locking, log_line_width, mach, mail_reports, mailcompress, mailmethod, mailport, mailserver, mailto, make, make_no_clobber, makeflags, mean_anyway, minimize_builddirs, minimize_rundirs, no_input_handler, no_monitor, notes_wrap_columns, notes_wrap_indent, output_format, output_root, plain_train, platform, power, power_analyzer, rawformat, rebuild, reportable, runlist, section_specifier_fatal, sendmail, setprocgroup, size, strict_rundir_verify, sysinfo_program, table, teeout, temp_meter, tune, use_submit_for_speed, username, verbose, version_url.
You can also print out the value of additional variables that you may have created.
Here is a sample config file that illustrates square bracket variable substitution:
$ cat x.cfg
expand_notes = 1
runlist = opencl
action = validate
myfriend = jamiemeow
output_root = /tmp
flagsurl = $[top]/Docs/flags/flags-advanced.xml
notes01 = Today, I am running $[runlist] in $[top] on a $[OS] system
notes02 = Today the flags file is $[flagsurl]
notes03 = Today, my favorite friend is $[myfriend]
$ runspec --config=x --fakereportable | grep txt
format: ASCII -> /tmp/result/ACCEL_OCL.002.txt
$ grep Today /tmp/result/ACCEL_OCL.002.txt
Today, I am running opencl in /spec/accel on a unix system
Today the flags file is /spec/accel/Docs/flags/flags-advanced.xml
Today, my favorite friend is jamiemeow
You can't use square brackets to substitute variables whose value changes during a run or build.
Wrong:
default=default=default=default:
# I executed ''
notes0 = I executed '$[command]'
What did you expect?
Don't worry -- you'll receive warnings if you use variables that the tools don't know about. It's up to you to heed them.
The second round uses Perl variable interpolation. Only Perl scalars (denoted by a leading $) can be interpolated. For example, notes001 below uses the log file number (generated by the tools) and the hardware availability date (which was set directly):
$ cat tmp.cfg
runlist = md
tune = base
size = test
iterations = 1
output_format = text
expand_notes = 1
hw_avail = May-2014
notes001 = This run is from log.$lognum with hw_avail $hw_avail
$ runspec -c tmp | grep txt
format: ASCII -> /spec/david/result/ACCEL_ACC.029.test.txt
$ grep with ../result/*029*txt
This run is from log.029 with hw_avail May-2014
$
In this case, $hw_avail could also have been substituted in the first round by writing it as $[hw_avail]. In general, for variable interpolation, the earlier the better.
To put text immediately after a variable, you need to make it possible for the parser to see the variable that you want, by using braces:
% tail -2 tmp.cfg
notes001 =You have done ${lognum}x runs tonight, go to bed.
% runspec -c tmp | grep txt
format: ASCII -> /john/result/ACCEL_ACC.103.test.txt
% grep done /john/result/ACCEL_ACC.103.test.txt
You have done 103x runs tonight, go to bed.
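The shell itself follows the same rule, which may help in remembering it. A minimal sketch with a made-up value:

```shell
# Braces mark where the variable name ends; without them, the shell would
# look for a variable named "lognumx" instead of "lognum".
lognum=103                       # hypothetical log file number
msg="You have done ${lognum}x runs tonight."
echo "$msg"
```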
Interpolation won't always do what you wish it might do: for example, some variables are only defined at certain times, and your submit or notes line might be interpolated at a different time. When debugging a config file that uses variable interpolation, you will probably find --size test useful.
Some things that you might choose to interpolate include:
baseexe | The first part of the executable name, which is <baseexe>_<tune>.<ext>. For example, in "miniGhost_base.foo", baseexe is "miniGhost". |
---|---|
benchmark | The number and name of the benchmark currently being run, for example 359.miniGhost |
benchname | The name of the benchmark currently being run, for example miniGhost |
benchnum | The number of the benchmark currently being run, for example 359 |
benchtop | The top directory for the benchmark currently being run, for example /spec/accel/benchspec/ACCEL/359.miniGhost |
command | The shell command line to run the current benchmark, for example ../run_base_test_foo.0000/miniGhost_base.foo --scaling 1 --nx 100 --ny 100 --nz 100 > testset.out 2>> testset.err |
commandexe | The executable for the current command, for example ../run_base_test_foo.0000/miniGhost_base.foo |
ext | The extension for the benchmark being run |
iter | The current iteration number |
logname | The complete log file name, for example /spec/accel/result/ACCEL.168.log |
lognum | The log file number, for example 168 |
tune | The tuning for the benchmark being run (base or peak) |
workload | The current workload number (within the iteration) |
For example, let's say you want to go for minimal performance. You might want to do this with the nice command. You can say:
submit=nice 20 '$command'
and the $command gets expanded to whatever would normally be executed but with nice 20 stuck in front of it.
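Conceptually, the expansion is simple text substitution. This sketch mimics it in plain shell; the path is hypothetical, and this is not how runspec actually implements it:

```shell
# What runspec would normally execute (hypothetical path), and the submit
# template from the config file:
command='../run_base_test_foo.0000/md_omp_base.foo'
submit='nice 20 $command'
# Replace the literal string $command with its value, as runspec does:
expanded=$(printf '%s\n' "$submit" | sed "s|\$command|$command|")
echo "$expanded"
```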
If you'd like a complete list of the variables that you can use in your commands (relative to the config file you're using), set runspec's verbosity to 35 or higher (-v 35) and do a run that causes a command substitution to happen, with expand_notes=1.
Perhaps a more useful example is this one, which directs the shell to clean files related to the current executable in the temporary directory, before a training run for feedback directed optimization:
fdo_pre0 = mkdir /tmp/pb; rm -f /tmp/pb/${baseexe}*
NOTICE in this example that although the commands are carried out by the shell, the variable substitution is done by runspec.
The following lines submit to multiple nodes on a Compaq AlphaServer SC Series Supercomputer running Tru64 Unix:
submit= echo "$command" > dobmk; prun -n 1 sh dobmk
command_add_redirect=1
In this example, the command that actually runs the benchmark is written to a small file, dobmk, which is then submitted to a remote node selected by prun. The parallel run command, prun, can execute multiple copies of a process, but in this case we have requested just one copy by saying -n 1. The SPEC tools will create as many copies as required.
The command_add_redirect is crucial. What happens without it?
$ head prun1.cfg
submit= echo "$command" > dobmk; prun -n 1 sh dobmk
ext = prun
action = validate
runlist = md
size = test
use_submit_for_speed = 1
iterations = 1
ignore_errors = 1
output_format = text
$ runspec -c prun1.cfg
...
$ cd $SPEC/benchspec/ACCEL/350.md/run/run_base_test_prun.0000/
$ cat dobmk
../run_base_test_prun.0000/md_omp_base.prun
$
Now let's use command_add_redirect, and see how dobmk changes:
$ diff prun1.cfg prun2.cfg
1a2
> command_add_redirect=1
$
$ runspec -c prun2.cfg
...
$ cd $SPEC/benchspec/ACCEL/350.md/run/run_base_test_prun.0000/
$ cat dobmk
../run_base_test_prun.0000/md_omp_base.prun > 1.out 2>> 1.err
$
Notice that with command_add_redirect=1, the substitution for $command includes both the name of the executable and the redirections for standard output and standard error. This is needed because otherwise those files would not be connected to md on the remote node. That is, the former generates [*]:
echo "md_omp_base.prun " > dobmk; prun -n 1 sh dobmk > 1.out 2>> 1.err
And the latter generates [*]:
echo "md_omp_base.prun > 1.out 2>> 1.err" > dobmk; prun -n 1 sh dobmk
[*] The picky reader may wish to know that the examples were edited for readability: a line wrap was added, and the directory string .../run_base_test_prun.0000/ was omitted. The advanced reader may wonder how the above lines were discovered: for the former, we found out what runspec would generate by going to the run directory and typing specinvoke -n, where -n means dry run; in the latter, we typed specinvoke -nr, where -r means that $command already has device redirection. For more information on specinvoke, see utility.html.
Substitution by the shell - or by the windows command interpreter - uses backslash dollar sign. The next two sections explain why both punctuation marks are required, and then provide an example.
Because Perl variables look a lot like shell variables, you need to specially protect shell variables if you want to prevent Perl from trying to interpret them. Notice what happens with the protected and unprotected versions:
$ cat tmp.cfg
runlist = md
size = test
tune = base,peak
iterations = 1
output_format = text
teeout = 1
expand_notes = 1
use_submit_for_speed = 1
default=peak=default=default:
submit = echo "home=$HOME; spec=$SPEC;" > /tmp/chan; $command
default=base=default=default:
submit = echo "home=\$HOME; spec=\$SPEC;" > /tmp/nui; $command
$ runspec --config=tmp > /dev/null
$ cat /tmp/chan
home=; spec=;
$ cat /tmp/nui
home=/home/chris; spec=/spec/accel;
$
In the first submit command, $HOME and $SPEC were gobbled up by runspec. But since those are not the names of variables that can be interpolated, empty strings were the result. In the second command, the backslashes prevented runspec from interpreting the variables, so they were seen by the shell instead.
By default, submit may be applied to runs. In this example this capability was redundantly enabled by setting
use_submit_for_speed = 1
in the configuration file.
Variables with a dollar sign and parentheses, aka "round brackets", are substituted by specmake. For example:
COMPILER_DIR = /usr/local/bin/
CC           = $(COMPILER_DIR)cc
For a more extensive example of variable substitution handled by specmake, see the example in the file $(SPEC)/config/example-advanced.cfg. Search that file for LIBS, and note the long comment which provides a walk-through of a complex substitution handled by specmake.
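Substitutions can also be chained, since specmake resolves nested references at build time, as GNU make does. A hypothetical fragment (the variable names and flags are illustrative only):

```
default=default:
EXTRA_LIBS = -lmopt
LIBS       = -lm $(EXTRA_LIBS)
# At build time, specmake expands $(LIBS) to: -lm -lmopt
```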
Deprecated feature alert: Although it is also possible to pass information to specmake using curly brackets (for example, ${SPECMAKE}), doing so is not recommended. Instead, consistently use curly brackets to address runspec and round brackets to address specmake. A future version of runspec may insist on interpolating curly brackets itself, rather than allowing specmake to do so.
Once runspec hands control over to specmake or to the shell, the results of further substitution are invisible to runspec. For this reason, you can't say:
Wrong:
MYDIR = /usr/gretchen/compilers
FC = $(MYDIR)/f90
notes_comp_001 = compiler: $(FC)
However, there are a couple of ways to get around this restriction. The best way for global settings is to use the preprocessor:
%define MYDIR /usr/gretchen/compilers
FC = %{MYDIR}/f90
notes_comp_001 = compiler: %{MYDIR}/f90
That leaves a little to be desired, though, doesn't it? If your Fortran compiler is changed to 'f2001', you still need to remember to change it in two places. You could of course define a whole macro for this:
%define MYFC /usr/gretchen/compilers/f90
FC = %{MYFC}
notes_comp_001 = compiler: %{MYFC}
But what if you have a config file where FC might be set in multiple places, and you really want to know how it was set right here? In that case, use a combination of the preprocessor and variable substitution:
$ cat right_here.cfg
%define MYDIR /usr/gretchen/compilers
default=base:
FC = %{MYDIR}/f2001
notes_comp_001 = For base, we used this Fortran: $[FC]
363.swim=peak:
FC = %{MYDIR}/f77
notes_comp_002 = For 363.swim peak, we used this Fortran: $[FC]
$ runspec -c right_here.cfg --fakereportable openacc | grep txt
format: ASCII -> /usr/gretchen/spec/result/ACCEL_ACC.179.test.txt
$ show ../result/*179*.txt
Compiler Invocation Notes
-------------------------
For base, we used this Fortran: /usr/gretchen/compilers/f2001
For 363.swim peak, we used this Fortran: /usr/gretchen/compilers/f77
It is sometimes useful to be able to undo the setting of a variable that is defined in a lower-precedence section. This is easily accomplished using the special value '%undef%':
$ cat gnana.cfg
teeout = yes
action = build
runlist = ep
default=default:
OPTIMIZE = -O
COPTIMIZE = -fast
352.ep=peak:
COPTIMIZE = %undef%
$ runspec --config=gnana --tune=base | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -O -fast -w randdp.c
$ runspec --config=gnana --tune=peak | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -O -w randdp.c
$ go ep
/spec/gnana/test/benchspec/ACCEL/352.ep
$ cd build/build_peak_none.0000/
$ grep OPT Make*
Makefile.spec:COPTIMIZE =
Makefile.spec:OPTIMIZE = -O
As you can see in the peak compilation, the '-fast' flag was not present because the setting for COPTIMIZE had been deleted.
This section documents options that control the operation of runspec itself.
In the list that follows, some items are linked to runspec.html, because they can be specified either in a config file, or on the runspec command line.
If an option can be specified both on the runspec command line and in a config file, the command line wins over settings in the header section, but loses to settings in named sections. Effectively, the order of precedence is:
named sections (highest)
command line
header section (lowest)
For an example of these precedence rules in action, see section VI.E. on --feedback
In the table that follows, the "Use In" column indicates where the option can be used:
H | use only in header section |
N | use only in a named section |
H,N | can be used in both the header section and in named sections. The item can therefore be applied on a global basis, and/or can be applied to individual benchmarks. |
Option | Use In | Default | Meaning |
---|---|---|---|
action | H | validate | What to do. |
allow_extension_override | H | 0 | The runspec command can use --extension to select among different options in a config file, as mentioned above. But what if the extension mentioned on the runspec command line does not occur in any section marker? By default, that is treated as an error; setting this option to 1 allows the run to proceed with the requested extension anyway. See "Extension not found in config file", above. |
backup_config | H | 1 | When updating the MD5 hashes in the config file, make a backup copy first. Highly recommended to defend against full-file-system errors, system crashes, or other unfortunate events. |
basepeak | H,N | 0 | Use base binary and/or base result for peak. If applied to the whole suite (in the header section), then only base is run, and its results are reported for both the base and peak metrics. If applied to a single benchmark, the same binary will be used for both base and peak runs, and the median of the base run will be reported for both. |
build_in_build_dir | H | 1 | When set, put build directories in a subdirectory named benchspec/ACCEL/nnn.benchmark/build/build... (Unix) or benchspec\ACCEL\nnn.benchmark\build\build... (Windows). Specifying '0' will cause the build directories to be placed in the run tree: benchspec/ACCEL/nnn.benchmark/run/build... (Unix) benchspec\ACCEL\nnn.benchmark\run\build... (Windows) Why are build directories separated? Benchmarks are built in directories named nnn.benchmark/build rather than under the benchmark's run subdirectory in order to make it easier to copy, back up, or delete build and run directories separately from each other. It may also make problem diagnosis easier in some situations, since your habit of removing all the run directories will no longer destroy essential evidence 10 minutes before the compiler developer says "Wait - what exactly happened at build time?". |
check_md5 | H | 1 | Runspec uses MD5 hashes to verify that executables match the config file that invokes them, and if they do not, runspec forces a recompile. You can turn that feature off by setting check_md5=0. Warning: It is strongly recommended that you keep this option at its default, '1' (that is, enabled). If you disable this feature, you effectively say that you are willing to run a benchmark even if you don't know what you did or how you did it -- that is, you lack information as to how it was built! The feature can be turned off because it may be useful to do so sometimes when debugging (for an example, see env_vars, below), but it should not be routinely disabled. Since SPEC requires that you disclose how you build benchmarks, reportable runs (using the command-line switch --reportable or config file setting reportable=1) will cause check_md5 to be automatically enabled. |
check_version | H | 0 (1 for reportable runs) | When set, before doing a reportable run, runspec will download a small file (~15 bytes) from www.spec.org containing the current version of the suite and the date it was released, and compare your copy against it. In this way, you can be notified if the version of the suite that you're using is out of date. Setting this variable to 0 disables the check. If you'd like to check a local file instead, you can modify version_url to point to your internal copy. If you would like to check your version for a NON-reportable run, you will need to add --check_version to your command line; setting check_version=1 in the config file only causes the check for reportable runs. |
command_add_redirect | H | 0 | If set, the generated $command will include redirection operators (stdout, stderr), which are passed along to the shell that executes the command. If this variable is not set, specinvoke does the redirection itself. This option is commonly used when using the submit command; please see the example in section I.D.1.d. |
current_range | H,N | none | Set the maximum current in amps to be used by the power analyzer(s). This can be used to control the settings on a per benchmark level (named section) or across all benchmarks (header section). |
delay | H,N | 0 | Insert a delay of the specified number of seconds before and after benchmark execution. This delay does not contribute to the measured runtime of the benchmark. This delay is also not available in a reportable run. |
deletework | H,N | 0 | If set to 1, always delete existing benchmark working directories. An extra-careful person might want to set this to ensure no unwanted leftovers from previous benchmark runs, but the tools are already trying to enforce that property. |
device | H,N | none | Allows the user to set the accelerator device number to use. For OpenCL, the value is passed into the executable. For OpenACC, the environment variable ACC_DEVICE_NUM is set before running. For OpenMP, the environment variable OMP_DEFAULT_DEVICE is set before running. |
difflines | H,N | 10 | Number of lines of differences to print when comparing results. |
env_vars | H,N | 0 | If this variable is set to 1, then environment settings can be changed for benchmarks using ENV_* options in the config file. For example, consider the following executable, which fails because the library directory has been removed: $ ls -l miniGhost_base.tmp -rwxrwxr-x 1 david ptg 3442856 Dec 23 14:23 miniGhost_base.tmp $ ldd miniGhost_base.tmp | head -2 libdl.so.2 => /lib64/libdl.so.2 libnuma.so => (file not found) $ But if we add the following to the config file: check_md5=0 env_vars=1 359.miniGhost: ENV_LD_LIBRARY_PATH=/spec/david/lovecraft/lib then miniGhost gets an LD_LIBRARY_PATH which points to the new library directory, and the benchmark runs successfully. (The addition of check_md5=0 allows us to change the settings for 359.miniGhost without triggering a rebuild.) See the run rules for requirements on setting the environment. If you are trying to do a reportable run, perhaps the preenv feature is what you're looking for. Which environment? If you are attempting to communicate settings from your shell environment into runspec, this is not the feature that you are looking for. Try the config file preprocessor instead. The env_vars option and ENV* are about communication from the config file to the environment of the invoked benchmark. When developing a config file that uses env_vars, you may find it useful to set the verbosity level to 35 (or higher), which will cause the tools to log environment settings. |
expand_notes | H | 0 | If set, will expand variables in notes. This capability is limited because notes are NOT processed by specmake, so you cannot do repeated substitutions. You'll find some suggestions above. |
expid | H | '' | If set to a non-empty value, will cause executables, run directories, results, and log files to be put in a subdirectory (with the same name as the value set) in their normal directories. For example, setting expid = CDS will cause benchmark binaries to end up in exe/CDS, run directories to end up in run/CDS, and results and logs in $SPEC/result/CDS. |
ext | H | none | Extension for executables created. This may not be set to any value that contains characters other than alphanumerics, underscores, hyphens, or periods. |
fail | H,N | 0 | If set, will cause a build or run to fail. |
fail_build | H,N | 0 | If set, will cause a build to fail. For example, you could say something like this: 350.md=default: #> I am posting this config file for use by others in the #> company, but am forcing it to fail here because #> I want to force users to review this section. #> #> Once you find your way here, you should test whether #> bug report 234567 has been fixed, by using the first #> line below. If it has not been fixed, then use the #> second. In either case, you'll need to remove the #> fail_build. #> #> - Pney Guvaxre #> Boomtime, the 66th day of Confusion in the YOLD 3172 # OPTIMIZE = -Osuperduper # OPTIMIZE = -Omiddling fail_build = 1 In the example above, the build is forced to fail until the user examines and modifies that section of the config file. Notice that Pney has used protected comments to cause the comments about the internal bug report to disappear if the config file were to be published as part of a reportable run. |
fail_run | H,N | 0 | If set, will cause a run to fail. |
feedback | H,N | 1 | The feedback option applies an on/off switch for the use of feedback directed optimization (FDO), without specifying how the feedback will be done. The interaction between feedback and the other FDO-related options is described in section VI, below. |
flagsurl | H | none | If set, retrieve the named URL or filename and use that as the "user" flags file. If the special value "noflags" is used, runspec will not use any file and (if formatting previously run results) will remove any stored file. For more about automated processing of flags, see flag-description.html. If you want to list more than one flagsfile, the recommended method is by using numbered continuation lines, for example: flagsurl1 = mycompiler.xml flagsurl2 = myplatform.xml Using other methods (such as backslash continuation) to specify multiple flags files may appear to work, but may result in unexpected differences between the original config file and the config file as written by output format config. |
http_proxy | H | none | In some cases, such as when doing version checks and loading flag description files, runspec will use HTTP or FTP to fetch a file. If you need to specify the URL of a proxy server, this is the variable to use; by default, no proxy is used. Note that this setting overrides the value of the http_proxy environment variable. For example, one might set: http_proxy = http://webcache.tom.spokewrenchdad.com:8080 Note: if an FTP proxy is needed, it must be set in the ftp_proxy environment variable; there is no corresponding config file setting. Config files as posted at www.spec.org/accel will not include whatever you put on this line. |
http_timeout | H | 30 | This is the amount of time (in seconds) to wait while attempting to fetch a file via HTTP or FTP. If the connection cannot be established in this amount of time, the attempt will be aborted. |
idle_current_range | H | none | Set the maximum current in amps to be used by the power analyzer(s) for the idle power measurement. |
ignore_errors | H | 0 | Ignore certain errors which would otherwise cause the run to stop. Very useful when debugging a new compiler and new set of options: with this option set, you'll find out about all the benchmarks that have problems, instead of only finding out about the first one. |
ignore_sigint | H | 0 | Ignore SIGINT. If this is set, runspec will attempt to continue running when you interrupt one of its child processes by pressing ^C (assuming that you have ^C mapped in the common way). Note that this does NOT cause runspec itself to ignore SIGINT. |
info_wrap_columns | H | 50 | When set to a value greater than 0, attempts to split non-notes informational lines such that they are no longer than info_wrap_columns columns wide. Lines are split on whitespace, and newly created lines are guaranteed to have at least the same indentation as the original line. If a line contains an item that is longer than info_wrap_columns, a warning is logged and the original line is left unchanged. |
inherit_from | N | '' | If set within a benchmark section, allows explicit inheritance of settings from another benchmark's section. The section to be inherited from is referenced using colons between the four section specifiers. Other inheritance mechanisms continue to work. Effectively, the referenced benchmark is the second highest priority -- second only to items specifically mentioned in the referring section. An example may help to clarify these points: $ cat -n tmp.cfg 1 iterations = 1 2 size = test 3 teeout = yes 4 5 default=default: 6 OPTIMIZE = -xO1 7 8 openacc=default: 9 OPTIMIZE = -xO2 10 LIBS = -lbsdmalloc 11 12 352.ep=default: 13 OPTIMIZE = -xO3 14 CC = /update1/cc 15 16 370.bt=peak: 17 inherit_from = 352.ep:default:default:default 18 CC = /update2/cc $ runspec --config=tmp --tune base bt | grep bt.o cc -c -o bt.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO2 bt.c cc -xO2 add.o adi.o error.o exact_rhs.o exact_solution.o initialize.o rhs.o solve_subs.o x_solve.o y_solve.o z_solve.o print_results.o set_constants.o bt.o verify.o -lm -lbsdmalloc -o bt $ runspec --config=tmp --tune peak bt | grep bt.o /update2/cc -c -o bt.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xO3 bt.c /update2/cc -xO3 add.o adi.o error.o exact_rhs.o exact_solution.o initialize.o rhs.o solve_subs.o x_solve.o y_solve.o z_solve.o print_results.o set_constants.o bt.o verify.o -lm -lbsdmalloc -o bt In the above example, the peak build of 370.bt takes CC from its own section (/update2/cc), OPTIMIZE from the inherited 352.ep section (-xO3), and LIBS from ordinary inheritance of the openacc section (-lbsdmalloc).
Line 17 above could have been simplified. Just as trailing default section specifiers can be omitted at the original definition points, as explained above, they can also be omitted on an inherit_from option. |
iterations | H,N | 3 | Number of iterations to run. |
keeptmp | H | 0 | Whether or not to keep various temporary files. If you leave keeptmp at its default setting, temporary files will be automatically deleted after a successful run. If not, temporary files may accumulate at a prodigious rate, and you should be prepared to clean them up by hand. |
line_width | H | 0 | Line wrap width for screen. If left at the default, 0, then lines will not be wrapped and may be arbitrarily long. |
locking | H | 1 | Try to use file locking to avoid race conditions, e.g. if more than one copy of runspec is in use. Although performance tests are typically done with only one copy of runspec active, it can be handy to run multiple copies if you are just testing for correctness, or if you are compiling the benchmarks. |
log_line_width | H | 0 | Line wrap width for logfiles. If your editor complains about lines being too long when you look at logfiles, try setting this to some reasonable value, such as 80 or 132. If left at the default, 0, then lines will not be wrapped and may be arbitrarily long. |
log_timestamp | H | 0 | Whether or not to prepend time stamps to log file lines. |
Option | Use In | Default | Meaning |
mach | H | default | Default machine ID. This may not be set to any value that contains characters other than alphanumerics, underscores, hyphens, or periods. See the warning in the description of section specifiers, above. |
mailcompress | H | 0 | When using the 'mail' output format, turning this on will cause the various report attachments to be compressed with bzip2. |
mailmethod | H | smtp | When using the 'mail' output format, this specifies the method that should be used to send the mail. On UNIX and UNIX-like systems, there are three choices: 'smtp' (communicate directly with an SMTP server over the network), 'mail' (try using mail(1) if available), and 'sendmail' (try invoking sendmail directly). On Windows systems, only 'smtp' is available. SMTP is the recommended setting. |
mailport | H | 25 | When using the 'mail' output format, and when the mailmethod is 'smtp', this specifies the port to use on the mail server. The default is the standard SMTP port and should not be changed. |
mailserver | H | 127.0.0.1 | When using the 'mail' output format, and when the mailmethod is 'smtp', this specifies the IP address or hostname of the mailserver through which to send the results. |
Option | Use In | Default | Meaning |
mailto | H | '' | The address or addresses to which results should be sent when using the 'mail' output format. If multiple addresses are specified, they should be separated by commas or whitespace. Each address should consist only of the name@domain part (i.e. no "full name" type info). The addresses are not checked for correct formatting; if a mistake is made, the results may be sent to an unknown location. Think: comp.arch. OK, probably not there, but seriously, be careful about security on this one. Config files as posted at www.spec.org/accel will not include whatever you put on this line (thus, spambots will not see the contents of this field). Note that to get your reports mailed to you, you need to specify both mail as an output_format and an address to which they should be mailed. For example:

[email protected]
output_format=text,mail

If no addresses are specified, no mail will be sent. |
mail_reports | H | all | The list of report types to mail. The format and possible values are the same as for output_format, with the addition of log, which will cause the current log file to be sent. The default is for all files associated with the run to be mailed; this includes whatever you listed as your desired output_format, plus log (the log file) and rsf (the rawfile). You can cut your email down to the bare essentials with something like this:

[email protected]
output_format=text,mail
mail_reports=text

If none of the requested report types were generated, no mail will be sent. |
Option | Use In | Default | Meaning |
make | H,N | specmake | Name of make executable. Note that the tools will enforce use of specmake for reportable results. |
make_no_clobber | H,N | 0 | Don't delete directories when building executables. This option should only be used for troubleshooting a problematic compile. The tools will not allow you to use this option when building binaries for a reportable result. Note that you could issue multiple successive runspec commands with this option set (either in the config file, or with the --make_no_clobber switch), and the build directories will be preserved. But once you remove make_no_clobber (allowing it to default back to 0), then the tools will attempt a normal build with a fresh build directory. |
makeflags | H,N | '' | Extra flags for make (such as -j). Set this to -j n where n is the number of concurrent processes to run during a build. Omitting n or setting it to zero unlimits the number of jobs that will be run in parallel. (Use of -j in conjunction with ONESTEP will still result in successful builds, but they will be necessarily serialized unless your compiler implements the parallelism itself.) Use with care! Other flags should be used here only if you are familiar with GNU make. Note that requesting a parallel build with makeflags = -j N causes multiple processors to be used at build time. It has no effect on whether multiple processors are used at run time. |
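For example, a header-section line like the following enables parallel builds (the job count of 4 is purely illustrative; choose a value appropriate for your build machine):

```
# Header section: run up to 4 make jobs in parallel during builds.
# This affects build time only; run-time processor usage is unchanged.
makeflags = -j 4
```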
Option | Use In | Default | Meaning |
mean_anyway | H | 0 | Calculate mean even if invalid. DANGER: this will write a mean to all reports even if no valid mean can be computed (e.g. half the benchmarks failed). A mean from an invalid run is not "reportable" (that is, it cannot be represented in public as the SPEC metric). |
minimize_rundirs | H | 0 | Try to keep working disk size down. Cannot be used in a reportable run. |
minimize_builddirs | H | 0 | Try to keep working disk size down during builds. |
nobuild | H | 0 | Do not attempt to build benchmarks. Useful to prevent attempts to rebuild benchmarks that cannot be built. Also comes in handy when testing whether proposed config file options would potentially force an automatic rebuild. |
no_monitor | H,N | '' | Exclude the listed workloads from monitoring via the various monitor_* hooks. |
Option | Use In | Default | Meaning |
no_input_handler | H,N | close | Method to use to simulate an empty input. Choices are:
Normally, this option should be left at the default. If a reportable run for ACCEL uses this feature, an explanation should be provided as to why it was used. |
notes_wrap_columns | H | 0 | When set to a value greater than 0, attempts to split notes lines such that they are no longer than notes_wrap_columns columns wide. Lines are split on whitespace, and newly created lines are guaranteed to have at least the same indentation as the original line. If a line contains an item that is longer than notes_wrap_columns, a warning is logged and the original line is left unchanged. |
notes_wrap_indent | H | ' ' | When line wrapping is enabled (see notes_wrap_columns), this is the string that will be prepended to newly created lines after the indentation from the original line is applied. The default is four spaces, but it can be set to any arbitrary string. |
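For example, the following header-section lines wrap long notes at 78 columns and mark continuation lines with a visible prefix (both values are illustrative):

```
# Wrap notes lines longer than 78 columns; continuation lines keep
# the original line's indentation and then get this prefix.
notes_wrap_columns = 78
notes_wrap_indent  = '  -> '
```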
Option | Use In | Default | Meaning |
output_format | H | all | Format for reports. Valid options are listed at runspec.html under --output_format; major options include txt (ASCII text), html, pdf, and ps. You might prefer to set this to txt if you're going to be doing lots of runs, and only create the pretty reports at the end of the series. See also the information in runspec.html about --rawformat. |
output_root | H | '' | When set to a non-empty value, causes all files written (other than config files) to be rooted in the directory named by the value, instead of being rooted in $SPEC. For example, setting output_root = /tmp/foo will cause results and logs to be deposited in /tmp/foo/result. This also applies to benchmark binaries and run directories. This feature can be used to easily allow multiple people to access a single benchmark installation to which (with one exception) they do not need write access. The exception is $SPEC/config. Because the setting for the location comes from the config file, the config files still live under $SPEC. On Unix systems, you might choose to set the permissions on the config subdirectory to 1777 (which is the same as /tmp on many systems). For an example of the use of output_root, see the section on it in runspec.html. |
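A minimal sketch (the path is hypothetical):

```
# Header section: root all written files (results, logs, binaries,
# run directories) under /tmp/myrun instead of $SPEC.
output_root = /tmp/myrun
```

With this setting, reports and logs land under /tmp/myrun/result rather than $SPEC/result.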
Option | Use In | Default | Meaning |
plain_train | H,N | 1 | When true (set to 1, 'yes', or 'true'), does not apply any submit commands to the feedback training run. It also causes the monitor_* hooks to be ignored for the feedback training run. |
platform | H,N | none | Allows the user to set the accelerator platform name to use. For OpenCL, the value is passed into the executable. For OpenACC, the environment variable ACC_DEVICE_TYPE is set before running. This setting is not used for OpenMP. |
power | H | none | When this is set to yes, it tells the tools to collect power data during the run. |
power_analyzer | H | none | This provides a list of names that are associated with the power meters the tools will use to communicate with the SPEC power daemon. These names will then be used in the descriptive fields options which relate to describing the power meters. When used with the descriptive fields, only the letters and numbers are used in the {id} portion of the field. |
preenv | H | 1 | Use preENV_ lines in the config file. When this option is set (the default), lines of the form preENV_<variable> = <value> will cause runspec to set the specified environment variable to value and re-exec runspec to perform the run. The restart is done in order to enforce the "unchanging environment" run rule. Multiple preENV_ settings may appear in the config file. For an example of the use of preENV settings, see the discussion of $LD_LIBRARY_PATH in the description of make_bundle. Note that both the preenv option and any preENV_<variable> = <value> lines must appear in the header section of the config file, where the entire run is affected. If you are looking for a way to affect the environment of an individual benchmark, try env_vars. |
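As a sketch (the variable and path are illustrative only), the header section of a config file might contain:

```
# runspec sets this variable and then re-execs itself, so that the
# entire run -- builds and benchmark execution -- sees the same
# environment, as required by the "unchanging environment" run rule.
preENV_LD_LIBRARY_PATH = /opt/mycompiler/lib
```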
Option | Use In | Default | Meaning |
reportable | H | 0 | Strictly follow reporting rules, to the extent that it is practical to enforce them by automated means. The tester remains responsible for ensuring that the runs are rule compliant. You must set reportable to generate a valid run suitable for publication and/or submission to SPEC. |
rebuild | H | 0 | Rebuild binaries even if they exist. |
runlist | H | none | What benchmarks to run. Names can be abbreviated, just as on the command line. See the long discussion of run order in runspec.html. |
section_specifier_fatal | H | 1 | While parsing the config file, if a section specifier is found that refers to an unknown benchmark or benchset, an error is output and the run stops. Set section_specifier_fatal=0 in the header section of your config file to convert this error into a warning and allow the run to continue. The ability to convert section specifier errors into warnings is probably of use only for benchmark developers. |
Option | Use In | Default | Meaning |
sendmail | H | /usr/sbin/sendmail | When using the mail output format, and when the mailmethod is sendmail, this specifies the location of the sendmail binary. |
setprocgroup | H | 1 | Set the process group. On Unix-like systems, improves the chances that ^C gets the whole run, not just one of the children. |
size | H | ref | Size of input set. If you are in the early stages of testing a new compiler or new set of options, you might set this to test or train. |
src.alt | N | '' | Name of subdirectory under <benchmark>/src/src.alt/ from which to draw approved source code modifications. Set this in the named section for the benchmark(s) where you wish to have src.alt(s) applied. Multiple src.alts may be specified for a single benchmark; the names should be separated by commas. If the specified alternate sources are not for the version of the suite in use, or if they are corrupt, the build will fail. You may also spell this option as srcalt. |
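An illustrative sketch (the src.alt names shown here are hypothetical; only SPEC-approved alternate sources that match your suite version will build):

```
# Apply one approved src.alt to 352.ep, and two to 370.bt.
352.ep=default:
src.alt = somefix

370.bt=default:
src.alt = fix1,fix2
```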
strict_rundir_verify | H,N | 1 | When set, the tools will verify that the file contents in existing run directories match the expected MD5 checksums. Normally, this should always be on, and reportable runs will force it to be on. Turning it off might make the setup phase go a little faster while you are tuning the benchmarks. Developer notes: setting strict_rundir_verify=0 might be useful when prototyping a change to a workload or testing the effect of differing workloads. Note, though, that once you start changing your installed tree for such purposes it is easy to get lost; you might as well keep a pristine tree without modifications, and use a second tree that you convert_to_development. |
Option | Use In | Default | Meaning |
sysinfo_program | H | 'specperl $[top]/Docs/sysinfo' | The name of an executable program or script that automatically fills out some of the information about your system configuration. An example sysinfo program is on your kit. To use the example, add this line near the top of your config file (i.e. in the header section):

sysinfo_program = specperl $[top]/Docs/sysinfo

If you would like to turn the feature off, you can use:

sysinfo_program =

Warning: it is possible that SPEC may decide to require use of a sysinfo utility at a future time. If that occurs, the online copy of this document at www.spec.org/accel/Docs/config.html will be updated to so indicate. Details about the sysinfo utility may be found in the section About sysinfo, below, including how to selectively enable output types, how to resolve conflicting field warnings, and how to write your own sysinfo utility. |
Option | Use In | Default | Meaning |
table | H | 1 | In ASCII reports, include information about each execution of the benchmark. |
teeout | H | 0 | Run output through tee so you can see it on the screen. Primarily affects builds, but also provides some information about progress of runtime, by showing you the specinvoke commands. |
temp_meter | H | none | This provides a list of names associated with the temperature meters the tools will use to communicate with the SPEC power daemon. These names will then be used in the descriptive fields options which relate to describing the temperature meters. |
train_with | H,N | train | Select the workload with which to train binaries built using feedback-directed optimization. The ability to train with alternate workloads could be used, for example when studying the efficacy of different training methods, as follows: (1) First convert your tree to a development tree; (2) place your new training workload under nnn.benchmark/data/myworkload. (3) Give it the same structure as the existing training workload: an input/ directory, an output/ directory, and a reftime file with contents similar to the one found in nnn.benchmark/data/train/reftime. For reportable runs, you cannot use binaries that were trained with alternate workloads. |
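As a sketch, assuming you have converted to a development tree and created a workload directory named myworkload as in the steps above (the benchmark and workload name are illustrative):

```
# Named section: train 352.ep's feedback-directed build with
# 352.ep/data/myworkload instead of data/train.
# Not permitted for reportable runs.
352.ep=default:
train_with = myworkload
```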
Option | Use In | Default | Meaning |
tune | H | base | Default tuning level. In a reportable run, must be either all or base. |
use_submit_for_speed | H,N | 1 | Allow the use of submit commands for benchmark runs. |
verbose | H | 5 | Verbosity level. Select level 1 through 99 to control how much debugging info runspec prints out. For more information, see the section on the log file, below. |
version_url | H | {version} | If version checking is enabled, this specifies the location from which the version information should be fetched. The default, {version}, refers to http://www.spec.org/accel/current_version |
voltage_range | H | none | Set the maximum voltage in volts to be used by the power analyzer(s) for power measurement. |
A sysinfo program is an executable program or script that automatically fills out some of the system description fields that are addressed to readers of your results. If you provide one, it will be called by runspec early during its execution.
Why sysinfo?
This section includes information about:
SPEC is providing a greatly expanded example sysinfo program, with support for several operating systems, including Linux, Mac OS X, Microsoft Windows, and Solaris.
Note that the example sysinfo program is enabled by default!
It is on by default because SPEC believes that the benefits listed in the box above, Why sysinfo?, outweigh its limitations.
SPEC is aware that there are many dependencies and differences among hardware and software platforms, and that these change from time to time. Thus its status as an "example": you may need to modify the example in order to use it on your system.
If your operating system is not recognized, the utility will print "Sorry, (name of os) is not supported by (name of program)."
The example sysinfo utility is on your kit, as $SPEC/Docs/sysinfo (Unix) or %SPEC%\Docs\sysinfo (Windows).
It is possible that SPEC may update the example from time to time; if so, updates will be posted at http://www.spec.org/accel/Docs/sysinfo. (Note: the link goes to an executable file. If your web browser objects to opening it, try saving it instead.)
Version identification: The example sysinfo program, as supplied by SPEC, includes a revision and date of update. It generates its own md5sum when it is run. If you are unsure whether the copy in your Docs/ has been changed from SPEC's version:
For example (on a system where curl(1) is available to do the download):
$ . ./shrc
$ mkdir update
$ cd update
$ curl -s -O http://www.spec.org/accel/Docs/sysinfo
$ chmod +x sysinfo
$ ./sysinfo -p | grep Rev | tr $ " "
# Rev: 483 Date:: 2013-07-05
# aa89b8fcf372a4a5b1ea1c40fe336477
$ cd $SPEC/Docs
$ ./sysinfo -p | grep Rev | tr $ " "
# Rev: 395 Date:: 2012-07-25
# 8f8c0fe9e19c658963a1e67685e50647
$
In the example above, the md5sum for the copy in the Docs directory is different than the copy freshly retrieved from SPEC's server. The date also shows that the freshly retrieved copy is more recent. (Note: when you try this for yourself, feel free to skip the tr(1) command at the end. It is there only to keep the dollar signs out of the above example text, which was deemed useful because otherwise SVN would automatically change the above example every time the document you are reading _now_ is checked in.)
To use the example sysinfo program, add this line near the top of your config file (in the header section):
sysinfo_program = specperl $[top]/Docs/sysinfo
(Note: this is the default behaviour)
To turn it off, use:
sysinfo_program =
Warning: it is possible that, at a future time, SPEC may decide to require use of a sysinfo utility for reportable results. If that occurs, the online copy of this document at www.spec.org/accel/Docs/config.html will be updated to so indicate.
By default, there will be more information in your platform notes. For example, by default Windows systems will include lines such as these:
Platform Notes
--------------
 Sysinfo program C:\kit118/Docs/sysinfo
 $Rev: 1577 $ $Date:: 2017-02-07 #$ \6d78399eddfd6b1e8813c4ae4a352725
 running on snowcrash Thu Aug 11 08:38:31 2011

 This section contains SUT (System Under Test) info as seen by some
 common utilities. To remove or add to this section, see:
  http://www.spec.org/accel/Docs/config.html#sysinfo

 Trying 'systeminfo'
 OS Name            : Microsoft Windows 7 Professional
 OS Version         : 6.1.7601 Service Pack 1 Build 7601
 System Manufacturer: LENOVO
 System Model       : 2537LB8
 Processor(s)       : 1 Processor(s) Installed.
                      [01]: Intel64 Family 6 Model 37 Stepping 5 GenuineIntel ~2400 Mhz
 BIOS Version       : LENOVO 6IET75WW (1.35 ), 2/1/2011
 Total Physical Memory: 3,892 MB

 Trying 'wmic cpu get /value'
 DeviceID                 : CPU0
 L2CacheSize              : 256
 L3CacheSize              : 3072
 MaxClockSpeed            : 2400
 Name                     : Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz
 NumberOfCores            : 2
 NumberOfLogicalProcessors: 4

(End of data from sysinfo program)
Also by default, the example sysinfo program will try to parse the above information into system description fields, such as:
hw_cpu_mhz = 2400
hw_cpu_name = Intel Core i5 M 520
hw_model = 2537LB8
hw_nchips = 1
sw_os000 = Microsoft Windows 7 Professional
sw_os001 = 6.1.7601 Service Pack 1 Build 7601
The output is written to a section named default: at the end of your config file. Therefore, by default, most of its output will be printed in all reports.
Editing the output: The example sysinfo program does not replace the need for human intelligence. Sysinfo creates contemporaneous evidence, and it may give you a start, but you will still have to understand your system, and you should examine its output carefully. If you see incomplete or incorrect information, please observe the following guidelines:
Please DO NOT edit notes lines generated by sysinfo (such as notes_plat_sysinfo_100)
Notes lines might be incorrect if, for example, a Linux utility changes its output format. In such cases, sysinfo might be expecting an obsolete format and may produce incorrect notes. In general, SPEC does not recommend editing notes_plat lines. Instead, if you see incorrect lines, please report the problem to SPEC; and do one of the following:
Please DO edit fields generated by sysinfo (such as hw_xxx)
System description fields are filled in by sysinfo using only the information that it can readily obtain from system utilities, which will often be incomplete or only partial. Fields will typically need to be edited in the usual manner, as described in utility.html.
For example, suppose that sysinfo detects 63.84 GB of memory. On seeing that, you realize, "My firmware ate a bit of memory prior to boot, and I really have 64 GB". You then edit the hw_memory field to change it to 64 GB, and while you are there you add additional detail about DIMM types that was not visible to the utility, perhaps something like one of these:
spec.accel.hw_memory: 64 GB (8 x 8 GB 2Rx4 PC3-10600R-9, ECC)
spec.accel.hw_memory: 64 GB (16 x 4 GB 2Rx4 PC3L-10600R-9, ECC)
Original output is archived: The original output of the sysinfo utility may be found by running rawformat with output format config. If the description has been edited, rawformat will output both a .cfg and an .orig.cfg.
Quieter output: If you prefer, you can send the output to config file comments instead of reports, by adding one or both of these switches to the above sysinfo line:
-f    Write comments instead of fields (e.g., 'sw_os' becomes '# sw_os')
-p    Write comments instead of platform notes
Comment destination: Comments are written only to output format config. They do not appear in reports, and they are not written to the config file in your $SPEC/config directory. They are written to the copy in your $SPEC/result directory. This can make them easy to overlook, which is why the default is to write actual notes and fields, instead of writing only comments. See the example below for more information about output destinations.
When the example sysinfo utility is run, you may see warnings such as this one:
WARNING: Your config file sets some fields that are also set by sysinfo:
         hw_cpu_name, sw_os
         To avoid this warning in the future, see
         http://www.spec.org/accel/Docs/config.html#sysinfo
Any fields created by sysinfo (such as hw_cpu_name) are written to a section named default: at the end of your config file. The warning means that you have already set one of the listed fields in your default: section. Because the sysinfo output comes at the end of your config file, its setting will override the earlier config file settings, which may or may not be desirable.
To get rid of the warnings, you have four choices:
You can remove the fields from your config file, and let the utility write them instead. To find out exactly which fields are written by the example sysinfo utility, search the source for the string: promisedfields
If you wish to allow the example sysinfo program to set some fields, while setting others yourself, remove the ones that sysinfo is allowed to write, and move the others to a section that has a higher precedence than the default: section. For example:
openacc=default=default=default:
hw_memory001 = 8 GB (2 x 4 GB 2Rx8 PC3-10600E-9, ECC,
hw_memory002 = running at 1067 MHz and CL7)
Notice in the above example that the first section marker is openacc, which is higher priority than default. The effect will be to always silently apply the listed hw_memory001 and hw_memory002, since all benchmarks are either opencl, openacc, or openmp.
If you wish to remove all the fields from the sysinfo output, you can use the -f switch, like so:
You can write your own sysinfo program, or you can modify the supplied example. Note: if you do so, you should retain a copy of the exact modified program together with your results.
An example may help to clarify which files get which output. On this system, $SPEC/config/dave.cfg runs the 350.md test workload once, with text and config outputs:
$ cat dave.cfg
runlist = 350.md
size = test
iterations = 1
sysinfo_program = specperl $[top]/Docs/sysinfo
ext = 0406
output_format = text,config
default=default:
CC = /usr/bin/gcc
OPTIMIZE = -O2
PORTABILITY = -DSPEC_LP64
$
A second config file differs only in that it adds the -p switch.
$ diff dave.cfg radish.cfg
4c4
< sysinfo_program = specperl $[top]/Docs/sysinfo
---
> sysinfo_program = specperl $[top]/Docs/sysinfo -p
$
Both config files are used, creating output file sets 20 and 21:
$ for f in dave radish
> do
>   runspec -c $f | grep test.txt
> done
format: ASCII -> /home/dave/accel/result/ACCEL_ACC.020.test.txt
format: ASCII -> /home/dave/accel/result/ACCEL_ACC.021.test.txt
On this Linux system, a line from /proc/meminfo about huge pages appears in several output files for run #20, but it is only a config file comment for run #21.
$ go result
/home/dave/accel/result
$ grep Hugep ACCEL_ACC.020.test*
ACCEL_ACC.020.test.cfg:notes_plat_sysinfo_110 = Hugepagesize: 2048 kB
ACCEL_ACC.020.test.rsf:spec.accel.notes_plat_sysinfo_110: Hugepagesize: 2048 kB
ACCEL_ACC.020.test.txt: Hugepagesize: 2048 kB
$
$ grep Hugep ACCEL_ACC.021.test*
ACCEL_ACC.021.test.cfg:# Hugepagesize: 2048 kB
$
Of course, you are not limited to using the pre-supplied sysinfo program from the Docs/ directory. You can write your own, by following these rules:
name = value
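As a rough sketch of the idea: a custom sysinfo program only has to print "name = value" lines on standard output, which runspec then appends to a default: section at the end of your config file. The following hypothetical POSIX-shell script is NOT the SPEC-supplied utility; it merely illustrates the output contract, under the assumption that uname and getconf are available:

```shell
#!/bin/sh
# Hypothetical minimal sysinfo program (illustration only).
# It emits "name = value" lines; runspec merges them into a
# default: section at the end of the config file.
sysinfo_fields() {
    # Fields built only from information readily available to utilities:
    echo "sw_os = $(uname -sr)"
    echo "hw_ncpu = $(getconf _NPROCESSORS_ONLN 2>/dev/null || echo unknown)"
    # Lines named notes_plat_sysinfo_* become platform notes in reports:
    echo "notes_plat_sysinfo_100 = Sysinfo ran on $(uname -n)"
}
sysinfo_fields
```

You would point runspec at such a script with a sysinfo_program line in the header section; as with any modified sysinfo program, retain a copy of the exact program together with your results.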
For a complete list of options that are used by specmake, please see makevars.html and notice which variables are documented as OK to mention in a config file.
Here are the commonly used variables:
CC | How to invoke your C compiler. |
CXX | How to invoke your C++ compiler. |
FC | How to invoke your Fortran compiler. |
CLD | How to invoke the linker when compiling C programs. |
CXXLD | How to invoke the linker when compiling C++ programs. |
FLD | How to invoke the linker when compiling Fortran programs. |
ONESTEP | If set, build from sources directly to final binary. See the discussion in rule 2.3.7 of runrules.html. |
OPTIMIZE | Optimization flags to be applied for all compilers. |
COPTIMIZE | Optimization flags to be applied when using your C compiler. |
CXXOPTIMIZE | Optimization flags to be applied when using your C++ compiler. |
FOPTIMIZE | Optimization flags to be applied when using your Fortran compiler. |
PORTABILITY | Portability flags to be applied no matter what the compiler. |
CPORTABILITY | Portability flags to be applied when using your C compiler. |
CXXPORTABILITY | Portability flags to be applied when using your C++ compiler. |
FPORTABILITY | Portability flags to be applied when using your Fortran compiler. |
FPPPORTABILITY | Portability flags to be applied when pre-processing Fortran sources. Note that since preprocessing is not a part of standard Fortran, SPEC supplies a copy of the freely available filepp, with minor modifications, as specpp. Using specpp ensures that the preprocessing is done consistently across platforms. If you need to define Fortran preprocessor variables, do not put them in FPORTABILITY. Instead, put them in FPPPORTABILITY or PORTABILITY. |
RM_SOURCES | Remove a source file. Should only be used for library substitutions that comply with run rule 2.2.2. |
PASSn_CFLAGS | Flags for pass "n" C compilation when using feedback-directed optimization (FDO). Typically n is either 1 or 2, for the compile done before the training run and the compile done after the training run. See the chapter on Using Feedback Directed Optimization for more information. |
PASSn_CXXFLAGS | Flags for pass "n" C++ compilation when using FDO. |
PASSn_FFLAGS | Flags for pass "n" Fortran compilation when using FDO. |
PASSn_LDFLAGS | Flags to use with the linker in pass "n" when using FDO. |
Note that you can also make up your own variable names, which specmake will use (and perform substitution on). For an example of this feature, see the example file $(SPEC)/config/example-advanced.cfg. Search that file for LIBS, and note the long comment which provides a walk-through of a complex substitution handled by specmake.
The "+=" syntax is available to append to specmake options. It is provided as a convenience, for more flexible setting of options.
For example, consider this config file:
$ cat tmp.cfg
runlist = 352.ep
default:
OPTIMIZE = -O1
openacc=default:
OPTIMIZE += -unroll
default=peak:
OPTIMIZE += -outer-unroll
352.ep=peak:
OPTIMIZE += -inner-unroll
Notice how options accumulate based on the section specifiers above, and that the accumulation varies (as expected) depending on whether base or peak tuning is chosen:
$ runspec --config=tmp --tune=base --fake | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -O1 -unroll randdp.c
$ runspec --config=tmp --tune=peak --fake | grep randdp.c
cc -c -o randdp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -O1 -outer-unroll -unroll -inner-unroll randdp.c
$
Caution: although "+=" adds flexibility when writing config files, that very flexibility may be a disadvantage. A config file that uses "+=" has the potential to introduce unwanted user surprises. Its effect will vary depending on the precedence of section specifiers, as shown above; it will also vary depending on the order of your config file; and, if you mix and match features, there are glorious opportunities to surprise yourself by mixing += with macros, include files, and variable substitution. Don't do that. Keep it simple. Create conventions for your config files, write them down in config file comments, and review the output (remember, --fake is your friend). Use the feature with caution, and drive responsibly.
The SPEC toolset has attempted to keep config files and binaries in sync with each other (unless you have set check_md5=0, which is not recommended). This generally means that edits to your config file have often resulted in the binaries being rebuilt, sometimes to the dismay of testers who found that rebuilds were attempted at inconvenient times. The inconvenience was compounded by the fact that the first step in a rebuild is to delete the old binary, so even a very fast interrupt of the run wouldn't help: once the rebuild started, your binary was gone.
The tools do not promise to entirely eliminate unexpected rebuilds. Occasionally, they may make conservative assumptions about config file options that might affect the generated binary. In particular:
Testing option sensitivity: If you would like to test whether a config file option change might cause a rebuild, without actually DOING the rebuild, here is a recommended procedure:
In the above command:
Some options in your config file cause commands to be executed by your shell (/bin/sh) or by the Windows command interpreter (cmd.exe).
Because runspec can cause arbitrary commands to be executed, it is therefore important to read a config file you are given before using it.
These are the options that cause commands to be executed:
bench_post_setup | Command to be executed after each benchmark's run directory setup phase. The run rules say that this feature may be used to cause data to be written to stable storage (e.g. sync). The command must be the same for all benchmarks. It will be run after each benchmark is setup, and for all workloads (test/train/ref). The bench_post_setup option may appear only in the header section. |
fdo_pre0 | Commands to be executed before starting a feedback directed compilation series. |
fdo_preN | Commands to be executed before pass N. |
fdo_make_cleanN | Commands to be executed for cleanup at pass N. |
fdo_pre_makeN | Commands to be done prior to Nth compile. |
fdo_make_passN | Commands to actually do the Nth compile. |
fdo_post_makeN | Commands to be done after the Nth compile. |
fdo_runN | Commands to be used for Nth training run. |
fdo_postN | Commands to be done at the end of pass N. |
post_setup | Command to be executed after all benchmark run directories have been set up. The run rules say that this feature may be used to cause data to be written to stable storage (e.g. sync). Notes:
Example 1: In a reportable run of all with parallel_test set to 1, you will get three instances of post_setup (see note 3 above). Example 2: In a reportable run of all with parallel_test set to >1, you will get one instance of post_setup (see notes 3 and 4 above). |
submit | Commands to be used to distribute jobs across a multiprocessor system, as described below in the section on Using Submit |
Many examples of using the fdo_ options are provided in the chapter on Using Feedback Directed Optimization. A few introductory remarks are appropriate here:
Before setting an fdo option, you will want to observe the defaults, and of course your changes will have to comply with the run rules.
About fdo Defaults: To see the defaults for the various fdo commands, use the --fake switch. For example:
$ cat > test.cfg
OPTIMIZE = -O2
PASS1_CFLAGS = -feedback:collect
PASS2_CFLAGS = -feedback:apply
$
$ runspec --action build --fake -c test -T peak 101
...
specmake -n --always-make build FDO=PASS2 2> fdo_make_pass2.err | tee fdo_make_pass2.out
%% Fake commands from fdo_make_pass2 (specmake -n --always-make build FDO=PASS...):
g++ -c -o args.o -DSPEC_ACCEL -DSPEC -DNDEBUG -feedback:apply -O2 args.cc
g++ -c -o main.o -DSPEC_ACCEL -DSPEC -DNDEBUG -feedback:apply -O2 main.cc
...
In this example, we see that by default fdo_make_pass2 causes specmake to be invoked with --always-make build FDO=PASS2, which causes various g++ compiler commands to be generated.
About changes to fdo options: You are allowed to modify the commands that are generated in order to support various compilation models. An example is provided in the chapter on Using Feedback-Directed Optimization. Of course, your modifications must comply with the run rules: for example, you could not use an fdo hook to get around the prohibition on mentioning names. If you are in doubt whether a change to an fdo hook is legal, please write to SPEC.
The config file feature submit allows you to enter commands that will be run in conjunction with the benchmark commands. This capability is very useful for the SPEC CPU2006 rate benchmarks as well as the SPEC MPI2007 benchmarks; see those suites for further examples of how it may be used. The following description discusses items beyond the needs of SPEC ACCEL.
Normally submit is used with the config file variable $command plus your operating system's facilities that assign jobs to processors, such as dplace, pbind, procbind, prun, start/affinity, or taskset.
Consider the following SPEC CPU2006 example. This config file runs 2 copies of 401.bzip2 in SPECrate mode, assigning them to processors with the Linux taskset command:
$ cat tmp.cfg
copies = 2
runlist = 401.bzip2
rate = 1
submit = taskset -c $SPECCOPYNUM $command
default=base:
OPTIMIZE = -O2
$ runspec --config tmp
In the example above, $SPECCOPYNUM acquires the values 0 and 1, causing processors 0 and 1 to be assigned the work of the 401.bzip2 commands.
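Conceptually, the expansion behaves like the following plain-shell sketch. The benchmark command shown is a hypothetical stand-in, not the real command line generated by the tools:

```shell
# Emulate how  submit = taskset -c $SPECCOPYNUM $command  expands
# when copies = 2: $SPECCOPYNUM takes each copy number in turn,
# and $command is the benchmark invocation (placeholder here).
command="../run_base_ref/bzip2_base input.source 280"
for SPECCOPYNUM in 0 1; do
  echo "taskset -c $SPECCOPYNUM $command"
done
```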
The following applies to SPEC CPU2006. It is provided as additional information about submit.
Optionally, you may also wish to use the SPEC tool features bind, command_add_redirect, and/or $SPECCOPYNUM. To see how submit fits in with these tool features, please see the example in the section on Variable Substitution, above.
You might also want to search published results at www.spec.org/cpu2006 for systems that are similar to your system. If you see SPECrate runs for such systems, the config files associated with them may have examples that combine the features in ways that are useful on your architecture.
The following example is again taken from SPEC CPU2006. It is provided as additional information about submit.
Continuation of multiple submit lines is supported, using continuation rules similar to other fields.
Here is an example from a Solaris system:
$ cat tmp.cfg
rate = 1                  # In SPECrate mode,
copies = 2                # run 2 copies of
runlist = 410.bwaves      # 410.bwaves, using
size = test               # the small "test" workload.
iterations = 1            # Just run it once.
command_add_redirect = 1  # Include redir. ops in $command
submit0 = echo 'pbind -b $SPECCOPYNUM \$\$ >> pbind.out' > dobmk
submit2 = echo "$command" >> dobmk
submit4 = sh dobmk
default=base:
OPTIMIZE = -O
$ runspec -c tmp
As noted in the comments, the above example runs 2 copies of 410.bwaves with the "test" workload. On Solaris, processors are assigned work using the pbind command. The line with submit0 causes a small file, dobmk, to be created with the result of:
echo 'pbind -b $SPECCOPYNUM \$\$ >> pbind.out'
That is, assign the current process to the processor whose number matches the current SPECCOPYNUM, and send the output of the pbind command to file pbind.out. The line with submit2 adds to dobmk the actual command that runs the benchmark, including any needed IO redirection (because command_add_redirect is set). Finally, the line with submit4 actually runs the benchmark.
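The net effect can be sketched in plain shell for copy number 0. pbind exists only on Solaris, so the sketch builds dobmk but does not execute it, and the benchmark command is a hypothetical stand-in:

```shell
# submit0: bind the eventual process (the shell's $$) to processor 0
echo 'pbind -b 0 $$ >> pbind.out' > dobmk
# submit2: append the benchmark command, redirections included
echo '../run_base_test/bwaves > bwaves.out 2>> bwaves.err' >> dobmk
# submit4 would then run: sh dobmk.  Here we only show what was built:
cat dobmk
```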
If you use the submit feature, a notes section will automatically be created to indicate that you have done so.
Submit Notes
------------
The config file option 'submit' was used.
You can add notes to that section, or customize it as you wish, by creating lines with notes_submit_NNN. The phrase
The config file option 'submit' was used
must appear somewhere in your notes; if it doesn't, it will get added.
For example, if your config file has:
notes_submit_000 =
notes_submit_005 = Processes were assigned to specific processors using 'pbind' commands.  The config
notes_submit_010 = file option 'submit' was used, along with a list of processors in the 'BIND'
notes_submit_015 = variable, to generate the pbind commands.  (For details, please see the config file.)
notes_submit_020 =
then your report will include a section that looks like this:
Submit Notes
------------
Processes were assigned to specific processors using 'pbind' commands.  The config
file option 'submit' was used, along with a list of processors in the 'BIND'
variable, to generate the pbind commands.  (For details, please see the config file.)
Notice in the example that the key phrase (The config file option 'submit' was used) was wrapped across two lines. That's fine. You can break it in the middle, and you can vary the capitalization, just so long as you include it. If you don't include it, then it will be automatically added at the top of the Submit Notes section.
Whether or not you send your result to SPEC, you should fully disclose how you achieved the result. If it requires the installation of the GoFastLinker, you should say so. By setting the appropriate fields in the config file, you can cause information about the GoFastLinker to appear in the reports that are intended for humans.
Here are the fields that you can set to describe your testbed to readers:
hw_accel_connect | Describes how the accelerator is connected to the system under test. Possible values include, but are not limited to: integrated, PCIe, none. | |
hw_accel_desc | Provide further details about the accelerator that are important to performance. Things that might be of interest include memory sizes, number of cores, number of vectors, etc. | |
hw_accel_ecc | This is a Yes or No field stating if the accelerator uses ECC for its memory. | |
hw_accel_model | The model name of the accelerator. | |
hw_accel_name | The name of the accelerator. | |
hw_accel_type | The type of accelerator. Possible values include, but are not limited to: GPU, APU, CPU, etc. | |
hw_accel_vendor | The company/vendor of the accelerator. | |
hw_avail | Date hardware first shipped. If more than one date applies, use the LATEST one. | |
hw_cpu_name | Manufacturer-determined formal processor name. | |
hw_cpu_char | Technical characteristics to help identify the processor. (You'll find more information about the intended use of hw_cpu_name vs. hw_cpu_char in the run rules.) | |
hw_cpu_mhz | Normal operating speed of the CPUs, in MHz. | |
hw_cpu_max_mhz | Maximum speed of the CPUs, in MHz. This may be what is referred to as the maximum turbo speed of the CPUs. If turbo is disabled, or there is no separate maximum speed, please just list the normal operating speed (same as hw_cpu_mhz). | |
hw_disk | Disk subsystem for the SPEC run directories. Three important Notes: | |
hw_fpu | Floating point unit. | |
hw_memory | Size of main memory (and other performance-relevant information about memory, as discussed in the run rules.) | |
hw_model | Model name. | |
hw_nchips | Number of CPU chips configured. See the discussion of CPU counting in the run rules. | |
hw_ncores | Number of CPU cores configured. See the discussion of CPU counting in the run rules. | |
hw_ncoresperchip | Number of CPU cores per chip. See the discussion of CPU counting in the run rules. | |
hw_ncpuorder | Valid number of processors orderable for this model, including a unit. For example, "2, 4, 6, or 8 chips". | |
hw_nthreadspercore | Number of hardware threads per core. See the discussion of CPU counting in the run rules. | |
hw_other | Any other performance-relevant hardware. | |
hw_pcache | 1st level (primary) cache. | |
hw_power_{id}_cal_date | The date the power meter was last calibrated. | |
hw_power_{id}_cal_label | The calibration label. | |
hw_power_{id}_cal_org | The name of the organization or institute that did the calibration. | |
hw_power_{id}_met_inst | The name of the metrology institute that certified the organization that did the calibration of the meter. | |
hw_power_{id}_connection | Description of the interface used to connect the power analyzer to the PTDaemon host system, e.g. RS-232 (serial port), USB, GPIB, etc. | |
hw_power_{id}_label | description | |
hw_power_{id}_model | The model name of the power analyzer used for this benchmark run. | |
hw_power_{id}_serial | The serial number uniquely identifying the power analyzer. | |
hw_power_{id}_setup | A brief description of which devices were measured by this device. | |
hw_power_{id}_vendor | Company which manufactures and/or sells the power analyzer. | |
hw_psu | The number and ratings (in Watts) of the system's power supplies. | |
hw_psu_info | Details about the power supplies, like vendor part number, manufacturer, etc. | |
hw_scache | 2nd level cache. | |
hw_tcache | 3rd level cache. | |
hw_ocache | 4th level or other form of cache. | |
hw_temperature_{id}_connection | Description of the interface used to connect the temperature meter to the PTDaemon host system, e.g. RS-232 (serial port), USB, GPIB, etc. | |
hw_temperature_{id}_label | description | |
hw_temperature_{id}_model | The model name of the temperature meter used for this benchmark run. | |
hw_temperature_{id}_serial | description | |
hw_temperature_{id}_setup | Brief description of the location of the sensor, e.g. "50 mm in front of the SUT main airflow intake". | |
hw_temperature_{id}_vendor | Company which manufactures and/or sells the temperature meter. | |
hw_vendor | The hardware vendor. An example of usage of this and related fields is given in the test_sponsor section. | |
license_num | The SPEC license number for either the tester or the test_sponsor. | |
prepared_by | This field is never output. If you wish, you could set it to your own name, so that the rawfile will be tagged with your name but not the formal reports. | |
sw_accel_driver | The name and version of the software driver used to control the accelerator. | |
sw_avail | Availability date for the software used. If more than one date, use the LATEST one. | |
sw_base_ptrsize | Size of pointers in base. Report: | |
sw_compiler | Name and version of compiler. Note that if more than one compiler is used, you can employ continuation lines. (This applies to most of the fields discussed here, but is emphasized for sw_compiler because it is common that testers first find themselves wanting to use continuation lines when documenting their compiler set.) | |
sw_file | File system (ntfs, ufs, nfs, etc) for the SPEC run directories. Three important Notes: | |
sw_os | Operating system name and version. | |
sw_other | Any other performance-relevant non-compiler software used, including third-party libraries, accelerators, etc. | |
sw_peak_ptrsize | Size of pointers in peak. Report: | |
sw_state | Multi-user, single-user, default, etc. The rules have been clarified regarding documentation of system state and tuning, in rules 3.1.1, 4.2.4 (paragraphs c and d), and 4.2.5 (paragraph g). | |
tester | The entity actually carrying out the tests. An optional field; if not specified, defaults to test_sponsor. An example is given in the test_sponsor section. | |
test_date | When the tests were run. This field is populated automatically based on the clock in the system under test. Setting this in the config file will generate a warning and the setting will be ignored. If your system clock is incorrect, then the value may be edited in the raw file (see utility.html). It's better to avoid the necessity to edit, by setting your system clock properly. | |
test_sponsor | The entity sponsoring this test. An optional field; if not specified, defaults to hw_vendor. For example, suppose that the Genius Compiler Company wants to show off their new compiler on the TurboBlaster 9000 computer, but does not happen to own a maxed-out system with eight thousand processors. Meanwhile, the Pawtuckaway State College Engineering department has just taken delivery of such a system. In this case, the compiler company could contract with the college to test their compiler on the big machine. The fields could be set as:
test_sponsor = Genius Compilers
tester = Pawtuckaway State College
hw_vendor = TurboBlaster |
Fields can appear and disappear based upon scope. For example, you can define the compiler optimizations you use for all benchmarks while adding those for peak only in the peak section. You can also have notes which apply to all runs and add additional notes for another section.
Most (*) fields can be continued to another line by appending a numeral, for example sw_os1, sw_os2, sw_os3 if you need 3 lines to fully identify your operating system.
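For example, a hypothetical operating system whose full identification needs three lines might be described as:

```
sw_os1 = TurboBlaster UltraOS V5.1,
sw_os2 = Enterprise Edition,
sw_os3 = with Maintenance Release 3
```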
Here is an example that uses both of these features:
default=default=default=default:
OPTIMIZE = -fast -xipo=2
notes_000 = System used mega-threading scheduler.
default=peak=default=default:
COPTIMIZE = -xhyper -lsmash
FOPTIMIZE = -xblaster -lpile
notes_001 = Smash library was used to allow potater tool optimization.
notes_002 = Pile library allowed dynamic adjustment of spin delay, set with BLASTER=17.
In the above example, the information about the mega-threading scheduler will be printed for both base and peak runs. The information about the smash and pile libraries will only be printed for peak runs.
Note that the numerals above will be converted to a more formal style of three-digit numbers with leading zeros in the raw file; as discussed in utility.html, if you edit a rawfile you must use the more formal style of 000, 001, etc.
(*) The fields which cannot be continued are the ones that expect a simple integer, such as: hw_nchips, hw_ncores, hw_ncoresperchip, hw_nthreadspercore, license_num; and the ones that expect a date: hw_avail and sw_avail.
In addition to the pre-defined fields, you can write as many notes as you wish. These notes are printed in the report, using a fixed-width font. For example, you can use notes to describe software or hardware information with more detail beyond the predefined fields:
notes_os_001 = The operating system used service pack 2 plus patches
notes_os_002 = 31415, 92653, and 58979.  At installation time, the
notes_os_003 = optional "Numa Performance Package" was selected.
There are 8 different notes sections. If there are no notes in a particular section, it is not output, so you don't need to worry about making sure you have something in each section.
The sections, in order of appearance, are as follows:
Notes about the submit command are described above, with the description of the submit option.
Start your notes with the name of the notes section where you want the note to appear, and then add numbers to define the order of the lines. Within a section, notes are sorted by line number. The NNN above is not intended to indicate that you are restricted to 3 digits; you can use a smaller or larger number of digits as you wish, and you can skip around as you like: for example, ex-BASIC programmers might naturally use line numbers 100, 110, 120... But note that if you say notes_plat782348320742972403 you just might encounter the dreaded (and highly unusual) "out of memory" error, so don't do that.
You can optionally include an underscore just before the number, but beware: if you say both notes_plat_105 and notes_plat105, both are considered to be the same line. The last one mentioned will replace the first, and it will be the only one output.
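For example, in this hypothetical fragment both lines name the same note line, so only "this one wins" would appear in the report:

```
notes_plat_105 = this one is silently replaced
notes_plat105 = this one wins
```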
For all sections you can add an optional additional tag of your choosing before the numbers. Notes will be organized within the tags.
The intent of the feature is that it may allow you to organize your system information in a manner that better suits your own categories for describing it.
For example:
$ cat tmp.cfg
size = test
iterations = 1
output_format = text
teeout = 1
runlist = miniGhost
tune = base
notes_part_greeting_011 = ++ how
notes_part_greeting_20 = ++ you?
notes_part_greeting_012 = ++ are
notes_part_aname_1 = ++ Alex,
notes_part_080 = ++ hi
$ runspec --config tmp > /nev/dull
$ cd ../result
$ ls -t *test.txt | head -1
ACCEL_ACC.101.test.txt
$ grep ++ *101*txt
++ hi
++ Alex,
++ how
++ are
++ you?
$
You can mention URLs in your notes sections. HTML reports will correctly render them as hyperlinks. For example:
notes_plat_001 = Additional detail may be found at
notes_plat_002 = http://www.turboblaster.com/servers/big/green/
If you like, you can use descriptive text for the link by preceding it by the word LINK and adding the descriptive text in square brackets:
LINK url AS [descriptive text]
The brackets may be omitted if your descriptive text is a single word, without blanks.
For example:
notes_plat_001 = Additional detail may be found at
notes_plat_002 = LINK http://www.turboblaster.com/servers/big/green/ AS [TurboBlaster Servers]
When the above appears in an html report, it is rendered as:
Additional detail may be found at TurboBlaster Servers
And in a text report, it appears as:
Platform Notes
--------------
Additional detail may be found at TurboBlaster Servers
(http://www.turboblaster.com/servers/big/green/)
Since the text report is not a context in which the reader can click on a link, it is spelled out instead. Note that because the text report spells the link out, the text line is wider than in HTML, PS, and PDF reports. When deciding where to break your notes lines, you'll have to decide whether to plan line widths for text (which may result in thin-looking lines elsewhere) or plan your line widths for HTML/PS/PDF (which may result in lines that fall off the right edge with text). The feature notes_wrap_columns won't help you here, since it is applied before the link is spelled out.
You can cause files to be attached to a result with this syntax:
ATTACH url AS [descriptive text]
Unlike links (described in the previous section), if you use ATTACH, the mentioned file is actually copied into your result directory.
For example, the notes below will cause /Users/john/Desktop/power.jpg and /Users/john/Desktop/fan.jpg to be copied into the result directory:
notes_plat_110 = Stated performance depends on proper cooling, as shown in the
notes_plat_120 = ATTACH file:///Users/john/Desktop/power.jpg AS [power supply photo]
notes_plat_130 = and ATTACH file:///Users/john/Desktop/fan.jpg AS [fan diagram]
When the above notes are used with an HTML report, they appear as:
Stated performance depends on proper cooling, as shown in the power supply photo and fan diagram
And in a text report, they appear as:
Platform Notes
--------------
Stated performance depends on proper cooling, as shown in the
power supply photo (ACCEL_ACC.011.test.jpg)
and fan diagram (ACCEL_ACC.011.test.1.jpg)
In the text report, you can see that when fan.jpg and power.jpg were copied into the result directory, they were given names to show which result they correspond to: ACCEL_ACC.011.test.jpg and ACCEL_ACC.011.test.1.jpg. Note that since the text report spells the name of the attachment out, the text line is wider than in HTML, PS, and PDF reports. When deciding where to break your notes lines, you'll have to decide whether to plan line widths for text (which may result in thin-looking lines elsewhere) or plan your line widths for HTML/PS/PDF (which may result in lines that fall off the right edge with text). The feature notes_wrap_columns won't help you here, since it is applied before the attachment name is spelled out.
Feedback Directed Optimization (FDO) is an application build method that involves:
The modified program is expected to run faster than if FDO had not been used. FDO is also sometimes known as PBO, for Profile-Based Optimization.
This section explains how various controls interact when using feedback. Those controls are:
The default is to build without feedback. To use FDO, you must add either the PASSn_<language>FLAGS, PASSn_<language>OPTIMIZE, or fdo*n options. The PASSn options let you specify additional flags for the pre-defined sequence of steps within each build pass, where "n" indicates the build pass number. The fdo*n options let you add entire commands to build passes, including setup and cleanup commands.
PASSn*: The most common way of using FDO is to add PASS1_<language>FLAGS and PASS2_<language>FLAGS options, which specify flags to be added to each of multiple compiler and/or link steps. These options, which are summarized in the section on specmake, cause multiple compiles. The first compile creates an executable image with instrumentation. That image is run with the SPEC-provided training workload, and the instrumentation collects data about the run (a profile). Finally, the program is recompiled, and the compiler uses the profile to improve its optimizations.
Use this method if your desired build sequence is compile - train - recompile.
For example, if you set
FC = tbf90
OPTIMIZE = --fast
PASS1_FFLAGS = --CollectFeedback
PASS2_FFLAGS = --ApplyFeedback
then the tools will use tbf90 --fast --CollectFeedback to create an instrumented binary; will run it with the SPEC-provided training workload; and then will recompile with tbf90 --fast --ApplyFeedback to create a new binary.
fdo*n: Although the compile - train - recompile method is common, many other FDO models are also possible, such as compile - train - othertool, where othertool uses the profile to improve the executable image without requiring a recompile. The shell options fdo*n can be used to construct more flexible FDO methods. Unlike PASSn*, which is limited to just adding flags to the commands issued by specmake, the fdo options let you add entirely different commands. Here are two examples of how you can set or change fdo options:
Modifying fdo Example 1: Using a postoptimizer
FC = tbf90
OPTIMIZE = -fast -profile:fbdir
fdo_post1 = /usr/bin/postoptimizer --profile:fbdir
In this example, the SPEC tools will use tbf90 -fast -profile:fbdir to create a binary; will run the binary with the SPEC-provided training workload; and then will run the postoptimizer. The compiler is not re-run after the training run. Instead, the postoptimizer consumes a profile from the feedback directory fbdir, and uses this profile to modify the binary.
Modifying fdo Example 2: changing cleanup
101.tpacf,110.fft,120.kmeans=peak:
OPTIMIZE = -O2
PASS1_LDFLAGS = -PGINSTRUMENT -incremental:no
PASS2_LDFLAGS = -PGOPTIMIZE -incremental:no
fdo_make_clean_pass2 = del /q *.exe
fdo_make_pass2 = specmake build FDO=PASS2
By default, the tools will clean up all object files from previous passes. This example modifies the action fdo_make_clean_pass2 to delete only the executables (.exe), effectively keeping the object (.obj) files. The line fdo_make_pass2 causes specmake to be invoked to build using a simplified command line. By default, the line would have included the GNU make switch --always-make; the modified line drops that switch.
Before modifying the fdo hooks, you should learn what they do by default, as described above.
The PASSn* and fdo*n options can be freely used together. For example, here's a config file that builds 370.bt peak (as specified on the first three lines), with both PASSn and fdo options:
$ cat tmp.cfg
action = build
runlist = bt
tune = peak
PASS1_CFLAGS = --CollectFeedback
PASS2_CFLAGS = --ApplyFeedback
fdo_run2 = $command
fdo_post2 = /usr/bin/postoptimizer
$ runspec --config tmp --fake | \
    grep -e add.c -e bt_out.err -e postopt | grep -v %%
cc -c -o add.o -DSPEC_ACCEL -DSPEC -DNDEBUG --CollectFeedback add.c
../build_peak_none.0000/bt > bt_out.log 2>> bt_out.err
cc -c -o add.o -DSPEC_ACCEL -DSPEC -DNDEBUG --ApplyFeedback add.c
../build_peak_none.0000/bt > bt_out.log 2>> bt_out.err
/usr/bin/postoptimizer
$
In the runspec command, we use the --fake option to quickly examine how the build will work. Using --fake is highly recommended when you are trying to debug your feedback commands, because it is much quicker than actually doing the builds. The fake output is searched with the Unix grep command; on Windows you would construct a findstr command. The example picks out lines of interest:
Having done all this, we can confirm that the above config file causes the build sequence to be compile - train - compile - train - othertool.
The sharp-eyed reader may wonder "What is this 'fdo_run2 = $command' line in the config file?" In the second pass, actually running the benchmark is optional, because it is possible that your pass 2 tools might not need another run of the benchmark. One could imagine a build system that notices everything interesting during the pass 1 run, and which causes that interesting information to be carried around by the binary itself, so that it never needs to be regenerated. To say "Yes, please do run it again", you add the above fdo_run2 line.
More complex examples can be constructed. If you mention PASS1, fdo_pre2, fdo_post2, PASS3, and fdo_post4, then you will end up with four passes. The numbers serve to order the passes, but if you skip numbers then the tools will not attempt to invent things to do in the "missing" passes.
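A sketch of such a config file fragment follows. The flags and the two /usr/bin tools are hypothetical, used only to show how the pass numbers line up:

```
PASS1_CFLAGS = --CollectFeedback
fdo_pre2 = rm -f old.profile
fdo_post2 = /usr/bin/merge_profiles
PASS3_CFLAGS = --ApplyFeedback
fdo_post4 = /usr/bin/verify_binary
```

Here passes 1 through 4 all exist, even though (for example) no PASS2 or PASS4 compile flags were given.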
If you use either PASSn* or fdo*n (or both), then, by default, feedback will happen. The config file option feedback provides an additional control, an "on/off" switch which allows, or disallows, the options selected with PASSn* and fdo*n.
A common usage model is to use PASSn* or fdo*n to turn on feedback for a whole set of benchmarks, and then use the option feedback=0 to turn it off for individual benchmarks. For example:
$ cat miriam1.cfg
action = build
tune = peak
runlist = md,miniGhost,bt
fdo_post1 = /usr/bin/merge_feedback fbdir
350.md:
feedback=0
370.bt:
feedback=1
$ runspec --fake --config miriam1 | grep -e Building -e merge_feedback \
    | grep -v %%
  Building 350.md peak none default: (build_peak_none.0000) [Thu Jan 9 11:46:18 2014]
  Building 359.miniGhost peak none default: (build_peak_none.0000) [Thu Jan 9 11:46:18 2014]
/usr/bin/merge_feedback fbdir
  Building 370.bt peak none default: (build_peak_none.0000) [Thu Jan 9 11:46:18 2014]
/usr/bin/merge_feedback fbdir
$
In this example, we build peak md, miniGhost, and bt (notice the first three lines of the config file) and once again use --fake and grep in a fashion similar to the previous example. In this case, we can see that the fdo_post1 command was used for two of the three benchmarks from the runlist, since by default feedback=1.
If the feedback option is used at more than one level within a config file, the usual precedence rules apply. For example, the next config file specifies the feedback option both for a set of benchmarks and for some individual benchmarks:
$ cat miriam2.cfg
action = build
tune = peak
runlist = md,miniGhost,bt
fdo_post1 = /usr/bin/merge_feedback fbdir
default=peak:
feedback=0
350.md:
feedback=0
370.bt:
feedback=1
$ runspec --fake --config miriam2 | grep -e Building -e merge_feedback \
    | grep -v %%
  Building 350.md peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:02 2014]
  Building 359.miniGhost peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:03 2014]
  Building 370.bt peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:03 2014]
/usr/bin/merge_feedback fbdir
$
In the above example, 359.miniGhost was built without feedback because of the feedback=0 line in the section named "default=peak:". But 370.bt was built with feedback (from fdo_post1) because it had a higher priority setting of feedback=1 in the section named "370.bt:".
As explained in section II.A, if an option is mentioned on both the command line and in a config file, the command line does not win over named sections, but it does win over the header section. Examples of both follow.
Here, the config file feedback option is used in a named section, and the runspec option --feedback is also used. This is the same config file as in the previous example, with the added switch on runspec.
$ cat miriam2.cfg
action = build
tune = peak
runlist = md,miniGhost,bt
fdo_post1 = /usr/bin/merge_feedback fbdir
default=peak:
feedback=0
350.md:
feedback=0
370.bt:
feedback=1
$ runspec --fake --feedback --config miriam2 | grep -e Building -e merge_feedback \
    | grep -v %%
  Building 350.md peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:02 2014]
  Building 359.miniGhost peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:03 2014]
  Building 370.bt peak none default: (build_peak_none.0000) [Thu Jan 9 12:12:03 2014]
/usr/bin/merge_feedback fbdir
$
You can see above that the command line option had no effect: it cannot win over an option set in a named section (default=peak:).
But if the runspec option is used with a config file that adjusts feedback in the header section, then the --feedback switch does have an effect:
$ cat miriam3.cfg
action = build
tune = peak
runlist = md,miniGhost,bt
fdo_post1 = /usr/bin/merge_feedback fbdir
feedback=0
350.md:
feedback=0
370.bt:
feedback=1
$ runspec --fake --feedback --config miriam3 | grep -e Building -e merge_feedback \
    | grep -v %%
  Building 350.md peak none default: (build_peak_none.0000) [Thu Jan 9 12:17:28 2014]
  Building 359.miniGhost peak none default: (build_peak_none.0000) [Thu Jan 9 12:17:28 2014]
/usr/bin/merge_feedback fbdir
  Building 370.bt peak none default: (build_peak_none.0000) [Thu Jan 9 12:17:28 2014]
/usr/bin/merge_feedback fbdir
$
In the above example, 359.miniGhost does get the fdo_post1 command.
The tools implement a configuration file macro preprocessor. The preprocessor can be used in a variety of ways; for example to communicate settings from your shell environment into runspec:
$ cat > jan.cfg
notes01 = Today, I am happily running in directory %{MYDIR} on system %{HOST}
$ runspec --config jan --define MYDIR=$PWD --define HOST=`hostname` \
    --fakereportable openacc --output_format text | grep txt
    format: ASCII -> /jan/accel/result/ACCEL_ACC.090.txt
$ grep Today /jan/accel/result/ACCEL_ACC.090.txt
Today, I am happily running in directory /jan/accel/config on system civilized-03
$
The config file preprocessor is called a macro processor because it allows you to define macros, which are brief abbreviations for longer constructs. If you've ever used the C preprocessor, the concepts will be familiar, though the syntax is slightly different.
The preprocessor is automatically run whenever you use runspec. Or, you can run it separately, as configpp, which is documented in utility.html.
Preprocessor directives begin with the percent (%) character, which must be the first character on the line. Any number of spaces or tabs may separate the percent sign from the directive.
The following are okay:
%define foo % define bar % undef hello!
The following are not okay:
# Space in the first column %define foo # Tab in the first column %define foo # This isn't CPP! #define foo
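The rule above can be captured in a short sketch. This is a toy recognizer illustrating the stated syntax, not runspec's actual parser; the directive names listed in the regular expression are simply the ones described in this section.

```python
import re

# Toy recognizer for preprocessor directive lines -- a sketch of the
# stated rule, not runspec's actual parser.  The '%' must be the first
# character on the line; any number of spaces or tabs may follow it.
DIRECTIVE_RE = re.compile(
    r'^%[ \t]*(define|undef|ifdef|ifndef|if|elif|else|endif|warning|error)\b')

def is_directive(line):
    return bool(DIRECTIVE_RE.match(line))
```

With this sketch, `is_directive('% define bar')` is true, while a line with a leading space or a CPP-style `#define` is rejected.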
The preprocessor is all about macros. There are no macros defined by default, so unless you define some macros, the preprocessor can't do much for you.
Macros can be defined in two ways:
# Define a simple macro %define foo bar # Now the macro called 'foo' has the value 'bar' # It's not necessary for a macro to have a value to be useful %define baz # Now the macro called 'baz' is defined, but it has no value.
Note that no quoting is necessary when specifying the names of macros or their values.
Macros defined in the config file and on the command line are entirely equivalent. Because ones set on the command line are defined first, it's not possible to use the command line to override a macro definition that occurs in the config file itself. It may help to think of a series of '%define' directives, one per --define, as being prepended to the config file.
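This precedence can be sketched as a toy model (not the tools' actual code): treat each --define as a %define prepended to the config file, process everything in order, and let a later definition of the same name overwrite the earlier one, with a warning.

```python
def resolve_macros(cmdline_defines, config_defines):
    """Toy model of %define precedence: command-line --define pairs are
    effectively prepended to the config file, so a later (config-file)
    definition of the same macro overwrites them, with a warning."""
    macros = {}
    for name, value in list(cmdline_defines) + list(config_defines):
        if name in macros:
            print(f"WARNING: macro '{name}' redefined")
        macros[name] = value
    return macros

# The config file's definition of the hypothetical macro MYOPT wins
# over the command-line one:
macros = resolve_macros([('MYOPT', 'fast')], [('MYOPT', 'slow')])
```

Here `macros['MYOPT']` ends up as `'slow'`, the config-file value, which is why the command line cannot override the config file.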
The values assigned to macros do NOT follow the same quoting rules as variables in the config file. In particular, you may NOT use line continuation, line appending, or block quotes. You may have a value of arbitrary length, but in the interests of config file readability and maintainability, please keep them relatively short.
You will receive a warning if a previously defined macro is re-defined.
Macro names may only be composed of alphanumeric characters, underscores, and hyphens, and they ARE case-sensitive.
A macro that has not been defined will not be substituted. A macro that has been defined, but which has not been assigned a value, has the value "1".
Sometimes you want to make the preprocessor forget about a macro that you taught it. This is easily accomplished.
Macros can be undefined in two ways:
%define foo bar # Now the macro called 'foo' has the value 'bar' %undef foo # Now it doesn't
Note that no quoting is necessary when specifying the names of macros.
Like macro definition, command-line undefinition can't affect macros set in the config file, because those definitions effectively happen after the un-definition. For this reason, command-line undefinition is basically useless; it can only undo macros that were also set on the command line.
So why was such a useless ability added to the tools?
The writer likes orthogonality.
By now you're probably over that initial euphoric rush that comes from wantonly defining and undefining macros, and you're looking for something more. This is it!
When you want to use the value of a macro, you refer to it by name. Unfortunately, the syntax for this is not as simple as you might hope. It's not too complicated, though; to have the preprocessor expand the macro 'foo', just write
%{foo}
in the place where you'd like it to appear. Given the following config file snippet:
%define foo Hello_ %define bar baz %define foobar Huh? %define foobaz What? %define Hello_baz Please don't do this
Here's a handy table to see the various ways you can reference these values:
Macro reference | Value |
---|---|
%{foo} | Hello_ |
%{bar} | baz |
%{foobar} | Huh? |
%{foobaz} | What? |
%{Hello_baz} | Please don't do this |
Easy, right? The following is also possible:
%{foo%{bar}} | What? |
%{%{foo}%{bar}} | Please don't do this |
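The nested cases in the table can be reproduced with a minimal innermost-first expander. This is a sketch of the described behavior, not the real implementation: undefined macros are left untouched, and a macro defined without a value (represented here as None) expands to "1", as stated earlier in this section.

```python
import re

MACRO_RE = re.compile(r'%\{([A-Za-z0-9_-]+)\}')

def expand(text, macros):
    """Toy innermost-first macro expander (a sketch, not the real tools).
    Undefined macros are left untouched; a macro defined without a
    value (None here) expands to '1'."""
    pos = 0
    while True:
        m = MACRO_RE.search(text, pos)
        if not m:
            return text
        name = m.group(1)
        if name not in macros:
            pos = m.end()       # leave undefined references as-is
            continue
        value = '1' if macros[name] is None else macros[name]
        text = text[:m.start()] + value + text[m.end():]
        pos = 0                 # rescan; the result may form a new reference

macros = {'foo': 'Hello_', 'bar': 'baz', 'foobar': 'Huh?',
          'foobaz': 'What?', 'Hello_baz': "Please don't do this"}
```

Because the pattern cannot match across a nested `%{`, the innermost reference is always found first: `%{foo%{bar}}` becomes `%{foobaz}` and then `What?`, matching the table above.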
Because macro values can only be one line long, it's not possible to use the preprocessor to macro-ize large chunks of your config file at once, as may be common practice for advanced users of CPP.
A macro that has not been defined will not be substituted. Thus the following case is the expected behavior:
$ cat tmp.cfg OPTIMIZE = %{FOO} $ runspec -c tmp -i test -T base csp ... gcc -c -o add.o -DSPEC_ACCEL -DSPEC -DNDEBUG %{FOO} add.c
Your C compiler probably won't know what to do with %{FOO}. If you want to substitute an empty string, then assign it one:
$ runspec -c tmp -i test -T base --define FOO="" csp
Note that the following are NOT equivalent:
--define FOO="" --define FOO
The former sets FOO to an empty string. The latter, effectively, sets it to "1".
Defining, undefining, and expanding macros is quite an enjoyable activity in and of itself, and can even be useful on occasion. However, conditionals add an entirely new dimension to config file processing: the ability to include and exclude entire sections of text based on macros and their values.
The %ifdef conditional provides a way to determine whether or not a particular macro has been defined. If the named macro has been defined, the conditional is true, and the text up to the matching %endif is included in the text of the config file as evaluated by runspec. Note that the matching %endif may not necessarily be the next %endif; conditionals may be nested.
For example, given the following section of a config file:
%define foo %ifdef %{foo} This text will be included %endif %ifdef %{bar} This text will not be included %endif
The preprocessor would produce the following output:
This text will be included
Note especially the quoting used for the macro names in the conditional; the only time macro name quoting may be omitted is when defining or undefining it.
The %ifndef conditional is the converse of %ifdef: if the named macro has not been defined, the conditional is true, and the text up to the matching %endif is included in the text of the config file as evaluated by runspec. Note that the matching %endif may not necessarily be the next %endif; conditionals may be nested.
Given a slightly modified version of the example from earlier:
%define foo %ifndef %{foo} Now THIS text will not be included %endif %ifndef %{bar} This text WILL be included %endif
The preprocessor would produce the following output:
This text WILL be included
Checking whether or not a macro is defined is quite useful, but it's just a subset of the more general conditional facility available. This general form is
%if expression ... %endif
The expression is evaluated using a subset of the Perl interpreter, so the possibilities for testing values are fairly broad. For example,
%ifdef %{foo} ... %endif
is exactly equivalent to
%if defined(%{foo}) ... %endif
Likewise,
%ifndef %{foo} ... %endif
is exactly equivalent to
%if !defined(%{foo}) ... %endif
Using the general form, it's possible to string conditionals together:
%if defined(%{foo}) && !defined(%{bar}) || %{baz} == 0 ... %endif
If a macro contains a string value, you must supply quotes:
%if '%{foo}' eq 'Hello, Dave.' ... %endif
You may also perform basic math on macro values:
%if %{foo} * 2004 > 3737 ... %endif
More precisely, the Perl operations allowed are the :base_core and :base_math bundles, with the ability to dereference and modify variables disallowed. For more details, see the source code for config.pl (the eval_pp_conditional subroutine) and Perl's own Opcode documentation.
It's possible to get by without the "else" part of the classic "if .. then .. else" trio, but it's not any fun. It works as you'd expect:
%define foo %ifndef %{foo} This text will not be included %else This text WILL be included (from the else clause) %endif
The preprocessor would produce the following output:
This text WILL be included (from the else clause)
Only one %else per conditional is allowed.
%elif is another convenience that's been added. For those not familiar with CPP, it's an "else if" construct. You may have as many of these as you'd like. Given:
%define foo Hello! %if !defined(%{foo}) This text will not be included %elif defined(%{bar}) This text won't be included either %elif '%{foo}' eq 'Hello!' This text WILL be included (from the second elif clause) %else Alas, the else here is left out as well. %endif
The preprocessor would produce the following output:
This text WILL be included (from the second elif clause)
It's often helpful to be able to warn or exit on certain conditions. Perhaps there's a macro that must be set to a particular value, or maybe it's just very highly recommended.
%warning does just what you'd expect; when the preprocessor encounters this directive, it prints the text following to stdout and the current log file, along with its location within the file being read, and continues on.
Consider:
%if !defined(%{somewhat_important_macro}) % warning You have not defined somewhat_important_macro! %endif
When run through the preprocessor, this yields the following output:
$ configpp -c warning.cfg runspec v1698 - Copyright 1999-2012 Standard Performance Evaluation Corporation Using 'linux-suse10-amd64' tools Reading MANIFEST... 12932 files Loading runspec modules................ Locating benchmarks...found 14 benchmarks in 10 benchsets. Reading config file '/export/bmk/accel/config/warning.cfg' **** WARNING: You have not defined somewhat_important_macro! on line 2 of /export/bmk/accel/config/warning.cfg **** Pre-processed configuration file dump follows: -------------------------------------------------------------------- # Invocation command line: # /export/bmk/accel/bin/configpp --configpp -c warning.cfg # output_root was not used for this run ############################################################################ -------------------------------------------------------------------- The log for this run is in /export/bmk/accel/result/ACCEL.023.log runspec finished at Wed Jul 25 12:38:48 2012; 2 total seconds elapsed
Like %warning, %error logs an error to stderr and the log file. Unlike %warning, though, it then stops the run.
Consider a slightly modified version of the previous example:
%if !defined(%{REALLY_important_macro}) % error You have not defined REALLY_important_macro! %endif
When run through the preprocessor, this yields the following output:
$ configpp -c error.cfg runspec v1698 - Copyright 1999-2012 Standard Performance Evaluation Corporation Using 'linux-suse10-amd64' tools Reading MANIFEST... 12932 files Loading runspec modules................ Locating benchmarks...found 14 benchmarks in 10 benchsets. Reading config file '/export/bmk/accel/config/error.cfg' ************************************************************************* ERROR: You have not defined REALLY_important_macro! on line 2 of /export/bmk/accel/config/error.cfg ************************************************************************* There is no log file for this run. * * Temporary files were NOT deleted; keeping temporaries such as * /export/bmk/accel/tmp * (These may be large!) * runspec finished at Wed Jul 25 12:41:24 2012; 1 total seconds elapsed $ echo $? 1
Unlike a warning, the error will be close to the last thing output. As you can see from the output of 'echo $?', runspec has exited with an error code 1.
This section describes how the location and contents of several kinds of output files are influenced by your config file.
It was mentioned above that the MD5 section of the config file is written automatically by the tools. Each time your config file is updated, a backup copy is made. Thus your config directory may soon come to look like this:
$ cd $SPEC/config $ ls tmp.cfg* tmp.cfg tmp.cfg.2012-08-11_0842 tmp.cfg.2012-08-11_1631 tmp.cfg.2012-08-10_2007 tmp.cfg.2012-08-11_0847 tmp.cfg.2012-08-11_1632 tmp.cfg.2012-08-10_2010 tmp.cfg.2012-08-11_0853 tmp.cfg.2012-08-11_1731 tmp.cfg.2012-08-10_2048 tmp.cfg.2012-08-11_0854 tmp.cfg.2012-08-11_1731a tmp.cfg.2012-08-10_2051 tmp.cfg.2012-08-11_0855 tmp.cfg.2012-08-12_0921 tmp.cfg.2012-08-10_2054 tmp.cfg.2012-08-11_0856 tmp.cfg.2012-08-13_0846 tmp.cfg.2012-08-10_2058 tmp.cfg.2012-08-11_0857 tmp.cfg.2012-08-13_0846a tmp.cfg.2012-08-10_2105 tmp.cfg.2012-08-11_0858 tmp.cfg.2012-08-13_0849 tmp.cfg.2012-08-10_2105a tmp.cfg.2012-08-11_0903 tmp.cfg.2012-08-13_0850 tmp.cfg.2012-08-10_2106 tmp.cfg.2012-08-11_0904 tmp.cfg.2012-08-16_0957 tmp.cfg.2012-08-10_2125 tmp.cfg.2012-08-11_0905 tmp.cfg.2012-08-18_1133 tmp.cfg.2012-08-10_2125a tmp.cfg.2012-08-11_0906 tmp.cfg.2012-08-19_1626 tmp.cfg.2012-08-10_2126 tmp.cfg.2012-08-11_1348 tmp.cfg.2012-08-19_1627 tmp.cfg.2012-08-10_2127 tmp.cfg.2012-08-11_1349 tmp.cfg.2012-08-19_1634 tmp.cfg.2012-08-11_0811 tmp.cfg.2012-08-11_1349a tmp.cfg.2012-08-19_1638 tmp.cfg.2012-08-11_0823 tmp.cfg.2012-08-11_1349b tmp.cfg.2012-08-19_1718 tmp.cfg.2012-08-11_0835 tmp.cfg.2012-08-11_1553 tmp.cfg.2012-08-19_1720 tmp.cfg.2012-08-11_0836 tmp.cfg.2012-08-11_1556 tmp.cfg.2012-08-19_1731 tmp.cfg.2012-08-11_0838 tmp.cfg.2012-08-11_1557 tmp.cfg.2012-08-21_0611 tmp.cfg.2012-08-11_0839 tmp.cfg.2012-08-11_1627 tmp.cfg.2012-08-21_0622 tmp.cfg.2012-08-11_0840 tmp.cfg.2012-08-11_1629 tmp.cfg.2012-08-21_0652 $
If this feels like too much clutter, you can disable the backup mechanism, as described under backup_config. Note that doing so may leave you with a risk of losing the config file in case of a filesystem overflow or system crash. A better idea may be to periodically remove just portions of the clutter, for example by typing:
$ rm tmp.cfg.2012-08-1* $ ls tmp.cfg* tmp.cfg tmp.cfg.2012-08-21_0622 tmp.cfg.2012-08-21_0611 tmp.cfg.2012-08-21_0652 $
$SPEC/result (Unix) or %SPEC%\result (Windows) contains reports and log files. When you are doing a build, you will probably find that you want to pay close attention to the log files such as ACCEL.001.log. Depending on the verbosity level that you have selected, it will contain detailed information about how your build went.
The SPEC tool suite provides for varying amounts of output about its actions during a run. These levels range from the bare minimum of output (level 0) to copious streams of information that are probably useful only to tools developers (level 99). Selecting one output level gives you the output from all lower levels, which may cause you to wade through more output than you might like.
When you are trying to find your way through a log file, you will probably find these (case-sensitive) search strings useful:
runspec: | The runspec command for this log. |
---|---|
Running | Printed at the top of a run of a benchmark. |
# | Printed at the top of a run of a benchmark for runs with multiple iterations. Useful for finding the ref workloads in reportable runs. |
runtime | Printed at the end of a benchmark run. |
Building | Printed at the beginning of a benchmark compile. |
Elapsed compile | Printed at the end of a benchmark compile. |
There are also temporary debug logs, such as ACCEL.001.log.debug. A debug log contains very detailed debugging output from the SPEC tools, as if --verbose 99 had been specified.
For a successful run, the debug log will be removed automatically, unless you specify "--keeptmp" on the command line, or "keeptmp=1" in your config file.
For a failed run, the debug log is kept. The debug log may seem overwhelmingly wordy, repetitive, detailed, redundant, repetitive, and long-winded, and therefore useless. Suggestion: after a failure, try looking in the regular log first, which has a default verbosity level of 5. If your regular log doesn't have as much detail as you wish, then you can examine the additional detail in the debug log.
If you file a support request, you may be asked to send in the debug log.
The 'level' referred to in the table below is selected either in the config file verbose option or in the runspec command as in 'runspec --verbose n'.
Levels higher than 99 are special; they are always output to your log file. You can also see them on the screen if you set verbosity to the specified level minus 100. For example, the default log level is 5. This means that on your screen you will get messages at levels 0 through 5, and 100 through 105. In your log file, you'll find the same messages, plus the messages at levels 106 through 199.
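The rule can be stated as a pair of small predicates. This is a sketch of the rule as described above, not the tools' own code:

```python
def on_screen(level, verbosity):
    """A sketch of the rule above: the screen shows messages at levels
    0..verbosity, plus the always-logged band 100..verbosity+100."""
    return 0 <= level <= verbosity or 100 <= level <= verbosity + 100

def in_log(level, verbosity):
    """The log file gets everything the screen gets, plus every
    message in the 100..199 band."""
    return 0 <= level <= verbosity or 100 <= level <= 199
```

At the default verbosity of 5, a level-105 message reaches the screen, a level-106 message reaches only the log, and a level-6 message reaches neither.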
Level | What you get |
0 | Basic status information, and most errors. These messages can not be turned off. |
1 | List of the benchmarks which will be acted upon. |
2 | A list of possible output formats, as well as notification when beginning and ending each phase of operation (build, setup, run, reporting). |
3 | A list of each action performed during each phase of operation (e.g. "Building 350.md", "Setting up 359.miniGhost") |
4 | Notification of benchmarks excluded |
5 (default) | Notification if a benchmark somehow was built but nevertheless is not executable. |
6 | Time spent doing automatic flag reporting. |
7 | Actions to update SPEC-supplied flags files. |
10 | Information on basepeak operation. |
12 | Errors during discovery of benchmarks and output formats. |
15 | Information about certain updates to stored config files |
24 | Notification of additions to and replacements in the list of benchmarks. |
30 | A list of options which are included in the MD5 hash of options used to determine whether or not a given binary needs to be recompiled. |
35 | A list of key=value pairs that can be used in command and notes substitutions, and results of env_var settings. |
40 | A list of 'submit' commands for each benchmark. |
70 | Information on selection of median results. |
89 | Progress comparing run directory MD5s for executables. |
90 | Time required for various internal functions in the tools. |
95, 96, 97, 98 | Flag parsing progress during flag reporting (progressively more detail) |
99 | Gruesome detail of comparing MD5 hashes of files being copied during run directory setup. |
--- Messages at the following levels will always appear in your log files --- | |
100 | Various config file errors, such as bad preprocessor directives, bad placement of certain options, illegal characters... |
102 | Information about output formats that could not be loaded. |
103 | A tally of successes and failures during the run broken down by benchmark. |
106 | A list of runtime and calculated ratio for each benchmark run. |
107 | Dividers to visually block each phase of the run. |
110 | Elapsed time for each portion of a workload (if an executable is invoked more than once). |
120 | Messages about which commands are being issued for which benchmarks. |
125 | A listing of each individual child process's start, end, and elapsed times. |
130 | A nice header with the time of the runspec invocation and the command line used. Information about what happened with your sysinfo program. |
140 | General information about the settings for the current run. |
145 | Messages about file comparisons. |
150 | List of commands that will be run, and details about the settings used for comparing output files. Also the contents of the makefile written. |
155 | Start, end, and elapsed times for benchmark run. |
160 | Start, end, and elapsed times for benchmark compilation. |
180 | stdout and stderr from commands run |
190 | Start and stop of delays |
191 | Notification of command line used to run specinvoke. |
In this section, two things are attempted: a guided tour of how to find some of the interesting parts in a log file, and an explanation of how the SPEC toolset implements feedback-directed optimization, which is commonly abbreviated FDO. The technique is also known as Profile-Based Optimization, or PBO.
To use FDO, you typically compile a program twice. The first compile creates an image with instrumentation. Then, you run the program, with a "training" workload, and the instrumentation collects data about the run: a profile. Finally, you re-compile the program, and the compiler uses the profile to improve its optimizations.
The SPEC tools makes all of this relatively easy. Here's a config file that builds 357.csp with FDO:
$ cat brian.cfg ext = blue271 iterations = 1 output_format = text teeout = 1 runlist = csp tune = peak openacc=peak: OPTIMIZE = -fast -m64 PASS1_CFLAGS = -xprofile=collect:./fb PASS1_LDFLAGS = -xprofile=collect:./fb PASS2_CFLAGS = -xprofile=use:./fb PASS2_LDFLAGS = -xprofile=use:./fb $
The PASSn lines above cause FDO to happen. Each of the profiling switches is specified twice because we need them to be applied both for the compiles and for the link. Let's invoke runspec with the above config file, searching the output for lines that contain either "Training" or "txinvr.o", which is a handy string to pick out both a sample compile line and the link line:
$ runspec --config brian --size test | grep -e Training -e txinvr.o cc -c -o txinvr.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=collect:./fb -fast -m64 -xopenmp txinvr.c cc -fast -m64 -xprofile=collect:./fb add.o adi.o error.o exact_rhs.o exact_solution.o initialize.o rhs.o print_results.o set_constants.o sp.o txinvr.o verify.o -lm -o csp Training 357.csp with the train workload cc -c -o txinvr.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 -xopenmp txinvr.c cc -fast -m64 -xprofile=use:./fb add.o adi.o error.o exact_rhs.o exact_solution.o initialize.o rhs.o print_results.o set_constants.o sp.o txinvr.o verify.o -lm -o csp $
Above, you can see the basic flow: compile using the switch -xprofile=collect for both the compile and link lines; run the training workload; then recompile with -xprofile=use.
Let's go a little deeper by taking apart the log file. This section uses the actual log file from the above runspec command, but white space has been adjusted.
The first thing to look for when you're trying to make sure you've found the right log file is the line that contains the string runspec:
$ grep runspec: *031.log runspec: runspec --config brian --size test $
Yes, this looks like the right log. To find the section where the benchmark is built, search for "Building", which is soon followed by information about what was written to the makefile:
------------------------------------------------------------------------ When checking options for /export/bmk/accel/benchspec/ACCEL/357.csp/exe/csp_peak.blue271, no MD5 sums were found in the config file. They will be installed after build. Building 357.csp peak blue271 default: (build_peak_blue271.0000) Wrote to makefile '/export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000/Makefile.deps': # These are the build dependencies # End dependencies Wrote to makefile '/export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000/Makefile.spec': TUNE=peak EXT=blue271 NUMBER=357 NAME=csp SOURCES= add.c adi.c error.c exact_rhs.c exact_solution.c initialize.c \ rhs.c print_results.c set_constants.c sp.c txinvr.c verify.c EXEBASE=csp NEED_MATH=yes BENCHLANG=C ONESTEP= CONESTEP= OPTIMIZE = -fast -m64 OS = unix PASS1_CFLAGS = -xprofile=collect:./fb PASS1_LDFLAGS = -xprofile=collect:./fb PASS2_CFLAGS = -xprofile=use:./fb PASS2_LDFLAGS = -xprofile=use:./fb absolutely_no_locking = 0 abstol =
To tell the tools that we want to use FDO, we set PASS1_<language>FLAGS and PASS2_<language>FLAGS. If the tools see any use of these flags, they will perform two compiles. The particular compiler used in this example expects to be invoked twice: once with -xprofile=collect:./fb and then again with -xprofile=use:./fb.
A useful search string in the log is "specmake". You will have to search for "specmake" a few times until you get down to here:
Compile for '357.csp' started at: Wed Jul 25 13:24:20 2012 (1343247860) Issuing make.clean command 'specmake clean' specmake clean 2> make.clean.err | tee make.clean.out Start make.clean command: Wed Jul 25 13:24:20 2012 (1343247860) Executing commands: specmake clean ----------------------------- rm -rf *.o trainset.out find . \( -name \*.o -o -name '*.fppized.f*' -o -name '*.i' -o -name '*.mod' \) -print | xargs rm -rf rm -rf csp rm -rf csp.exe rm -rf core rm -rf Stop make.clean command: Wed Jul 25 13:24:21 2012 (1343247861) Elapsed time for make.clean command: 00:00:01 (1) Issuing fdo_make_pass1 command 'specmake --always-make build FDO=PASS1' specmake --always-make build FDO=PASS1 2> fdo_make_pass1.err | tee fdo_make_pass1.out Start fdo_make_pass1 command: Wed Jul 25 13:24:21 2012 (1343247861) Executing commands: specmake --always-make build FDO=PASS1 ----------------------------- cc -c -o add.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=collect:./fb -fast -m64 add.c cc -c -o adi.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=collect:./fb -fast -m64 adi.c cc -c -o error.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=collect:./fb -fast -m64 error.c
You can see above that specmake is invoked with FDO=PASS1, which causes the switches from PASS1_CFLAGS to be used. If you want to understand exactly how this affects the build, read $SPEC/benchspec/Makefile.defaults, along with the document $SPEC/Docs/makevars.html.
To find the training run, search forward for Training
Stop options1 command: Wed Jul 25 13:24:28 2012 (1343247868) Elapsed time for options1 command: 00:00:01 (1) Training 357.csp with the train workload Commands to run: -C /export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000 -o trainset.out -e trainset.err ../build_peak_blue271.0000/csp (timed) Specinvoke: /export/bmk/accel/bin/specinvoke -d /export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000 -e speccmds.err -o speccmds.stdout -f speccmds.cmd -C -q Issuing command '/export/bmk/accel/bin/specinvoke -d /export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000 -e speccmds.err -o speccmds.stdout -f speccmds.cmd -C -q' /export/bmk/accel/bin/specinvoke -d /export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000 -e speccmds.err -o speccmds.stdout -f speccmds.cmd -C -q Start command: Wed Jul 25 13:24:28 2012 (1343247868) Stop command: Wed Jul 25 13:25:44 2012 (1343247944) Elapsed time for command: 00:01:16 (76) Workload elapsed time (0:1) = 67.886497 seconds Copy 0 of 357.csp (peak train) run 1 finished at Wed Jul 25 13:25:36 2012. Total elapsed time: 67.886497 comparing files in '/export/bmk/accel/benchspec/ACCEL/357.csp/build/build_peak_blue271.0000' comparing 'trainset.out' with abstol=, binary=, calctol=0, cw=, floatcompare=, ignorecase=, obiwan=, reltol=1e-06, skipabstol=, skipobiwan=, skipreltol=, skiptol=
The key lines to notice above are the ones just after "Commands to run:", which begin with -o. These lines cause specinvoke to run the freshly built csp one time; some benchmarks may have multiple runs here (they will have multiple -o lines). The -o and -e parameters to specinvoke indicate where it is to send standard output and standard error. Thus, for example
-o trainset.out -e trainset.err ../build_peak_blue271.0000/csp (timed)
will cause this command to actually be run:
../build_peak_blue271.0000/csp > trainset.out 2> trainset.err
For more information on specinvoke, see utility.html.
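The mapping from the -o and -e parameters to the resulting shell command can be sketched like this. It is an illustration of the translation described above, not specinvoke itself:

```python
def to_shell(command, stdout=None, stderr=None):
    """Illustrative mapping (not specinvoke itself) from the -o and -e
    parameters to the equivalent shell redirections."""
    parts = [command]
    if stdout is not None:
        parts.append('> ' + stdout)
    if stderr is not None:
        parts.append('2> ' + stderr)
    return ' '.join(parts)

cmd = to_shell('../build_peak_blue271.0000/csp',
               stdout='trainset.out', stderr='trainset.err')
# cmd == '../build_peak_blue271.0000/csp > trainset.out 2> trainset.err'
```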
For some particular benchmarks, SPEC has supplied multiple workloads that are used to train them. Why multiple workloads? Because a training set should provide the compiler with information about usage of the program which is, in the opinion of the developer, representative of real-world use. Of course, if the developer has actual evidence, instead of merely an opinion, so much the better!
You can see the inputs that SPEC has provided for training purposes in the directories nnn.benchmark/data/train/input and nnn.benchmark/data/all/input. In some cases, the training workloads required significant development effort, but as a user of the suite you don't have to worry about that; you can simply apply them. SPEC is aware that there is some variation in the fidelity between the training workloads vs. the timed "ref" workloads. In the real world, also, training workloads used by program developers do not correspond perfectly to how end users apply the programs.
In any case, the tester who employs the SPEC ACCEL suite does not have to come up with his or her own training workloads, and, indeed, is not allowed to do so under the run rules.
Notice that the log file tells us the workload elapsed time of csp: 67.886 seconds; and the total time: 67.886 seconds. If there had been multiple training datasets, there would have been multiple workload elapsed times, and the total time would have been their sum.
Finally, the compiler is run a second time, to use the profile feedback and build a new executable, at the second specmake build:
Start fdo_make_pass2 command: Wed Jul 25 13:25:45 2012 (1343247945) Executing commands: specmake --always-make build FDO=PASS2 ----------------------------- cc -c -o add.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 add.c cc -c -o adi.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 adi.c cc -c -o error.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 error.c cc -c -o exact_rhs.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 exact_rhs.c cc -c -o exact_solution.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 exact_solution.c cc -c -o initialize.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 initialize.c cc -c -o rhs.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 rhs.c cc -c -o print_results.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 print_results.c cc -c -o set_constants.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 set_constants.c cc -c -o sp.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 sp.c cc -c -o txinvr.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 txinvr.c cc -c -o verify.o -DSPEC_ACCEL -DSPEC -DNDEBUG -xprofile=use:./fb -fast -m64 verify.c cc -xprofile=use:./fb -fast -m64 add.o adi.o error.o exact_rhs.o exact_solution.o initialize.o rhs.o print_results.o set_constants.o sp.o txinvr.o verify.o -lm -o csp Stop fdo_make_pass2 command: Wed Jul 25 13:25:51 2012 (1343247951) Elapsed time for fdo_make_pass2 command: 00:00:06 (6)
This time, specmake is invoked with FDO=PASS2, which is why the compile picks up the PASS2_CFLAGS.
And that's it. The tools did most of the work; the user simply set the PASSn flags in the config file.
If you do a very large number of builds and runs, you may find that your result directory gets far too cluttered. If it does, you should feel free to issue commands such as these on Unix systems:
cd $SPEC mv result result_old mkdir result
On Windows, you could say:
cd %SPEC% rename result result_old mkdir result
As described under "About Disk Usage" in runspec.html, the SPEC tools do the actual builds and runs in newly created directories. The benchmark sources are never modified in the src directory.
The build directories for a benchmark are located underneath that benchmark's top-level directory, typically $SPEC/benchspec/ACCEL/nnn.benchmark/build (Unix) or %SPEC%\benchspec\ACCEL\nnn.benchmark\build (Windows).
(If you are using the output_root feature, then the first part of that path will change to be your requested root instead of SPEC; and if you change the default for build_in_build_dir, then the last part of that path will be run instead of build.)
The build directories have logical names, typically of the form build_<tune>_<extension>.0000. For example, after the command runspec --config jun09a --action build --tune base md, the following directory would be created:
$ cd $SPEC/benchspec/ACCEL/350.md/build $ pwd /spec/joelw/benchspec/ACCEL/350.md/build $ ls -ld build*jun* drwxrwxr-x 2 joelw ptg 1536 Jun 10 14:49 build_base_jun09a.0000 $
On Windows, you would say cd %SPEC%\benchspec\ACCEL\350.md\build followed by dir build*.
If the directory build_<tune>_<extension>.0000 already exists when a new build is attempted for the same tuning and extension, the directory will be re-used, unless:
In such cases, the 0000 will be incremented until a name is generated that is available. You can find locked directories by searching for lock=1 in the file $SPEC/benchspec/ACCEL/<nnn.benchmark>/run/list (Unix) or %SPEC%\benchspec\ACCEL\<nnn.benchmark>\run\list (Windows).
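The naming scheme can be illustrated with a short sketch (a hypothetical helper, not part of the tools): given the set of directory names that already exist and are unavailable, increment the four-digit suffix until a free name is found.

```python
def next_build_dir(tune, ext, unavailable):
    """Toy model of the naming scheme: start at build_<tune>_<ext>.0000
    and increment the four-digit suffix until a name is found that is
    not in the set of unavailable (for example, locked) directories."""
    n = 0
    while f'build_{tune}_{ext}.{n:04d}' in unavailable:
        n += 1
    return f'build_{tune}_{ext}.{n:04d}'
```

For instance, if build_base_jun09a.0000 is locked, the next build for that tuning and extension lands in build_base_jun09a.0001.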
When more than one build directory has been created for a given tuning and extension, you may need to trace the directory back to the specific build attempt that created it. You can do so by searching for the directory name in the log files:
$ grep Building *log | grep build_base_jun09a.0001 ACCEL.380.log: Building 350.md ref base jun09a default: (build_base_jun09a.0001) $
In the above example, the grep command locates log #380 as the log that corresponds to this run directory. On Windows, of course, you would use findstr instead of grep.
A variety of files are written to the build directory. Here are some of the key files that are useful to examine:
Makefile.spec        The components for make that were generated from the current config file with the current set of runspec options.
options.out          For a 1-pass compile: summary of the build options.
options1.out         For an N-pass compile: summary of the first pass.
options2.out         For an N-pass compile: summary of the second pass.
make.out             For a 1-pass compile: the detailed commands generated.
fdo_make_pass1.out   For an N-pass compile: detailed commands generated for the first pass.
fdo_make_pass2.out   For an N-pass compile: detailed commands generated for the second pass.
*.err                The output to standard error corresponding to each of the above .out files.
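A common first step after a failed build is to look for non-empty .err files. The sketch below demonstrates the pattern on a mock-up directory; in a real tree you would cd into the actual build directory (for example, $SPEC/benchspec/ACCEL/350.md/build/build_base_jun09a.0000), and the .err files would have been written by specmake:

```shell
# Mock-up of a build directory, for illustration only.
mkdir -p /tmp/build_demo && cd /tmp/build_demo
printf 'cc -O2 -c md.c\n'                > make.out
: > options.err                           # empty: this step had no problems
printf 'md.c:12: error: FOO undeclared\n' > make.err

# List only the .err files that actually contain something;
# these point at the step where the build went wrong:
grep -l . *.err
```

Here only make.err would be listed, since options.err is empty.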
For more information about how the run directories work, see the descriptions of specinvoke, specmake, and specdiff in utility.html.
Sometimes a portability issue may require use of alternative source code for a benchmark, and SPEC may issue the alternate as a "src.alt". The effect of applying a src.alt is to modify the sources in the build directory. This chapter provides an example of applying a src.alt, and introduces development of src.alts.
This example shows the effect of a hypothetical src.alt, 357.csp.no_line.accel.v1.0.tar.xz. As of this writing, no src.alts have been approved for SPEC ACCEL; this one is used only for illustration. A src.alt must include a README, which for this example says, in part:
This change reduces confusion about line numbers in compiler error or warning messages. The diff is quite large, but the change is easy to describe. All of the '#line' preprocessor directives in generated sources are removed.
If it were a real src.alt, one would download it from www.spec.org/accel/src.alt, and then apply it per the instructions (found in the same README from which the above paragraph was excerpted):
$ . ./shrc
$ specxz -dc 357.csp.no_line.accel.v1.0.tar.xz | spectar -xf -
Let's set up two build directories - one without the change:
$ cat $SPEC/config/without.cfg
ext = without
$
$ runspec --action buildsetup -T base -c without -v 1 357.csp
runspec v6624 - Copyright 1999-2012 Standard Performance Evaluation Corporation
Using 'macosx' tools
Reading MANIFEST... 19145 files
Loading runspec modules................
Locating benchmarks...Reading config file '/Volumes/ACCEL/accel/config/without.cfg'
Benchmarks selected: 357.csp
The log for this run is in /Volumes/ACCEL/accel/result/ACCEL.009.log

runspec finished at Thu Jul 21 17:20:53 2012; 2 total seconds elapsed
$
And a build directory with the change:
$ cat $SPEC/config/with.cfg
ext = with
357.csp:
srcalt = no_line
$
$ runspec --action buildsetup -T base -c with -v 1 357.csp
runspec v6624 - Copyright 1999-2012 Standard Performance Evaluation Corporation
Using 'macosx' tools
Reading MANIFEST... 19145 files
Loading runspec modules................
Locating benchmarks...Reading config file '/Volumes/ACCEL/accel/config/with.cfg'
Benchmarks selected: 357.csp
357.csp (base): "no_line" src.alt was used.
The log for this run is in /Volumes/ACCEL/accel/result/ACCEL.010.log

runspec finished at Thu Jul 21 17:22:12 2012; 2 total seconds elapsed
$
If we examine those directories, the "with" directory has the expected change from the src.alt. As mentioned in the README above, preprocessor directives are removed (lines marked "-"). No lines are added in this case, but if any were, they would be marked with "+":
$ diff -u build_base_without.0000/verifyData.c build_base_with.0000/verifyData.c
--- build_base_without.0000/verifyData.c   2012-03-05 07:53:16.000000000 -0800
+++ build_base_with.0000/verifyData.c      2012-07-26 07:24:12.000000000 -0700
@@ -31,7 +31,6 @@
  *  minSeparation - [integer] minimum separation between start and end points
  */
-/* #line 34 "verifyData.c" */
 void verifyData(SIMMATRIX_T *simMatrix, SEQDATA_T *seqData,
                 int minScore, int minSeparation) {
@@ -49,7 +48,6 @@
 #endif
   {
-/* #line 52 "verifyData.c" */
   /*
    * Map the OpenMP threads or MPI processes onto a rectangular
@@ -85,7 +83,6 @@
   n = seqData->mainLen;
   m = seqData->matchLen;
-/* #line 88 "verifyData.c" */
   iBeg = 1 + (n*myRow)/npRow;
   jBeg = 1 + (m*myCol)/npCol;
$
If you are trying to create a new alternative source, you should become familiar with how to work in a sandbox, temporarily abandoning the tools during your development phase. (Or, you can use convert_to_development to make the whole installation into one big sandbox.) Once you have developed your alternative source, you'll want to package it up with makesrcalt, and you'll need to contact SPEC to get the source approved. Both are much more fully discussed in utility.html.
When something goes wrong, here are some things to check:
Are there any obvious clues in the log file? Search for the word "Building". Keep searching until you hit the next benchmark AFTER the one that you are interested in. Now scroll backward one screen's worth of text.
Did your desired switches get applied? Go to the build directory, and look at options*out.
Did the tools or your compilers report any errors? Look in the build directory at *err.
What happens if you try the build by hand? See the section on specmake in utility.html.
If an actual run fails, what happens if you invoke the run by hand? See the information about "specinvoke -n" in utility.html.
Do you understand what is in your path, and why? Sometimes confusion can be greatly reduced by ensuring that you have only what is needed, avoiding, in particular, experimental and non-standard versions of standard utilities.
Note: on Windows systems, SPEC recommends that Windows/Unix compatibility products should be removed from the %PATH% prior to invoking runspec, in order to reduce the probability of certain difficult-to-diagnose error messages.
Try asking the tools to leave more clues behind, with keeptmp.
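For example, keeptmp can be set either on the runspec command line (--keeptmp) or in the header section of the config file; teeout, also shown below, echoes build output to your screen as it is generated, which can make a failing step easier to spot:

```
# In the config file header section:
keeptmp = yes     # preserve temporary files for post-mortem inspection
teeout  = yes     # also echo build output to the screen as it is generated
```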
Note that Windows systems are not officially supported. The tools do work on Windows, so any problems you encounter are probably in the benchmark itself and will require your effort in helping to port the problematic codes.
Copyright 2014-2017 Standard Performance Evaluation Corporation
All Rights Reserved