Copyright © 2007 Intel Corporation. All Rights Reserved.
Platform settings
One or more of the following settings may have been set. If so, they are listed in the "Platform Notes" section of the report, and you can read below to find out more about what these settings mean.
Power Regulator for ProLiant support (Default=HP Dynamic Power Savings Mode)
Values for this BIOS setting can be:
HP Power Profile (Default = Balanced Power and Performance):
Values for this BIOS setting can be:
Power Efficiency Mode (Default=Efficiency)
Values for this BIOS setting can be:
Adjacent Sector Prefetch (Default = Enabled):
This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss.
In some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
Hardware Prefetch (Default = Enabled):
This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern recognition algorithm.
In some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
Data Reuse (Default = Enabled):
This BIOS option allows the enabling/disabling of the Data Reuse optimization.
Enabling this option reduces the frequency of L3 cache updates from the L1 cache. This may improve performance by reducing the internal bandwidth consumed by constantly updating L1 cache lines in the L3 cache.
Since this optimization results in more fetches to main memory, in some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
Turbo Mode (Default = Enabled):
Turbo Boost Technology is a processor feature that allows the processor to transition to a higher frequency than the processor's rated speed if the processor has available power headroom and is within temperature specifications. Disabling this feature will reduce power usage but will reduce the system's maximum achievable performance under some workloads.
Thermal Configuration (Default = Optimal Cooling):
This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:
Defer All Transactions Mode (Default = Disabled):
When this option is enabled, front-side bus bandwidth may be increased on systems with heavy I/O workloads because CPU-initiated I/O transactions can be deferred, enabling other transactions to make progress while data is retrieved. However, latency for completing transactions may also increase. The system's workload will determine which setting provides the highest performance.
Memory Speed with 2DPC
Sets the memory speed and voltage setting for the system when there are 2 DIMMs per channel (2DPC). Values for this BIOS setting can be:
SATA #1 Controller (Default=Auto)
Sets the mode for the embedded controller. The values for this BIOS setting can be:
submit= MYMASK=`printf '0x%x' \$((1<<\$SPECCOPYNUM))`; /usr/bin/taskset \$MYMASK $command
When running multiple copies of benchmarks, the SPEC config file feature submit is sometimes used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux. The elements of the command are:
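MYMASK=`printf '0x%x' \$((1<<\$SPECCOPYNUM))`: SPECCOPYNUM is the copy number (starting at 0) that the SPEC tools assign to each benchmark copy. Shifting 1 left by that amount and formatting the result with printf produces a hexadecimal CPU affinity mask with only that copy's bit set.
/usr/bin/taskset \$MYMASK $command: taskset launches the benchmark command ($command) with the processor affinity given by the mask, so copy number n is bound to CPU n. For example, for copy number 3 the mask evaluates to 0x8, binding that copy to CPU 3.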
Using numactl to bind processes and memory to cores
For multi-copy runs or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation, 'man numactl'.
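For example (the binary name and core number here are only illustrative), the following command runs a program bound to core 4 with its memory allocations kept on the local NUMA node:
numactl --physcpubind=4 -l ./mybenchmark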
mysubmit.pl
This Perl script is used to ensure that, for a system with N cores, the first N/2 benchmark copies are each bound to a core that does not share its L2 cache with any of the other copies. The script does this by retrieving and using CPU data from /proc/cpuinfo. Note that this script will only work for 6-core CPUs.
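As an illustrative sketch (the actual contents of mysubmit.pl are not reproduced here), the topology fields available to such a script can be inspected with a command such as:
grep -E 'processor|physical id|core id' /proc/cpuinfo
Each processor entry's "physical id" and "core id" values identify the package and core it belongs to, which is the kind of information needed to avoid placing two copies on cores that share an L2 cache.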
ulimit -s [n | unlimited] (Linux)
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
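For example, to remove the stack size limit for the current shell and the processes it launches:
ulimit -s unlimited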
KMP_STACKSIZE=integer[B|K|M|G|T] (Linux)
Sets the number of bytes to allocate for each parallel thread to use as its private stack. Use the optional suffix B, K, M, G, or T, to specify bytes, kilobytes, megabytes, gigabytes, or terabytes. The default setting is 2M on IA32 and 4M on IA64.
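For example (the size shown is only illustrative), to give each OpenMP thread a 64-megabyte private stack:
export KMP_STACKSIZE=64M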
KMP_AFFINITY=physical,n (Linux)
Specifies a static mapping of user threads to consecutive physical processors (cores), beginning at processor n. For example, if a system is configured with 8 cores, and OMP_NUM_THREADS=8 and KMP_AFFINITY=physical,2 are set, then thread 0 will be mapped to core 2, thread 1 will be mapped to core 3, and so on in a round-robin fashion.
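The example above corresponds to the following shell commands:
export OMP_NUM_THREADS=8
export KMP_AFFINITY=physical,2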
OMP_NUM_THREADS=n
This environment variable sets the maximum number of threads to use for OpenMP* parallel regions to n if no other value is specified in the application. This environment variable applies to both -openmp and -parallel (Linux) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores:
export OMP_NUM_THREADS=8
Default is the number of cores visible to the OS.
vm.max_map_count=n (Linux)
The maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.
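For example (the value shown is only illustrative), the limit can be raised at runtime with sysctl:
sysctl -w vm.max_map_count=1000000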